BuildrSpace

#37. Accessible GPUs with Decentralized Compute Marketplace.

April 09, 2024 Various hosts Season 4 Episode 37

Embark on an enlightening expedition through the ever-evolving world of cloud computing with our special guest, Anil Murty from Akash Network. Our conversation navigates the transformative landscape of cloud infrastructure, marking the pivotal moments that have led to the birth of the decentralized cloud. Discover why Akash's innovative approach is shaking up the industry, offering cost benefits and an enticing alternative to the traditional, long-term financial commitments of big-name cloud providers. Anil illuminates how Akash's vision is revitalizing the cloud's original promise of eliminating hefty upfront capital expenditures and offering scalability, reshaping it into a competitive landscape where resources are accessible to everyone.


Anil Murty :

If you're somebody that wants to utilize a fleet of H100s or, soon, the GH200 platform, not only do you need to commit to spending a lot of money, but in many cases, if you go to the bigger cloud providers today, you need to actually commit to a year or three years of spend with them before you can even get access to these GPUs.

Anil Murty :

And so this is sort of that inflection point, I think, where a solution like Akash really shines. For example, if you were to go to akash.network/gpu right now, what you would find is that you can get an H100 with an SXM interface for as low as $1.50, which, I believe, is half or less than half of what you can get at many of the other places out there.

Host:

Hi there, welcome to Buidl Crypto Today. I have with me Anil from Akash. Welcome, Anil.

Anil Murty :

Hey, it's great to be here. Thanks for having me.

Host:

Great to have you. So, Anil, could you talk a little bit about yourself?

Anil Murty :

Yeah, sure, I'll go into that a little bit. So, my background: my education in college was in electrical engineering, so I have a background in electrical engineering from undergrad. And even though I have a graduate engineering degree in electrical engineering as well, I actually focused on computer networks, so a lot of my coursework there was computer science classes, with a little bit of electrical engineering. Then for the first several years, I'd say probably about half of my career, which is about two decades at this point, I spent my time working on embedded software. This is typically device drivers for embedded devices, and so I spent a bunch of years doing that at companies like Motorola, working on consumer electronic devices and writing essentially what's called the hardware abstraction layer for devices like that.

Anil Murty :

And then at some point along that journey I moved more and more towards the customer-facing portions of the business, and that's where I realized that understanding the whole product lifecycle and identifying a problem in the existing customer base, or a gap in the market, was something I found exciting. That took me towards looking at product roles as a potential transition point in my career. Somewhere along there I also ended up going to business school while I was working, and, putting that together with the experience I had as an engineer, I ultimately moved into a product role, initially working for mid-sized companies, then startups, then larger companies. Over the second half of my career as a product person, I got to work on everything from networking and hardware devices to cloud networking devices to pure cloud and, ultimately, monitoring and telemetry companies.

Anil Murty :

So I spent some time at companies like New Relic and HashiCorp prior to coming to Akash. And so, given that I had a whole bunch of background in building solutions for cloud-native products and cloud-native customers, when I was approached by the folks at Akash, or at Overclock Labs, which is the parent company behind Akash Network and its creator as well, the project really attracted me, because I looked at it as a significantly new way of imagining how infrastructure gets utilized. Just looking at how the clouds have evolved over the last couple of decades, it became very clear to me that the original premise the clouds were created for is no longer valid in many cases, and what Akash was doing seemed to be in the right direction in terms of seeing where the industry was going to go in the next few years. That's what got me excited about Akash and got me to join Overclock Labs.

Host:

I do remember Greg saying in one of his interviews that the story of tech is the story of the cloud, and rightly so. Based on your experience, you would naturally gravitate towards the Akash version of the cloud. Could you talk a little bit more about that?

Anil Murty :

Yeah, so, like you said, having been in the tech industry for the last couple of decades: for the first few years of my career, cloud was either non-existent or very nascent, and most companies were running all of their software on-prem and offering it as a service, or just shipping software as binary images that their customers would utilize. This was the early to mid-2000s, right. Then the concept of a cloud was invented back in, I don't know, 2006, 2007 by Amazon, and it really started to gain traction probably four or five years after that. There was this inflection point where, initially, cloud was primarily targeted towards startups and small and medium businesses, and then, eventually, the enterprises realized that this was something they could utilize as well. That transition probably happened somewhere around 2011, 2012, which is kind of the time you can consider the cloud going mainstream and beginning enterprise adoption. I think for the first few years after the cloud came up, the big draw was that it essentially leveled the playing field for startup companies. If you were a startup around the time the dot-com boom happened, or in the early 2000s, and you wanted to build a software service and offer it to your customers, you had a fairly large upfront investment to make. The cloud essentially enabled startups to get to market much faster at a much lower cost by taking away the capital expenditure they needed to spend on infrastructure, on servers and storage and all of that, and also on resources to manage all that infrastructure. And so that was great, and we've obviously had a really good run in terms of a lot of startups being able to test out products at a really low cost, find product market fit or, in some cases, not find product market fit and decide to abandon the idea and do something else. It really leveled the playing field for them and allowed a lot of startups to disrupt the status quo and ultimately bring value to customers.

Anil Murty :

But things have changed in the last few years, particularly as GPUs have really taken off thanks to all the demand from AI and machine learning workloads, and especially since OpenAI's ChatGPT moment.

Anil Murty :

What has happened is that we're sort of going back to the traditional ways. Given the scarcity in the availability of GPUs, particularly certain high-end models like the A100s and the H100s, and soon the GH200s and the B100s from NVIDIA, not only are these significantly more expensive, but in many cases they're just really hard to get.

Anil Murty :

And so if you're somebody that wants to utilize a fleet of H100s or, soon, the GH200 platform, not only do you need to commit to spending a lot of money, but in many cases, if you go to the bigger cloud providers today, you need to actually commit to a year or three years of spend with them before you can even get access to these GPUs.

Anil Murty :

So if you go back to the start of the cloud and compare that to where we are today, the whole premise of the cloud, which was to remove the upfront capital expenditure and give you the flexibility to scale up and scale down without taking on an ongoing expense, sort of goes away if you have to commit to a year of cloud expenditure in order to get access to a certain piece of hardware. And so this is sort of that inflection point, I think, where a solution like Akash really shines, and that's what we've been seeing with a lot of our customers and users as well.

Host:

Thanks for that, Anil. So, a follow-up question to that: how does Akash make GPUs more accessible? As you rightly pointed out, it's harder to get hands on H100s and the higher-end parts NVIDIA is coming out with now. That's my first question. And I remember Greg also mentioning Akash being suitable for small language models as well. So yeah, if you could expand on that.

Anil Murty :

Yeah, absolutely, would love to dig into that. So there's a few different things at play here. We kind of saw this coming a year, year and a half ago, which is why we really doubled down on this strategy, and it was driven by two or three things, if I can frame it that way. The first was that it was very clear there was going to be a huge amount of demand for GPU workloads because of the growth in the number of applications that were going to get built in the next few years. I think it became more and more clear after the ChatGPT moment, but it was clear to many people even before that, that there was going to be some point in time, whether in six months, one year, or two years, where this was going to happen. So that's number one. The second thing we realized was this: even though, when the ChatGPT moment happened, it almost seemed for a little bit, maybe a month or two, that OpenAI was going to be the only game in town, that they were going to basically suck all the oxygen out of everything else and everybody was going to be just building on OpenAI, that went back to history repeating itself. If you've been around in tech for long enough, or if you have read about technology history even if you haven't, what you see is that there have always been points in technology history where, even if a certain technology gets invented by a really big player and is initially only available through that specific player, over time there are enough movements in communities around the world that lead to open source solutions. Arguably the biggest example of that, historically speaking, is the Linux operating system. Way back in the day, in the 90s, Windows was obviously the most dominant operating system out there, and today, if you look at most server workloads, as well as a lot of consumer electronics and many other services you access through SaaS, all of them run Linux underneath, as a result of communities that build in the open and are able to come together and create something that, overall, creates a better world for people that are building. It was pretty clear in our heads that that was going to be the case even with AI.

Anil Murty :

If you look back at Akash, Akash has always had a significant portion of its code base open source, right from day one. But what we did approximately a year and a half ago is we decided to go 100% open source, and this was way before even the ChatGPT moment. And not only did we decide to go 100% open source, we also decided to go to an open development model. We essentially came up with this idea of building in the open, similar to what projects like the Kubernetes project do. They have the concept of special interest groups and working groups, where people are able to propose ideas, talk about them in the open as part of a community, vote on things, and then work together to implement the things that make sense from a community-driven project perspective. So this is the switch that we made about a year and a half ago, and we operate the same way to this day. Literally every single decision that gets made, gets made in the open. It's documented in our GitHub repository, and all of our code base sits there as well.

Anil Murty :

And then, sure enough, in the first few months after OpenAI released ChatGPT and that whole inflection point happened, you saw competing open source models being released for similar types of functions or capabilities as what OpenAI was releasing. Since then, which is about a year or a year and three months now, we have seen a whole bunch of new open source models get released. Hugging Face has been an amazing repository for all those models, and for everything from image generation to large language models to small language models and everything in between, you can now almost always find an equivalent open source version of a closed source model. So our strategy of being an open company aligns really well with that, and it's worked out really well for us. Now, taking those two things and marrying them with one of the questions you asked, which is how small language models and large language models fit in: essentially, things come together really nicely for us given that we have been a crypto-native company, or a crypto-native project, as well.

Anil Murty :

We obviously have a blockchain-based mechanism for matching supply with demand. Essentially, the way Akash works, for folks that are not familiar with it, is that we're a two-sided marketplace. On the one side you have supply, which is compute supply, whether it's regular compute or accelerated compute in the form of GPUs. All of this supply is available on the network in terms of individual providers, so a single provider can have a single server, they can have 10,000 servers, they can have 100,000 servers, any number. Each of these providers is an independent entity, and no single person owns the entire infrastructure. So, even though Overclock Labs is the company or organization that created Akash Network, we don't own all the infrastructure. We own a teeny, tiny portion of it. We're one of the providers on the network, and there are over 75 of those providers today.

Anil Murty :

And then on the other side of this are people that want to deploy applications onto that compute infrastructure, and the matching of these two sides happens through a blockchain. The reason we use a blockchain for that is, number one, it lets us do this in a very automated fashion: creating a smart contract between somebody that wants a certain resource and somebody that has that resource to give can be done very easily on a blockchain, using smart contracts or other programmatic means. So what we've implemented is a two-sided marketplace where you can get the best possible resource in terms of price and performance for the workload that you want to run. And, given that we have a crypto background, a good portion of our community consists of people that have had GPU mining equipment. If you look back to 2017, 2018, 2019, 2020: similar to how NVIDIA has seen a huge boost from AI workloads in the last one or two years, the prior inflection point NVIDIA had was from GPU mining. Some of the people that were around in the GPU space then would probably remember that. And so there's a whole bunch of GPU capacity sitting with miners even now, whether it was for Bitcoin mining or previously for Ethereum mining or others. A lot of these chains have either transitioned away from proof-of-work blockchains to proof of stake or, in the case of Bitcoin, it's getting more and more expensive with each halving to be able to mine Bitcoin. As a result, there is a lot of GPU capacity out there that people have already invested money into and would love to monetize. And while those GPUs may not be the latest and greatest, given that they've been around for four, five, six years, in many cases they still serve as a really good platform for doing inference.

Anil Murty :

So, while you may not be able to train the largest of the models on these older GPUs, many of them work really great for inference. For example, one of the most common GPUs that we get requests for inference on today is the RTX 4090, believe it or not, and what people have found is that the price-to-performance ratio of an RTX 4090 is really good when you're trying to do basic inference, whether it's running something like Llama as a natural language processing engine for language responses, or doing image generation using Stable Diffusion or any of the other image generation models out there. They work as a really good platform for that type of stuff. So that's where being able to match all of this supply from the crypto and mining communities with people that want to run small language models, or just do pure inference on models with fewer parameters, works great for us.

Anil Murty :

Now, when you think about the higher-end GPUs, which is primarily people that want to run models with tens of billions of parameters or do large-scale training, what we have found is that we are able to bring in crypto-driven incentives. We have the concept of a community pool within our protocol that has several million dollars available for us to deploy as community incentives, and so what we're able to do is source a lot of these high-end GPUs as well and offer them at a significantly competitive price relative to anybody else out there in the market. So, for example, if you were to go to akash.network/gpu right now, what you would find is that you can get an H100 with an SXM interface for as low as $1.50, which I believe is half or less than half of what you can get at many of the other places out there. So I hope that answers some of the questions you had.

Host:

Yeah, that definitely answers my questions, and it actually brings up more questions that I was thinking about while you were talking. So, I have used the Akash service in the past, and it's amazing: I hosted a blog on it, so everything is containerized, there are nice templates, and it's very easy to use. I was going to get to the ease of use for non-crypto-native users. That was about a year ago, and things might have improved even more since then. So, any improvements there in terms of how the GPU marketplace works? Because that's relatively new, right?

Anil Murty :

Yeah, great question. So the GPU marketplace, I mean, time flies, right? We actually launched the GPU marketplace in beta around May to June of last year, May 2023, and then GA'd it, I think, a month or two after that. So it's been GA for about six or seven months now, and, yeah, we're quickly coming up on almost a year since the beta.

Anil Murty :

Yeah, so, from the perspective of being able to use GPUs or request GPU resources on the network, we have implemented GPU support to match exactly the way regular CPU resources work. Any deployments that you did on regular, non-accelerated compute, whether that was a year or two years ago, you'll find that the deployment workflow is exactly the same with GPUs. You write this thing called a stack definition language file, or SDL file as we refer to it, which is effectively like a Docker Compose file, for the infrastructure nerds listening to this. What you do there is you basically say: hey, these are the services that I want to be running. A service could be a backend service, a frontend service, a machine learning workload, an inference app, whatever you like, and you can have multiple of these services specified inside that file. Then, for each of the services, you specify something called a compute profile, which basically says: this is the amount of resources I think the service is going to need in order to operate. The compute profile is typically something like: I need six CPUs, I need one GPU, I need a gigabyte of RAM, I need two gigabytes of storage. You specify all these things and then submit the job onto the network, and what you get back is a whole bunch of bids from various providers. Each bid typically consists of information about the provider: where is it located, what's been the uptime on the provider for the last seven days, what's the name of the provider, and then, of course, what is the cost, meaning what this provider is going to charge you for running the workload for one month. You get all these bids back, you go ahead and accept one of them, and the moment you do that, the workload gets deployed onto that specific provider. What you get back is an endpoint that gives you access into the running container instance, and if you expose certain ports, and one of those ports happens to be port 80 or 443, then you essentially have an HTTP interface into it as well. So the entire workflow is exactly the same as what it was with CPUs. Nothing's changed, so it should be totally familiar if you go try it.
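To make that workflow concrete, here is a minimal sketch of what an SDL file along those lines can look like. It follows the general shape of Akash's published SDL format, but the image name, resource sizes, GPU model, and price are illustrative placeholders, so treat it as a sketch to check against the current Akash docs rather than a copy-paste deployment:

```yaml
---
version: "2.0"

services:
  inference-app:                         # one of possibly several services
    image: example/llm-inference:latest  # hypothetical container image
    expose:
      - port: 80                         # exposing port 80 yields an HTTP endpoint
        as: 80
        to:
          - global: true

profiles:
  compute:
    inference-app:
      resources:
        cpu:
          units: 6                       # "I need six CPUs"
        memory:
          size: 1Gi                      # one gigabyte of RAM
        storage:
          size: 2Gi                      # two gigabytes of storage
        gpu:
          units: 1                       # one GPU; attributes can request a model
          attributes:
            vendor:
              nvidia:
                - model: rtx4090         # e.g. the RTX 4090 discussed above

  placement:
    anywhere:
      pricing:
        inference-app:
          denom: uakt                    # priced in AKT's micro-denomination
          amount: 10000                  # max price you are willing to pay

deployment:
  inference-app:
    anywhere:
      profile: inference-app
      count: 1
```

Submitting a file like this to the network is what triggers the bids Anil describes; accepting one of them creates the lease and deploys the container to that provider.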

Anil Murty :

The other aspect you asked about was how we make it easy for non-crypto people to access this, and that's a really good question, because obviously a big share of the AI workloads today are being built and deployed by folks that are not crypto-native, right? To that end, there are a few things ongoing within Akash and within Overclock Labs. First and foremost, as you probably know from past conversations and from following Akash, we have a fairly vibrant ecosystem and a fairly vibrant community. One aspect of our community is that there is a bunch of people actually building solutions on top of Akash. Similar to how, when AWS and Azure and all of these services took off, you had a bunch of people building monitoring solutions, building things like Heroku or Vercel, these kinds of things that utilize AWS compute underneath.

Anil Murty :

There is a bunch of teams building similar solutions that utilize Akash compute underneath. In fact, one of those teams was named CloudMOS, because they were built on Cosmos, or part of the Cosmos ecosystem, and they were primarily targeting Akash compute as the platform they would build on top of. That team was actually acquired by Overclock Labs about seven or eight months ago, and they're part of our team now. They built a client that takes our basic APIs and our CLI and implements essentially a UI on top of that to make it easy to deploy, and now that those folks are part of our team, we've rebranded it to console.akash.network.

Anil Murty :

If you go to console.akash.network, what you'll see is what looks like a simpler version of the AWS console, but specifically for Akash. So that's already there; you can check out console.akash.network and see what it looks like. What you will see in the next few months is us working on more curated workflows for AI. There's already a bunch of templates out there, but there will be even better curated workflows, and we're also potentially looking at offering a credit-card-based interface, not just a crypto- and wallet-driven interface. So that's one aspect of it. The second aspect is that there are other teams out there; there's a team called Spheron that has built a UI app that already has a credit-card-based interface that can be utilized for deployments. And then there's a third aspect, separate from the teams that are directly building on us.

Anil Murty :

We're also in partnership talks with certain Web2 companies.

Anil Murty :

These are companies that have built essentially AI inference platforms, right? They've built a UI and API layer that allows people to utilize open source models and abstracts away all the infrastructure components from that whole experience. So whether it's AWS underneath, or Azure underneath, or Akash underneath, all of that is abstracted away, and what you, as a data scientist or a machine learning engineer, get is an API or UI interface where you can just say: hey, this is the model I'd like to run, I'd like to run it fast, medium, or slow, and either run the model right now and give me the outputs, or give me a programmatic interface where I can request that the model be run with this dataset and these parameters so I can tune the model as well. So we're in several talks with Web2 companies that have built these kinds of platforms and are going to be using Akash compute, and you'll be hearing about them in the next few months.

Host:

Yeah, so it looks like you're fully realizing the SkyCloud concept with this, where you can define your compute parameters and it does what it does in the background. And, as you described, with the fast and slow options, it depends on the type of jobs you're running and the time of day, so you can have that price selectivity as well. It sounds fantastic.

Anil Murty :

Yep. The flexibility of Akash, I think, is that it not only lets you choose the kinds of compute resources and make the price-to-performance tradeoffs that are applicable to your specific application, but it also gives you the option of choosing to be as decentralized or as not decentralized as you want to be. Let's say you use Akash for a few months and you decide that three of the 75 providers are the ones you like the most and the only ones you want to deploy to: you can programmatically set it up so that you always default to those providers. Or, if you're a hardcore decentralization fan, you can choose a different provider every single day and build your application to do that. That flexibility in deciding what path you want to take is essentially what I think makes Akash really unique.
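As a rough sketch of what that provider pinning can look like: the placement profile in an SDL file can constrain which providers are allowed to bid, either by matching attributes that providers advertise or by requiring providers vetted by an audit authority you trust. This fragment follows the documented SDL placement fields and reuses the inference-app service from the earlier sketch, but the attribute key/value and the auditor address are made-up placeholders:

```yaml
profiles:
  placement:
    trusted-only:
      attributes:
        host: my-preferred-provider    # hypothetical attribute your chosen providers advertise
      signedBy:
        anyOf:
          - "akash1..."                # placeholder: address of an audit authority you trust
      pricing:
        inference-app:
          denom: uakt
          amount: 10000
```

The fully programmatic version of this, defaulting to three favorite providers or rotating daily, would live in whatever client logic you use to filter and accept bids, since the network returns bids from every provider that matches your placement constraints.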

Host:

That brings me to a bigger question. Akash was talking about the decentralized cloud before all this DePIN narrative, right? I had my interview with Greg almost a year, two years ago, and we were talking about these things back then. So where do you see the convergence of AI and crypto now, in the broader scheme of things?

Anil Murty :

Yeah, that's been a really hot topic for the last few months, hasn't it? You're absolutely right. Basically, what we have seen in at least the last several months is something we have been passionate about for several years now, Greg and Adam way longer than I have: this idea of decentralizing the compute infrastructure and the cloud, which in many ways is a public utility at this point. That has really become a narrative for this coming crypto cycle. And, as with all narratives, similar to how DeFi and NFTs and a few other things were pretty hot in the last crypto cycle and everybody wanted to jump on them, you now have a bunch of people trying to jump on this decentralized physical infrastructure, or DePIN, narrative, and a bunch of people trying to claim that they are, quote-unquote, decentralized compute marketplaces. What's been interesting to watch is that, in the absolute worst case, some of these projects don't have an actual product underneath, and they're just copying messages from projects like Akash and others that have been at this for several years now. That's the worst case scenario: they haven't really built anything, but they're talking about it. The best case scenario is projects that have legitimately built something, but haven't taken the effort to truly think about decentralization at the core. They may have gone and acquired compute from one or two or three sources and are offering that as a decentralized solution. That's not the true definition of decentralization; that's just the regular, good old approach of going and sourcing compute, only sourcing it from multiple sources yourself, right? I'm not saying it's a bad solution. It works, and it's better than the first one, which is claiming things when you don't really have anything, but it's also not really decentralized. What's also interesting about a lot of these solutions is that they're all closed source. They definitely don't open up their source code for others to look at, the way Akash has, but they don't even open up their metrics.

Anil Murty :

In the case of Akash, you can actually go to a web page called stats.akash.network. It shows you all the statistics of things happening on Akash every single second, every single minute, every time a block is created. You can see the total number of providers on the network. You can see the total amount of compute in terms of GPU, CPU, storage, and memory. You can see the total number of leases being created.

Anil Murty :

A lease is basically one workload deployed onto one provider, so it's like one application being deployed. And you can see the total amount of compute resources being used and the amount of money being spent on the network. All of these metrics are stored on the blockchain, so it's not something that we as Overclock Labs, or anybody in the community, can go and spoof or mess with or fake in any way, because anybody can query these parameters from the blockchain and prove us wrong if we tried to do that. So in that sense, I don't know of any other project within the compute DePIN space, other than Akash, that is fully open source, fully decentralized, and exposes all of its statistics on a blockchain for anyone to query. I don't know of any other compute DePIN project doing those three things, and if there is one, I would love to learn about it.

Host:

Yeah, well said. Even though I've interviewed some folks who are competing with you guys, in healthy competition, I haven't seen such clear statistics from anyone so far, so that's good to see. And as far as the convergence of AI and crypto goes, clearly there is one solid use case that Akash is building for, which is providing GPU accessibility, including at the inference level. With more efficient models coming, and inferencing getting better on commodity hardware, you can definitely see the utilization of GPUs going even higher. I'm currently looking at the stats and I do see a lot of utilization happening month over month, so that's good to see. Okay, one of my last questions, and I've started doing this with my speakers: what advice would you give to somebody who comes next on my show? This could be regarding something in the crypto sphere that you have learned so far.

Anil Murty :

Actually, just before I answer that, I realized I didn't answer the previous question completely, so I'll quickly answer that as well. I didn't quite touch on the AI-crypto narrative. I talked a lot about the new projects coming up and how Akash is potentially different from those, but really the AI-meets-crypto narrative, if you want to call it that, makes complete sense to me. One of the biggest things people talk about in the non-crypto world today with regards to AI is how AI is being controlled by a handful of companies, right? There is this huge outcry that a few companies have enough compute capacity, and are capable of acquiring a lot more compute capacity in the future, and these are the companies that are going to be able to train the best models, run the best models, and all of that. I think this is where crypto really makes sense to me, because it's the one way that we can build systems in the open, allow easy, programmatic aggregation of compute capacity, the way Akash is doing, and crowdsource not just the development of models but also the accessibility of models, as well as compute, in an open fashion. Doing this with crypto is a lot easier to make programmatic, and a lot easier to source from a community or crowdsource, than it is without crypto, and that's why that narrative makes complete sense to me.

Anil Murty :

Now, to answer your last question about what advice I might have for the next person that comes along: I think the biggest advice I would give, and this is coming from me as someone who was not in crypto before I started working on Akash and joined Overclock Labs, concerns how you think about a crypto project.

Anil Murty :

I know there's a lot of people out there that build crypto projects with the pure intention of shilling a token or making a quick buck and calling it a day.

Anil Murty :

I think it'd be really nice to see more people think about the real utility of crypto and how it can be applied specifically to areas of our life that require decentralization, or that need an incentive mechanism to make them more like a public utility, without actually making them a public utility in the sense of a fixed cost or a fixed price with no competition. What crypto is really good at is sustaining innovation while leveling the playing field and giving people access to technology they otherwise would not have access to, while at the same time allowing entrepreneurs to generate wealth and retain or capture a certain portion of the value they create, without getting completely commoditized. So my overall advice is to think of solutions that could really only be solved with crypto, as opposed to being solved by a Web2 solution, rather than doing crypto just for the sake of it.

Host:

Great. Thanks, Anil, it was great chatting with you. For the folks listening, where can they find you?

Anil Murty :

Yeah, so you can find Akash at akash.network on the web. I'm on Twitter; my handle is _Anil_Murty_. And I'm also on LinkedIn and the usual places you'll find someone.

Host:

Awesome. Thank you.

Anil Murty :

Thank you.

Chapter Markers

Cloud Computing Evolution and Akash Network
Cloud Accessibility and GPU Challenges
Open Source Strategy in AI Development
Akash Network
Decentralized AI Cloud Evolution
Networking With Anil and Akash