#34. Decentralized Serverless Compute.

November 02, 2023 Season 4 Episode 35

Tom Trowbridge, a well-known serial entrepreneur and co-founder of Fluence Labs, talks about why the world deserves a better decentralized serverless compute platform.

Get a Tangem Hardware Wallet:
Use discount code "BUIDL" to get 10% off your online order.

Tangem Wallet is a Swiss-based hardware wallet company. Learn more

Fluence is hiring.
Check jobs at Fluence:

Support the Show.

Buidl +
Help us continue making great content for listeners everywhere.
Starting at $3/month


Tom: 0:00
Fluence Labs is a decentralized compute platform. It's a decentralized serverless compute platform, and what that basically means is that all of the calculations, all of the computation that runs behind any type of analysis or application, can be run on Fluence and can be run in a serverless way, which basically means you're only paying for the cycles that you use. And serverless, to be clear, is the fastest growing component of the cloud ecosystem. It's about a $9 billion market now, projected to grow to $55 billion by 2030.

Host: 1:02
Hey there, I'm excited to interview Tom. He's a well-known technologist, a co-founder of Hedera, a serial entrepreneur, and he's here today to talk about his new venture. Welcome, Tom.

Tom: 1:16
Thank you. It's great to be here. I appreciate being on.

Host: 1:19
Awesome, Tom. Could you start by describing some of the life experiences and personal experiences that led you into computing as a field?

Tom: 1:34
Yeah, listen, I've been interested in technology pretty much since right after college. I started out in technology, but on the banking side. That was the telecom deregulation of the United States, where the rules changed and allowed something called CLECs, competitive local exchange carriers, to unbundle the telecommunications infrastructure that the incumbents had, and so there was a huge financing boom around that. Around the same time, satellite technology had matured to the level that satellites were first being used for, let's say, commercial communications and commercial usage, so there were satellite networks being launched and different types of telecommunications companies. The Internet, in '96, was also taking off. Broadband was being put in the ground, cables were being laid across the ocean, people were digging up streets to provide DSL, which is basically broadband over copper wires, kind of an interim technology, if you will. So from right after college I was very focused on technology, on the telecommunications and connectivity side of it all, and that has remained a focus of mine ever since. That was really my first start. Then, fast forward through a variety of different jobs related to technology one way or another, or investing, I had the opportunity to join Mance and Leemon and help found Hedera Hashgraph, which is a layer-one blockchain. Technically it's not a blockchain, it's basically a directed graph, but it's effectively the same thing as a blockchain.
It serves the same purpose and the same use, and it has been a big success, doing well more transactions than any other layer one out there. That was my real introduction to the industry, in 2016, and I was in that for a couple of years before moving off and joining Fluence as a founder. So Fluence Labs is a decentralized compute platform and, to get a little more technical, it's a decentralized serverless compute platform. What that basically means is that all of the calculations, all of the computation that runs behind any type of analysis or application, can be run on Fluence and can be run in a serverless way, which basically means you're only paying for the cycles that you use. And serverless, to be clear, is the fastest-growing component of the cloud ecosystem: it's about a $9 billion market now, projected to grow to $55 billion by 2030.
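As a quick sanity check on those market figures (a rough, illustrative calculation, not from the episode), growing from $9 billion to $55 billion by 2030 implies roughly a 30% compound annual growth rate:

```python
# Rough sanity check of the quoted serverless market figures:
# $9B (circa 2023) growing to $55B (2030), i.e. over about 7 years.
start_usd_bn = 9.0
end_usd_bn = 55.0
years = 7

# Compound annual growth rate implied by the two endpoints
cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 29.5% per year
```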

Tom: 4:37
Brief interruption to tell you this. We have partnered with Tangem Wallet, a Swiss-based hardware wallet company. You've all heard the phrase "not your keys, not your coins." It wasn't until FTX and other exchanges collapsed that we understood how important self-custody is for your crypto. That's where hardware wallets come in. Tangem is an amazing hardware wallet that makes storing your crypto super easy with a completely new and secure approach. Tangem is a credit-card-style wallet with a chip that works with your mobile device. The amazing thing is that the private keys are generated on the chip itself; the Tangem app doesn't store any assets, everything is on the hardware wallet, and it interacts with the app via NFC, which is the same tech as Apple Pay. Go get yourself one of these bad boys at a 10% discount when you use the promo code BUIDL. That's B-U-I-D-L, BUIDL.

Tom: 5:38
Back to the show. Companies love the ability to... instead of renting a server... let me back up. There are three ways to do compute. You can have servers in your own facility; you can rent servers at a data center, at Amazon; or you can say, hey, guess what, I only want a server used when I need it. That last bit is called serverless, and it's pretty interesting to companies, because that way they don't have to pay for dedicated resources all the time when they don't often need them. Think of lots of different types of analysis where companies need to do computation, but where it's not a consistent, predictable flow of computation; it spikes and drops, it ebbs and flows. When you have that type of bursty computation, that is what serverless makes sense for. And the faster the world becomes, the more on-demand the world becomes, the more unpredictable the world becomes, the more relevant serverless becomes. Fluence allows that to happen in a decentralized way, which means you're not relying on an Amazon or an Azure, and that is critically important, and I can go into why. But first let's define what decentralized means, because we also say, quite clearly, that we are a platform and we are not a marketplace. That's very important, because a marketplace of compute just means there's a bunch of options to choose from. The reason we call Fluence a platform and not a marketplace is that a marketplace does not mean decentralized. I can go to the web right now and choose Google, Amazon, or Azure; that doesn't make it decentralized, I'm just picking a provider.
So putting an interface in front of a developer and saying, guess what, here's a marketplace, pick a provider: that does not make anything decentralized at all. So what leads to decentralized? What we think is decentralized is when you have resiliency, an actual platform with fault tolerance, where the network provides that resiliency. The way the Fluence platform works is you select the criteria you want for your provider: you select the maximum price, maybe geography, maybe you want it to be carbon neutral, maybe you have other criteria, and the platform takes care of allocating the compute job to a provider. And if that provider goes down, it fails over to another provider. That is critical, and that's the hard bit. The easy bit is putting a bunch of providers behind a portal and having a client choose one or the other based on price or whatever. To me, that is not decentralized; that is not adding a fraction of the value of a proper decentralized compute system. And the reason ours is important is that it's very hard to stop, and the resiliency is great. The math is: you don't need a triple-nine data center. If you have three data centers that are each roughly 90% available, with redundancy between them so they fail over to each other, you're at triple nines just by the math. So you end up having an SLA at both the network level and the provider level, and it allows a number of different levels of data centers, as a network, to effectively compete with the most resilient, expensive data centers on the planet, which we think is really important.
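The failover math being gestured at here can be sketched with a toy model. This assumes provider failures are independent and failover is instant, which real systems only approximate:

```python
def combined_availability(availabilities):
    """Probability that at least one provider is up at any moment,
    assuming independent failures and instant failover between them."""
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1.0 - a)  # all providers down simultaneously
    return 1.0 - p_all_down

# Three providers, each ~90% available, reach "triple nines" together:
print(combined_availability([0.9, 0.9, 0.9]))  # ~0.999
# At 80% each, the combined figure is ~0.992 (still far better than any one)
print(combined_availability([0.8, 0.8, 0.8]))  # ~0.992
```

The design choice the model illustrates: network-level redundancy lets cheap, individually unreliable data centers match the SLA of a premium facility.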

Host: 9:44
Fascinating. So you're saying that the resilience is built into the protocol and, unlike some other players in the space like Akash, it's not just connecting you to a marketplace. How should I say it... there was an interesting paper from Berkeley called Sky Computing. I see this as very similar to that sort of concept, where your job or your workload is spread across different clouds by the system.

Tom: 10:15
Yeah, 100%. We think this is a multi-cloud solution; that's another term that explains it a little bit. The other point, to build on what you were saying, versus just a marketplace of different bare-metal providers, is that we have a software stack, which is what allows this resiliency, and that software stack is critical in making it easy for people to use, build, and deploy on Fluence. So we're effectively working to build an alternative open cloud platform; that's what this really is. And if you think about Ethereum as a platform equivalent to banks and finance, we think Fluence is a platform equivalent to the cloud, and we get there step by step, but I can give you more of the roadmap. Importantly, it's not just that there are multiple providers; it's how the resiliency works between them, and then the software stack on top of it, which is what allows applications to be built easily and resiliently.

Host: 11:30
Yeah, it seems like the whole industry, if you look at the application layer, companies like Snowflake, Databricks, all that, they are becoming multi-platform applications, so it's very interesting that you guys are in the space. And from what I hear, it looks like your focus is first the enterprise space, which is quite fascinating; I don't know any player that is following that sort of strategy. Could you talk about some of the challenges? I know there are a lot of technical challenges involved in this. How are you working towards overcoming those challenges?

Tom: 12:18
Well, the first thing I'd say, just to clarify for everybody, is that Fluence is off-chain compute, not on-chain compute. On-chain compute is smart contracts, right, and that's a terrific, perfect use case for financial transactions. But for everything else, smart contracts are slow, huge consensus overheads are required, scalability is an issue, et cetera. So let's just make sure that point is clear. Something like Filecoin is off-chain storage, but it's validated and incentivized on-chain; that's the same concept as Fluence, where the computation happens off-chain but everything else happens on-chain. So, to get to your question on the technical challenges: one of the technical challenges has been validation and verification, and we've come up with a couple of different ways to have proofs, to demonstrate, to have provability that computation happened. We can offer customers that not only is it far cheaper than the cloud, but you can have proofs that the compute actually happened, and happened at a particular time, et cetera. There's a wide range of regulatory use cases that are relevant for that, and there are a number of ways in which I think that's going to become more relevant, particularly in AI. You can prove a model was run on the correct data; you can prove the right question was served; you can prove all these different steps, which is pretty important. So I think that is another critical piece which, again, you can't do if you're just a marketplace, because then someone else is running that compute without the network benefit, without the software benefit that you have, so that computation may or may not happen.
But then trust is required that that entity is running it, and provability becomes a bigger issue.

Host: 14:28
Yeah, that is amazing. Wow. So provability of computation; that seems like a completely new paradigm. I have two questions regarding that. First, will there be any performance trade-offs with generating these proofs on each and every computation, or on the computations the client or customer wants proofs on? That is one thing. And second... actually, I'll come back to the second question, I kind of lost my chain of thought there. So, yeah, are there any performance trade-offs?

Tom: 15:15
Not as far as I know. But importantly, the proofs differ based on the type of computation being performed and the type of process happening. What it doesn't mean is that it's like a blockchain, where every single transaction is auditable and discoverable; it does mean that there are a variety of different methods depending on what happens. There are ZK kinds of opportunities for some types of computation proofs; there's also sampling. There are a number of different types of provability that we have, depending on the kind of process of that computation and how it's being run.
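As one hedged illustration of the sampling idea (a generic sketch, not Fluence's actual protocol or API): a verifier can re-execute a random subset of completed jobs and compare result hashes against what the provider reported, so a cheating provider is caught with high probability without re-running everything.

```python
import hashlib
import random

def result_hash(value):
    """Hash of a computation result, used as a compact commitment."""
    return hashlib.sha256(repr(value).encode()).hexdigest()

def run_job(task):
    # Stand-in for the computation being sold (toy example: squaring)
    return task * task

def verify_by_sampling(tasks, reported_hashes, sample_size, seed=0):
    """Re-execute a random sample of tasks and compare result hashes."""
    rng = random.Random(seed)
    indices = rng.sample(range(len(tasks)), sample_size)
    return all(
        result_hash(run_job(tasks[i])) == reported_hashes[i]
        for i in indices
    )

tasks = list(range(100))
reported = [result_hash(run_job(t)) for t in tasks]  # honest provider
print(verify_by_sampling(tasks, reported, sample_size=10))  # True

reported[7] = result_hash("forged")  # provider tampers with one result
# Re-checking every task is guaranteed to catch it; a sample of 10 catches
# a single forgery with ~10% probability per audit, compounding over time.
print(verify_by_sampling(tasks, reported, sample_size=100))  # False
```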
Host: 16:00
So where are you guys at in the journey at Fluence Labs at the moment?

Tom: 16:06
Well, where we are in the journey right now: we have raised about 15 million dollars, and we're backed by some really top-notch investors, which is terrific. We're getting very close to launching mainnet; we plan to launch, I think, in early 2024. That is our real target, and we're putting the final strokes on the minimum viable launch, the minimum set of features that will make it usable, and then getting it out. This is a project the team has been building since 2017, so it's been a long time coming, but we're getting very close and excited to get this out early next year.

Host: 16:56
Wonderful. Can you touch a little bit on the architecture of the platform? I was just skimming through it, and it seems like there is a peer-to-peer layer, and IPFS also plays a role in there. How do all the different components work?

Tom: 17:15
Well, there are a couple of different pieces to it. First, there's a marketplace that underlies it all, and you'll notice I was critical of marketplaces before, but there is a marketplace below all of this, which I think is important. Then we have basically cloud-equivalent parts of the stack. The marketplace is probably what you're thinking of in terms of the peer-to-peer element; we just call it distributed, or decentralized. We have a language called Marine, which is effectively an AWS Lambda equivalent, and then we have a language called Aqua, which is sort of equivalent to AWS Step Functions: it allows composition, takes care of the coordination of applications, and abstracts out a lot of the difficult work in building natively decentralized applications. And then all the tools like load balancing, routing, scaling, orchestration, and deployment live in a sub-category of Aqua. That's why this has taken a while: we're building these different elements that all interact with each other, so when one thing changes, you have to make those changes throughout everything. But that's also why you get the robust features and attributes I was referencing earlier.

Host: 18:51
You mentioned the blockchain aspect, so my assumption is: is there any aspect of tokenization? Is it even required in this particular case, or in certain use cases?

Tom: 19:04
It is required, mainly for trust. The Fluence coin, FLT, is what hosts will need to stake, or have staked to their servers, in order for customers to know they can be trusted. There's no brand here, right; this is a network you're going to. The way the architecture is set up is that if you have a compute job that, say, is going to cost a million dollars, that host will need to stake multiple millions in order to prove they will actually complete that job. Importantly, the stake is done in the Fluence coin, and it can be done by investors on behalf of hosts, so the hosts don't have to do it themselves. The other relevant part is that it's also interesting from a network-size perspective: if you can project, or predict, or have a view as to the volume of compute on the network, you can model out the amount of stake that will be required to basically secure that amount of compute, and that, we think, is a pretty powerful and important tool to have. So the coin is a critical piece of the ecosystem, for sure. The coin is also used to reward the hosts for adding their compute to the network. On day one we may not have what we call useful compute on the network, but there will be a proof of compute, so the compute providers that join will prove they have compute by running calculations, and they will be paid in Fluence coin. As soon as compute is offered on the network, providers that are actually providing useful compute will earn actual revenue, stablecoin revenue, from the developers, from the companies that are using the compute. The compute providers set the price; if it's competitive and the algorithm chooses them, they earn that stablecoin, which is terrific; it should cover their costs and should be a good business.
They are rewarded on top of that with Fluence coin as well, more so than if they were just providing the pure compute. So we are specifically designing this network for real usage, and we also want a stable revenue stream for compute providers, a fee revenue stream, so they don't have to take the fee in a volatile token and absorb the crypto volatility if they don't want to, but they also have crypto upside by earning the Fluence coin, which I think is attractive. From talking to miners and other compute or storage providers on networks, many have had challenges from living only in a crypto ecosystem, which is great when the market's up; when the market's down and your costs are in fiat but your revenue is in a token that's gone down a lot, you have a tough time. So we're highly aware of that and are trying to take those learnings into the design of our network and platform.
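A toy model of the economics being described (illustrative only; the stake multiple, FLT price, and revenue split below are assumptions, not Fluence's published parameters):

```python
def required_stake(projected_compute_volume_usd, stake_multiple):
    """If every job must be backed by stake_multiple x its value in FLT,
    the stake needed to secure the network scales with compute volume."""
    return projected_compute_volume_usd * stake_multiple

def provider_revenue(fee_usd, flt_reward, flt_price_usd):
    """Stablecoin fees give a stable base; FLT rewards add crypto upside."""
    return fee_usd + flt_reward * flt_price_usd

# A $1M job backed at an assumed 3x requires $3M of staked FLT:
print(required_stake(1_000_000, 3))           # 3000000
# $10k in stable fees plus 5,000 FLT at an assumed $0.50 each:
print(provider_revenue(10_000, 5_000, 0.50))  # 12500.0
```

The point of the split: the fee leg keeps a provider's business viable in a down market, while the token leg preserves upside.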

Host: 22:35
Who, in this case, can qualify as a provider, the equivalent of a validator in, say, blockchain-only networks? Will it be dependent on the type of job, as you mentioned earlier? Suppose it's a million-dollar job they're assigned; they have to have multiples of that as stake. So will there be different tiers of providers on this sort of network?

Tom: 23:03
Listen, we expect this to be institutional-level compute, so we do not think this is people with a laptop joining because the laptop isn't being used. Now, someone may try to do it, but much like Bitcoin: you can hook your laptop up to the Bitcoin network, but it's not going to be very productive for you. So we don't think that is the type of service the customers will be interested in. We're talking primarily to people that are running data centers and have excess compute capacity. These may not be the Amazon or Google data centers, but there's a huge number of data centers out there that have unused or less-used CPU capacity. They're happy to have a way to monetize it, because they don't have a front end, effectively, to be able to do that, and they certainly don't have a serverless compute offering, which is what Fluence offers. Even if they are selling some services and renting servers to people, they haven't set up a serverless compute offering; that's a complex endeavor. So we're giving them basically another product that they can offer.

Host: 24:25
Yeah, my next question was around use cases, and that kind of clarifies it. You mentioned in the past that you guys are not focusing on GPUs as such, and I'm trying to remember the reason behind it. Could you elaborate a bit more? Maybe it's to start with CPUs; do you have any plans in the future to go to other hardware, like GPUs?

Tom: 24:54
Well, yeah, listen, I think that right now almost all the compute that takes place in the world is on CPUs. Now, GPU usage is growing straight up, and depending on your view of how quickly AI takes over the world, you may see GPUs being dominant at some point in the future. But we know that's not next year, and we know it's not the next five years; it will take time. CPUs are what everything is actually running on now, and that is not going to go away. It may not grow as quickly; the serverless market is going from nine to fifty-five billion, people predict, over the next five to seven years, and the GPU market is going to keep growing as well. But that is just fine for us. The point here is that the architecture we built applies equally to GPUs. Once we launch, our next phase will be designing with GPU infrastructure in mind, and the hard work has been done. It won't happen overnight to make this work easily for GPUs, but the hard stuff is behind us, so that will be a phase two. And I actually think it's okay, because GPUs are mainly being used for model training, which is terrific. Our use case, we think, as it relates to AI, is in the data pipelines, the cleaning of the data before it gets into models. I'm very happy to partner with the groups that are focused just on training the models; we can be the part that cleans the data beforehand, and the part afterwards that proves the model was run on the correct data, et cetera. So there are plenty of roles for us to play here. We don't have to compete with everybody right out of the gate.

Host: 26:54
Yeah, and as you mentioned, the total addressable market right now... I mean, of course, GPU is increasing, but then you have this entire market which is CPU-bound, and not all tasks can be parallelized the way they are on GPUs, right? Ray-tracing tasks, matrix-multiplication tasks, okay, there's growth in there, but there is also this huge market of CPU compute that needs to be cracked open. So I'm glad you guys have your mind set on that market initially. Wonderful. I was also looking at your peer programming language, Aqua. Could you talk a little more about it? How does it help, say, developers who want to get into this new wave of decentralized, peer-to-peer, serverless infrastructure?

Tom: 27:53
Well, first of all, I'd say that what we've built is very suitable for enterprises. I'm not sure enterprises will be our first customers. We're having some conversations, no doubt, but I think Web3 natives understand what we're doing even better than enterprises do. The scale of what we can offer is appropriate for enterprises, and we have discussions with them where we don't even mention the word blockchain or crypto; they just like the verifiability and the low cost. So with our attributes we don't even have to mention that; how we get there isn't really relevant. But I don't want to say right now that our first customers will be enterprise, because, as I know from Hedera, that's a long road. What I'd say with relation to Aqua is that, yes, it's another language; everyone asks why you'd build a language. But there's no language out there which abstracts away the difficulty of native peer-to-peer application creation. For example, you have to write peer lookup, peer discovery, and peer failover every time you're creating an application. Aqua abstracts it all out, so you just write peer lookup or peer discovery in a line and there you are. It massively simplifies the complexity that is a requirement of distributed applications, and we think that is very valuable. As people move to more truly decentralized applications, the ability to write them quickly, without having to keep recreating the complicated backends, is a huge advantage. That will result in more focus on the front ends, more focus on the use case, more focus on the interesting parts of the applications, not the frustrating backends that keep getting rebuilt again and again.
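To make the point concrete, here is a hypothetical sketch in plain Python (not Aqua syntax, and not Fluence's API) of the boilerplate being described: discovering peers and failing over between them by hand, which is what a language like Aqua is said to collapse into a single line.

```python
def discover_peers():
    # Stand-in for a real peer-discovery mechanism (e.g. a DHT lookup)
    return ["peer-a", "peer-b", "peer-c"]

def call_peer(peer, request):
    # Stand-in for a network call; here peer-a is simulated as being down
    if peer == "peer-a":
        raise ConnectionError(f"{peer} is unreachable")
    return f"{peer} handled {request!r}"

def run_with_failover(request):
    """Try each discovered peer in turn until one succeeds."""
    last_error = None
    for peer in discover_peers():
        try:
            return call_peer(peer, request)
        except ConnectionError as err:
            last_error = err  # fall through to the next peer
    raise RuntimeError("all peers failed") from last_error

print(run_with_failover("compute job"))  # peer-b handled 'compute job'
```

Every distributed application needs some version of this loop (plus retries, timeouts, and health checks); abstracting it into the language is the claimed advantage.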

Host: 29:49
I also had a glimpse at Aqua being deadlock-free. I mean, that's pretty significant. How do you achieve that? Because deadlocks, race conditions, all that, with multi-threading and distributed networks, that's like the biggest caveat right now in the space, and people have a tough time even debugging these sorts of things with multiple cores on a single chip; the problem is becoming more and more prevalent. So that is quite interesting. It says pi-calculus semantics; are you using a different formal methodology to achieve that?

Tom: 30:38
I give all the credit for that to Dmitry, our CTO and the real architect behind all of this technology; he has really thought it through, designed it, and built it. So he has all the credit for that, and this is part of what he's been envisioning and building bit by bit over the past six-plus years. It would be worthwhile having a whole conversation with him on every different element he chose to include and why, but I'll leave that for him to go into depth on in an upcoming episode. I guess there are two things I'd like to say at the end about why Fluence, beyond just your question. One is negative, one is positive. The negative is that if, whether through Fluence or something else, we don't have a decentralized compute opportunity and option, we end up with three companies effectively controlling the web. All of Web3, except the smart contracts, the finance, still runs primarily on Amazon, or Google, or Microsoft, and that is not a durable setup. The other issue is that large companies inevitably have real government relationships and even, potentially, control, and this dates back to the railroads, right; it goes all the way back. It's just the nature of commerce and governments that they get close: the bigger the companies are, the more they have to relate to government. So we need a resilient, decentralized system just to help ensure that we are not subject to whatever type of government bias happens to come along at any particular time. But the opportunity here as well is that those ecosystems can only innovate so fast.
I wouldn't put my money on any one company trying to beat Amazon or Google. But what Fluence does is create an open ecosystem for developers to add useful code, useful modules, useful programs, and they can then be compensated for that, and we think that leads to a tremendous amount of innovation. The only way you can actually out-compete any of these top companies in the world is if you have the whole global developer community incentivized to work together to do that, each driven by their own incentives. And that, I think, is also our only chance of building the next step function up in interesting, compelling products: enabling that global developer ecosystem to both profit from it and have it made easy, by abstracting away the complexities of the backends so everyone can focus on the innovative front ends and build on each other. I can talk about that in a lot more detail, but those are the two critical pieces of Fluence. I would love it to be Fluence, but if it's not Fluence and someone else is able to do that, it's equally good for the world, equally good for humanity. We're just, I think, the people furthest along at this point in achieving the platform that allows that to happen.

Host: 34:05
Well said, Tom. This is definitely a very noble cause, and I would love, through this channel, to get a lot of people involved, a lot of developers, especially in Southeast Asia. So where can these folks find you guys? Where can they go to discover the community and be part of this new paradigm in computing?

Tom: 34:28
Well, I certainly appreciate that. You can find us on Twitter; you can find us on Telegram, Discord, you name it. I guess the easiest ways to find us: the website is fluence.network, Discord is fluence.chat, and Twitter is @fluence_project. Those are probably the easiest ones to remember and to join, and you can definitely join our Telegram channel, where we provide regular updates and hold community calls roughly once a month, giving updates on where the build is and what our plans are.

Host: 35:14
Wonderful. Thank you, Tom. It was a pleasure having you, and I would love to have you on again in the future. Really looking forward to this one.

Tom: 35:27
Thank you. Your viewers aren't going to be able to see this, but I think you're going to appreciate the Fluence t-shirt. I don't know if you've seen this before, but can you see it?

Host: 35:34
Yeah, I see it, it's BUIDL. Yes, that is wonderful. It's Fluence in BUIDL.

Tom: 35:40
That's it, Fluence in BUIDL.

Host: 35:44
Yes, I do see it. Awesome. I want that t-shirt.

Tom: 35:48
We got it. We got it, we're doing, we're sort of toe branding. Who knew?

Host: 35:52
Yeah, wonderful. Thank you, Tom.

Decentralized Compute Platform Fluence Labs
Technical Challenges in Off-Chain Computing
Fluence Coin and Tokenization in Network
Fluence Network