Transcript: Generative AI at Scale — Who Pays the Compute Bill?
In Part 3 of Render Network’s biweekly series on AI and decentralized compute, leaders from Render Network, Scrypted Inc., THINK, and Manifest AIs tackled one of the most urgent questions in tech: who pays the compute bill as generative AI scales?
The conversation explored the economics behind AI, covering payment rails, tokenized models, real-time bidding, the cloud cost crisis, decentralized infrastructure, hybrid workflows, and the race to democratize access to GPUs.
If you missed the first two parts, catch up here: Decentralized Compute, AI, and Privacy and Beyond the Model: Decentralization, AI, and the Trust Layer.
What follows is the full transcript from that discussion.
Player1Taco (00:00:00):
And
Edward Katzin (00:00:01):
Hello? Hello
Player1Taco (00:00:02):
Eric, I just sent him a text, so I think he’s connecting. Okay. We can give this the two minutes.
Sunny Osahn (00:00:11):
Wait. Yeah, let’s do that. Another two minutes then let’s just hang quiet for a second.
Paul Roales (00:00:18):
Yeah.
Mikey Anderson (00:00:19):
Me and Tim Cotten were going to freestyle rap during that time. Is that acceptable or no? 100%
Player1Taco (00:00:24):
Ah, right. Only Travis. That’s right. I think the only thing I’ve got, guys, is Hamilton, so
Tim Cotten (00:00:31):
I’m sorry. We’re okay with that. I does the best dude. Oh, Finn, son of Miranda and up stuck then forgotten in the middle of a forgotten spot. The Caribbean Providence impoverished.
Sunny Osahn (00:00:43):
Yeah, so Tim, please don’t do that again. I’m joking, that was exceptional. I will definitely call upon your hidden talent another time. But what we will do, guys, I think we’ll get started now. Eric will join us when he’s here, but the Manifest team is here. So I think what we can do is I’m just going to bring Silvia up as a co-host and then we will rock and roll. There we go, I have invited Silvia. So guys, thank you everybody for joining this amazing Spaces. My name is Sunny and I’m from the Render Network Foundation. The Render Network Foundation facilitates the strategic vision for the Render Network. Essentially we do various marketing, partnership, and community initiatives, and we engage a range of stakeholders, from artists and creators to GPU node operators, developers, and ecosystem partners. And this is Taco, our co-host today.
Player1Taco (00:02:00):
Gm. Gm. I am Player1Taco, affectionately known as just Taco. I am the CDO of Manifest, but this is one of those things: I lead a lot of our partner integrations and our front-facing stuff. And so anytime I get to co-host a Space outside of running daily show stuff, I get to just sort of sit back and listen and learn and work with an amazing team of people to build out really good questions, because, like Render, providing tools and education for builders is sort of what Manifest is about. So we’re going to be getting into a deeper dive on some of these topics so that people can learn a little bit more than just the scratch-the-surface stuff.
Sunny Osahn (00:02:47):
Yeah, yeah, exactly. That’s it. So this is our third AI-related Space and it’s cool. We’ve had some amazing people join, and some of those amazing people are here today as well. And speaking of our guests, I think I’d like each one of you to tell us who you are, what organization you are representing, and what it is you do. So let’s start with Tim, because you are the closest person to me in this list that I can see.
Tim Cotten (00:03:19):
That’s easy then. Hi everyone, my name is Tim Cotten. You can follow me at Cotten io, name is spelled with an E, and I’m the founder of Scrypted and the upcoming Scrypted Network. Our North Star is creating evolving, self-improving artificial intelligence, and our key intersection with Web3 is AI agents. We care very much about AI agents, and we’ve helped deploy agents using our technology in the Virtuals ecosystem, for instance, as well as the Eliza ecosystem. And we’re building out a consumer-facing product right now using AI called Dula, where anyone can go and, with just a couple of clicks, make extraordinary viral content instead of having to write crazy prompts. And so to me, this discussion, generative AI at scale, is very near and dear, because we’re partnering up with everyone who we can because the compute needs are insane.
Sunny Osahn (00:04:23):
Awesome. Thank you for that intro Tim. Let’s have Mikey next.
Mike Anderson (00:04:28):
Hey, what’s up? I’m Mike Anderson. I’ve been building in AI since, I think, at least 2016. I’ve seen the whole thing, from "hey, we’ve got optimization algorithms" to, I was there when the United Nations and Nike and Toyota first moved from data science in a workbench to in production. I’ve seen this whole wave of AI come, and really understand that the internet is designed for webpages and not smart applications. And so at THINK Agents, what we’ve done is we’ve standardized the way that you define an agent with an on-chain agent. And we’ve got a browser now that’s just starting to go into alpha with our builders community, where you have both a builder, so we use LangChain and Flowise and LlamaIndex together in a builder, and then we’ve got a browser that allows you to actually use those agents in the context of a webpage. The agents have access to it, and they also have access to all your Chrome extensions so that they can have their own wallet. So what we’ve essentially done is created the place where your agent can actually exist and work. And then the next couple of months are going to be about integrating all of our partners. And so we actually have Sway, who’s on here today, who has actually been downloading and using Render and the different generative image tools to get the roadmap in place for integrating that into the system.
Sunny Osahn (00:05:54):
Amazing. Thank you so much, Mikey. Edward,
Edward Katzin (00:05:59):
Sunny, thank you so much, and it’s a pleasure to be on the panel with everyone here today. I’m Edward Katzin, CEO and founder of Jember, which is working very closely with Manifest and also with Render. What we’re bringing to market is, I’m going to steal the branding that Taco gave us, which is: we focus on the boring stuff. So you’ve got all these amazing headlines from the frontier models about how they’re building the next major generalist model and all the compute that they’re enabling for that. We’re in the bowels, and the bowels we’re focusing on is how do we create autonomous agent workflows that can solve core business problems. And where we started is ensuring financial compliance across Web2 and Web3 and automating the financial operations workflows for that. What we’ve run into is, and Mikey, you summed it up perfectly, we want to engage with as many partners as possible and enable compute as much as possible.
(00:06:54):
And the problem that the Jember team ran into, and to be frank, Sunny, we even ran into it when we were trying to integrate with the Render Network, is: while we can leverage amazing tools for vibe coding, we’re not able to leverage similar tools for "vibe infrastructure," so to speak. So the thing that the Jember team had built out is a set of autonomous agent workflows that can solve for core financial compliance, but also for core infrastructure compliance, and cross-cloud infrastructure management is what we’ve been doing. And what we’re really excited about, and a topic we’re looking forward to exploring with everyone on this panel, is, in the world of generative AI, one, which entity is paying for the compute? But then the vision that we see, and I would love to hear Mikey and others and Paul chime in on this, is that autonomous agents are going to be here definitely within the next five years, if not much sooner. And how are these agents buying and paying for compute? That’s the infrastructure that we want to build and make available for all our partners.
Sunny Osahn (00:07:55):
Okay, excellent. That doesn’t sound boring to me at all. Let’s have Eric next.
Eric Bravick (00:08:04):
Hey, so guys, I’m in the middle of an internet outage. So I’m on wifi now, or sorry, wireless now. What was the question? I kind of missed it.
Sunny Osahn (00:08:15):
Oh, okay. So literally, who are you and who is it you’re representing today?
Eric Bravick (00:08:21):
Oh, great. So I’m Eric Bravick, I’m the CEO and founder of the Lifted Initiative. We built the Manifest Network, and we’re focused on fundamental computation issues and all that boring layer one stuff that Edward already indicated.
Sunny Osahn (00:08:42):
Awesome, thank you so much for that. And we also have Paul. Paul, how are you?
Paul Roales (00:08:48):
Hey, good, how are you?
Sunny Osahn (00:08:50):
I’m good, thanks.
Paul Roales (00:08:51):
Yeah, no, it’s good to be here. I’m with RenderLabs. We’re part of the Render ecosystem that helps build the last-mile connections, the tools to help the Render ecosystem be as useful as possible to everyone. To give an example of some of the things we’re working on: as I hear some of the other speakers here talk about the billing, payments, and how things move around Render, one of the things we’re thinking about is how compute moves around the globe and how electricity prices change. We talk a lot about time-of-day changes, how Render’s building a global network, how we tap into that compute globally. And so yeah, I’m here with RenderLabs and happy to spend time again with everyone on the panel today.
Sunny Osahn (00:09:45):
Amazing. Thanks, Paul. And last but not least, Trevor.
Trevor Harries-Jones (00:09:52):
Thanks, Sunny. Hey guys, I’m Trevor. I’m from the Render Foundation board. I think you guys all know Render, but I’ll try, in the interest of those of you who don’t: we’re a community of artists who help other artists create things on the world’s largest stages, Hollywood, large-scale renderings, games, concerts, and even NASA. We started as a rendering project, and as AI has become such an integral part of the creation process for artists, we’ve only leaned into that. And honestly, for me, today is about that progress and the partners here who are helping us really achieve our vision of democratizing artists’ creation of content across the globe using our network.
Sunny Osahn (00:10:40):
Okay, thanks for that, Trevor. Since you were the last person to give your intro, how about we give you the first question? I think what we’ll do is jump into questions; Taco and I will give a specific question to each member. I know we’ve got about an hour for this and usually it kind of runs over, so we’ll go straight into it. So Trevor, what does a decentralized or hybrid compute supply chain actually look like in practice? For example, what kind of hardware is viable, what kind of software is needed, and who can even supply compute today?
Trevor Harries-Jones (00:11:21):
Man, alright, great question. It is such a dynamic space at the moment. I think everybody here knows machine learning started very centralized, and for obvious reasons: when you’re talking particularly about the training side, you’re talking about loading large amounts of data into memory, and being able to do that in parallel really works best with co-location. A lot of the hardware has really, over the years, been optimized for that co-location, memory sharing, and more. So that was really where it started. But I think the challenge with a purely centralized architecture is, one, there isn’t enough of it, and that’s a real blocker for innovation; with such a massive compute need, it just won’t scale on its own. But then, over and above that, you don’t have localized compute near you in that centralized approach. So edge processing really isn’t part of that centralized computation.
(00:12:36):
And along with that, a centralized model comes with all the challenges of centralized models today, in who owns the data and how the data is used and shared. So what we’ve seen is really an opening up of the gates towards more and more decentralized AI. And for us it’s really, really exciting. Definitely hardware-wise, there are many different hardware architectures that can do AI, but the GPU does seem to be a very strong player in this space. And of course Nvidia, being dominant in that, is the leader in anything AI; their enterprise cards are absolutely amazing for training. But for us, what really stands out is that there are orders of magnitude more consumer cards across the globe that can be used more and more for AI inference, for fine-tuning, and even, as we’re seeing week to week, for training, as you have innovations happening in the space.
(00:13:42):
So it has very much been dominated by Nvidia, and that’s really in our DNA, as our artists mostly use those nodes. Outside of that, Apple is starting to move into the space, but there are some challenges accessing the GPU on Apple. So these things are dynamic, but when I look at just the trends, we’ve gone from very large models to those models being more and more quantized and able to fit locally on edge devices, and we’re starting to see agent elements take part and assist this architecture. For me, really one of the biggest innovations that I’ve seen most recently was Moonshot’s Kimi K2 model with 384 sub-experts. And honestly, I love what that unlocks in a decentralized context. So to bring it back, in terms of what a decentralized network is today: it is quite limited, but it is expanding rapidly, and it is the area where the most opportunity is for compute, given the abundance of idle compute in that space.
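To make concrete why a mixture-of-experts model like Kimi K2 is friendly to decentralized serving, here is a minimal sketch of expert routing; the top-k value and the node mapping are illustrative assumptions, not Moonshot’s or Render’s actual implementation:

```python
# Minimal sketch of mixture-of-experts (MoE) routing: only a handful of
# experts run per token, so each expert could live on a different node.
import numpy as np

NUM_EXPERTS = 384  # experts in the model
TOP_K = 8          # experts activated per token (illustrative)

def route_token(router_logits: np.ndarray) -> list[int]:
    """Return the ids of the top-k experts for one token."""
    return np.argsort(router_logits)[-TOP_K:].tolist()

# In a decentralized deployment, each expert id could map to a consumer
# GPU node; a token only ever touches TOP_K of the 384 experts.
logits = np.random.randn(NUM_EXPERTS)
print(route_token(logits))
```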
Sunny Osahn (00:15:04):
Brilliant. Thanks so much for that, Trevor. Tim, I’d love for you to share your insight on this too. Do you see a future where average users become both consumers and suppliers of compute, or is there a hard line between running AI locally for privacy and plugging into these broader decentralized networks?
Tim Cotten (00:15:25):
I love this question because it really hits at the heart of whether Web3 itself is going to be useful for AI. And so first I’m going to tell you yes, that’s the TLDR, and then I’m going to give you background: Scrypted was founded by generative AI experts from major studios like Electronic Arts and Square Enix. So we come from a game development background, and we’re very familiar with both content generation for entertainment and also actually offloading things to users to create content, whether that’s in virtual worlds like MMORPGs or in AI-based games. So the first challenge that I’ve found is that in a centralized system, like, let’s say, AWS for instance, I can allocate as many (within reason) second-tier Nvidia cards as I need. For instance, I could get a bunch of A100s and run a bunch of neural loads for things like Stable Diffusion, or the open-weight Flux models from Black Forest Labs, which are like top class right now, or even the video models that are open source.
(00:16:40):
The problem with that is when you really need to scale it and you need to move to the H100s or H200s, suddenly you’ve run into capacity blocks. We are not a hundred-million-dollar startup; we’re a multimillion-dollar startup, and there’s a huge difference there in being able to quickly scale infrastructure in a centralized fashion. And believe me, I don’t want to disparage the centralized method; it’s absolutely critical for people to have access to that when they need it. So the great hope, of course, has been: how do we decentralize that compute? And so from our side, we look at something like Render Network and ask: instead of an H100, the top-class crazy cards that Nvidia offers, could we use consumer GPUs? Could we use a 4090 or 3090 or 4080? Can we use stuff that is coordinated, where we’re passing our costs onto these users as their profit?
(00:17:36):
And the answer is yes, but it depends on whether those workloads can themselves be split up, and some methods traditionally have not been splittable. For instance, video generation has generally been very difficult. We actually set out ourselves to develop a video orchestration model that could split workloads for a given video generation. Say you want 20 seconds of video, but continuous, and you wanted to shard that out: you wanted to actually split the workloads into pieces amongst various 4090 containers. We ended up patenting it, because we had to invent that method. Whereas things like image generation are already accessible and you can just do it, and that’s the low-hanging fruit. So to me, I see great value in these distributed render networks, just like Render. Well, it’s actually called Render Network; I love that name because it just tells you what it is. And in order for this to succeed now, it’s not just about bringing down costs, it’s about making the payments very liquid. It’s about how quickly I can move a dollar from a customer to a dollar for compute, because otherwise companies like mine have to retain larger reserves on our balance sheets to pay the upfront cost for the generations and then collect the money from the users. Do you see what I mean? So now the challenge is less on "do the networks exist?" They demonstrably do, and I’m very excited about that, and more "how do I scale and pay those networks very effectively?" That’s what we are interested in being a part of.
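Tim’s sharding idea can be sketched roughly as follows. This is not Scrypted’s patented orchestration method, just a generic illustration of cutting one continuous video job into overlapping pieces that separate consumer GPUs could generate in parallel; all worker names are hypothetical:

```python
# Generic illustration: split a long video request into overlapping shards;
# the overlap gives each shard boundary frames for visual continuity.
from dataclasses import dataclass

@dataclass
class VideoShard:
    start_sec: float
    end_sec: float
    worker: str  # e.g. a 4090 container on a decentralized network

def shard_video_job(total_sec: float, shard_sec: float, overlap_sec: float,
                    workers: list[str]) -> list[VideoShard]:
    shards: list[VideoShard] = []
    t, i = 0.0, 0
    while t < total_sec:
        end = min(t + shard_sec, total_sec)
        shards.append(VideoShard(t, end, workers[i % len(workers)]))
        if end >= total_sec:
            break
        t = end - overlap_sec  # back up so shards share boundary frames
        i += 1
    return shards

# 20 seconds of continuous video, sharded across two hypothetical workers.
for s in shard_video_job(20.0, 5.0, 0.5, ["gpu-4090-a", "gpu-4090-b"]):
    print(s)
```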
Sunny Osahn (00:19:23):
Okay. Brilliant answer there, Tim. Yeah, that’s some really good insight. Mike, as autonomous agents become more capable and widespread, who should bear the cost of their compute? Would it be the users, the platforms, or the agents themselves via economic models?
Mikey Anderson (00:19:39):
Well, I think we’re going to see business models around every possible way, but as a user, if you’re not the one paying for it, it’s really likely that you’re the product, and realistically your interactions are probably being packaged and sold to someone else to either advertise to you, train something to manipulate you, et cetera. So I would encourage most people to start thinking about what their AI risk profile is. Am I using this for a business that OpenAI can compete with? Do I have an application that an agent could build and replace, one that understands everything that I’ve been doing? So just be aware of what your risk is, right? I mean, if you’re just using it for something small where you don’t really care if you’re doing it publicly on the internet, use whatever you want. But what I would say is there is an economy growing around this idea of independent AI.
(00:20:33):
There are people who want to not have everything in their life out there for AI to be trained on. Basically, if those models understand you, they can manipulate you and get you to do what they want. And so we’re seeing this alternate economy. What I have is a machine where I can run 14-billion-parameter models quite easily. A lot of the tool calling, a lot of that, I want all that stuff to happen locally, and then I only want to call out to Render and Venice and these different platforms when my machine isn’t capable of doing it, when I need something bigger. And so I would say the default that we want is to have things be local, and when not, we want them to be private and decentralized, and we want to have the right partners in place to do that.
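Mikey’s "local by default, remote when needed" pattern, sketched minimally; the endpoints and the 14B cutoff are placeholders, not a real Render or Venice API:

```python
# "Local by default" sketch: run on the local model when the job fits,
# and only fall back to a remote decentralized provider when it doesn't.
LOCAL_MAX_PARAMS = 14e9  # what this machine runs comfortably (illustrative)

def local_model(prompt: str) -> str:
    # stand-in for a call to a local runtime (e.g. a 14B model via llama.cpp)
    return f"[local] {prompt}"

def remote_provider(prompt: str) -> str:
    # stand-in for a call out to a decentralized network's API
    return f"[remote] {prompt}"

def run_inference(prompt: str, params_needed: float) -> str:
    if params_needed <= LOCAL_MAX_PARAMS:
        return local_model(prompt)    # private: nothing leaves the machine
    return remote_provider(prompt)    # bigger job: rented decentralized GPU

print(run_inference("summarize my notes", params_needed=14e9))
print(run_inference("render a long video", params_needed=70e9))
```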
Sunny Osahn (00:21:20):
Perfect answer; I completely agree with you there, Mike. Edward, could we see agents managing their own compute budgets, bidding for decentralized resources, or earning credits through useful work?
Edward Katzin (00:21:34):
Wow, that is a loaded question, and I think Tim Cotten was alluding to this earlier in your comments there, Tim. I think one of the things that Tim put the spotlight on, and that your question reinforces, is: how are we going to scale and pay for decentralized network services, especially in an economy that is driven by autonomous agents? And so the mechanisms that we’re seeing and leveraging come from known places where this has been solved. A place where this has been solved is in the world of Wall Street trading: high-frequency, algorithmic trading, and how to expose services amongst the actors and players that provide those services, is very well established. It’s of course not decentralized; it’s very Web2, and it leverages a bunch of lawyers and contracts, so it breaks the decentralized model. So what we’re trying to achieve, and what we’re doing in working with partners, and to answer your question directly: in order to enable autonomous agents to purchase their own compute and then earn value for the resources they provide, or the output and data they create, one, we’re going to need decentralized marketplaces, and we’re seeing that come to life on networks like Manifest and Render, where we can match compute demand with supply, and we can do that based on the type of compute.
(00:22:57):
So for example, with Jember, when we need distributed offline GPU, Render’s perfect, and the cost savings on Render are obvious; when we need online real-time compute, we would run that on Manifest, again leveraging the MFX token, and power, we know what that is. So that gets to my second point, Sunny, which is that we need tokenized and transparent payments. It can be a native token like RENDER, it could be a stablecoin, but the interfaces for the agents and the applications to pay exactly for what they consume aren’t totally there yet. So where protocols like MCP have now exposed the tools and solved the interoperability, we haven’t totally gotten to the interoperability protocols for autonomous agent transactions. To get literal, it would be enabling an agent to programmatically budget, bid, and settle compute costs without any human intervention. So to fully solve this, what needs to appear are programmable payment rails and the smart contracts that can run on those rails to enable that full automation.
(00:24:10):
And then, we all know that economies fall apart if the interests and incentives of the participants aren’t aligned. So being able to incentivize supply and also manage cost inflation in these economies is going to be key. And there are plenty of tools in Web3 that we can leverage for this, which include staking and rewards, and we’re already seeing that. So as a node operator on Manifest, I can earn rewards for providing that compute and get paid when agents or humans or anyone consume that compute. So I think what we’re going to see happening near term is that these hybrid payment strategies are going to arise. The Web2 enterprises are trying to figure out how to get into the space. The Web3 crypto exchanges are trying to figure out how to open the gateways to fiat. And my ultimate hope is that very quickly, especially with this new legislation that’s coming out, we’re going to see cross-network and fiat on-ramps that are very compliant, very sustainable, and that’ll expose an interface that the autonomous agents can use. And my best guess is that’s going to become very real in the next 12 to 18 months, and I would love anyone to debate me or challenge that.
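A hedged sketch of the agent behavior Edward describes, budgeting, bidding, and settling with no human in the loop; the offer format, network names, and prices are invented for illustration, not a real Render or Manifest API:

```python
# Sketch of an agent that budgets, bids, and settles for compute on its own.
from dataclasses import dataclass

@dataclass
class Offer:
    network: str
    price_per_gpu_hour: float

class ComputeAgent:
    def __init__(self, budget_usd: float):
        self.budget = budget_usd

    def bid(self, offers: list[Offer], gpu_hours: float) -> Offer | None:
        """Pick the cheapest offer the remaining budget can cover."""
        affordable = [o for o in offers
                      if o.price_per_gpu_hour * gpu_hours <= self.budget]
        return min(affordable, key=lambda o: o.price_per_gpu_hour, default=None)

    def settle(self, offer: Offer, gpu_hours: float) -> None:
        """Pay on delivery, e.g. a stablecoin or native-token transfer."""
        self.budget -= offer.price_per_gpu_hour * gpu_hours

agent = ComputeAgent(budget_usd=50.0)
offers = [Offer("render", 0.80), Offer("manifest", 0.95), Offer("cloud", 2.10)]
best = agent.bid(offers, gpu_hours=10)
if best is not None:
    agent.settle(best, gpu_hours=10)
    print(best.network, agent.budget)  # -> render 42.0
```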
Sunny Osahn (00:25:22):
That’s very interesting, Edward, thank you so much for that. Paul, I recently saw a tweet where a former Meta employee said the biggest bottleneck for AI infrastructure isn’t money, it’s power. So even giants like Meta can’t scale fast enough because transformers, cooling, and raw energy supply are all tapped out. How does the Render Network or similar decentralized compute networks navigate this constraint?
Paul Roales (00:25:51):
Yeah, good question. You think about what the costs of compute are: the two big inputs are chips and the electricity to run the chips. In the last few years we had some shortages in chips, but those have been easing greatly. It’s much easier today to get a leading-edge consumer card like a 5090 than it was, say, last year around Christmas or something. And so now it comes down to electricity, and as the big cloud providers, the hyperscalers, look at whether they want to train their latest edge model or make compute available to their customers in their cloud, often they’re choosing themselves, and they don’t have electricity to do both. They literally cannot get their hands on the megawatts of power that they need to do both. And so consumers are starting to see that pinch, and the availability of some very popular chips on cloud networks and hyperscalers has been limited largely because of power.
(00:26:55):
And the thing that distributed compute like the Render Network has is a global footprint. And so as the sun goes down in one area of the globe and people start to go to sleep, power is available, power is available to utilize those chips that would otherwise be unused while people are sleeping, and get those compute jobs done, and do ’em at a much better price point because the power’s cheap. If you happen to live in Texas, you’re very aware of the very volatile nature of electricity costs these days. Sometimes it costs a thousand dollars a megawatt-hour when there’s peak demand in the middle of a summer afternoon; sometimes it’s literally negative. The power company will literally pay you to consume electricity, because wind farms and solar farms are overproducing what’s being consumed on the grid. And so the Render Network, being decentralized, is one of the only ones that can literally tap into electricity that has a negative cost, where the power company will pay you to run the chip. So it is a great opportunity to be very green, use power that would otherwise go to waste, use chips that would otherwise be latent, but also tap into the distributed network of a worldwide infrastructure like Render. Yeah, great question.
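Paul’s "follow the power" routing could look, in miniature, like the following; the spot prices are made-up examples, including a negative one:

```python
# Route a deferrable batch job to the region with the cheapest grid power
# right now, including regions where the spot price has gone negative.
def cheapest_region(spot_prices_usd_per_mwh: dict[str, float]) -> str:
    return min(spot_prices_usd_per_mwh, key=spot_prices_usd_per_mwh.get)

prices = {
    "texas":     -4.0,   # overproducing wind/solar: paid to consume
    "frankfurt": 62.0,
    "singapore": 88.0,
}
print(cheapest_region(prices))  # -> "texas"
```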
Sunny Osahn (00:28:20):
Yeah, that was a really good answer there, Paul. It’s a very interesting thing which we can go into in more depth another time. But Trevor, what do you think are the key trade-offs that teams face when choosing where to run their AI workloads, in terms of availability, reliability, cost, and regulatory risk?
Trevor Harries-Jones (00:28:46):
We’ve gone through this already on the rendering side, and I always like to give this analogy. When you look at rendering, there are essentially two diverging schools of rendering. One is real time, really the game engines, and when you try and render something in real time today, your trade-off is quality: when you view something that’s been rendered in real time for a game engine, it is just not quite photorealistic. And where we’ve really focused the Render Network is on the other side, on photorealistic items, and there your trade-off is time. You’re giving up the real-time aspect, but in return you’re getting a much higher quality output, and in many cases it’s done at a lower cost via this decentralized network. And when I think about AI workloads, I think in very similar terms. There are use cases where time is at an absolute premium and you’d probably be better suited towards centralized, but there are a host of other use cases where time is not the number one factor, but maybe cost is, or where having SOC 2 compliance security isn’t really that important; it’s just access to a working GPU. And what I love about listening to some of these partners is that I’m really seeing these types of use cases emerge that work well on decentralized, that aren’t at an utmost premium, but are batch-type jobs, or jobs that don’t need the level of security that perhaps you need on centralized. So the trade-offs, to me, are absolutely key in how we scale up the idle consumer compute that’s available for these customers today.
Sunny Osahn (00:30:47):
Okay, awesome. Thanks for that, Trevor. I’m going to let Taco take over the rest of the questions.
Player1Taco (00:30:54):
Thank you. Yeah, no, some really high points there. So for Eric: can builders realistically piece together inference across GPU marketplaces, decentralized networks, and centralized cloud, or how is that working today?
Eric Bravick (00:31:14):
Yeah, good question. Sometimes yes and sometimes no. It’s important to remember that we’re really early in this space, comparatively speaking, and if you look at the bulk of developers, they’re really going to tend back towards the traditional solutions that have always been their stalwarts. So if you look at absolute numbers... I guess, let me back up a second. It’s important in Web3 to understand where we sit from an absolute-numbers standpoint. We are still a tiny, tiny, tiny percentage of the overall market. So most of the time developers are still not choosing this pathway. It’s still new, it’s still largely unknown, but things are getting better literally daily. So whereas right now the bulk of developers are not really going to piece together inference on a decentralized network, the growth in this has now hit the point where we can see the future if you’re looking right. The tools are getting better every day, the cost savings are real, and all the constraints mentioned on this panel already, availability of chips and power, those are real constraints, although somewhat artificial, and I can go into that more later if we want to do a follow-up question.
(00:32:57):
This is still a tiny market, but it’s now proven enough that for certain use cases you can put together solutions that actually work. So I think Trevor usually does a really good job of couching things in use cases, having been on many panels with him, so I’ll borrow that technique here. It’s all about your use case right now. If your use case fits the tools that are available, you absolutely can use a fully decentralized solution to get to your end state. The number of use cases that fits is just simply pretty low right now, but the tools are getting radically better, and through partnerships, all of us here on this panel are making things better on a daily basis. So you can see the trend line now: it used to be a hundredth of a percent, then a tenth of a percent, then a percent, and you’re like, okay, if we can get across that threshold, which I believe we have, then you can see that it’s just a matter of grinding on the tools and fitting the use cases into the tooling. So I think that’s kind of a balanced answer to your question, Taco. I don’t know if it’s specific enough, but it just really depends on your use case. And it’s important for us in Web3 to remember that we’re just at the beginning of these solutions, but they’re very, very promising. In five years, you’re absolutely going to be able to do almost every use case decentralized.
(00:34:52):
We’re just not quite there yet. As Edward mentioned, one of the big problems is actually not technical; it’s payment rails. Most companies in the world still can’t hold tokens or won’t hold tokens; they’re just economically not set up to do so. So the number one thing I would say, if you want to make this a reality, and I’ll actually go back to Edward’s comment here, it’s about payment rails primarily, first; the rest of the stuff, right, that’s technical stuff. Does that make sense?
Player1Taco (00:35:31):
It does to me. And I actually have some payment stuff that we’re going to be getting into in a little bit. We’re going to shift a little bit now and sort of talk about what’s next for generative AI and whether next-gen generative platforms will combine edge inference and decentralized orchestration. Paul, in what scenarios does it make the most sense to push generative AI closer to the edge, whether for latency, privacy, offline functionality, or even compliance?
Paul Roales (00:36:05):
In all of those cases? Right. I mean we will just talk about latency real quick. I mean, I think one of the really interesting things that we’re working at render labs and that I think will be very interesting in the next few quarters is automatic routing of an agent behavior to find that best trade off point between where it’s located close to the user for performance on the right chip for generation time at the right place based on cost of electricity in that location, cost of generation. So there’s multiple trade-offs that are occurring there between making sure the compliance is right and all the other things that I mentioned are part of the picture. And I think where it becomes very, very interesting is when the agents reason about that autonomously, right? It’s not just a prescribed if in this compliance region use this data center. And so I think that’s a very interesting edge case that’s now become a great case for a distributed network like berries. How we fill that better than a hyperscaler cam that’s like, okay, you can run AWS East, AWS west and that’s it. And so I think a very interesting area that we’re actively working on at render labs to push the envelope on and make that available to the whole render ecosystem. A lot of our partners here today. Nice.
Player1Taco (00:37:40):
Nice. And I want to touch a little bit more on the privacy side with this too, Tim. I know you have some stuff on this, but what are the biggest gaps holding back smooth coordination between edge devices and decentralized compute? Are we still missing key pieces like job schedulers, incentive models, and shared state access, or are A2A and MCP bridging that gap?
Tim Cotten (00:38:05):
Yeah, so this is a really deep question, so it’s going to take me a little while to answer, and I’m going to have to answer it in parts. I’m going to work backwards a little bit on your question. The first part is: are agent-to-agent (A2A) and MCP already solving the problem? And the answer is yes, with conditions, and those conditions are that it’s completely balkanized right now. What I mean by that is that we have various networks that are very fragmented and unable to really access the resources they need, because they are incentivized right now to keep it within their own networks. And they’re all experimental. I mean, there’s not a lot of economic, multimillion-dollar trading happening around this activity. And so, with that as the preface, I think we can break this down into the three areas you’re talking about.
(00:39:05):
The first one I’m just going to talk about is the shared state management. And really, I’m going to very briefly just mention that right now L1s are still way too expensive to handle this kind of economic activity; we’re completely relying on the L2s. And the issue with the L2s is that, in their current versions, they only offer purely deterministic proofs. And by deterministic proofs, all I’m saying is that we rely on pure math, which means that the problems they can attest to have to be pure math too. And a lot of generative AI, and a lot of things happening in the agent space, are emergent, are things that involve randomness, especially LLM or Stable Diffusion output. And those don’t work very well for zero-knowledge proofs. In fact, there is no zero-knowledge proof circuit for an SDXL Stable Diffusion image.
(00:40:02):
So knowing that, that’s number one. We’ve done a lot of primary research into how to deal with what we call the non-deterministic proof space. Like, hey, I’m a human, I can look at a piece of art and say it’s a cat or a dog, or I can say whether I like it or whether it’s good. AI agents should have that capability; they don’t right now. I’m very interested in providing that from a computational framework; that actually is one of the pillars that we built our company on. And it kind of leads into this: well, what’s everyone doing right now? And that’s solving payment rails. I agree with the previous speakers that payment rails are one of the biggest problems right now in this space, especially for coordination between what will hopefully be edge devices and eventually fully decentralized compute, not just for graphics but also for AI.
(00:40:55):
And what we’re seeing right now are the x402 payment systems from Coinbase. And this is brilliant because it’s essentially: I want to pay an API, and I can just send it a stablecoin, and that’s great. Why is this important? Because as a service provider, for instance, me with our Dula consumer website, where you can just create viral videos based on talking gorillas walking through the forest and doing their vlog, talking about something random with crypto, that has a cost associated with it. If I have to use Google Veo 3, say, that’s 60 cents a second. Well, if they’re paying for it in the Coinbase app, using our app in Coinbase, well, great. They just pay it with a stablecoin, and the process is so liquid and so fast that I can just pay as I render. That’s great. Not everything is as fast as that, right?
(00:42:00):
That’s not always possible. So one thing I’m also seeing is the Agent Commerce Protocol from Virtuals, and this system is interesting because what they do is a request system where you say, hey, I want this thing. I want this agent to go find me a good trading strategy. I want it to take my $10 and turn it into 20. I want it to draw a picture or make a video with the Hollywood agent. And you pay once the product is delivered. And the problem with that, of course, for the service provider is that now you have a delay. Now you have to create payment pools or reserves so that you can afford the compute and pay the providers as you’re doing the compute. And then, once the work is good enough, you get paid from the ACP system. But suddenly you need another coordinator.
(00:42:49):
And that’s where networks like Wire might be very interested. I was talking with them about this, and about creating payment pools amongst things like Seti and DER and others, and that way it’s more like accounting. That’s where you lead into this idea of the airline problem. I mean, for those of you who are familiar, the airlines have gone back and forth over the last couple of decades on whether to do full-on accounting for every time someone has to switch airlines because of a delayed flight, where they have to pay each other for it. And in the old days they had tons of accountants, and then someone figured out, hey, we can just use statistics, and now they’ve gone back to recording everything and having giant accounting ledgers. Well, the rapid accounting ledger works really well in a decentralized fashion as long as the fees are low, as long as you can reduce that friction. That makes sense.
(00:43:45):
And that’s kind of like, okay, now we do a daily settlement or a weekly settlement, more like a really rapid credit card system. And so, ironically, after I’ve said all of this stuff, what all the payment rails could do once we have them, and I think we’re almost there, is enable something like a universal job scheduler network, where you could just set up your metadata, set up your auction bidding on it, submit the transaction to the network, and then one of the many networks, like Render Network, could bid on the job and then take the metadata, create the work, provide the proofs. And suddenly that network is providing this coordination, this job-scheduling value, and instead of everyone having to roll their own, that seems like it would just be a useful network to exist. So those are my thoughts on the general subject. I know they’re wide and varied, but I hope you take it from: here’s the problems, here’s what payment rails are doing right now, here’s the open economic activities. How do we make a good job scheduler that everyone in this group could use for all of their various networks?
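Tim’s universal job scheduler reduces, at its core, to a metadata-plus-auction structure. A minimal sketch, with all names and prices hypothetical:

```python
# Sketch of the "universal job scheduler" auction: a job carries metadata
# and a reserve price, networks bid, and the cheapest qualifying bid wins
# (the winner would later have to attach a proof of the work).
from dataclasses import dataclass, field

@dataclass
class Job:
    metadata: dict
    max_price_usd: float                                  # buyer's reserve
    bids: dict[str, float] = field(default_factory=dict)  # network -> price

    def place_bid(self, network: str, price_usd: float) -> None:
        if price_usd <= self.max_price_usd:
            self.bids[network] = price_usd

    def award(self) -> str | None:
        """Auction close: cheapest qualifying network wins."""
        return min(self.bids, key=self.bids.get) if self.bids else None

job = Job({"type": "video", "seconds": 20}, max_price_usd=12.0)
job.place_bid("render", 9.50)
job.place_bid("manifest", 10.25)
job.place_bid("other", 14.00)   # over reserve: ignored
print(job.award())              # -> "render"
```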
Player1Taco (00:45:02):
Nice. I know we’re going to be touching on payments next, because that’s sort of what we see as a big blocker. But following on the generative AI side of things: Trevor, could hybrid models unlock whole new categories of generative apps, AR, VR, now XR, and real-time personal assistants? And what are we seeing out there?
Trevor Harries-Jones (00:45:29):
Yeah, of course, yes. It’s where we get so excited. If any of you saw Jules’s talk, there’s a lot of thought on the OTOY side going into how these interfaces will change on the generative side and what hybrid models will bring. And when I say hybrid, it’s going to be a combination of centralized compute, of edge and local compute, as well as potentially hybrid in terms of generated versus rendered. When I look at the ultimate end goal, it’s exciting: 3D experiences, ultimately the holodeck, and incrementally getting there through AR and VR experiences is going to need a combination of both. It just can’t be done with either alone at the moment. So we’re so excited as we watch these incremental unlocks happen here, in almost real time, week to week, that are going to result in such amazing consumer experiences. It is honestly just the start.
Player1Taco (00:46:29):
And that was one of the really awesome parts of Render, sort of getting to see some of the technology on real-time adoption in the video space and stuff like that. That was just cool stuff, especially around the penguin face and...
Trevor Harries-Jones (00:46:47):
Yeah, exactly. You can see that these tools are becoming more and more useful, and that penguin face replacement is exactly a hybrid model. So many more of these to come as the infrastructure really unlocks.
Player1Taco (00:47:06):
Nice. And now we’re going to sort of roll into what seems to be on everyone’s mind: payment rails. One of the things that I’m going to generally put out there, before I hit some people up with questions, is that we had today’s major announcement of Chase Bank partnering with Coinbase and integrating directly into Coinbase. The thought process I’m having here is: how long until we see integrations with x402 for agents to either have direct access through the Coinbase wallet, for personal accounts, into the banking industry, or their own accounts to pay for compute? And so teams are rethinking whether they need their own infrastructure or to rent or lease compute. So Ed, for startups building generative AI apps, where does it make sense to use decentralized compute versus GPU APIs, or even owning it themselves, on the payments side?
Edward Katzin (00:48:07):
Wow, so great question, and loaded, and I have to admit, I’m still reflecting on some of the comments that Tim Cotten made earlier. When you look at it right now: if I had been launching a startup pre-2005, 2007, you would’ve seen a bunch of servers under my desk and in the garage, and we were literally running all the hardware out of the house where we were launching the startup. Pretty much since whatever, 2010, 2012 on, we’ve been launching on AWS, Azure, or GCP. So the need to own the compute is relative. If you have an implementation, for example in financial services or healthcare, where you have to have absolute privacy and absolute control, what you would do today is jump onto a private cloud stack. And so for example, when I worked on Wall Street, we ran all our infrastructure out of Equinix and other co-los; that’s basically what we did.
(00:49:09):
So I think Mikey nailed it: the need to own the hardware, the need to own the compute, really depends on the sensitivity and the confidentiality of the information. What we’ve discovered is we can solve this by deploying Jember across AWS, Manifest, and Render. So we’re shameless, we’ll leverage compute anywhere; it’s got to be fit for purpose first and foremost. It has to meet the computing need. So if it’s online, highly confidential, secure data that we’re running, we’re going to stand that up on a private instance. But if it’s AI rendering, so for example, we ingest a lot of laws and regs and then turn those laws and regs into machine-readable rules, and then, similar to what Tim described, we create deterministic algorithms to ensure the laws and regs are being properly implemented, well, there’s nothing confidential about those laws and regs, so we’re totally fine running that compute on Render, and we’ll distribute and run that compute on Render all day long.
(00:50:13):
So it’s a bit of a long-winded answer, Taco; I would say each team needs to really look at their own implementation, their own requirements, and then implement accordingly. Now I’m going to shamelessly jump over to what Tim was talking about. When I look at payments, number one, I think you nailed it, Tim: the payment rails are the main blocker for agent autonomy right now. And what we’re feverishly focused on is providing autonomous agents with the ability to earn, spend, and settle payments independently. But as we all know, today’s payment rails are built for humans and corporations; they’re not built for robots or smart contracts. So all the manual stuff around settlements, reversals, disputes, none of that works. So x402 from Coinbase, it’s incredibly pioneering. I love it, and I love that you can leverage the API off of the HTTP 402 code. But the challenge is, and you even brought it up, Taco, even though you’ve got collaborations like JPMorgan collaborating with Coinbase, the open APIs for direct wallet funding and real-time settlement aren’t there yet.
(00:51:28):
And so Tim mentioned payment pools, which are used in many industries: they’re used in the airline industry, they’re used in the telecommunications industry, oil and gas. And the goal of the payment pool is to enable multiple parties to pool payments, automate multi-party settlement, track usage efficiently, and resolve disputes, and they work incredibly well. So if we can get a smart contract and a gateway in place that effectively creates a compute or inference resource payment pool that the agents can query, and then, as Tim described, networks like Manifest and Render can post supply, you basically get a bid-ask spread where agents are asking for compute and networks are bidding on compute. Then suddenly we can evolve into the convergence of true programmable agent payments, pooled resource allocation, automated settlement, and real-time market scheduling. And that’s going to unlock the independence for AI agents. And I can go on and on, but I’ll stop there; I’m sure someone else wants to talk.
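The payment-pool pattern Edward references (familiar from airline and telecom interline settlement) can be sketched as a small ledger with periodic netting; an on-chain version would live in a smart contract, but this illustration is plain Python with invented names:

```python
# Payment pool in miniature: buyers deposit, usage is metered per provider,
# and a periodic settlement (daily or weekly) pays each provider its dues.
from collections import defaultdict

class PaymentPool:
    def __init__(self) -> None:
        self.balances: dict[str, float] = defaultdict(float)  # buyer funds
        self.earned: dict[str, float] = defaultdict(float)    # provider dues

    def deposit(self, buyer: str, amount: float) -> None:
        self.balances[buyer] += amount

    def record_usage(self, buyer: str, provider: str, cost: float) -> None:
        """Meter one unit of consumption; on-chain this is a contract event."""
        self.balances[buyer] -= cost
        self.earned[provider] += cost

    def settle(self) -> dict[str, float]:
        """Periodic settlement: pay each provider what it accrued."""
        payouts = dict(self.earned)
        self.earned.clear()
        return payouts

pool = PaymentPool()
pool.deposit("agent-1", 100.0)
pool.record_usage("agent-1", "render", 9.50)
pool.record_usage("agent-1", "manifest", 4.25)
print(pool.settle())  # -> {'render': 9.5, 'manifest': 4.25}
```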
Player1Taco (00:52:35):
No, you covered a lot of that, Tim, I’m sorry, just 100%, dude. Yeah, no. And so Paul, this one’s for you: what pricing innovations could emerge? Is it spot pricing, on-demand, subscription, and how does that interplay with on-chain transactions?
Paul Roales (00:53:04):
Yeah, yeah. No, I think we’re going to see a lot of innovation in this. And I would add, beyond some of the ones you mentioned, spot, subscription, et cetera, the batch processing and the ability for agents to determine what sort of compute they should go after based on the cost of what they’re looking at doing. So for example, if an agent’s reasoning through searching for your flight for tomorrow, and it has six hours to try to figure out which flight it should book for you, it doesn’t necessarily have to use very expensive compute that may be available right now. Maybe it can wait an hour and look for cheaper compute to do that task on. And with some of the models that Trevor was mentioning that use a lot of test-time compute, we see the opportunity for a lot of the reasoning costs to be managed, and managed automatically, as part of the reasoning of the agent, of the model. And so I think this is very exciting when you intersect that with the payment systems that everyone’s been talking about today: the agent can manage that payment, agents start managing the payment for their own compute, all agentically. Very, very exciting stuff that can only be done in a distributed way, can only be done with all the ecosystem and infrastructure that we’ve been talking about, and it presents radically new, exciting opportunities that just keep us moving here at RenderLabs. It’s so much fun to play with and think about these opportunities.
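Paul’s deadline-aware pricing idea, in miniature: if a task has slack before its deadline, wait for a cheaper compute window. The price forecast below is invented for illustration:

```python
# Deadline-aware spot scheduling: run the task at the cheapest expected
# price inside the deadline instead of paying the spot price right now.
def best_run_time(deadline_hours: float, spot_now_usd: float,
                  forecast: list[tuple[float, float]]) -> float:
    """forecast: (hours_from_now, expected_price) pairs; returns when to run."""
    candidates = [(0.0, spot_now_usd)]
    candidates += [(h, p) for h, p in forecast if h <= deadline_hours]
    run_at, _ = min(candidates, key=lambda c: c[1])
    return run_at

# Agent has six hours to book the flight; prices are expected to dip later.
print(best_run_time(6.0, spot_now_usd=1.40,
                    forecast=[(2.0, 1.10), (5.0, 0.60), (8.0, 0.30)]))
# -> 5.0 (waits for the cheap window that still meets the deadline)
```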
Player1Taco (00:54:47):
To add to that, you sort of bring to my mind, as we were just talking about job bidding through agents, that there are even agents that could then bid for energy in a way, and providers could find an economic model for that. So that could become a pricing structure as infrastructure grows to meet that demand.
Paul Roales (00:55:06):
Oh, totally, totally. Very cool. Yeah.
Player1Taco (00:55:10):
Eric, are we approaching a cloud cost crisis for AI, and how are teams optimizing today? Model distillation, like Paul said, batching, or speculative decoding?
Eric Bravick (00:55:28):
It depends on the team and your level of access. So I am going to get maybe a little far afield on this answer; it kind of popped into my head here when you asked that question. We have been, for many teams, in a cloud cost crisis since cloud was created. It depends on who the team is and what their level of funding and access is. Lots and lots and lots of startups in Web2, and Web3 is no exception to this, have gone out of business, or nearly so, based on scaling their cloud costs. It just becomes untenable at some point, and then they lack the skills to move. These are somewhat capture models; these are artificial constraints placed by the clouds on the startups. This is similar, actually, to the artificial GPU constraints. So we talk a lot about Nvidia, and I’m going to throw a grenade here that maybe some people aren’t happy with, but I’ll throw it anyway: there is no shortage of GPUs. There is no shortage of Nvidia chips, and there never was. There is an unequal distribution of access to Nvidia chips, and that is enforced by Nvidia; that is enforced by the distribution and sales model, to keep prices high and utilization within a band that they find the most profitable.
(00:57:22):
So my answer to this is: yes, there has been a crisis for a very long time, depending on who you are, and that crisis will continue unabated until decentralized models for compute get mature enough that the majority of workloads, the majority of use cases, can be done decentralized. And there’s an effect here that I want to point out that, if you’re outside this industry and not super focused on infrastructure, you won’t even realize is occurring. So there’s the primary impact of: hey, I can move my workloads decentralized. Decentralized is cheaper for a lot of reasons, and the tooling is getting better and better and better. And so I’m going to move some of my workloads. Great, that saves me some money; that reduces my cloud cost crisis a little bit. And again, you mentioned a lot of the techniques.
(00:58:31):
We could do a whole talk just on all these techniques of segmenting and batching and how everything works from an MLOps standpoint. But let’s put that aside for a minute. The primary thing you need to induce is economic competition. And that is what the big clouds have essentially price-fixed their way out of: they’re really not competing in the way that you think they are behind the scenes. Everyone is based on the AWS price table; that is the global standard. Every cloud provider that is at any scale is a derivative of that price scale for their market. So you think there’s competition; there’s actually not that much competition. Nvidia has made sure that, the way their distribution model works, there’s no competition in how you get Nvidia chips. So you get picked or you don’t. And oftentimes Nvidia will come to you as an investor, or with some sort of leverage, and move you up, or in some cases push you down, that distribution chain, effectively killing you.
(00:59:47):
So the first thing we need to do is incite competition through Web3, through getting all the GPUs we can on the network, with access through tooling, to pull that primary lever. But there’s a secondary lever that is kind of hidden in the matrix that you don’t see, which is: once that occurs, it breaks some of the distribution strategy that the big clouds and Nvidia have for their compute resources. Once you start breaking that distribution strategy, they have to compete, and they will, which means they’ll alter that distribution strategy. That second-order effect is really what we want to start seeing. Once we see that in the Web3 world, we know that we’re on the path to winning. And I’m not trying to set this up as oppositional, by the way; I’m fine with Google, I’m fine with AWS, I’m fine with centralized compute.
(01:01:01):
In fact, at Manifest, our model is to actually integrate that in and just more properly own it and control it and use it. So I’m not trying to make an argument against centralized compute here. What, again, it’s against is the economic distribution model, and once we see that crack, we know we’re on a path to success and very rapid growth in Web3. I’ll stop there, but I could literally talk for a day about this. Don’t even get me started on alternate GPU and APU and all these other TPU platforms that should be much more used than they are. And again, that all comes down to Nvidia’s sales and distribution strategy. Once that starts to crack, you’ll also see ten new technologies that, again, we are partnered with already at Manifest, and as soon as they’re viable, we’ll give you not only options that you could never get before for Nvidia compute, we’ll give you options for the next five technologies that you should be using instead of going with Nvidia GPUs. And that opens up a whole new world of powerful compute for AI. So I’ll stop there. I hope that partially answered the question. No, it did.
Edward Katzin (01:02:34):
Actually, Taco, before you jump on. So Eric, what I would add to that is: we have a market equilibrium now, and you called it out perfectly around the AWS pricing table and how Nvidia is driving its distribution and controlling the distribution with that. I think the new market opportunity that’s showing up, and you described it perfectly, is a shift from the existing oligopoly to open markets. And what makes the markets open is having the resources on a decentralized network where you can post the bid-ask spread, so the networks can post their supply and their availability. And someone mentioned the Texas electric market: when you look at Texas, the price of energy is at that moment in time, and it is highly volatile. It’s not like the AWS pricing tables, where it’s locked in like you would get with municipally controlled electrical generation. So with that, I think what you set up perfectly there, Eric, is: once we can flip that equilibrium, the incumbents are going to have to adapt or lose ground, and what’s required is that shift.
Eric Bravick (01:03:43):
Yeah, that’s exactly right. And I’ll make one further note that maybe is kind of an insider, maybe a more salacious detail here as well. At Manifest we created something called Project Bedrock, which is fundamental to our distribution strategy, where the network has tokenized actual facilities and power plants at those facilities and then manipulates the price per kilowatt-hour of the power going into those. That all exists at our L0 right now. So when you’re using any of our tech, in a lot of cases you’re actually routed to a center where all of the power optimization that goes into the costs you pay on the token side is already optimized for you. So for example, at those sites we can usually leverage local power plants at a third of the cost of grid, even for stable base-load power. So energy is key, and we’ve definitely, with Project Bedrock, spent a lot of time working on the energy problem. And we have partners like yourself at Jember working on the payment rails problem. And then we’re also working on getting the actual compute, which is really a network hardware problem, onto the network, as you mentioned, because the compute has to be available to do the bid-ask spread. So yes, all absolutely correct, but I really wanted to get that detail out: yeah, we’re working on the power as well, and there are some amazing opportunities to drive down costs there too.
Edward Katzin (01:05:41):
Spot on.
Player1Taco (01:05:44):
We’re closing in on time. We had a bunch of other questions to go through, but we really got to deep-dive on a lot of these key topics, which was really exciting. On behalf of Render and Manifest, I want to thank everyone for joining us today, and I want to thank all of these amazing panelists. But to close this out, I sort of want to go into a quick lightning round, and I’ll call you out as we go through, before you have to go. What’s one bold prediction or key insight that you’d leave our audience with about generative AI and who’s paying the compute bill? Mikey, I want to start with you.
Mikey Anderson (01:06:25):
Render, I think, is our big brother. When you look at Render, and at the application layer that OTOY represents and the consumer layer that their customers represent, that is the pattern that I think is going to allow the market to bring all of this to fruition. Because what they actually have is a product that people want. And by having a product people want and pay for, it aligns engineering teams behind what those people need and allows for fast progress. I think what we're seeing start to come into play right now is that the 18-month-old decentralized AI industry is understanding this and starting to build those layers. I just want to shout out a group within the THINK ecosystem that speaks to this: the Supermodels team. The Supermodels team is generative AI artists and traditional artists building a LoRA training system to turn your art style into an agent that you own and can run. When we get more and more of these application layers bringing in actual paid users, I believe that's the thing that's going to unlock this whole thing, and I'm just excited to be doing it with all these great people. So thanks for having me.
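For readers unfamiliar with the LoRA technique Mikey mentions: the core idea is to freeze a large base model and train only a tiny low-rank update, so a "style" adapter is small enough to own and move around. A minimal, generic PyTorch sketch follows; this is an illustration of the technique, not the Supermodels team's actual code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are rank-r factors."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the base model stays frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B are trained, so the style adapter is tiny and portable.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
```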
Player1Taco (01:07:32):
Okay, and then next up: Paul, if you're still here. Nope? Okay, Tim.
Tim Cotten (01:07:45):
I have a bold prediction, but I'm going to frame it as a question to the group. Because we're all essentially friends here, and we all have cooperative networks and things that add value for each other: would you all be interested in helping design this universal Job Network? Because that's really what Scrypted Network is intended to be, coordinating all these AI resources. We don't want to be the ones providing the GPUs or solving the energy problems; we just want to create a marketplace for AI. And honestly, in the framing of this conversation, I've personally discovered a way to communicate the value of what we're doing that I didn't have before. When I really think about the idea of providing not the payment rails but the auction bidding system, where agents could be hooked up to Render, hooked up to THINK, and use Manifest, where all of these networks can play nicely together, almost like Google's SEM auctions, it seems like a really powerful paradigm. I'd be curious to see if anyone would like to help me solve some of the gap problems, because there are a lot of use case issues that we can't just discover ourselves at Scrypted.
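Google's search-ad marketplace that Tim alludes to historically ran on a generalized second-price auction. A minimal sketch of that mechanism applied to AI job bids might look like the following; the node names and payload are hypothetical.

```python
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Award the job to the highest bidder, but charge the second-highest
    bid. This pricing rule encourages bidders to bid their true value."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Agents from different networks bid on a single inference job.
bids = {"render-node": 0.90, "manifest-node": 0.84, "think-node": 0.70}
print(second_price_auction(bids))  # ('render-node', 0.84)
```

The appeal for a cross-network job coordinator is that each network only has to expose a bid, not its internal cost structure, which is what lets otherwise competing networks "play nicely" in one marketplace.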
Edward Katzin (01:09:23):
Sign me up, Tim. I’m there.
Eric Bravick (01:09:27):
Yeah, absolutely. On behalf of Manifest, absolutely. We love partnering rather than building by ourselves. So anyone on this call who wants to partner, just reach out to Taco, and we'll be glad to partner on any of this.
Player1Taco (01:09:47):
Nice. Ed, bold prediction?
Edward Katzin (01:09:52):
So I've got several, and I'm going to frame them. The generative AI compute market is being reshaped, I was going to say "is going to be," but it is being reshaped, by automation, open competition, and transparency. And we have to drive both the technological and the economic innovation. So the networks and marketplaces that empower both machines and humans to pay, bid, and choose their services are what's going to break this oligopoly that Eric was talking about. In that context, I've got a few big predictions. My first one is number one: autonomous agents are going to be both consumers and payers. Within the next two to three years, the majority of generative AI workflows, especially automated tasks, are going to be paid for directly by AI agents, programmatically managed, and we're building for that future.
(01:10:46):
The second one: compute markets are going to transition to real-time open bidding models. Tim alluded to the way that Google, through DoubleClick and other tools, enabled open bidding on advertising. My assertion is that the prevailing pricing-table model set by hyperscalers is going to fragment, and instead we're going to move to a real-time bid-ask marketplace for compute, powered by decentralized networks and transparent ledgers. And if a tool like Scrypted helps orchestrate that, well then heck, I'm all yours; I would love to learn more about that. The other one, and I've got to give Eric credit because he inspired this one in the moment, is that we're going to see the end of manufactured scarcity and the cloud oligopoly. Decentralized compute networks, as Eric described, by aggregating GPUs and CPUs from data centers and individuals, all the things that Render is doing, are going to break the existing hardware distribution and pricing bottlenecks.
(01:11:44):
It's going to erode NVIDIA's gatekeeper role, and it's going to give us greater access. Given that I came from, call it Web2, traditional banking, payments, and telco, the thing I'm seeing is Web2 enterprises shifting to hybrid and decentralized compute for cost control. Render is seeing this with the studios and others in Hollywood that are starting to lean on decentralized tools and infrastructure. We're seeing private cloud bills run away, we're seeing complaints about vendor lock-in, and we're seeing hybrid strategies get adopted. So this is happening, this is right fricking now. And as a result of all of that, we're going to see new business models based on microtransactions and outcome-based pricing, and that's how they're going to dominate. So I am so excited to be in this space and would welcome any and all collaboration to make this a reality.
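Edward's first prediction, agents that both consume and pay for compute, can be made concrete with a toy sketch. The wallet and policy objects here are hypothetical, not any specific network's API; a production version would sit on real payment rails.

```python
from dataclasses import dataclass

@dataclass
class AgentWallet:
    balance: float  # e.g., USD-denominated credits or tokens

    def pay(self, amount: float) -> bool:
        """Deduct funds if the budget allows; the agent, not a human,
        makes this call at request time."""
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

def run_task(wallet: AgentWallet, quoted_price: float, max_price: float) -> str:
    """An autonomous agent accepts a compute quote only if it clears
    the agent's own policy ceiling, then pays programmatically."""
    if quoted_price > max_price:
        return "rejected: quote above policy ceiling"
    if not wallet.pay(quoted_price):
        return "rejected: insufficient balance"
    return f"job submitted, paid {quoted_price:.2f}"

wallet = AgentWallet(balance=5.00)
print(run_task(wallet, quoted_price=0.42, max_price=0.50))
```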
Player1Taco (01:12:43):
Nice. Thank you. And I love the calls to action for partnerships. Eric, over to you: lightning question, key insight that you'd like to leave our audience with?
Eric Bravick (01:13:00):
Well, there's lots of interesting stuff. I think one of the key insights is that you're going to see an order-of-magnitude shift in the abilities of AI based on the order-of-magnitude shift in the underlying availability of compute as we break these models with the decentralized ecosystems we're talking about. If you look at the top of the stack and the emergent behaviors that come from, say, offering a price that's 10% of the oligopoly price, the bully price, whatever you want to call it (Ed had a better word than I did), that unleashes orders-of-magnitude improvements at the top of the stack. So I'm really happy to be working on the boring stuff in infrastructure, because I know every time I deliver a GPU or a CPU that a startup couldn't get from AWS, or couldn't get at the price they need, I'm unleashing an order-of-magnitude effect at the top of the stack. If I were to offer one key insight, it would be that small revolutions at the bottom of the stack equal very large revolutions at the top of the stack, especially when accelerated by AI. And I am excited to see what all of you working at the top of the stack who might be listening to this are going to do when we make these changes in power, or in how NVIDIA chips are accessed, or bring a whole new class of, say, TPUs online.
(01:15:09):
I am excited to see what’s going to happen at the top of the stack because that’s an order of magnitude shift. So we’ll leave it at that.
Player1Taco (01:15:18):
Okay, thank you. And to round this out, over to Trevor.
Trevor Harries-Jones (01:15:25):
This is amazing, by the way. I love getting this group together every time. So I want to take it to a demand prediction. I'm really excited about where video-to-video is going. Veo 3 is a great step, but it's centralized and expensive. For me, open source video-to-video improvements, and really a step up in the level of composability and control, is the prediction I expect to see in the short term. And I really think it unlocks so much on a decentralized basis that we could see amazing usage of our resources and our partners' resources.
Player1Taco (01:16:12):
Nice. Thank you. Talking about resources, I just want to shout out my co-host Sunny, and I want to thank Render and Manifest and their teams working in the back to make this Space happen. If you've learned anything from this Space, or if something's triggered the builder in you, please reach out to any and all of today's speakers. Sunny, man, it just gets better and better every week, it seems like.
Sunny Osahn (01:16:42):
You know what? I love it. I think Trevor hit the nail on the head: this is such a great bunch of people we've got here, and I'd love to thank all of our guests who joined this, our third AI-specific Spaces. There have been some great insights, and overall it seems like a super optimistic outlook for the future of generative AI, and I'm super excited about that. Obviously it's super early innings in this space, and the next six months are going to look so different from today, let alone 12 months or five years, right? This is a decades-long effort, and all of these decentralized paradigms are kind of like a speck of dust in the grand scheme of things; I think someone mentioned something similar earlier. But I think patience is key, because things may seem like they move slowly, but then all of a sudden, boom, we've got everything we've been asking for. It's amazing to see. Excellent, excellent Spaces. Thank you so much for co-hosting this, Taco.
Player1Taco (01:17:44):
No, definitely, Sunny. And Sunny, I'd feel remiss not asking you any hardball questions as we've gone through this. So I have to ask you: Star Wars or Star Trek?
Sunny Osahn (01:18:00):
I love that question. Star Wars or Star Trek. So I've recently been watching a lot of Star Trek movies; I'm a huge Star Trek fan. Shout out to Samuel Ings. He's not here right now, but he does some amazing fan thoughts, all rendered on the Render Network. But my kids and my wife would absolutely kill me if I did not say Star Wars. So Star Wars, I'm going to have to say. Obviously all of the Render crew are going to kill me as well, so I'm dreading the next meeting.
Player1Taco (01:18:34):
No, and it is. Hey, happy wife, happy life; amazing kids, amazing life. It all goes together. But this was an amazing panel, and I want to thank all the panelists for joining us today.
Sunny Osahn (01:18:53):
Absolutely. Thank you everyone so much for joining, and we will absolutely do this again. A quick reminder for some of our regular followers: we do regular Monday Spaces at the same time as this one, so feel free to join us every single Monday, where we talk about the Render Network ecosystem. Our next AI Spaces we will try to schedule in two weeks' time, so stay tuned for that tweet.
Player1Taco (01:19:26):
And on the other side of that, two hours every Monday before this Space, at 12:00 PM Eastern: Manifest, Morpheus,
Sunny Osahn (01:19:35):
And a few other projects also have their weekly Spaces as well. Amazing. We've got entertainment all day long. I love it. Amazing builders. Thank you everyone so much for joining. We will catch you all again soon. So until next time, from Taco, myself, the Render Network, and the rest of our guests: thank you so much, guys.
