Beyond the Model: Decentralization, AI, and the Trust Layer (Full Transcript)
03:40
Silvia Lacayo (Render Network): Hey, everyone. We’re going to get started shortly. Give us another minute or two as the rest of the panel joins. Hang tight.
04:54
Taco (Manifest Network): GM, GM everyone. I wanna thank everyone for joining us. We are getting ready as we fill out this space and get all of our speakers up. This is an amazing series that we've started with Render and Manifest. The last one was, I think, three weeks ago because we had a holiday with the 4th of July. I hope everyone had a great time. We have an amazing panel today. We have Trevor from the Render Foundation. We have Paul
05:23
from Render Labs. We will be having Edward from Jember AI. We will be having Sam Judson from Nexus joining us as well, and then Eric Bravick, and we should be having Mikey Anderson from THINK joining us. So as we get ready to fill this out, please like, comment, and share. If you have any questions during the conversation,
05:53
please let us know in the comments and we’ll work to keep those up to date. This is going to be fun. Silvia, how are you doing today?
06:06
Silvia Lacayo (Render Network): I'm doing very, very well. Thank you for asking. Yeah. Happy Thursday. Indeed. Very much looking forward to this discussion. I'm excited to
welcome the Nexus Labs team on the panel. So looking forward to getting some deep technical insights from that team too. Welcome, Sam.
06:31
Taco (Manifest Network): Thank you. Yeah, thank you, Sam. And I think we have Mikey behind the THINK account today.
Alan (THINK): Hey hey, this is actually the media guy, Alan. I just texted Mike, so he should be coming on his own account soon, but good morning to everyone and good to talk to everyone.
Taco (Manifest Network): Alan, always great to see you. Everyone forgets how hard it is running an account sometimes, and you do a pretty amazing job. Trevor, how are you doing today?
06:59
Trevor Harries-Jones (Render Network): I’m doing well, Taco. It’s always great to hear your voice. I love the energy you guys bring to this.
Taco (Manifest Network): Man, it is, no, and same as yours. Yours is like a calming voice, no matter how much caffeine I’ve ingested.
Trevor Harries-Jones (Render Network): Yeah, it's the South African in me. I can't seem to get excited.
Taco (Manifest Network): And then Paul, how are you doing today?
07:28
Paul Roales (RenderLabs): Hey, good morning. Yeah, good to be here again with everyone this morning and some new faces. You know, I think we have a great conversation lined up today. So I’m looking forward to hearing from everyone and, yeah, excited to be back today.
07:45
Silvia Lacayo (Render Network): Should we introduce ourselves and then let the speakers introduce themselves and then we jump in? Hi everyone, I’m Silvia Lacayo. I head up marketing and communications at the Render Network Foundation and I will be co-hosting slash co-moderating with my pal here, Taco.
08:15
Taco (Manifest Network): Yeah, my name is PlayerOneTaco. I am the CDO of Manifest and degen extraordinaire. I've been in this space for 13 years, and I get to sort of travel around the world, front lines, seeing what people are building, getting to be a part of it, and helping give voice. Really, the whole point of giving people a voice is that I get to talk to really cool people and learn what they're doing, and you get to listen in. Next up, Paul, who are you? What do you do?
Paul Roales (RenderLabs): Hey, good morning. Yeah, so Paul here with Render Labs. At Render Labs, we work to build the tools and infrastructure on top of the Render Network to help the community grow and expand and make the fullest use of Render possible. You know, these days we're very focused on machine learning infrastructure, which we're going to talk a lot about today. I spent a decade inside Google and Waymo building ML tooling there, and I'm excited to bring that to our decentralized platform and the ecosystems that we're going to chat about today.
09:29
Taco (Manifest Network): Amazing. Thank you. And Nexus, Sam. Yeah.
09:39
Sam Judson (Nexus Labs): So I'm Sam Judson. I'm head of ZKVM engineering here at Nexus. We're working on building a zero-knowledge virtual machine, which is essentially a way of taking these two very closely related cryptographic technologies, zero-knowledge proofs and succinct proofs, and making them easily accessible to developers without requiring specialized cryptographic skills. There are a lot of applications of the technology. One of the big ones is solving this paradoxical problem of how you provide transparency about the way that software is operating without placing huge
10:08
computational burdens on the verifiers or exposing the private information of the data or the users that are involved in the system. And so there are a lot of applications of that in decentralized contexts, a lot of applications of that in sort of AI/ML contexts, and so naturally there are a lot of applications of it at their intersection as well.
10:28
Taco (Manifest Network): Awesome. And I know we have Mikey and Ed in the process of getting moved up.
Trevor Harries-Jones (Render Network): Hey everybody, it's great to be back here. For those of you who don't know me, my name is Trevor. I'm on the Render Network Foundation board. At Render, we help artists…
11:08
Silvia Lacayo (Render Network): Did we glitch or is it just me?
11:12
Taco (Manifest Network): I think we just can’t hear Trevor anymore. I’m so worried it was me. Trevor, you hear us?
Trevor Harries-Jones (Render Network): I'm back. Can you guys hear me? Yep. All right, I'll start again there. Sorry. So for those of you who don't know me, I'm Trevor. I'm on the Render Network Foundation board. At Render, we've helped artists and studios render content for the biggest stages: Hollywood, NASA, the Las Vegas Sphere, music sets at Coachella. And we do it by really tapping into a global pool of idle, decentralized consumer GPUs.
11:42
The exciting part is we’re building on that now to do AI workloads and do them cost-effectively, securely, and at scale. That’s really the driver of what we’re talking about today.
12:00
Silvia Lacayo (Render Network): Right on. I think Mikey, you were going to talk next.
Mike Anderson (THINK): Hey, I'm Mike Anderson, contributor to THINK Agents. What we do is we make AI you own. That starts with being able to own your agent yourself. We use a non-fungible token that represents your agent and has its own wallet in it. And then we've built an entire suite of tools for editing and creating AI agents. And then we're about to launch; we're within days, weeks of everything coming together,
12:30
and feeling really stoked to be here and thankful to be part of a group that is actually doing legitimate decentralized AI at scale.
Silvia Lacayo (Render Network): Excellent. And Edward, do you want to give an intro?
Edward Katzin (Jember): Thank you, Silvia. Yeah, this is Edward. I'm the CEO and founder of Jember. What we're doing at Jember is enabling the implementation of AI workflows in highly regulated industries. So the topic of trust is top of mind for us. We deliver something that's called chain-of-trust choreography, which plugs into tools like what Nexus is building, taking zero-knowledge proofs and other forms of authentication and being able to leverage them in context. And then, when implementing that in industries like financial services, making sure that everything adheres to regulatory requirements but also still honors full decentralization and privacy. And so we're incredibly excited to be back talking to the Render team.
13:28
We've built and deployed some initial work on Render, and we're so excited to see how this is evolving.
13:36
Silvia Lacayo (Render Network): Thank you so much. Taco, I think you were going to, last but not least, have Eric up, right?
Taco (Manifest Network): I see. I'm waiting for him to join. He is actually out in the middle of Idaho right now securing another data center, so I don't know if he's able to connect or not. I know that's one of the hard pieces we're waiting on. It's weird how we do things. We just go out in the middle of the woods to get a data center
14:06
to bring it online. Let’s roll right into it. Silvia, do you want to sort of cover the topics and then we’ll go right into the questions?
Silvia Lacayo (Render Network): Absolutely. And as far as being out in the field, this is DePIN after all, so no worries. All right, we're going to talk about the convergence of AI and privacy as app developers build models, agents, and tools. We're also going to get deep into models. The spaces we've held before, and the conversations that you may, you know,
14:36
participate in throughout different spaces and panels, don't always go deep into the model. So we're going to go a little bit deeper today, and that's exciting. We've got a lot of technical superpowers here on the panel, so looking forward to that. We're going to talk about why privacy and decentralization principles work hand in hand to protect users' data. We're going to spend the bulk of our conversation talking about real-world use cases, of course: the uncapped potential of decentralized compute and decentralized applications,
15:05
and how they can power a large swath of applications that builders may not know are possible. Specifically, we'll get into how to unlock capabilities for AI builders, or builders who are going to be taking advantage of AI workflows, from a cost-efficient fine-tuning standpoint to scalable inference without vendor lock-in, and much more. So with that, I think we can go ahead and start. Paul, you are our first victim. I would like for you to start
15:33
with an overarching view here: why are decentralized networks actually really good for us to focus on now? Why is now a good time to accelerate decentralized model training? And then you've previously talked to me about routers, which I found mind-blowing and fascinating. So I'd love for you to dive into that as well.
Paul Roales (RenderLabs): Yeah. Yeah. Thank you. You know, there are kind of two layers that I've been thinking about here this morning. The first layer is
16:02
kind of the high-level sort of why: decentralized being more cost-effective, being able to tap into the global scale of GPUs around the globe instantly, being more secure. And then I think about the immediate API developer-experience level, which is that the technology is finally here to train models in a decentralized way, and to
16:32
tap into the global routing and utilization of models. And so I think there are very compelling reasons that match up with products. At the top level, again: huge cost advantages, huge scale advantages, security advantages. If you're a decentralized sovereign agent and you want to be hosted on a decentralized sovereign runtime and ecosystem, you don't want to be hosted on AWS.
17:00
Matching up those things can sometimes be critical for a customer, but sometimes they can also be very nice additions and very helpful as well. So when we're talking about the technology finally being here, I think that's why now: we're seeing very exciting developments constantly, like the ability to train models at scale on a decentralized network
17:27
and have those models be very, very competitive with, or even better than, a lot of other open-source models. So that's very exciting to see. And then on the inference side, there's the ability to tap into very low latency on a global basis, because you can automatically detect and understand the latency to all your different nodes. What are their cost endpoints? What models are they hosting? What performance do they have? You can really get a tight fit
17:56
between your CPU, your GPU, your storage, your network, latency, everything, and the workload, and then get even better performance on that end node you end up getting placed on. And so, yeah, that touches on the router piece, which I think is very, very interesting: as you're thinking about inference workloads, you think about many different trade-offs. One of those is,
18:24
how do I get this back to the user the fastest with high performance? Another may be, how do I make this fit a certain financial model? If you're giving away free usage on the front side, how do you make that work economically? But then also, how do you have good uptime? Routers sit in between all of those and help you think about
18:51
doing that trade-off automatically. If a service falls down, if your primary host has an outage, a router allows you to automatically fall back and pick up a different provider, so the user never notices. It allows you to pick up cheaper inference when that comes online. And so, yeah, I think it's a very exciting time in decentralized infrastructure like Render
19:20
and decentralized networks like the ones everyone on this space has. It makes a lot of sense on cost, scale, and security, and it makes a lot of sense just because the tech is finally there and it's enabling a lot of these exciting things.
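A minimal sketch of the fallback-routing pattern Paul describes, assuming hypothetical endpoint names and costs; a production router would also weigh live latency, model placement, and uptime telemetry rather than a fixed list.

```python
import random
import time

# Hypothetical inference endpoints, cheapest first. A real router would
# discover these dynamically and refresh cost/latency/model metadata per node.
ENDPOINTS = [
    {"name": "decentralized-node-eu", "cost_per_1k_tokens": 0.0004},
    {"name": "decentralized-node-us", "cost_per_1k_tokens": 0.0005},
    {"name": "fallback-cloud-host", "cost_per_1k_tokens": 0.0020},
]

def call_endpoint(endpoint: dict, prompt: str) -> str:
    """Stand-in for a real inference call; fails randomly to simulate outages."""
    if random.random() < 0.2:
        raise ConnectionError(f"{endpoint['name']} unavailable")
    return f"[{endpoint['name']}] completion for: {prompt!r}"

def route(prompt: str, retries_per_endpoint: int = 2) -> str:
    """Try endpoints in cost order and fail over on errors, so the user never notices an outage."""
    for endpoint in ENDPOINTS:
        for attempt in range(retries_per_endpoint):
            try:
                return call_endpoint(endpoint, prompt)
            except ConnectionError:
                time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying or failing over
    raise RuntimeError("all inference endpoints exhausted")

print(route("Why decentralize inference?"))
```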
Silvia Lacayo (Render Network): Got it. So if I had to summarize, we're literally routing outputs back faster with lower latency, but also customized for specific use cases
19:43
and protected from single points of failure. Plus, of course, more affordable. Is that fair?
Paul Roales (RenderLabs): Absolutely. Absolutely. That is a very compelling summary. Thank you.
Silvia Lacayo (Render Network): Yeah. Mikey, I would love to get your take on this because you’re working on some exciting things. So take it away.
20:03
Mike Anderson (THINK): Yeah. So right now, if you've been paying attention to what Mark Zuckerberg has been talking about, he's literally creating server farms that are as big as the island of Manhattan; "at scale" is what he just said. And they're doing it inside of tents, because they're like, we can't wait as long as construction and permitting and environmental regulation take. The squeeze on this industry to figure out how to get more FLOPs out of AI inference and training is just unparalleled. And right now,
20:32
because Render started by actually doing a single task that can be mathematically discrete and can be proven, they actually have this network that's already sitting there and ready to go. What I see right now is that, because we're building agents, our agents need to have access to GPUs that nobody's watching and surveilling, that we don't need permission to use. People want to have their own private, surveillance-free AI system.
20:57
And so right now, I believe that we're right at the edge of that Bitcoin/Ethereum moment. What I mean by that is, when Bitcoin and Ethereum came out with an economic model of proof of work, they got more GPUs, more compute power, than all of AWS and Azure combined, and built the biggest computer network in the world, because the incentives were aligned and there was a reason to do it. And right now we're getting to the point where we have verified inference and verified compute on top of these platforms.
21:27
It's just a game changer, and the money in the market is there to even do things like put GPUs in tents. I believe they're going to accept decentralized AI soon, because every single FLOP is going to be needed.
Silvia Lacayo (Render Network): Your pulse on big tech moves is always on point and always relevant, so thank you so much for that. I actually want to come back to Paul and Mikey in a moment because I want to dive a little bit more into models.
21:54
But I want to hand it over to Trevor now because I want to ask about hyperscalers. They obviously have years, if not decades, of experience under their belts. But what we do onchain, in the onchain circles, is something that they don't really know, and a lot of builders don't really know about. So I would love for you to talk about that. When you were on the Permissionless panel about DePIN and AI, the confluence of those two,
22:24
you mentioned a report that I thought was really, really interesting and very applicable and relevant to this discussion.
Trevor Harries-Jones (Render Network): Right, sure. Thanks. Not a problem. Well, for me, what was a centralized-only architecture just a year ago is rapidly evolving to allow both centralized and decentralized AI. If you just look back over the past year, we saw the emergence of DeepSeek, proving you didn't need hundreds of H200s to train an intelligent model.
22:53
And more importantly, jumping out of that, the concept of test-time learning, which is inference in parallel. That ideally suits Render's architecture, where we do frames in parallel. That was a great first milestone. Earlier this month, Moonshot's Kimi K2 model really built on that. What was exciting about that model is it's actually a mixture of experts. There are 32 billion active parameters, and
23:22
they actually split it into 384 different sub-experts, which have exponentially smaller footprints. I love that type of architecture because it shows a real progression from centralized-only towards more and more decentralized. When we talk to our compute partners, they've identified a number of models they use today which can be run on consumer nodes: quantized models like Flux Kontext and Hunyuan Video.
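To make the mixture-of-experts point concrete, here is a toy sketch of top-k expert routing in plain Python. The 384-expert count comes from the Kimi K2 discussion above; the top-k value and the gating logic are generic illustrations, not K2's actual implementation. The decentralization angle is that only a handful of small experts activate per token, so each expert's footprint can in principle fit on a consumer-class GPU.

```python
import math
import random

NUM_EXPERTS = 384  # per the Kimi K2 discussion above
TOP_K = 8          # experts activated per token; illustrative, not K2's exact config

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(router_logits: list[float]) -> list[tuple[int, float]]:
    """Pick the top-k experts for one token and renormalize their gate weights."""
    ranked = sorted(range(len(router_logits)), key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:TOP_K]
    weights = softmax([router_logits[i] for i in chosen])
    return list(zip(chosen, weights))

# One token's router scores; in a real model these come from a learned gating layer.
logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
for expert_id, weight in route_token(logits):
    print(f"expert {expert_id:3d} -> gate weight {weight:.3f}")
```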
23:50
So we're starting to see just this transition. Finally, the paper you're referencing was a Pluralis paper on decentralized consumer training. That was really exciting, because they achieved the same level of performance on decentralized nodes in a really innovative way versus the way models were trained before. So I look at the blanket of architectural updates, and
24:17
you see decentralized training, you see small-model inference and test-time learning, all progressively unlocking what we have here potentially at Render. And the hyperscalers very much are not looking at this as a decentralized or onchain problem. They're looking at it as a traditional cloud infrastructure problem. And we honestly believe, talking to partners like some of the ones you see here today, that
24:45
it's much more than decentralized AI. It's onchain AI that will critically unlock a different modality, and one that just can't be matched centrally.
Silvia Lacayo (Render Network): Trevor, is it fair to ask, you can say no if you don’t want to talk about this, but that these hyperscalers are focused on a certain type of GPU and they’re ignoring sort of an entire different segment that we’re focused on?
Trevor Harries-Jones (Render Network): I mean, by nature of them being businesses,
25:13
they're going to focus on enterprise GPUs, not consumer GPUs. And very much the shortage, and the focus across the globe on these larger projects, is around the high-end enterprise GPUs, so the H200s or the Blackwells. They do trickle down; there are some less performant GPUs amongst all of these folks. But there is an absolute magnitude more of
25:43
similarly performant consumer GPUs at that lower end that are idle and available as this architecture finally meets up and unlocks really those idle GPUs.
25:57
Taco (Manifest Network): Trevor, I was just at the Ray Summit in Paris, and it seemed like more of a hyperscaler environment, like an industry-backed conference. Everyone had GPUs. Everyone had high-performance GPUs, but they're doing it at such a cost discount, it seems, to try to get a customer, and they almost only want the $100 million loads,
26:24
not the small loads and stuff like that. Does that trickle down to the smaller, not even hyperscalers, but the smaller enterprise solutions?
Trevor Harries-Jones (Render Network): Yeah, it kind of depends on who the customer is that's using Ray. But one of our original compute clients, io.net, really centered their architecture around Ray, and they have shown on their side that it is possible to use Ray to essentially route
26:53
jobs across enterprise and consumer devices. The challenge hasn't really been around the routing, but actually the architecture, the model sizes. And that's where we're starting to see real differences, when you think about these sub-agentic or expert models, and when you see the quantized models being intelligent enough. That, I think, unlocks it more than just Ray as a routing
27:22
wrapper to Python.
27:27
Taco (Manifest Network): Awesome. Yeah, no, that touches on a lot of pieces. Before we move on to the next piece, I want to introduce the latest person to join the panel. Eric, could you introduce who you are and what you do?
Eric Bravick (Manifest Network): Yeah, hi, nice to be here. So, Eric Bravick, CEO of Manifest Network and Lifted Initiative. We do a lot of
27:56
stuff around infrastructure as a service and DePIN, GPUs, all the same stuff a lot of these guys are doing. Yeah, so most of you already know me, so I’ll just stop there and Taco, if you wanna know anything else, just let me know.
28:20
Taco (Manifest Network): Will do. We've got some deep questions for you. Next up: Mikey. AI agents and applications seemed to get all the praise when they first started, and they were just chatbots. As we're starting to see higher-utility apps, what do you see as the next step beyond conversational AI bots? And not only what's the big unlock, but what's the big blocker?
28:51
Mike Anderson (THINK): Well, I think people have, you know, just wildly unrealistic expectations about how fast consumers change, and even consumers like B2C consumers, how fast people are ready to change the business that they're in. And so right now we're at the stage where the technology is there, but you do still have to learn it. The agents can't, you know, self-learn yet. There are a lot of people talking about how this latest generation of models is now helping improve themselves. So
29:20
we can kind of expect we're maybe one or two generations of model training and the agent stack away from being able to just expect that it works. Right now it's kind of like if you were training an intern, right? But it's a really smart intern. Maybe they went to Harvard or something like that. But you actually have to learn how to set up their job description and how to plug them into all the enterprise systems that you would want to use. And each one of those pieces is moving faster and faster, but
29:48
we're talking about it's been, you know, a few quarters since agents have really been an idea. And so it's going to take quarters for people to have what they're expecting, these magic agents that can do their job for them. But it is happening right now. We've got a builders club, about 50 builders that meet each week, that are using our backend system to start creating agent workflows, and that's just how we're putting our
30:15
learning process into practice. But what I always tell them is: hey, because I was there in the early days and I was writing HTML when, you know, JavaScript started becoming part of websites, I understood it better, right? I understood because I was there at the beginning and I was there for the progress. And that lets me understand tech at a very different level. So by being here now, learning how to actually build agents, that puts you ahead. And the goal right now is to be ahead. It's kind of like that story:
30:42
if you come upon a bear, it doesn't matter how fast you are; it just matters that you're faster than your friends. Right now in this world, getting your agent, understanding how your agent works, and learning how to map that to your real-world expertise: that's where you want to be, and that's where we're helping people get.
Taco (Manifest Network): You sort of landed right where I was going to head next. We're seeing a lot of new specialized agents coming out. And at least on my timeline,
31:10
all the agents being produced and pushed out by different companies seem to be agents to build you a new business in some way, shape, or form. Is this the current trend, or what's the next trend?
Mike Anderson (THINK): Well, I think people are looking for the business model, right? They're like, okay, well, how are we going to fund this? Because, as you can tell by how much these pay packages are for people at Meta, everybody's like, hey, we need to invest in AI. But how do you invest in AI exactly? Because it's moving so fast,
31:40
there's a good chance that your bets are going to be off. And so I think we're seeing a lot of people trying to experiment with the different needs: you know, content creation for marketing, marketing itself, sales automation, customer service automation. In every single one of these verticals, people are competing kind of in old-school Web2 ways, and I just think that we're going to get to the end of this cycle and pretty much there's going to be a small number of winners, and most of those other folks will have
32:10
contributed by being lessons learned and innovations in UX. But I think right now what we're really looking at is the big tech scalers. And then I think there are maybe one or two clusters of decentralized groups that are starting to build full stacks. And with these full stacks, it makes sense that you're not going to be quite as fast as the scalers. But you can see the pieces coming together: as Render pulls their pieces together, as we get the open-source
32:39
agent-building stack through THINK Agents, as we get deployment and enterprise scale through Manifest, all these pieces start fitting together. And that's when we really start out-competing and start winning, first, the early-adopter markets. But, you know, Render already proved that you can get trust at the enterprise level when you deliver a better product. And I think that's what we're waiting for: just that better product.
Taco (Manifest Network): All right. Nice. Speaking of better products, and
33:07
going from better products to the boring side of things: Ed, where's the biggest commercial opportunity for AI and workflow automation? We're seeing a lot of advanced tech use cases; big data was the big thing for a while, but now we're starting to see businesses trying to reduce hours. Not only to touch on compliance audits, something you specialize in, but where's the offload of the workload for mundane but essential tasks?
33:38
Edward Katzin (Jember): Yeah, great question. And I love how you introduced me as the guy to talk about the boring topics. I'm happy to dive in. So, with that context, and agreeing with everything that's already been said on the panel: the big trend that we're seeing, and what's impacting the world of Web2 and traditional finance,
34:04
is that they're all very interested in what's happening with AI. Everyone's had the AI-or-die moment. The other thing they're very interested in is decentralized technologies, and the convergence of AI and distributed ledger is creating an immense amount of interest. But the challenge is, if you're running highly compliant workloads, and I think this was brought up earlier, you can't do it all on a permissionless chain. It has to be permissioned. You have to honor data privacy, and you have to honor data sovereignty and data security.
34:33
Edward Katzin (Jember): So to finally answer your question, Taco: the big trend that we're seeing right now is that we're moving on from standalone agents, agents that have very fixed instruction sets or fixed capabilities. Leveraging everything that's being released behind MCP, leveraging the A2A protocol and the other interoperability that's showing up, what we're able to build now are agents that can actually invoke autonomous decision-making.
35:00
And in order to enable that autonomous decision-making, you have to have not only a generative front end, but also a very high-integrity set of deterministic backends. If you get old school, it's just a rules engine, but you need to know that the AI workflows are following the rules in the decision-making. The other thing that we're seeing is multi-agent systems. Where things were starting out, and Mikey was talking about how these things have been evolving over quarters: if we look back even just a year ago,
35:29
it was discrete, standalone, agent-enabled workflows. And now what we're building and deploying are multi-agent systems that have to interact, coordinate, share context, share results. So the workflows are getting much more complex. And then, since you brought it up, in the decentralized world we have to have decentralized audit coordination: how you can prove data lineage, how you can prove adherence to all the data privacy and data handling requirements.
35:57
And then, like for the EU AI Act's transparency requirements, we have to be able to prove what went into the models. You can't prove exactly how they process; everyone knows that problem. But you can prove what came out of the model, and then how the output of the model was used by the agent workflow. That, to us, is key, and it gets to the next topic of this panel: it leads us to trust and accountability.
36:21
And so what we're seeing is that even though you've got vibe coding, even though you've got this ability to quickly stand up and deploy agents with all kinds of tools now, whether it's Cursor or Lovable or any of the others, it's getting back to the old-school, I'll call it computer science, and the old-school information security architectures that are going to build the kind of stack that Mikey's talking about. That's what we're really excited about, and those are the kinds of autonomous decision-making agent workflows we've built.
36:51
Taco (Manifest Network): Yeah, and so, you circled around it a lot, but you didn't actually say the word compliance. Compliance is a big piece of a workload: making sure a document from A to Z is a central source. So can you talk a little bit about how agents are working, or how people are managing agents, to fit within compliance models?
Edward Katzin (Jember): Wow, yeah, this is a great question and I’ll try to keep the answer short because I could talk about this for days, but.
37:20
You know, I would love to talk to the Nexus team about this as well. But what we've seen is that it's absolutely essential to ensure auditability. Auditability requires knowing exactly where the agents came from: basically creating an equivalent of KYC or KYB. Instead of know-your-customer, it's know-your-agent. So it's being able to identify and authenticate the agent, and these are agents acting on behalf of humans.
37:49
Edward Katzin (Jember): So those agents have basically the equivalent of a power of attorney, if you look at it in a commercial context. From a compliance perspective, you're giving these agents incredible commercial and economic power, and we need to be able to prove everything that they did and the context they did it in. That gets down to literally getting back the telemetry data and being able to get the metadata on which gateways were invoked, the IP addresses, MAC addresses, all the typical stuff you'd see in a log. But then it's putting that in context.
38:17
I'll pick a financial transaction as an example for compliance. When we enable financial transactions, especially in a decentralized environment, we need to know who all the transactors were, down to the wallet addresses. We need to know the exchanges they're transacting on. We need to know what kind of transaction it is: is it a cross-border money transaction? Are they going crypto to fiat? What are they doing? And then, based on the context of the transaction,
38:43
we need to know what regulatory laws apply. Is this taking place in the US? Is this taking place in the UK? Is this taking place in Hong Kong? That applies the jurisdictional context. So now we know these are the data privacy rules, these are the AI laws and regulations that have to be applied. And then what we enable is the logging and tracking of all of that data, every step in the agent workflow. And then we write that down: we encrypt it and write it down to the chain.
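A toy illustration of the per-step, tamper-evident audit logging Edward describes. The field names are hypothetical, and a real deployment would encrypt each payload and anchor the hashes on-chain rather than keep a Python list in memory.

```python
import hashlib
import json
import time

def append_entry(log: list, event: dict) -> dict:
    """Append an entry whose hash covers the previous entry, forming a tamper-evident chain."""
    body = {
        "timestamp": time.time(),
        "prev_hash": log[-1]["entry_hash"] if log else "genesis",
        "event": event,  # in production, encrypted before being anchored on-chain
    }
    serialized = json.dumps(body, sort_keys=True)
    body["entry_hash"] = hashlib.sha256(serialized.encode()).hexdigest()
    log.append(body)
    return body

def verify(log: list) -> bool:
    """Recompute every hash; editing or reordering any entry breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        serialized = json.dumps(body, sort_keys=True)
        if hashlib.sha256(serialized.encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

audit_log: list = []
append_entry(audit_log, {"agent": "kya:agent-7", "action": "fx_transfer",
                         "jurisdiction": "UK", "gateway": "10.0.0.12"})
append_entry(audit_log, {"agent": "kya:agent-7", "action": "settlement",
                         "jurisdiction": "UK"})
print("chain intact:", verify(audit_log))
```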
39:11
And then that's available to create a decentralized audit trail. That's how we're solving the problem of enabling, you know, traditional Web2 finance companies to extend their workflows into decentralized environments, and not just their workflows, but also real transaction processing: literally taking their money movement and putting it on these decentralized transactions, for what most people on this space would call a traditional Web2 bank. I don't know if that answered your question, Taco.
39:38
Taco (Manifest Network): It does. I always love talking to you, and then I have a thousand more questions. And I see a hand from Mikey.
39:41
Mike Anderson (THINK): Yeah, I was just going to say, I was speaking to a group here in Seattle, a bunch of the Microsoft and consultant people in their world. We were talking about the transition to AI, and even people in these corporate jobs, even at the companies that are at the leading edge, don't really understand what's coming. And when we were talking, they're like, well, what does this really look like? And I said, well, you know exactly what
40:11
managing agents looks like, because that's exactly what a corporation is. A corporation is a legal construct to turn people into agents that are aligned to return shareholder value. And we basically built our entire civilization around this concept. So when you feel like, hey, I feel inhuman in my job because they're treating me like I'm a cog in a machine: that's what we mean by agents. And so as we're teaching people how to use agents, a great way to say it is,
40:35
hey, look at how you would structure a company. So we've got a group of people that are all creative, really high-end designers, storytellers; there's actually even a group that's putting together a full movie with Pixar and Lucasfilm folks. And I'm saying, what does it look like on the film set, right? You've got a director, you've got your director of photography, you've got your script, all these different roles. As you're building your agents, map what's currently being used,
41:02
because that's going to help you think about agent coordination towards a goal. And we're seeing these pieces start to come together. So, for instance, in our backend editor, you can build all these agents and you can actually make the flows happen together. We're hoping that sometime this year all those flows get written to the chain as well, so that you have full auditability. These are the types of things we should be thinking about: we're literally mapping what we did onto agents. And then at some point in time soon,
41:30
those agents are going to look at all these agents we're building and figure out much better ways to do it than the humans have overall. I just want to thank you guys for having me here. I've got to jump into another call that I had already agreed to, but I just really appreciate this group. Such a great group of people.
41:45
Taco (Manifest Network): Mike, thank you so much for taking the time to be here. You bring a lot to the table. So everyone, just a quick reminder: please give everyone on this panel today a follow. The IQ alone of this panel is astronomical, and everyone here is taking a moment out of their day from what they're building, because everyone is here actively building things to make the world a better place. And Mikey sort of said it in the way we all think of it: you know, cogs in the machine. But the people on this panel are working to build things that work for you, so you're not the product anymore. Silvia, where are we at next?
42:34
Silvia Lacayo (Render Network): Yeah, I actually want to come back to the quote-unquote boring applications in a few minutes to get Trevor's take, and really everyone else's, because I think we'd all agree that the boring problems are where the largest business opportunities are. But Ed had started talking about trust-improving systems and how data outputs need to be secured from a compliance
42:59
applications perspective and, of course, financial transactions. But there's a broader application opportunity here. So I want to go to Sam next. Let's talk about privacy and AI in the context of decentralization. I think we'd all agree that in some ways Web3 is the opposite of privacy-preserving, if you think about the way that anyone can look up anyone else's transactions onchain, outside of maybe
43:27
bundled, dark-pool types of transactions. That's the way the system was designed. And by nature, distributed networks aren't necessarily set up for SOC 2 Type 2 or HIPAA compliance standards. But zero-knowledge proofs are a mechanism for helping to preserve privacy while meeting the requirements of certain protocol applications to unlock value. Talk to us about how we should think about this in practical terms for AI developers considering decentralized networks.
43:58
Sam Judson (Nexus Labs): Sure. So the first thing I'd say is, you know, where developers should start. And it's very trite, because it's the answer of where developers should always start when they think about how they introduce cryptography and privacy into systems: thinking about trust, right? Cryptography is fundamentally about mitigating the need for trust. And so what does your environment provide in terms of mitigating trust relationships? If you're working in a decentralized context, you're going to get permissionless consensus. But as noted, you're not going to get privacy most of the time.
44:28
And thinking concretely, especially when you're thinking about something as complex as AI and decentralization and all of the moving parts: fundamentally, what is the particular trust relationship that I am trying to mitigate? And so you can walk through these. We just heard at length about how auditors are one of the biggest ones, right? Auditors, regulators, third parties who are interested in confirming
44:57
that your agents, your systems, are behaving appropriately. You can pull a technology like zero-knowledge verifiable computation and apply it to that problem, to try and make it so that they don't need to trust you. Instead, you can affirmatively prove to them the correctness of your behavior, the correctness of your executions, within the broader decentralized environment.
45:25
But this is a broader pattern. It's not just about third parties like auditors and regulators. So, you know, the second parties in an economic system: your counterparties, maybe even your competitors. There's a lot of interest, right, in creating environments that require participants in the system to establish their correctness using cryptographic techniques. And even further down the line, right,
45:55
when you're talking about ML and AI in particular, there are a lot of sources of supply, in some sense, right? So a big one that we've been talking about is supply of compute. Another big one is supply of data. And similarly, there are trust relationships that you might want to mitigate there. So when you're talking about compute, you might want to use, again, verifiable computation to make sure that the nodes that are supplying compute are doing the right thing. You might want to use confidential computing, you know, TEEs, MPC,
46:24
to guarantee that the nodes that are supplying the compute don't get access to your model weights or don't get access to your inputs. If you're talking about the data side, the trust relationships are probably flipped. You might have somebody who says, I'll give you data to use to train your model, but I want certain guarantees about my privacy. So maybe I need some sort of federated learning or differential privacy, or I want to make sure that you're
46:48
not misusing it, so I want you to do verifiable computation to show that you're handling my data correctly. So yeah, it's kind of trite because it's not specific to AI, it's not specific to decentralization. But for developers, the first thing they need to think about in practice is: what is the actual problem that I'm trying to solve? Who am I trying to not trust? Who am I trying to establish to that I'm doing something correctly? Whose data or information am I trying to protect? And then there's this
47:17
suite of cryptographic tools that have been developed that you can then turn around and actually apply. And oftentimes they compose extremely nicely, so that you can solve multiple of these problems at once.
47:38
Taco (Manifest Network): You touched a little bit on the privacy and ownership of inputs. Yet we're also seeing a large-scale sort of attack on the outputs of AI being stored, you know, with OpenAI, all outputs being stored in a separate database outside. What do you think is more damaging, or more privacy-sensitive: the input or the output?
47:57
Sam Judson (Nexus Labs): I guess I'll chicken out and say that I think the answer is incredibly domain-specific. And the reason why I think it gets very domain-specific is because I think
48:12
there's sort of this intrinsic bias to say that it's kind of the inputs, in some sense, because if I get your inputs and I know what model you used, I can probably, modulo the non-determinism of models, turn around and get the outputs. But then it also depends on what risk you're concerned with. Maybe that's where the competitive risk is. But if you're thinking about risk more from, like, a
48:40
whistleblower or auditing situation, then if the outputs are what's actually driving what you're doing within some system, or driving what your agents are doing, then there's maybe more risk on that side, because those outputs become the inputs into what's actually executed in a broader economic context. I think it's hard to say conclusively one or the other. It's going to depend on the domain. It's going to depend on your risk model.
49:09
But there are, I think, again, cryptographic techniques that can help you on both sides. The main thing is making sure that you have everything set up appropriately, and the cryptographic and privacy-preserving elements of your system integrated at the right level, so that you're actually getting the security guarantees that you're hoping to provide for yourself, or for whoever has given you data, given you compute,
49:35
or is relying on your outputs; whoever it is that you need to be behaving correctly and in a trustworthy way towards within the broader system.
49:46
Silvia Lacayo (Render Network): Sam, are there any applications that have come across your virtual desk, I guess, that have surprised you as you’ve been working on this with Nexus Labs? And if not, if nothing’s super surprising, is there one use case that’s further ahead than others from the ones you’ve mentioned?
Sam Judson (Nexus Labs): So it's funny. I wouldn't say there's all that much that's surprised me. And I think the right reason for that is that zero-knowledge proofs, and more broadly this notion of verifiable computation, have been around a very long time in an academic context. And in an academic context, people love to dream up applications of this stuff; they write the paper where they have the nice, you know, paragraph blurb of: here's the problem that we're going to solve with this thing.
50:41
And so the problem space for ZKVMs in particular is less about finding applications. I mean, obviously things change over time, and so oftentimes you have to remap to new environments, with decentralized environments obviously being a really big one. But it's really more about the practical utility of these tools: how do you actually build something that can be integrated effectively into your software stack? In terms of applications that I think are further ahead,
51:11
I mean, there are sort of two ways I think of viewing that problem. One is you can look at these Web3-native applications, say a rollup or something like that, where, just because you're operating in an environment that is crypto-native, in both senses of the term crypto, there are
51:37
very few barriers to entry in terms of introducing the cryptographic technology, beyond just the technology being effective itself. And so you're seeing a lot of effective real-world deployments of the technology already. And then on the other side, you can flip and look at the more traditional, sort of enterprise-y applications that are more ripe for the taking than maybe they've been in the past. One of my favorites, I actually wrote a blog post about
52:06
vibe coding the client side of this: there's this provision in HIPAA, the US federal healthcare privacy law, around de-identification of medical data, where if you de-identify medical data by removing these 18 particular types of information, it allows you to freely distribute the resultant, quote-unquote anonymized, data set
52:35
without taking on significant liability risk, because the law itself basically says: if you've done this, we give you the benefit of the doubt that you've properly anonymized folks' data, and therefore they can't be harmed by its release or distribution. This is a really nice application of verifiable computation, because you can do the dataset cleaning within the enterprise, inside the ZKVM, and you get a really succinct, privacy-preserving proof that you can attach to this anonymized data set, one that establishes its compliance and can be carried along with the data set. So: oh, hey, somebody sent me a data set. I don't have to worry that I suddenly got PHI that brings some legal risk with it. I have this proof that I can quickly verify that this data set was produced by passing it through a HIPAA-compliant de-identifier.
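A heavily simplified sketch of the de-identification step Sam describes. The field list abbreviates HIPAA Safe Harbor's 18 identifier categories, and the record shape is invented; in his scheme, this function would run inside the ZKVM so that a succinct proof of its correct execution travels with the output.

```python
# A few of HIPAA Safe Harbor's 18 identifier categories, abbreviated for illustration.
SAFE_HARBOR_FIELDS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "ip_address", "photo",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen dates to year only, per common Safe Harbor practice."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    if "birth_date" in clean:
        clean["birth_year"] = clean.pop("birth_date")[:4]
    return clean

records = [{"name": "Jane Doe", "ssn": "000-00-0000", "birth_date": "1984-03-07",
            "diagnosis": "J45.40", "zip3": "941"}]
anonymized = [deidentify(r) for r in records]
print(anonymized)
# In the scheme Sam describes, this loop runs inside the ZKVM, which emits a
# succinct proof that every published record passed through deidentify().
```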
53:32
So that's not a decentralized example, although it is kind of an AI example, because obviously you're thinking there about where we get data and how we safely manage data that we can feed into ML systems. And I think there are a lot of, quote-unquote, old-school applications of the technology like that, which we're now in a position to make much more practical, to just take code off the shelf and run it, because we have environments like ZKVMs
54:00
that provide verifiable computation without requiring any sort of custom cryptographic engineering on the part of developers.
Silvia Lacayo (Render Network): So it’s actually comforting to hear that nothing here is surprising because I think we can tie this back really nicely to those boring opportunities we were talking about before. So I’m going to make my way back there to some of the panelists who first talked. And by the way, I don’t think 3D rendering is boring at all, but it’s real and it’s tangible. So we’ll get back to that.
54:30
This actually feels like a really nice time to talk to Eric and have him explain to us how Manifest enables trusted execution environments and trusted sovereign compute for the entire stack.
54:50
Eric Bravick (Manifest Network): Yeah, happy to talk through that a little bit. Great thoughts from the panel so far. There's a lot to follow up on there, actually.
54:59
And maybe I can weave this together with what Manifest is doing, because it gets a little bit complex here, but I'll try to move it along even so. Yeah, so we started at the very fundamental bottom of the infrastructure layer and asked: how could we build sovereign AI compute
55:29
for the future, you know, kind of skating to where the puck was going to be rather than where it is now. So we started at the bottom. And a lot of these problems, and let me fork just for a second here: a lot of these problems that we've been talking about, we tend as humans to talk about in one dimension at a time, meaning that we describe,
55:58
say, a ZK solution for ensuring that a workflow works correctly. That may touch multiple dimensions, but usually in the real world, in a business case, there are other dimensions that need to be considered at the same time. So, for example, I love the example of the PHI
56:27
space, because that's a space we actually have some customers in, and we've worked in that space for a while. The issue with the entire PHI debate is: I can create a ZK system that protects your data. I can create all of that exactly as it was just described. But still, when it comes to targeting an individual person
56:57
and re-identifying them, I can use all these same AI tools to re-identify them again. So there's a multi-dimensional problem to this space. So when we started thinking about sovereign compute, we wanted to start at the bottom, work our way up, and deal with the multi-dimensional nature of the problem.
57:26
We knew that we would have very strong partners in decentralized infrastructure. We knew we would have very strong partners in ZK, et cetera, et cetera, et cetera. We wanted to be the meeting place where all of them could come together. All of the stacks could come together on trusted compute. And fundamentally for us, that starts with an on-chain proof of authority.
57:57
So we're one of the few proof-of-authority chains out there. When you look at our layer zero, which is all the fundamental infrastructure, GPUs, CPUs, memory, disk, everything that you would normally get from a cloud provider, that's all run by proof of authority that is stamped in an L1, which is on our chain. So we start with the concept that
58:27
you as a user establish your cryptographic identity, you establish your keys, and then you establish a proof of authority over a group of computation. That allows you to do things like what Jember is doing, which is taking in complex multi-dimensional workflows
58:55
where some of that data is highly private; that needs to go inside your POA on Manifest, where you control the hardware. And that can go to the extent of you literally having the hardware physically under your control as well. And then parsing that out in a router format and, say, in the example of Jember, sending the non-private workloads out to Render, right?
59:24
So these problems are multi-dimensional. So we start with a POA. We also support multiple forms of trusted execution environments. We have partners and are gaining more partners all the time that are bringing their trusted execution environments into the stack. That’s a big strategy thing for us. We know we’re not gonna solve this problem entirely.
59:53
I think we’re partnered with almost everybody on this panel in some way or another, if not everybody. And we think that’s really our strength is we’re the meeting place for all the protocols to come together and solve multi-dimensional problems with sovereign infrastructure that they control through a POA.
01:00:21
And as I mentioned, even to the extent that if you’re seriously, seriously worried about control, and many of our customers are, they actually want a way to move the hardware into their data center, acquire the hardware themselves, run it through their own pen testing. But after that, they don’t want to manage the stack. They don’t want to actually manage it because that’s where all the labor is.
01:00:48
It's not racking and stacking and pen testing it; it's the operational labor over time. So when you add the Manifest stack to your hardware and establish a POA with it, you can actually run a Manifest data center, or a node or set of nodes or cluster or neighborhood, inside your own data center, and then just rent through the network what would be in your data center, for example, but you don't have to take care of the stack.
01:01:18
So there’s a lot more to that. This is a PhD level topic, so I could go on for days, but that’s fundamentally the summary.
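A toy sketch of the workload split Eric describes: routing by data classification between hardware under your own proof of authority and a public network like Render. The target names and the Workload shape are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    contains_private_data: bool  # e.g. PHI, key material, regulated records

# Hypothetical execution targets; in Manifest's model, the private target is
# hardware you hold proof of authority over, possibly in your own data center.
POA_CLUSTER = "poa://my-sovereign-cluster"
PUBLIC_NETWORK = "render://public-gpu-pool"

def place(workload: Workload) -> str:
    """Route by classification: private data stays under your POA, the rest goes out."""
    target = POA_CLUSTER if workload.contains_private_data else PUBLIC_NETWORK
    return f"{workload.name} -> {target}"

jobs = [
    Workload("patient-record-embedding", contains_private_data=True),
    Workload("marketing-render-frames", contains_private_data=False),
]
for job in jobs:
    print(place(job))
```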
01:01:25
Silvia Lacayo (Render Network): Eric, I don't know that you were specifically talking about this, but I just learned the term infrastructure-as-code this morning, I think. Yeah. So that's exactly, I think, how I think about it. The hour went by really quickly, and I would love now for us to lead into a lightning round.
01:01:46
Eric, let's just start with you. Any topic is fair game for any panelist, but let's wrap up thoughts. What would you like to leave the rest of us with? Real quick.
01:01:59
Eric Bravick (Manifest Network): Real quick. This is an amazing space and we're doing amazing things. I do want to caution, similar to Mikey, that it's vastly misunderstood what's known and what's not known, what capabilities are real and
01:02:16
which ones aren't yet. But don't let that get in the way of progress, for anyone listening. This is an amazing time to be doing this. Everyone should be trying to look at decentralization, crypto tech, AI, and the fusion of all of this. It is the future for a lot, a lot of things, even to control centralized infrastructure. So
01:02:42
I would just leave with a little bit of encouragement to say: get out there, build whatever you want to build, and try to use decentralized tech to do it.
Silvia Lacayo (Render Network): 100% agree. Let's work backwards.
01:02:52
Silvia Lacayo (Render Network): Sam, what are your parting thoughts for the audience?
Sam Judson (Nexus Labs): I think what I would say is that there's kind of this interesting position that cryptography has been in
01:03:09
with respect to its relationship to society for the last few decades, where a lot of the focus has been on surveillance, and a lot of the focus has been on how you preserve privacy against external threats. And one of the things that I think is really interesting about the
01:03:32
advancements in practicality for zero-knowledge proofs and some of these other programmable-cryptography primitives is that, instead of viewing the classic privacy-versus-accountability trade-off as privacy and accountability being in tension, they actually provide you ways to try and resolve that tension, by maintaining privacy while still giving
01:04:00
sufficient accountability, whether you're talking about a purely decentralized context, or about proving to an old-school government regulator that you're doing something that's compliant with some law that was written decades ago. And I don't think that's specific to decentralization. I don't think it's specific to AI. But the way that these fields are moving more and more economic and
01:04:29
communal activity into computation makes them the natural space to be an early adopter of that technology, to try and actually build systems that are both privacy-preserving and accountable and trustworthy for participants at the same time.
Silvia Lacayo (Render Network): Yeah, thank you for tying that together. And I would agree, of course.
01:04:47
Silvia Lacayo (Render Network): Ed, do you want to share your parting thoughts with us?
01:04:58
Edward Katzin (Jember): Yeah, I guess my quick parting thoughts are: I agree with everything that Sam, Eric, and the other panelists have said. A future of autonomous agents isn't a far-distant future; it's right around the corner. Autonomous agents are here. They are coming together as scalable multi-agent systems. They're unleashing incredible productivity. And what we've discovered is that in order to really deliver this in market, we need decentralized infrastructure. We love working with partners like Render.
01:05:27
The convergence of AI and distributed ledger, with zero-knowledge proofs and permissioned systems, is exactly where we are. And it's a huge opportunity. I couldn't be more excited to continue the conversation.
01:05:40
Silvia Lacayo (Render Network): Thank you for that. And by the way, we did not ask our partners here to name us, but we love the love anyway. So thank you. Trevor, anything you want to leave us with?
01:05:56
Trevor Harries-Jones (Render Network): Yeah, as I said, I'm really excited by these architectural discoveries that will unlock decentralized AI, and by what that means for us running our own compute network, and for these partners. But I just wanted to say: don't sleep on the rendering side. The team is still doing an amazing job and working tirelessly to grow that side. I'd point everyone to the monthly Foundation updates the team does to get a read on all the developments there and all the exciting stuff happening on this project.
01:06:23
Silvia Lacayo (Render Network): Yeah, we couldn't be more excited to share what's coming next.
Paul, take us away here with the final thoughts.
01:06:31
Paul Roales (RenderLabs): Yeah, no, I mean, we kind of started at the top-line business case this morning and then worked our way down through the stack, through many layers, and it's just amazing how all these layers are really coming together to provide some great new capabilities and exciting developments in the ecosystem. You know, when I spend an hour with our partners,
01:06:52
hearing about what they’re doing and building and how this all fits together, it just gets very exciting. So, yeah, thanks for the session today, everyone. Yeah, we look forward to continuing to work with you. If you’re out there in the community and you see how something you’re building fits into a piece of this, we’d love to work with you. So please reach out.
01:07:18
Silvia Lacayo (Render Network): Taco, last thoughts from you.
01:07:21
Taco (Manifest Network): I feel both smarter and dumber at the same time. I have a thousand questions for everyone, even the people I know. I really dig into the compliance side of things, especially around HIPAA, and the fact that people are working really hard to preserve that data in a privacy-preserving mode rather than on the public-database side
01:07:47
really excites me. This has been a great series so far, and I'm excited that we get to continue it. This is just a fun day. There's a lot of building, and I think Mikey says it a lot, Eric says it a lot, Ed says it a lot, Trevor says it a lot, Paul says it a lot: just go out and try. Don't be scared of these things that are out there.
01:08:18
You know, personally, everyone's like, oh, we use ChatGPT. I try to point everyone towards Venice for the privacy side of things. And we're starting to see models catch up: decentralized models, open-source models, starting to catch up to and exceed what these centralized models are doing, the ones that are just turning people and data into products. So "just go out and try things out"
01:08:47
is my biggest word of advice.
01:08:50
Silvia Lacayo (Render Network): 100% agree. I wish we had three more hours to talk about all of these topics, but I think keeping it short forces us to really be succinct. And also, it's an opportunity to have another one of these. So we look forward to, and in fact plan on, having our next AI-dedicated Spaces in two weeks. So stay tuned for that. Thank you so much, everyone. Thank you so much, panelists.
01:09:16
I came away smarter and hope that the audience did as well. I invite the audience, the listeners, to give us feedback: reach out to us if there's a topic you want to hear more of or guests you want to see, and let us know. With that, we are going to wrap it up. Thank you so much. Thank you, everyone.
