Last week at NVIDIA’s GTC (GPU Technology Conference), the premier GPU industry event, a number of advances in high performance GPU computing were announced that are central to the Render Network’s work in decentralized AI and DePIN. From the unveiling of NVIDIA’s RTX Pro 6000 Blackwell GPUs to updates across Render’s AI generation tools, compute API, and 3D rendering platform, it was a busy week for the network.
GTC 2025: AI’s Future Featuring Render
Setting the stage with a bird’s eye view of the future of AI, NVIDIA CEO Jensen Huang unveiled the latest advancements that are reshaping industries — from AI training and the world’s first open humanoid robot foundation model to developing the next generation of autonomous vehicles and the release of Blackwell RTX Pro 6000 GPUs.
As part of NVIDIA’s RTX PRO 6000 Blackwell GPU release, Render Network Founder Jules Urbach discussed how the new line of chips is built for a range of tasks central to the Render Network’s AI and 3D creation platform, including multimodal AI inference and immersive content.
Jules shared how the new Blackwell GPUs combine the precision of high performance GPU raytracing — used in 3D rendering and spatial computing workloads — with a chipset designed for high throughput LLM (Large Language Model) inference, including a game-changing 96GB of VRAM. Through this convergence of 3D and generative AI processing on a next generation GPU architecture, the RTX PRO 6000 is an important building block for the neural rendering workflows of the future, paving the way toward fully intelligent and immersive media.
“With 96 GB of VRAM, the new server-edition GPUs can run complex neural rendering models within OctaneRender’s GPU path-tracer, enabling artists to tap into incredible new features and tools that blend the precision of traditional CGI augmented with frontier generative AI technology.” — Jules Urbach
The Next Leap in AI & GPU Performance
Alongside the RTX Pro 6000, NVIDIA introduced an upgrade to its Blackwell series, the Blackwell Ultra GPU, built for AI model training and inference. Jensen Huang then closed his GPU updates with a showstopper preview of the Vera Rubin and Rubin Ultra ‘superchips’ releasing in 2026 and 2027 respectively, built to serve the increasing demand for agentic AI applications and services that are a key use case for the Render Network.
“The amount of computation we need, at this point, as a result of agentic AI, as a result of reasoning, is easily 100x more than we thought we needed this time last year.” — Jensen Huang at GTC 2025
Watch Jensen Huang’s full GTC 2025 Keynote
While NVIDIA continues developing cutting-edge AI hardware, the Render Network’s DePIN infrastructure makes these advances in GPU compute more accessible, coordinating global compute in an on-chain network, simplifying deployment with scalable on-demand GPU access, and providing an integrated creative toolset that combines the best of massively parallel GPU rendering, AI image generation, and inference compute.
Richard Kerris on Neural Rendering & RenderCon
Discussions of emerging uses of next generation GPU technology were led by Richard Kerris in a talk, “How AI is Transforming the Way We Create, Personalize, and Share Content,” exploring how AI is reshaping media production. From scriptwriting and music to CGI and digital rights management, he highlighted a range of uses where AI is making the media industry more efficient by streamlining workflows, improving content recommendations, and detecting copyright violations.
One of the standout moments of Richard’s talk was his discussion of Unification, a cutting-edge cinematic short created with the Render Network that showcases the power of next generation creative workflows. Kerris discussed how Unification combined high performance cloud GPU rendering, virtual production, and innovative machine learning techniques to push new creative boundaries that are changing how we create media.
“Highly recommend to check this out (Unification short), I will be there & I’m really looking forward to all the great stuff that’s going to happen at RenderCon.” — Richard Kerris on Unification and RenderCon
Richard concluded the discussion by mentioning that he and Jules will take the stage at RenderCon to further explore breakthrough technologies for the creative industries — including neural rendering, machine learning, and new IP and provenance tools integrated on the Render Network.
In a follow-up Telegram message to the Render Network community, Jules mentioned that the suite of next generation tools used to create Unification on the Render Network will be discussed with detailed behind-the-scenes looks at RenderCon — as well as avenues for further research, including:
- Live-coding real-time rendering tools on set with actors
- Testing AI video models from partners like Luma and Runway, alongside filmed 9K footage and CG render plates
- Creating hybrid pipelines to experiment with speed, quality, and AI-assisted storytelling
Render Talks on DeAI and DePIN Leading up to GTC
Setting the stage the weekend before GTC 2025, Render’s Paul Roales joined a Solana AI Summit panel alongside decentralized AI leaders Prime Intellect and Hyperbolic. The conversation focused on the potential for decentralized GPU compute and DePIN to usher in a wave of on-chain AI innovation — including fully decentralized model training, data curation, and massively scalable decentralized inference on peer-to-peer GPUs.
In tandem, Solana released a short video showcasing the Render Network as part of their DePIN pioneers series, featuring Jules and Trevor Harries-Jones. They discussed everything from the future of creative workflows and the rise of neural rendering to the path to the Holodeck and the potential for scaling DePIN to compete with centralized cloud compute platforms.
Watch the full interview to hear Jules and Trevor discuss how the Render Network is shaping the future of immersive media and spatial computing, while also powering the rise of decentralized AI and the DePIN economy.
DeAI Day: Scaling AI Agents with Decentralized Compute
During GTC, videos from the Render Network’s involvement at ETH Denver DeAI Day were publicly released, augmenting the conversations at GTC. The talks started with a panel, “Switching on AI Agents,” with leaders from Aethir, Lumerin, Morpheus, Heurist, and the Spheron Foundation. On the panel, Trevor delved into how decentralized GPU networks with APIs built for decentralized inference are reshaping the AI agent economy.
The discussion around AI expanded from compute consumption to the challenges of deploying AI agents at scale, with a focus on the security and accessibility pain points that the Render Network is focused on resolving.
Complementing the panel, Paul Roales and Trevor Harries-Jones took the stage at DeAI Day for a conversation, “Generative Images at Scale,” where they shared major milestones in the Render Network’s efforts to support next generation AI agent applications using the network’s decentralized compute.
The talk showcased progress in expanding to emerging AI and machine learning workloads, discussing a compute API now in private beta and demoing decentralized inference across Render Network GPUs. In the demo, Paul showcased an AI agent based image generation application leveraging the Render Network’s compute platform and API. Discussing the significance of the demo, he emphasized how the ability to leverage idle decentralized compute can democratize content creation — from Hollywood-grade VFX to creative workflows driven by AI agents.
“Now the models have progressed so far that we can do very exciting stuff like what you’re seeing here today on these consumer GPUs in a distributed way… it democratizes the ability to create incredibly high-quality artifacts.” — Paul Roales
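The submit-and-poll flow behind an agent-driven image generation app like the one demoed can be sketched roughly as follows. The Render Network compute API is still in private beta and its real endpoints are not public, so everything here — `FakeComputeAPI`, its method names, and the worker/job fields — is a hypothetical stand-in that simulates the dispatch of queued jobs to idle GPU nodes, not the actual API.

```python
# Hypothetical sketch of an agent submitting a text-to-image job to a
# decentralized GPU network and polling for the result. FakeComputeAPI is an
# in-memory simulation standing in for the (private beta) compute API.

import itertools


class FakeComputeAPI:
    """In-memory stand-in for a job-based inference API (hypothetical)."""

    def __init__(self, workers):
        self.workers = list(workers)    # idle GPU node ids
        self.jobs = {}                  # job_id -> job record
        self._ids = itertools.count(1)  # monotonically increasing job ids

    def submit(self, prompt):
        """Queue a text-to-image job and return its id."""
        job_id = next(self._ids)
        self.jobs[job_id] = {"prompt": prompt, "status": "queued"}
        return job_id

    def step(self):
        """Simulate one scheduler tick: assign queued jobs to idle workers."""
        for job in self.jobs.values():
            if job["status"] == "queued" and self.workers:
                job["worker"] = self.workers.pop()
                job["status"] = "done"
                job["image_url"] = "render://output/job.png"  # placeholder

    def poll(self, job_id):
        """Return the current state of a job."""
        return self.jobs[job_id]


# An "agent" submits a prompt, the network schedules it, and the agent polls.
api = FakeComputeAPI(workers=["gpu-node-a", "gpu-node-b"])
job = api.submit("a neon-lit city street, cinematic lighting")
api.step()
result = api.poll(job)
print(result["status"], result["worker"])
```

The key design point the demo illustrated is the same one this sketch encodes: the caller never targets a specific machine, it hands a job to the network, and idle consumer GPUs pick up the work.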
The presentations at DeAI Day showcased how the Render Network is evolving to support emerging AI image generators and agentic AI applications to complement current uses in motion graphics, immersive rendering, design, and large-scale event visuals.
Render’s New AI Generation Interface
During GTC, the Render Network also shared some major development updates, featuring an all-new AI generation interface with a number of AI model and service expansions that increase the power and utility of the network’s generative AI imaging tools for professional creators.
The new interface allows artists and developers to generate content across a broad range of generative AI text-to-image, text-to-video, image-to-image and video-to-video (coming soon) tools in parallel, within one unified platform that combines 3D rendering toolchains and AI image generators.
The new interface launched with an ongoing promotion to try AI generation tools on the Render Network, generating a wave of creative excitement from the Render community, as well as new tutorials and guides.
Render also integrated the popular online photo editor Photopea, bringing editing tools to the image generation models on Render and streamlining creator workflows — learn more here.

Much More is Coming at RenderCon — Join Us!
The array of talks and updates at GTC was only an early preview of what will be explored in much greater detail at RenderCon — taking place April 15th at Nya Studios in Hollywood.
Get tickets and RSVP today to shape the future of the Render Network!
RSVP at: https://lu.ma/rendercon
Other Recommended Talks from GTC
Finally, several other talks at GTC helped shape the conversation around AI-assisted content creation, decentralized compute, and AI agents. We recommend catching the replays to keep up on where gen AI is heading:
- How AI Is Unlocking a New Era of Creativity in Filmmaking — Haohong Wang, GM at TCL, echoes Jules’ insights around AI in filmmaking, exploring how tools like Runway, Sora, and MineStudio are making content creation cheaper and faster. He believes structured 3D workflows will be key to bridging AI with traditional filmmaking.
- The Impact of AI on Filmmaking: Real-World Transformation — Groundbreaking examples from Hollywood productions and record-breaking music videos demonstrating AI’s expanding influence in storytelling.
- The Future of AI: Scaling Intelligence, Open-Source Innovation, and Human-AI Collaboration — Ali Farhadi, CEO of AI2, explores MoE architectures, speculative decoding, test-time compute, and the importance of open-source AI ecosystems.
- Video Generation Models for Storytelling and World Modeling — Runway’s latest tools for accelerating video creation and improving realism. A must-watch for anyone exploring generative video pipelines.
- Designing VLM-Based AI Agents for Large-Scale Video Analysis — A deep dive into using Vision Language Models (VLMs) to process video at scale.
- Agentic AI at the Edge: Real-Time Sentiment Analysis for Customer Interaction — Exploring VLM-based agents in real-time edge environments, with a focus on low-latency analysis and responsive customer engagement.