AI at the Frontier of Science, Art, and Society
When three of the most influential minds in rendering and artificial intelligence share a stage, the conversation naturally spans from GPU architectures to the fabric of human experience.
At RenderCon, Jules Urbach (CEO of OTOY and founder of the Render Network) joined Emad Mostaque (co-founder of Stability AI, the company behind Stable Diffusion) and Richard Kerris (VP & GM of Media and Entertainment at Nvidia) for a far-reaching discussion on the role of AI across science, art, and society.
Together, they outlined a vision of the future that’s as technical as it is philosophical: a world where computation, creativity, and human agency are inseparable.
From Generative Art to Real-Time Worlds
Urbach opened by reflecting on how quickly generative AI is reshaping creative tools and workflows. What began as niche experimentation has turned into a movement where anyone with a GPU or browser can create professional-grade imagery.
“And then you open sourced Stable Diffusion. It changed everything,” he said to Mostaque. “So many of the interesting tools we see today, like Cryo, were built on the things you put out, and it exploded the entire toolset of what could be done.”
For Kerris, that transformation is already settling into everyday use. “I think we’re going to see a lot more personalization,” he said. “The fear is going to eventually fade away […] It’s going to become a tool people use to tell stories.”
Mostaque added that this new creative infrastructure could soon support fully real-time experiences. “In the next few years,” he said, “we can have real-time video games or holodeck-type experiences, which is going to be insane.”
The exchange set the tone for a conversation about the convergence of AI and visual computing, and about how the boundary between imagination and simulation is steadily dissolving.
The Hybrid Architecture of Creativity
The panel opened its technical deep dive with a central question: could neural rendering, in which trained deep learning models synthesize or refine imagery directly, ever replace traditional ray tracing, the current standard for rendering realistic graphics?
Urbach noted that while models can now generate entire frames, artists still need context and structure to maintain visual coherence.
“The future is likely to be a combination of these two,” Mostaque answered, explaining that AI models still lack fine control and consistency. The near-term path, he argued, is hybrid: pairing the precision of established 3D pipelines with neural systems that enhance lighting, texture, and realism.
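That hybrid split lends itself to a minimal sketch. Everything below (trace_base_pass, NeuralEnhancer, the stubbed outputs) is a hypothetical illustration of the architecture, not any panelist’s actual pipeline: a conventional pass supplies structure and control, and a learned pass refines the look.

```python
import numpy as np

def trace_base_pass(scene: dict, width: int, height: int) -> np.ndarray:
    """Hypothetical traditional pass: ray tracing yields a physically
    grounded, artist-controllable base frame (stubbed here as noise)."""
    rng = np.random.default_rng(scene.get("seed", 0))
    return rng.random((height, width, 3)).astype(np.float32)

class NeuralEnhancer:
    """Stand-in for a learned model that refines lighting and texture.
    A real system might be a diffusion- or GAN-based enhancer."""

    def enhance(self, frame: np.ndarray, prompt: str) -> np.ndarray:
        # Placeholder "enhancement": a gamma curve stands in for the
        # neural pass that would add realism while preserving layout.
        return np.clip(frame ** 0.8, 0.0, 1.0)

def render_hybrid(scene: dict, prompt: str) -> np.ndarray:
    """Hybrid pipeline: precise 3D pass first, neural refinement second.
    The base pass keeps structure; the neural pass adds the look."""
    base = trace_base_pass(scene, width=640, height=360)
    return NeuralEnhancer().enhance(base, prompt)

frame = render_hybrid({"seed": 42}, "golden-hour interior, soft shadows")
print(frame.shape, float(frame.min()), float(frame.max()))
```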
Urbach added that reliability will depend on how well models remember what they create.
“If we kind of remember and not hallucinate,” he said, “the memory window needs to be really good, or you need to have RAG [Retrieval-Augmented Generation], where you retrieve the data that’s stored.”
The group agreed that this layered approach (massive world models in the cloud, personalized mini-models on the edge, and intelligent rendering in between) will define how future experiences are built.
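That memory requirement maps onto a simple retrieval pattern. The sketch below is an illustrative toy, not any panelist’s production system: facts about previously generated content are stored, then retrieved to condition the next generation, so the model recalls rather than hallucinates. The SceneMemory class and its word-count “embedding” are invented for illustration; a real system would use a learned embedding model and a vector database.

```python
from collections import Counter
import math

class SceneMemory:
    """Toy retrieval-augmented memory: store facts about what was
    generated, retrieve the most relevant ones for the next prompt."""

    def __init__(self):
        self.facts: list[str] = []

    @staticmethod
    def _embed(text: str) -> Counter:
        # Toy embedding: word counts stand in for learned vectors.
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def store(self, fact: str) -> None:
        self.facts.append(fact)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = self._embed(query)
        ranked = sorted(self.facts,
                        key=lambda f: self._cosine(q, self._embed(f)),
                        reverse=True)
        return ranked[:k]

memory = SceneMemory()
memory.store("the hero wears a red scarf")
memory.store("the city gate is made of weathered bronze")

# Retrieved facts are prepended so the generator stays consistent
# with what it already created instead of inventing new details.
context = memory.retrieve("render the hero at the city gate")
print("; ".join(context) + "; render the hero at the city gate")
```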
Test-Time Compute and Distributed Power
The discussion turned to the mechanics of computation itself. Mostaque described a new paradigm called test-time compute, which lets models spend more time reasoning during inference. “The longer we give it to think, the more compute sequential, […] the better the quality output will be,” he said.
It’s a shift toward quality over speed, one in which distributed networks of GPUs, like those powering the Render Network, handle the heavy lifting. Rather than one supercomputer performing the task, many nodes collaborate to refine the result.
For Urbach, this architecture represents the bridge between creativity and infrastructure. Decentralized systems give artists and developers scalable access to the same computational depth that once required specialized studios.
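One rough way to picture test-time compute on such a network is best-of-N sampling: spend more inference-time budget generating and scoring candidates, and the best result tends to improve. The sketch below is schematic under that assumption; the worker pool loosely mimics distributed nodes, and the random scoring is a placeholder, not how the Render Network actually schedules or evaluates work.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(prompt: str, seed: int) -> tuple[str, float]:
    """Placeholder for one node running one inference pass.
    Returns a candidate and a quality score (random here)."""
    rng = random.Random(seed)
    return f"{prompt} [candidate {seed}]", rng.random()

def best_of_n(prompt: str, budget: int) -> tuple[str, float]:
    """More test-time compute means more candidates, which raises the
    expected quality of the best one. Workers stand in for nodes."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(lambda s: generate_candidate(prompt, s),
                                range(budget)))
    return max(results, key=lambda r: r[1])

for budget in (1, 8, 64):
    _, score = best_of_n("a rainy neon street, volumetric fog", budget)
    print(f"budget={budget:3d}  best score={score:.3f}")
```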
Living Stories and Creative Ownership
From infrastructure, the panel turned to how that intelligence reaches audiences: how stories might adapt to each player, and who ultimately owns the models behind them.
The same logic of distributed intelligence is reshaping how stories are told. Kerris envisioned games that evolve uniquely for every player. “You’re going to have the ability to have the game evolve as you’re playing it,” he said. “You and I might be in the same exact game, but experiencing it differently.”
Mostaque took the idea further: “What if your game could think about everything that you’ve done when you’re not playing it, and then it continues to build the world around that?” In their view, entertainment is becoming asynchronous: living stories that remember their players and keep growing between sessions.
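Mechanically, a game that “thinks while you’re away” amounts to a persistent world state that keeps simulating from a log of player actions. The toy below is a speculative reading of that idea; WorldState, evolve_offline, and the rules inside are invented, and a real system might run a world model or LLM in their place.

```python
import time
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Toy persistent world that evolves between play sessions."""
    last_seen: float
    player_log: list[str] = field(default_factory=list)
    events: list[str] = field(default_factory=list)

def evolve_offline(world: WorldState, now: float) -> None:
    """Advance the world from elapsed time and last session's actions;
    hand-written rules stand in for a learned world model."""
    hours_away = (now - world.last_seen) / 3600
    if "helped the blacksmith" in world.player_log:
        world.events.append("the blacksmith forged you a gift")
    if hours_away > 24:
        world.events.append("a rival faction moved into the valley")
    world.last_seen = now

world = WorldState(last_seen=time.time() - 2 * 86400)
world.player_log.append("helped the blacksmith")
evolve_offline(world, time.time())
print(world.events)
```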
That transformation has deep roots in Hollywood. “AI in the film industry has been around for decades,” Kerris noted. “De-aging, digital twins, all of those kinds of things.” The fear, he argued, comes from misunderstanding. “When people saw generative art from a prompt, they didn’t understand it was a tool destined for a storyteller.” Creativity, he emphasized, doesn’t vanish with new tools; it expands through them.
The question then turned to ownership. “In crypto there’s this concept of not your keys, not your crypto,” Mostaque said. “It feels like not your models, not your mind in many ways.” Centralized systems, he warned, limit personal agency, while decentralized compute (like the Render Network) can return control to individuals. Urbach echoed the point, linking Render’s mission to keeping advanced computational resources open and accessible to all.
AI, Empathy, and the Human Future
Building on the idea of human agency, the panel turned to how AI might serve empathy, learning, and care. Mostaque described his work using open-source models for health and education. “Within a few years, you will have an AI that’s available to you in every language,” he said, “that outperforms human doctors in empathy.”
Kerris shared similar examples from Nvidia’s partner ecosystem. “There are startup companies […] using that approach for children that have cancer,” he said. “They can talk to a toy and have information about what’s going on with them.”
Both pointed toward a larger shift: AI as an extension of compassion, not competition. As Mostaque noted, open systems and shared data can amplify this potential, while transparency builds trust between creators and users alike.
The conversation then expanded to education and robotics, two arenas where empathy becomes actionable. Kerris imagined film students mentored by digital avatars. “Imagine when you can have AI avatars that are trained in Spielberg or Lucas,” he said. “You can have them as your assistant to help you learn.”
Mostaque described similar progress already happening. In a Nigerian pilot program, he said, “with two months of ChatGPT, they got two years of education advancement.” That kind of access could redefine how the world learns, bringing expert instruction to any child, anywhere.
The same principles are shaping robotics. Nvidia’s Omniverse platform now allows robots to be trained safely in simulation before deployment. “Assistance for the elderly is going to be an important aspect of having a humanoid-like robot at home,” Kerris said. “We’re going to start to see those in the next few years, they’re that close.”
Together, these ideas pointed to a unified vision: intelligent systems that extend empathy, creativity, and knowledge, designed not to replace human work but to elevate it.
Governance, Provenance, and the Blockchain Layer
As AI expands, questions of governance and authorship become central. Urbach emphasized how Render’s blockchain infrastructure already provides a way to track creative provenance and pay contributors transparently.
Mostaque agreed: “Blockchain is about coordination and resilience,” he said. As datasets and models increasingly run across distributed GPUs, he added, decentralized ledgers provide the ideal foundation for ownership and accountability.
The result is an ecosystem where every generated asset can be traced, verified, and rewarded, turning transparency into trust.
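That kind of provenance can be pictured as an append-only, hash-chained log of who generated what. The sketch below is a generic illustration, not the Render Network’s actual on-chain protocol: each record commits to the asset’s content hash, its creator, and the previous record, so any tampering breaks verification.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only, hash-chained ledger. A real deployment would
    anchor these records on a blockchain and sign them with keys."""

    def __init__(self):
        self.records: list[dict] = []

    def register(self, asset_bytes: bytes, creator: str) -> dict:
        prev = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "asset_hash": sha256_hex(asset_bytes),  # content fingerprint
            "creator": creator,                     # who to credit and pay
            "timestamp": time.time(),
            "prev_hash": prev,                      # chains records together
        }
        record["record_hash"] = sha256_hex(
            json.dumps(record, sort_keys=True).encode())
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute each hash; any edit to history breaks the chain."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "record_hash"}
            expected = sha256_hex(json.dumps(body, sort_keys=True).encode())
            if r["prev_hash"] != prev or r["record_hash"] != expected:
                return False
            prev = r["record_hash"]
        return True

ledger = ProvenanceLedger()
ledger.register(b"<rendered frame bytes>", creator="artist_alice")
print(ledger.verify())  # True until any record is altered
```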
Reflections on the Human Future of AI
As the session closed, the conversation returned to everyday life. Kerris noted that AI is quickly becoming as ordinary as household automation. “It’s going to become more commonplace to what we do,” he said. “People have to understand that’s a benefit. It gives you more time as a person to do things you want to do.”
Mostaque reflected on the responsibility behind that power. “We’re at the most exciting time in history,” he said, “and it’s up to us to make sure it benefits everyone.”
Urbach closed with a reminder that the creative future must remain open. The decentralized networks being built today, he said, will decide who gets to shape that future, and whether it stays in human hands.
Keeping Humanity at the Center
“AI at the Frontier of Science, Art, and Society” captured a rare balance of vision and realism. Across disciplines and philosophies, the message was consistent: AI’s future depends on how consciously it’s built and who it serves.
From neural rendering to empathetic AI, from decentralized networks to education and robotics, each thread pointed toward the same truth: technology’s highest purpose is to expand human potential.
Whether the world that emerges feels more like Star Trek’s holodeck or something entirely new will depend on one principle: keeping people, not machines, at the center of the story.
