Zen Blocks Reimagined: Muhammet Altun’s Algorithmic Sculptures Meet Decentralized Rendering
How Muhammet Altun automated high-fidelity rendering using blockchain data and Render Network’s API to evolve his groundbreaking generative art series.
Muhammet’s journey as a 3D generalist and motion designer has been deeply intertwined with cutting-edge tools like Houdini, Octane, and the Render Network. In this interview, he shares his fascination with generative systems and blockchain data and how that led to the creation of Zen Blocks, an innovative, multi-chapter collection of algorithmic sculptures.
Go deeper into Muhammet’s creative evolution, Houdini workflows, and Render Network API integration in this extended artist interview with the Render Network Foundation team’s Sunny Osahn.
As the project evolved, the technical demands of rendering high-fidelity animated sequences became a major challenge until he integrated the Render Network API into his workflow.
By leveraging Render’s decentralized compute power, Muhammet was able to push the visual complexity of Zen Blocks to new heights, automating his pipeline and ensuring on-demand scalability without sacrificing quality.
Can you share a bit about your background in generative art and how you first got into using Houdini and Octane for your projects?
For the last 12 years, I have primarily worked as a 3D generalist in the Motion Design industry, creating high-end commercials, title sequences, and experimental motion design pieces. Octane has been part of my workflow from the beginning, alongside Cinema 4D. Although I explored almost every available render engine, I always found my way back to Octane because it worked perfectly for my needs.
Throughout the years, I experimented with Houdini and integrated some of its functionalities into my workflow. However, it did not become my primary tool until around four or five years ago. Over time, I absorbed so much Houdini knowledge, even passively, that working with it became second nature. At some point, I realized I could create advanced algorithms and generative systems using it. From that moment, I started building systems and workflows around Houdini and Octane. The Render Network naturally became part of this process, leading to Zen Blocks as the first fully developed project that emerged from these experiments.
Muhammet’s work draws inspiration from pioneers like Octane power user Refik Anadol, whose work is generated from real-life data. In the search for meaningful datasets, Muhammet found that many were either inaccessible or widely overused. Blockchain’s decentralized nature provided a solution, offering an open and dynamic dataset that could be transformed into personalized, data-driven sculptures. This approach allowed collectors to see their onchain activity reflected in unique generative artworks, adding a deeply personal layer to the experience.
“I have always been inspired by real-life data-driven generative artworks from great artists like Refik Anadol. Accessing large amounts of meaningful data is not always as straightforward as one might think, since many great datasets are not publicly available. Even when they are accessible, there is a risk of them being overused in other generative projects.”
Zen Blocks was a major success when it first launched. What motivated you to revisit and evolve the project with this new workflow?
Zen Blocks was always planned as a multi-chapter collection. The idea behind it was that each Zen Block was created by burning another token set called Data Blocks, which served as the foundation of the collection. Data Blocks also had the ability to update Zen Blocks to a new chapter while reflecting the collector’s most recent wallet data.
Instead of burning all their Data Blocks at once, many collectors held onto some, anticipating future updates. Initially, I planned to release the final chapters sooner, but I kept pushing the concept further, making it more technically and computationally demanding. At a certain point, it became clear that my existing compute power was not enough to generate what I envisioned, especially since I wanted the last chapter to be animated rather than a still frame.
This made the process nearly 300 times more expensive to compute. I was not willing to compromise on quality due to technical limitations, so I kept looking for a way to produce the renders on demand. At some point I saw that Render Network was offering grants and API access, so I got in touch with Dino and Sunny and applied for both.
They kindly offered me the means to generate the new visuals with as much fidelity as I intended and beyond.
After that, I was finally ready to close the Zen Blocks journey with its final chapters and offer my collectors something I was proud to release. I am truly grateful for the help from the Render Network.
How do Zen Blocks automatically generate 3D data sculptures from wallet data? Can you walk us through how this process works from start to finish?
Using Houdini, blockchain data, and the Render Network API, the system automates the entire process from creation to final rendering.
Step 1: Minting & Burning Tokens
- Collectors mint Data Blocks, which act as access tokens.
- Burning Data Blocks generates a unique Zen Block sculpture.
Step 2: Processing Onchain Data
- The burn event triggers data collection and formatting for Houdini’s procedural system.
Step 3: Generate the 3D Sculpture
- Houdini’s Procedural Dependency Graph (PDG) builds the sculpture from scratch.
- Each piece includes custom rock formations, foliage, and environmental details with no premade assets.
- In Zen Blocks Resurrected, sculptures evolve into lattice structures to complete the collection.
Step 4: Rendering with the Render Network
- Houdini packages the scene into an ORBX file for Render Network compatibility.
- The Render Network API automates the rendering process.
- Houdini monitors, retries, and ensures quality before finalizing the frames.
Step 5: Finalizing & Delivery
- Houdini applies color grading and formatting to the final render.
- The artwork is encoded into an MP4 and automatically updated in the metadata.
Step 6: Long-Term Storage
- All files are backed up to Arweave and multiple S3 buckets for preservation.
By combining blockchain data, procedural 3D generation, and decentralized rendering, Zen Blocks ensures that each collector’s piece is one-of-a-kind.
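The six steps above can be sketched as a small Python pipeline. Everything here is a hedged stand-in: the function names, the wallet-data fields, and the job-submission call are illustrative assumptions, not the actual Render Network API or Houdini hooks.

```python
import json

# Hypothetical stand-ins for the real services: the actual onchain indexer,
# Houdini PDG cook, and Render Network API calls are all assumptions.

def fetch_wallet_data(wallet: str) -> dict:
    """Step 2: collect and format onchain activity for the burning wallet."""
    return {"wallet": wallet, "tx_count": 42, "tokens_burned": 3}

def cook_sculpture(data: dict) -> str:
    """Step 3: stand-in for Houdini's PDG cook that builds the sculpture
    from scratch and packages the scene as an ORBX archive."""
    return f"zen_block_{data['wallet']}.orbx"

def submit_render(orbx_path: str) -> str:
    """Step 4: stand-in for a Render Network API job submission; returns a job id."""
    return f"job-{hash(orbx_path) % 10_000}"

def finalize(job_id: str, data: dict) -> dict:
    """Step 5: grade, encode to MP4, and update token metadata (stubbed)."""
    return {"job": job_id, "video": "zen_block.mp4", "metadata": data}

def on_burn_event(wallet: str) -> dict:
    """Steps 2-5 of the pipeline, triggered by a Data Block burn (Step 1)."""
    data = fetch_wallet_data(wallet)   # process onchain data
    orbx = cook_sculpture(data)        # generate the 3D sculpture
    job = submit_render(orbx)          # render on the network
    return finalize(job, data)         # finalize and deliver

result = on_burn_event("0xABC")
print(json.dumps(result["metadata"]))
```

The real system additionally monitors and retries render jobs and backs the outputs up to long-term storage (Step 6), which the stubs above omit.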
How do you see this approach improving the collector experience, especially for those who were part of the original Zen Blocks?
The original Zen Blocks collectors were part of a highly artist-driven community within Murat Pak’s ASH ecosystem. This included well-known artists like Murat Pak, David Ariew, Raf Grassetti, Gavin Shapiro, Roger Kilimanjaro, Nahiko, and CYPHΞR, among many others.
With the final chapters of Zen Blocks, the updated visual system, which has been years in the making, will bring a fresh and exciting evolution to the collection. Many original collectors still have spare Data Blocks and Zen Blocks, which they can use to generate new sculptures or share with others who want to participate in this new chapter.
You’ve been working closely with the Render Network team to integrate the Render Network API into your generative system. What has that collaboration been like?
When I first contacted the Render Network team and pitched this process, they immediately understood its potential and agreed that it was an interesting use case and case study for the network and its API.
They have been genuinely very curious about the process and progress and were actively involved in helping me out while creating the system. They have introduced me to the existing API endpoints and even created new ones for me to be able to build custom tools and systems around the network API. The team involved is a brilliant group of developers with an artistic eye and I can’t thank them enough for their help.
What specific benefits does the Render Network provide for your project? How does it compare to other rendering solutions you’ve used?
Render Network allows for nearly infinite scalability while remaining cost-efficient. Traditional render farms often require careful budgeting and are prone to errors, which can lead to unexpected costs. They also limit artistic freedom since every mistake comes with a financial penalty.
Render Network maintains a fair balance between the artists using it and the node operators providing computational power. Additionally, it has one of the most responsive customer support teams I have encountered.
How does real-time, API-powered rendering unlock new creative possibilities for generative and onchain art?
Automating a rendering pipeline is essential for scaling generative projects. Instead of building a local GPU farm, which would be expensive and limiting, artists can leverage the Render Network API to outsource computational tasks worldwide. This allows for greater creative freedom without the need for constant maintenance.
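A pipeline like this lives or dies on its error handling: a submitted job can fail partway through, and the automation has to notice and retry without a human in the loop. As a rough illustration, here is a minimal polling helper; the status strings and the status-check callable are assumptions for the sketch, not the actual Render Network API.

```python
import time

def poll_until_done(check_status, job_id, retries=3, interval=0.0):
    """Poll a render job, tolerating up to `retries` failures before giving up.

    `check_status` is any callable that takes a job id and returns one of
    'done', 'failed', or 'running'. In a real pipeline it would wrap an
    HTTP call to the render service.
    """
    failures = 0
    while True:
        status = check_status(job_id)
        if status == "done":
            return True
        if status == "failed":
            failures += 1
            if failures > retries:
                return False  # give up after too many failed attempts
        time.sleep(interval)

# Example: a fake status function that fails once, then succeeds.
calls = {"n": 0}
def fake_status(job_id):
    calls["n"] += 1
    return "failed" if calls["n"] == 1 else "done"

print(poll_until_done(fake_status, "job-1"))  # True
```

In practice the failure branch would also resubmit the job before polling again; the skeleton only shows the control flow.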
What excites you most about the future of generative 3D art in the Web3 space?
I think, aside from a few great examples, offline-rendered 3D generative art is currently an underrated medium. Web-based 3D generative artworks that use libraries like three.js are amazing, but that workflow simply cannot reach the amount of data and visual fidelity achievable with an offline path tracer and a DCC like Houdini. Being able to create very rich scenes, run simulations with millions of items in 3D space, and view the resulting high-quality renders is an incredible experience that deserves to be far more common. I am building toolsets and workflows to make this process much simpler, so that artists like myself can create high-quality generative artworks more easily. I look forward to sharing those studies with other artists in the future.
How do you see AI and decentralized rendering shaping the next wave of creative projects?
I think the combination of AI, especially large language models, and decentralized rendering is a powerful enabler for transforming ideas into final artworks. AI-assisted research and scripting have opened new doors for imagining and developing highly sophisticated systems, while decentralized rendering provides the scalability needed to execute them efficiently.
Although I believe technologies like Gaussian Splats or similar techniques, instead of traditional image or video files, will play a significant role in presenting 3D scenes in the near future, I am less optimistic about the current state of generative image AI models. While future workflows may introduce exciting advancements, such as extracting 3D spatial data from a model’s latent space, the current models seem to automate the most enjoyable parts of the creative process while leaving artists with the tedious task of prompting and building workflows. This approach does not come close to the experience of crafting an artwork with full creative control.
For artists and developers interested in experimenting with generative workflows using Render Network’s API, what advice would you give them?
I highly recommend looking into Houdini’s PDG system for creating both 3D and 2D outputs. It allows you to automate almost anything effortlessly, and it is excellent for integrating multiple data sources, making it incredibly versatile for generative workflows. I would be happy to offer workflow suggestions to any artists who need help with their 3D generative endeavors. You can always reach out to me on socials.
This teaser spotlights Zen Blocks — MHX’s generative art project built with Houdini, blockchain data, and Render’s API — automating thousands of frames without compromise.