P003 → Erin Calla Watson – “Nymph”
Erin Calla Watson’s solo exhibition “Nymph” at Ehrlich Steinberg Gallery included three large-scale, born-digital prints on mirrors. The project included an homage to “Edith and the Bears”, a series of children’s books by Dare Wright.
TL;DR (final images):
Breakdown:
I was given a few reference images to work from and created three large-scale renderings to meet a 300 DPI print requirement with minimal quality loss. We knew the details would soften a little since the print ends up being reflected twice on the mirrored surface, so we settled on rendering at an effective 150 DPI to keep the output resolution manageable on my end.
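To put those DPI targets in perspective, here is a quick back-of-the-envelope calculation. The print dimensions below are hypothetical placeholders, since the actual sizes aren’t listed here, but the 300 vs. 150 DPI math works the same way at any size.

```python
# Back-of-the-envelope resolution check (print dimensions here are hypothetical).
PRINT_WIDTH_IN = 48.0   # assumed print width in inches
PRINT_HEIGHT_IN = 72.0  # assumed print height in inches

def render_resolution(width_in, height_in, dpi):
    """Pixel resolution needed for a given print size and DPI."""
    return round(width_in * dpi), round(height_in * dpi)

print(render_resolution(PRINT_WIDTH_IN, PRINT_HEIGHT_IN, 300))  # (14400, 21600)
print(render_resolution(PRINT_WIDTH_IN, PRINT_HEIGHT_IN, 150))  # (7200, 10800) -- a quarter of the pixels
```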
This required defining render tiles and stitching them together in post, which was done with USD and Copernicus, Houdini’s compositing context.
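The tiling itself comes down to carving the full frame into crop windows, rendering each window separately, and pasting the results back together. The sketch below is a minimal plain-Python stand-in for that logic (with Pillow for the paste step), not the actual USD/Copernicus network.

```python
# Minimal sketch of the tile/stitch idea: split the full-resolution frame into an
# N x M grid of pixel windows, render each tile on its own, then reassemble them.
from PIL import Image  # used here only for the stitch step

def tile_windows(width, height, cols, rows):
    """Yield (col, row, left, top, right, bottom) pixel windows covering the frame."""
    for row in range(rows):
        for col in range(cols):
            left = col * width // cols
            right = (col + 1) * width // cols
            top = row * height // rows
            bottom = (row + 1) * height // rows
            yield col, row, left, top, right, bottom

def stitch(tile_paths, width, height, cols, rows):
    """Reassemble tile images (a dict keyed by (col, row)) into one canvas."""
    canvas = Image.new("RGB", (width, height))
    for col, row, left, top, _, _ in tile_windows(width, height, cols, rows):
        canvas.paste(Image.open(tile_paths[(col, row)]), (left, top))
    return canvas
```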
Mantle:
This was the first shot I tackled, and also the most detailed. We simplified some of the more intricate areas where the client was comfortable with a touch of uncanny valley. It’s worth noting that these images weren’t intended to be fully photoreal, but they still needed a degree of plausibility.
- Most of the scene was modeled from scratch, with the exception of the plant, lamp, and fire basket.
- The wall tile texture was authored in COPs, using grunge maps from Megascans (see the sketch after this list).
- The floor tile beneath the fireplace is also a Megascans material.
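The actual wall texture was built in COPs, but the layering logic is roughly this: multiply and screen grunge maps over the base tile color to break up the repetition. The snippet below is a numpy/Pillow approximation; the file names, blend weights, and blend modes are assumptions, and it assumes all maps share the same resolution.

```python
# Rough numpy equivalent of layering Megascans grunge maps over a base tile texture
# (the real version was authored in COPs; names and weights here are made up).
import numpy as np
from PIL import Image

def load_gray(path):
    """Load an image as a float grayscale array in 0..1."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

base = np.asarray(Image.open("wall_tile_basecolor.png").convert("RGB"), dtype=np.float32) / 255.0
grunge_a = load_gray("megascans_grunge_a.png")[..., None]
grunge_b = load_gray("megascans_grunge_b.png")[..., None]

# Multiply one grunge map to dirty the grout, screen the other to add faded patches.
result = base * (0.6 + 0.4 * grunge_a)
result = 1.0 - (1.0 - result) * (1.0 - 0.2 * grunge_b)

Image.fromarray((np.clip(result, 0.0, 1.0) * 255).astype(np.uint8)).save("wall_tile_dirty.png")
```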
Book Breakdown:
One of the key challenges I identified early in the project was creating books that appeared dynamic and in motion. Instead of relying on a heavy, overly complex third-party rig, I built a lightweight system from scratch. It starts with simple box-based geometry to define the book shape, adds a minimal posing rig, and drives a Vellum hair simulation, shaped with Vellum Brush, to create cascading pages. The simulated strands are then extruded into page geometry, and the entire setup is exported to USD and cached as a standalone asset. For a look at the system in action, see the video below:
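Separate from the video, here is a rough numpy sketch of the strand-to-page step: each guide strand runs from the spine to the fore edge, and extruding it along the spine axis turns it into a quad-strip page. The droop curve and dimensions are arbitrary stand-ins for what the Vellum sim actually produces.

```python
# Simplified sketch of the "strands become pages" idea (the real setup lives in
# Houdini/Vellum; the parabolic droop is just a stand-in for the simulation).
import numpy as np

def page_strand(num_points=12, length=1.0, droop=0.35):
    """One guide strand running from the spine to the fore edge, drooping as it goes."""
    t = np.linspace(0.0, 1.0, num_points)
    x = t * length        # distance from the spine
    y = -droop * t ** 2   # simple parabolic droop stand-in for the Vellum sim
    z = np.zeros_like(t)
    return np.stack([x, y, z], axis=1)

def extrude_strand(strand, width=0.7):
    """Extrude a strand along the book's spine axis (z) into a quad-strip 'page'."""
    left = strand + np.array([0.0, 0.0, -width * 0.5])
    right = strand + np.array([0.0, 0.0, width * 0.5])
    points = np.concatenate([left, right])
    n = len(strand)
    quads = [(i, i + 1, n + i + 1, n + i) for i in range(n - 1)]
    return points, quads

points, quads = extrude_strand(page_strand())
print(len(points), "points,", len(quads), "quads")
```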
Ferry:
This was one of the more straightforward shots to execute. I quickly modeled a portion of the ferry’s bow and used VDB workflows to create the look of deteriorated welds and brazing along the railing. For the background, I sourced most of the NYC skyline from publicly available city data, supplementing it with the World Trade Center model from Sketchfab ( https://sketchfab.com/TitanicKyle ). The shot relies heavily on depth of field and volumetric scattering to maintain a cinematic feel without requiring highly detailed architectural models — a practical solution that balanced visual quality with efficiency.
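The weld deterioration came from Houdini’s VDB tools, but the underlying idea is simple: treat the surface as a signed distance field and push it inward by a noisy offset so material gets eaten away unevenly. The toy below runs that logic on a dense numpy grid, with a sphere standing in for the railing geometry and white noise standing in for a proper noise pattern.

```python
# Rough stand-in for the VDB "deterioration" idea: offset a signed distance field
# by clamped noise so the surface is eroded unevenly (toy example, not the real setup).
import numpy as np

rng = np.random.default_rng(0)
res = 64
axis = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

sdf = np.sqrt(x**2 + y**2 + z**2) - 0.6    # sphere SDF standing in for the weld/railing
noise = rng.normal(0.0, 1.0, sdf.shape)
noise = np.clip(noise, 0.0, None)          # only bite material away, never add it

eroded = sdf + 0.05 * noise                # a positive offset pushes the surface inward
inside_before = np.count_nonzero(sdf < 0.0)
inside_after = np.count_nonzero(eroded < 0.0)
print(f"voxels inside: {inside_before} -> {inside_after}")
```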
Bedroom:
This was arguably the most complex shot, involving three separate hair systems and the need to manage collisions between them, as well as with intersecting props like scissors, books, and the ottoman. The bedspread and pillows also had to be created from scratch, with Vellum used for their simulation. At the time of production, I wasn’t able to fully optimize the hair setups, which led to some brute-force solutions and heavier render times, especially at 150 DPI. Since then, I’ve reworked the scene by replacing strand rendering with a curve-based hair system for the carpet, significantly reducing the resource load while preserving the visual fidelity. I plan to use this method in the future, as it’s very useful for creating custom fibers composed of hair curves rather than arbitrary geometry. For more info, see the demo below.
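As a rough illustration of the curve-based fiber idea (not the production setup shown in the demo), the sketch below scatters root points across a rug-sized rectangle and grows a short, randomly leaning polyline at each one; the densities, lengths, and lean model are placeholders.

```python
# Minimal sketch of curve-based carpet fibers: scatter roots over the rug and grow
# a short polyline per root, bending sideways more toward the tip.
import numpy as np

rng = np.random.default_rng(7)

def carpet_fibers(width=2.0, depth=3.0, count=5000, segments=5,
                  fiber_length=0.03, lean=0.4):
    """Return an array of shape (count, segments, 3): one polyline per fiber."""
    roots = np.zeros((count, 3))
    roots[:, 0] = rng.uniform(0.0, width, count)   # x across the rug
    roots[:, 2] = rng.uniform(0.0, depth, count)   # z along the rug

    t = np.linspace(0.0, 1.0, segments)[None, :, None]              # param along each fiber
    up = np.array([0.0, 1.0, 0.0])
    tilt = rng.normal(0.0, lean, (count, 1, 3)) * [1.0, 0.0, 1.0]   # random sideways lean

    # Each fiber rises along "up" and bends sideways more toward its tip (t**2).
    return roots[:, None, :] + fiber_length * (t * up + t**2 * tilt)

fibers = carpet_fibers()
print(fibers.shape)  # (5000, 5, 3)
```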
Rug Creation Workflow Example:
Opening photos: