
BrokenToy Movie: model, groom, cfx workflow overview

Ellie Ansell

At the start of lockdown, I had the pleasure of embarking on a small project with a handful of dutiful volunteers, working to bring a little broken toy robot asset to life.

It began with a conversation with Daniel Miller (BTP Director) in a pie shop, as I was showing him my illustration work. He then sent me his script; it was captivating, well-written and carried a powerful message -- it was no surprise to hear that he'd already won various awards with it. With me on board for character design, we met in an ice-cream parlour (the best meetings are in ice-cream parlours) and began discussing ideas. Skip forwards six months, and we have a little toy asset created by several artists from across the globe, who have communicated well and dedicated their time amidst lockdowns and in-between fluctuating day-jobs.


So far, I've worked on initial concepts, modelling, groom and CFX (as well as initial lighting setups in an example scene, plus acting as 3D supervisor for workflow advice/critique -- although I won't be showing that here!). I'll briefly blog screenshots of the setups below.


Concept illustrations

Initial brainstorming illustrations:

Concept of a shot after Mikkei is thrown against a wall.

Colour concept based on a reference photo for a shop window scene:

The modeller at the time needed more arm reference.

My favourite: rough expression sketches!

A quick sketch showing colour to the team and a reminder of references.


Modelling

I'm not a master of ZBrush, but I learned a lot about the workflow from FlippedNormals tutorials on YouTube (check them out - they're excellent and easy to listen to!). #flippednormals #tutorials

I retopologise in Houdini, as it makes iterating on model changes much faster. For each item (boots, helmet, buttons etc.) this is the kind of node layout I used. #retopology

Renaming was also procedural, using an attrib wrangle:

// Build a path attribute for each primitive from HDA parameters.
string group = chs("maingroup");
int class = 1;      // Used for multiple instances of an object.
string loc = "C";   // Centre by default; may become L/R below.

// Respect an existing instance class attribute if one is present.
if (hasprimattrib(0, "class"))
{
    class = i@class;
}

// Optionally tag left/right based on X position,
// colouring each half for a quick visual check.
if (chi("location_lr"))
{
    if (@P.x < 0.0)
    {
        loc = "R";
        v@Cd = set(1, 1, 0.4);
    }
    else
    {
        loc = "L";
        v@Cd = set(0.2, 0.8, 0.5);
    }
}

// Top-level asset group, e.g. /mikkei_default_lodA_GRP
string path_asset_grp = sprintf("/%s_%s_lod%s_GRP", chs("asset"),
                                                    chs("variant"),
                                                    chs("lod"));
string path_grp = sprintf("/%s_GRP", group);

// Optional nested subgroup, e.g. /body_GRP/boots_GRP
if (chi("subgroup_tog"))
{
    string path_subgroup = sprintf("/%s_GRP", chs("subgroup"));
    path_grp = concat(path_grp, path_subgroup);
}

// Leaf geometry name, e.g. /L_boot_0001_rubber_GEO
string path_geo = sprintf("/%s_%s_%04d_%s_GEO", loc,
                                                chs("part"),
                                                class,
                                                chs("material"));
s@path = concat(path_asset_grp, path_grp, path_geo, path_geo + "Shape");

The result can be seen in the path attribute hierarchy below, as well as the UVs. The lodA, B and C (high to low resolution) variants were cached by subdividing the geometry and ray-projecting the meshes.
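As a quick illustration of what the wrangle produces, here's the same naming logic mocked up in Python (the parameter values are hypothetical placeholders, not the actual asset settings):

```python
# Python mock-up of the VEX path-building logic above.
# All argument values here are hypothetical placeholders.
def build_path(asset, variant, lod, maingroup, part, material,
               loc="C", cls=1, subgroup=None):
    path_asset_grp = "/%s_%s_lod%s_GRP" % (asset, variant, lod)
    path_grp = "/%s_GRP" % maingroup
    if subgroup:
        path_grp += "/%s_GRP" % subgroup      # optional nested subgroup
    path_geo = "/%s_%s_%04d_%s_GEO" % (loc, part, cls, material)
    return path_asset_grp + path_grp + path_geo + path_geo + "Shape"

print(build_path("mikkei", "default", "A", "body", "boot", "rubber", loc="L"))
# /mikkei_default_lodA_GRP/body_GRP/L_boot_0001_rubber_GEO/L_boot_0001_rubber_GEOShape
```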

The model was passed to our brilliant rigger (credits at the bottom of the page), who gave feedback on the topology throughout the process.


Groom

Thankfully we're all sophisticated adults with cuddly toys, so reference was easy to come by. Looking at the reference, the clumps are very noticeable, and synthetic hairs are a uniform colour throughout with partings at the plush seams. (Natural cat hair on the other hand contains several colour variations down the strand, including 'ticking', where the hair is distinctly white at the tip). There are a few stray hairs seen here, too.

As this is a simple groom, the setup is mostly procedural besides drawing the seams and a rough direction groom of the hairs, which is done in the subclumps network. It's important to build up detail methodically using subclumps (sub-subclumps, and so forth) in order to stay organised and keep memory usage low.


For the seams, VDBs were created by drawing curves on the skin geo, converting them to meshes and then to VDBs, which were used to repel hairs with the Guide Skin Collide by VDB node.

Finally, to deform the hairs, the Guide Deform node is used to rigidly deform them using primuv attributes. Colour attributes are transferred from the skin geometry using Guide Skin Attribute Lookup.

To render, we used Vray 4. There were a few hiccups and it has a steeper learning curve than Arnold (which I use at work), but the VRScan materials were very useful - I used a knitted VRScan material with adjusted colours on Mikkei's skin. The VFB (Vray Frame Buffer) was fantastic for navigating around the geometry in the render. #vray #render

More info can be found here: https://www.chaosgroup.com/vrscans

CFX

For characters, I create an HDA which contains all the shaders and the references to groom caches needed to render, so the lighter simply plugs in the animation cache. (At work, this points to a Python Houdini module library for more user-friendly parameter fun: managing caches with shots, caching, controlling CFX settings, wedging etc...)
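As a hedged sketch of the kind of helper such a module might contain (the function name and the v### versioning convention are my assumptions for illustration, not the actual pipeline code), picking the latest cache version could look like:

```python
import re

# Hypothetical helper: pick the highest vNNN version from a list of
# cache file names, e.g. as scanned from a cache directory on disk.
def latest_cache(names):
    """Return the name with the highest vNNN token, or None."""
    def version(name):
        m = re.search(r"v(\d+)", name)
        return int(m.group(1)) if m else -1
    versioned = [n for n in names if version(n) >= 0]
    return max(versioned, key=version) if versioned else None

caches = ["mikkei_sim_v001.bgeo.sc", "mikkei_sim_v012.bgeo.sc",
          "mikkei_sim_v003.bgeo.sc"]
print(latest_cache(caches))  # mikkei_sim_v012.bgeo.sc
```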


For Mikkei, the HDA was very minimal, with some CFX tools inside. There were two parameter options on the HDA:

- Non-sim deformation: Automatic mesh corrections using delta-mush and wrangles. Manual skin fixes can be added here, too. (Sidenote: at work, I aim to avoid shot-sculpting at all costs - it's laborious, hard to iterate on and rarely looks good - the simulation should be good enough to avoid this.)

- Sim: The simulation cache of Mikkei to be used (most accurate, but needs to be cached).
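The delta-mush idea itself can be sketched in plain Python (a deliberately simplified, world-space version for illustration; production delta mush stores the deltas in local frames so they survive rotation):

```python
# Simplified world-space delta mush on a point cloud with known
# neighbour lists. rest/deformed are lists of (x, y, z) tuples.
def smooth(points, neighbours, iterations=10):
    pts = [list(p) for p in points]
    for _ in range(iterations):
        new = []
        for i, nbrs in enumerate(neighbours):
            ring = [pts[j] for j in nbrs] or [pts[i]]
            # Replace each point with the average of its neighbours.
            new.append([sum(c) / len(ring) for c in zip(*ring)])
        pts = new
    return pts

def delta_mush(rest, deformed, neighbours):
    # Delta = rest position minus its smoothed rest position;
    # re-add it after smoothing the deformed mesh.
    smooth_rest = smooth(rest, neighbours)
    smooth_def = smooth(deformed, neighbours)
    return [tuple(sd[k] + (r[k] - sr[k]) for k in range(3))
            for r, sr, sd in zip(rest, smooth_rest, smooth_def)]
```

Because uniform averaging commutes with translation, a rigidly translated mesh comes back unchanged apart from the translation; only local spikes get relaxed.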

Auto mesh fix (deltamush):

Vellum simulation for softbody collisions and wrinkles (noticeable on the elbows, wrists, backs-of-knees).

If you've gotten this far, a top-secret tip for CFX is using Extract Transform and Transform Pieces. So long, pre-roll! You can go further and control orientation too, by taking apart the xform matrix, blending orientations, then putting the matrix back together in VEX (used on the flying carriage CFX in Godmothered, 2020). Two more tips: I tend to use the SOP Solver to grab information about a geometry, such as the min/max velocities, or to measure how much the prims deform; this was used for simulating and controlling wind at the origin (useful for flying horses when the animation varies vastly in speed for the sake of the camera). I also use Remesh to get a more optimised wrinkle simulation.
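The orientation-blending part can be sketched like this (a minimal Python mock-up of the idea; on the actual setup this would be VEX operating on the extracted transform, and the slerp here assumes unit quaternions in (w, x, y, z) order):

```python
import math

# Minimal unit-quaternion slerp for blending two orientations.
def slerp(q0, q1, t):
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # take the short way around
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:  # nearly parallel: lerp and renormalise
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Blend identity with a 90-degree rotation about Z.
q_id = (1.0, 0.0, 0.0, 0.0)
q_z90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_half = slerp(q_id, q_z90, 0.5)  # ~45-degree rotation about Z
```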


As of writing this blog, animation is being done for a Nov 2020 deadline, and the team are lighting and compositing with Vray. I've taken a back seat during the animation and rendering stage, so fingers crossed rendering goes well for the team!


Credits

Daniel Miller: Director, script-writer

Dominik Hasse: Rigging

Angel Cano: Modelling, (layout, matchmove)

Michael Morgan: Animation

Many thanks to Chaos Group for the Vray license!

(Please note: many people were involved in the project, but I've only listed those whose work can be recognised on this page.) Check out https://brokentoymovie.uk/ for more, including work from many other artists who contributed!




© 2020 by Ellie Ansell
