Houdini Crowd Simulation with Motion Capture


Houdini Crowd Test Render with Mantra


With the help of my classmate Adam Cline, we recorded several two-person captures in the SCAD Motion Capture Room. After cleaning up the raw data in Vicon Blade, the characters were rigged and weight painted in Maya before being sent to Motion Builder for characterization and to receive the captured animations. Also in Motion Builder I was able to make sword and shield adjustments, along with any necessary ground-plane adjustments.

The characters with the captured animations were sent to Houdini for the simulation. This involved a lot of trial and error due to the way Houdini imports .fbx files and interprets constraints.

Houdini Crowd Test Render with Mantra R&D 


Compositing Challenge

Originally I planned to use the environment that I have been working on as the setting for the battle. The scene became too heavy, and due to time constraints I decided to render what I could and revisit it later. Render passes work differently in Houdini, so I need to develop a workflow that will help with compositing. For now, a simple color correction and some occlusion are good enough.



Motion Capture

Houdini Environment Project for Crowd Simulations

This is a quick update documenting some of the assets that I have been working on in Houdini that will act as set dressing and ground cover for my environments. My goal is to build realistic sets that I will use as backdrops for my crowd simulations. Houdini heightfields have been a lot of fun to explore, but, like everything in Houdini, shading and texturing has been a challenge.

Houdini Assets for Environments

Megascans and SpeedTree in Houdini

Bringing Megascans into Houdini has also been fun. UVs have always proven a challenge for me in Houdini, but this project has helped solve that. Working with the organic shapes forced me to up my game to make sure the textures fit correctly.

Houdini Forest

Houdini Forest Test

SpeedTree in Houdini with Mantra

Megascans Ground

I have also been working on building a ground terrain that will not look patterned on the large heightfields in Houdini. They tend to look good up close, but once they are implemented into the scene they are no longer seamless. I have written a shader to act as the ground while I try to solve the problem.

Python Procedural City


This project procedurally generates a city. The focus is on using Python inside Houdini to create an application that assists in city creation without having to individually place all the structures.

Bounding Box Layout of the buildings

Test of the Buildings

Generate a City Map

The first step is to create a map that can be imported into Houdini. I found a website called the Medieval Fantasy City Generator that produces random city layouts. After making a layout that works for the design, I bring the image into Illustrator, clean it up, and rasterize it.

Convert to Geometry

Import the map as an Adobe .ai file or an .eps file, then convert it to geometry. At this point I manually select the blocks to be designated as commercial buildings and assign the primitive numbers to a group that separates the blocks by color.

Code to assemble the individual blocks and assign the building type.

Python Code Block 1
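Stripped of the hou API, the grouping step boils down to bucketing block primitive numbers by their assigned color. A minimal sketch of that idea; the function name, color values, and block data below are illustrative placeholders, not the actual script:

```python
# Hypothetical, hou-free sketch: bucket block primitive numbers into
# building-type groups based on the color assigned to each block.
def classify_blocks(blocks, commercial_color=(1.0, 0.0, 0.0)):
    """blocks: list of (primitive_number, color) pairs."""
    groups = {"commercial": [], "residential": []}
    for prim_num, color in blocks:
        key = "commercial" if color == commercial_color else "residential"
        groups[key].append(prim_num)
    return groups

# Illustrative data: three blocks, two painted the commercial color.
blocks = [(0, (1.0, 0.0, 0.0)), (1, (0.5, 0.5, 0.5)), (2, (1.0, 0.0, 0.0))]
print(classify_blocks(blocks))
```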

Import City Blocks and Sim Clusters


Code to import buildings and create the nodes that add the bounding box and the attribute parameters that will be passed through to the clustering simulation. 

Cluster the Buildings

Pop Attract is used to pull the buildings toward the center so they can be placed over the individual blocks before being transferred to the main simulation. The left image shows the buildings scattered onto the points of a grid; the right image shows the buildings after the first simulation is run. The clusters are then stored in Null nodes, from which they will be randomly pulled and placed into the simulation.
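Stripped of the POP machinery, the attract step amounts to repeatedly pulling each point a fraction of the way toward the cluster center. A minimal sketch, not the actual node network; the strength and step values are illustrative:

```python
# Minimal sketch of an attract step, mimicking what Pop Attract does:
# each iteration moves every point a fraction of the way to the center.
def attract(points, center, strength=0.5, steps=10):
    cx, cy = center
    for _ in range(steps):
        points = [(x + (cx - x) * strength, y + (cy - y) * strength)
                  for x, y in points]
    return points

pts = attract([(10.0, 0.0), (-6.0, 4.0)], center=(0.0, 0.0))
# After ten half-steps every point sits very close to the origin.
```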

Generate the City Blocks

The clusters are called and placed over the block shapes where they are filtered based on location. Buildings outside of the shape are removed, while buildings inside the shape are pulled to the outside of the shape using another Pop Attract. The buildings are now sorted as inside or outside the block space. Outside blocks are deleted and the space that remains in the center of the shape is then filled with smaller commercial buildings.
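The location filter in this pass reduces to partitioning building positions against the block footprint. A simplified, hou-free sketch using a rectangular footprint (real blocks are arbitrary shapes, and the coordinates here are placeholders):

```python
def filter_buildings(buildings, bounds):
    """Partition (x, y) building positions against a block's
    (xmin, ymin, xmax, ymax) footprint."""
    xmin, ymin, xmax, ymax = bounds
    inside, outside = [], []
    for x, y in buildings:
        target = inside if xmin <= x <= xmax and ymin <= y <= ymax else outside
        target.append((x, y))
    return inside, outside

# Illustrative data: a 3x3 block with two buildings inside, one out.
inside, outside = filter_buildings([(1, 1), (5, 5), (2, 0.5)], (0, 0, 3, 3))
print(inside, outside)
```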

The Main Simulation

Block shapes receive more filler buildings based on the offset space remaining in the center of the block. When the simulation is run, each block shape receives a cluster of randomly selected buildings. The final output is a point cloud for each block that holds the reference number for each building that has been assigned.
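In spirit, the per-block output is just each point paired with a randomly chosen building reference number. A hou-free sketch of that pairing; the function name and IDs are hypothetical:

```python
import random

def assign_cluster(block_points, building_ids, seed=0):
    """Pair each point in a block with a random building reference."""
    rng = random.Random(seed)  # seeded so each block is reproducible
    return [(pt, rng.choice(building_ids)) for pt in block_points]

# Illustrative data: three points in a block, three building types.
cloud = assign_cluster([(0, 0), (1, 0), (0, 1)], building_ids=[101, 102, 103])
```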

Import the Assembled Block Point Cloud 

Python is used again to import the point-cloud files written during the simulation and to create the File nodes that read them. The original block positions are recalled and the blocks are reassembled in their original locations. The blocks are then sent to the final section, where the buildings are copied back onto them and the city is assembled.
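Minus the file-node plumbing, the reassembly step is just translating each block's local point cloud back to its stored origin. A sketch under that assumption, with placeholder data:

```python
def reassemble(block_clouds):
    """block_clouds: list of (origin, points) where points are local
    (x, y) positions; returns all points moved back to world space."""
    city = []
    for (ox, oy), points in block_clouds:
        city.extend((x + ox, y + oy) for x, y in points)
    return city

# Two blocks with stored origins and their local building points.
city = reassemble([((10, 0), [(0, 0), (1, 1)]), ((0, 5), [(2, 2)])])
```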

3D Menger Sponge - Part 2


A typical cubic Menger Sponge is shown in figure 1. It is a fractal object created by recursively replacing a cube with a grid of 27 sub-cubes of which 7 are removed. Figure 2 shows four objects where the depth of recursion is 0, 1, 2 and 3.

Project Details

This project was designed to bring the Menger Sponge into a third spatial dimension. By adjusting the code and adding the holeLUT, the shape can be expanded and points removed, adding interest to the design.


Python Code in Houdini

import random

# Declare the Houdini node and its geometry
node = hou.pwd()
geo = node.geometry()

# Parameters promoted onto the node
bx = node.evalParm("width")
by = node.evalParm("height")
bz = node.evalParm("depth")
iterations = node.evalParm("iterations")
scaleP = node.evalParm("objectScale")

bounds = [0, 0, 0, bx, by, bz]

# Declare the global variables
holes = []   # bounding boxes of deleted cubes
menger = []  # bounding boxes of kept cubes

# Indices of the cubes to remove from each grid of 27, listed in
# descending order so each pop leaves the remaining indices valid.
holeLUT = [19, 14, 12, 9, 7, 0]
#holeLUT = [22, 16, 14, 13, 12, 10, 4]

# Finds the center of the 'box' to create a point
def center(bbox):
    x, y, z, X, Y, Z = bbox
    return ((x + X) / 2.0, (y + Y) / 2.0, (z + Z) / 2.0)

# This proc returns the bounding box coordinates of a "row" of three boxes.
def row(x0, y0, z0, w, h, d):
    x, y, z = x0, y0, z0
    X, Y, Z = x + w, y + h, z + d
    boxes = []
    for n in range(3):
        boxes.append([x, y, z, X, Y, Z])
        z, Z = z + d, Z + d
    return boxes

# Recursion: break the bounding box into 27 subdivisions, delete the
# cubes named in holeLUT, then recurse into the survivors. Cubes that
# survive to the final depth are collected in the menger list.
def divide(bbox, depth):
    if depth == 0:
        menger.append(bbox)
        return []
    x0, y0, z0, x1, y1, z1 = bbox
    w = float(x1 - x0) / 3
    h = float(y1 - y0) / 3
    d = float(z1 - z0) / 3
    y = y0
    boxes = []
    for layer in range(3):
        x = x0
        for rows in range(3):
            boxes += row(x, y, z0, w, h, d)
            x = x + w
        y = y + h
    boxes = delete(boxes)
    # Recursion________________
    for box in boxes:
        divide(box, depth - 1)
    return boxes

# Uses the indices in the holeLUT to remove specific cubes
# from the list of 27 cubes in the "boxes" arg.
def delete(boxes):
    for n in range(len(holeLUT)):
        holes.append(boxes.pop(holeLUT[n]))
    return boxes

### implement code
divide(bounds, iterations)

# Create a pscale attribute and a point at the center of every kept cube
geo.addAttrib(hou.attribType.Point, "pscale", scaleP)
for box in menger:
    pt = geo.createPoint()
    pt.setPosition(center(box))
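As a quick sanity check on what the holeLUT changes: a standard Menger sponge removes 7 of the 27 sub-cubes per level, keeping 20 per level, while a six-entry holeLUT keeps 21. A tiny hou-free helper (illustrative, not part of the script above):

```python
def cube_count(kept_per_level, depth):
    """Cubes remaining after `depth` levels of recursion."""
    return kept_per_level ** depth

print(cube_count(20, 3))  # standard sponge at depth 3 -> 8000
print(cube_count(21, 3))  # six-hole variant at depth 3 -> 9261
```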


This was a fun project; once it's implemented in its basic form, there are infinite adjustments that can be added for visual interest. Using classes in Python helps grow and control the program, making it easier to organize the logic.

Summer To Winter


I took this photograph at the marsh by The Wyld Dock and Bar while looking for reference photos for the day-to-night project. If you haven't been there, it's a fantastic place to sit outside and have a meal or a cocktail.


Slide Left Or Right

What is Expected

For the third assignment of vsfx310 you are to convert one (summer time) image to its wintry "equivalent".

Your images must be presented on a webpage titled "Summer to Winter". It is recommended that you make them "rollover" images so that a person viewing the page can appreciate the "summer" to "winter" conversion. Consider saving versions of your converted image so that the webpage can pictorially explain your image conversion process.

I also really like The Wyld because it's where I had my wedding reception.

Crowd Environment PreVis


For my senior project I am interested in exploring crowd logic and interactions. Both Golaem and Houdini will be used to produce crowds from agents that I make myself or source from Mixamo. I will also be using my own motion capture along with a mix of Mixamo animations.

Houdini Crowd Test

Thanks to the awesome people at Golaem, I have received a year of access to their crowd simulation plugin. These are some very early tests. It will be fun to compare the similarities and differences between the software capabilities.

White Walker Crowd Sim


The concept will include agent interactions not only with the environment, but also with props such as arrows and with other agents. My goal is also to place them in a realistic environment using Houdini with Arnold and Redshift. The hero shots may not be up to photorealistic standards due to the limitations of the textures. I may have to recruit some help in that endeavor.

The environment will be grasslands with rugged hills and wooded areas. The following photos are some of the early pre-visualization renders that I have been building as I begin to set up the individual shots. The trees will be modeled in SpeedTree and the environment will be a mixture of Terragen and Houdini heightfields.



The agents will be rigged with Mixamo and prepared for baking in Houdini. The motion capture will be cleaned up with Vicon Blade and Motion Builder prior to being incorporated into Houdini.


Sierpinski's Gasket - Houdini and Mantra


The purpose of this project was to write a Python script that would generate points in 3D space using the fractal algorithm for Sierpinski's Gasket. The intention was to make the script renderable in several 3D rendering software packages.


Sierpinski's gasket is created by defining the vertices of a tetrahedron, choosing a random seed point within it, picking one of the tetrahedron's vertices at random, finding the midpoint between the seed point and that vertex, and storing that midpoint in a list. The stored midpoint becomes the new seed point, and the process repeats over and over to create the fractal.

1. Pick a random location

2. Define a seed point

3. Collect points (vertices) from a list of 4 to generate the Sierpinski Fractal

4. Find the mid-point of 1 and 2

5. Save the mid-point

The Sierpinski triangle, also called the Sierpinski gasket or the Sierpinski Sieve, is a fractal and attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles.

The Following Code Was Modified For Houdini And Rendered With Mantra:

Python Code:

node = hou.pwd()
geo = node.geometry()

import random

# Procedure halfstep returns the midpoint between two
# user-defined points
def halfstep(p1, p2):
    a = float(p1[0] + p2[0]) / 2
    b = float(p1[1] + p2[1]) / 2
    c = float(p1[2] + p2[2]) / 2
    return [a, b, c]

# Pick one of the tetrahedron's vertices at random
def pickpnt(pnts):
    return random.choice(pnts)

# The four vertices of the tetrahedron and the starting seed point
triangle = [ [0,0,1], [1,0,-1], [-1,0,-1], [0,1.5,-0.2] ]
seed_pnt = [0, 0.5, 0]

# Step halfway toward a random vertex and drop a point at each step
for n in range(25000):
    vert = pickpnt(triangle)
    seed_pnt = halfstep(vert, seed_pnt)
    pt = geo.createPoint()
    pt.setPosition(seed_pnt)


Modifications To Midpoint

def halfstep(p1, p2):
    a = float(p1[0] + p2[0]) / 2
    b = float(p1[1] + p2[1]) / 1.3
    c = float(p1[2] + p2[2]) / 2
    result = [a , b, c]
    return result

Through Manipulating The Initial Definition You Can Create Variations Of The Design.

def halfstep(p1, p2):
    n = random.randint(1, 10)
    a = float(p1[0] + p2[0]) / n
    b = float(p1[1] + p2[1]) / n
    c = float(p1[2] + p2[2]) / n
    result = [math.sin(a) , math.cos(b), c]
    return result

def pickpnt(pnts):
    result = random.choice(pnts)
    return result

Matte Painting - Day to Night


For the second assignment of vsfx310 you are to convert one (daytime) image to its nighttime "equivalent". The converted image MUST show some evidence of light sources. Motion blur MAY also be applied such as car headlights, star trails, passenger aircraft taking-off/landing, fireworks etc. 

Reference Photographs

Breakdown of the Scene

Matte Painting - Sky Replacement


For the first assignment of VSFX310 you are to prepare two images that have had their "skies" replaced by those taken from two entirely different images. 

Image 1
    This image must be of a scene that does NOT have foreground elements that reflect the replacement sky.

Image 2
    This image must be of a scene that DOES have foreground elements such as water, ice etc that reflect the replacement sky.

The Mill Collaboration - Week 2 - Update 2

Early Test of the Fracture

New file with improved UVs


I now have a working Houdini file that writes out all the faces within the fracture and maintains decent UVs. UVs and textures are an area I'm working to improve, and Houdini has a different approach to the process. As I mentioned in the previous post, I experienced serious issues with grouping and UVs. I learned the hard way the importance of pipeline integration and have gotten a taste of how it works, unfortunately through mistakes and problems of my own making. Sometimes you need to make mistakes to truly understand and grow from the experience.

Houdini, and VFX in general, are very finicky and particular in terms of integration. It's one thing to work in Houdini when you are using Mantra alone; when it comes to teamwork you need to be open and flexible about all sorts of integration procedures and be very aware of your team's needs. Learning to build within Houdini logically, to best support other artists down the line, is crucial. Below is a sequence of exploded views that should allow texture artists to easily understand the layers of the fracture and also paint and texture the effect.

The new Houdini file takes UV space into account and incorporates two separate fractures, based on the points of two spheres, to create the inside and outside geometry. Since the object is a cannonball, and the Sand Solver was beyond my capabilities, I wanted to create smaller chunks of "congealed, hardened" gunpowder that could be shaded.

New Node Tree

The following images show the bullet solver and DOP network, including the bullet settings. The rolling ball still has issues. It no longer floats like it did originally, due to too much force and not enough friction. Now the problem is balancing the angular velocity with the other parameters and the Uniform Force node I am using to add more impulse.

The Python node is the key to separating out and controlling the geometry. A friend helped me work out the script and showed me the value of Python within Houdini. It was crucial for offsetting the inside and outside pieces of geometry within the cannonball. Until now my only scripting experience was MEL inside Maya, from the Intro to Programming for Visual Effects course. I'm looking forward to next quarter and advanced scripting to dive deeper into Python. The next photo includes the uniform force.
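The core offset idea can be sketched without the hou API: push each fracture piece's points outward along the vector from the ball's center to the piece's centroid. This is a hypothetical reconstruction of the logic, not the actual script, and all data below is illustrative:

```python
def offset_piece(points, ball_center, amount):
    """Translate a piece's (x, y, z) points away from ball_center by
    `amount` along the center-to-centroid direction."""
    n = float(len(points))
    # Centroid of the piece
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Normalized direction from the ball center to the centroid
    dx, dy, dz = cx - ball_center[0], cy - ball_center[1], cz - ball_center[2]
    length = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1.0
    dx, dy, dz = dx / length, dy / length, dz / length
    return [(x + dx * amount, y + dy * amount, z + dz * amount)
            for x, y, z in points]

# A piece whose centroid sits at (1, 0, 0), pushed 1 unit further out.
piece = offset_piece([(0.5, 0, 0), (1.5, 0, 0)], (0, 0, 0), 1.0)
```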

Another issue that I need to address is the glue constraints. I have not been able to get them to cooperate, so for now I am releasing the glue based on frame count. This week I hope to solve that problem and ideally add some dust or debris to really sell the shot.

Top View of the New Fracture.

Week 2 Update 1 The Mill Collaboration

Week two has brought us progress and problems. The majority of my time is spent troubleshooting the fractures and trying to wrap my head around the Bullet solver and how it communicates and interacts with the rest of the solvers and nodes. The second version of the file, shown below, could only write out two types of Alembic files. The way I built it made it difficult to assign groups, and I couldn't find a workaround. The UVs were spaghetti, so I tried to use an Attribute Transfer of the UV projection past the fractures in an attempt to clean things up. After working so hard on trying to create primary and secondary fractures, it never occurred to me what the outcome of the UVs would be with so many broken parts.

Another difficulty I encountered was the concept of a VFX pipeline from Houdini into Maya. Until this point all of my Houdini work had been rendered in Mantra, so working out the details and building with the idea of handing off the project did not go well. One of the reasons I'm rebuilding the file is that I had painted myself into a corner in terms of exporting to the required format and making sure the UVs would not cause problems for my teammate. I was so proud of what I had accomplished, given the steep learning curve, that it was an abrupt wake-up call when I passed off the Alembic file and watched firsthand how messed up it truly was for future texturing and look development. I was also completely unaware that the force I used to propel the ball forward wasn't allowing it to spin. In Houdini everything looked fine to me, but once a texture was placed on it, it was clear there was a major issue with the animation.

Final Zbrush Composite

I am not a character artist. I wanted to learn ZBrush and find out how it can benefit me in the future. It is very powerful software that will save me time on future projects, with tools that can speed up workflows in both Maya and Houdini. Learning anatomy was interesting and a great way to get to know the software.

Render times slowed to a crawl without the use of a render farm, and I was unhappy with the final composite for class. I wish I had more time, but for a first sculpt I learned an immense amount about ZBrush and what it's capable of.

Painting and Details

The eyes were an issue: the left eye has the pupil layer shut off, while the right eye has it on, showing the HDR reflection I was trying to capture. The rest of the texturing and painting was trial and error. I'm happy with the results for a first sculpt.

Painting and Texturing.

The eyes were very difficult; ZBrush has a strange way of handling transparency. It's definitely a topic I will revisit in the future, or I'll find a way to touch up the reflections during the composite.