Crowd Environment PreVis


For my senior project I am interested in exploring crowd logic and interactions. Both Golaem and Houdini will be used to produce crowds from agents that I create myself or source from Mixamo. I will also be using my own motion capture along with a mix of Mixamo animations.

Houdini Crowd Test

Thanks to the awesome people at Golaem, I have received a year of access to their crowd simulation plugin. These are some very early tests. It will be fun to compare the similarities and differences between the software capabilities.

White Walker Crowd Sim


The concept will include agent interactions not only with the environment, but also with props such as arrows and with other agents. My goal is also to place them in a realistic environment using Houdini with Arnold and Redshift. The hero shots may not be up to photorealistic standards due to the limitations of the textures. I may have to recruit some help in that endeavor.

The environment will be grasslands with rugged hills and wooded areas. The following photos are some of the early pre-visualization renders that I have been building as I begin to set up the individual shots. The trees will be modeled in SpeedTree, and the environment will be a mixture of Terragen and Houdini heightfields.



The agents will be rigged with Mixamo and prepared for baking in Houdini. The motion capture will be cleaned up in Vicon Blade and MotionBuilder before being incorporated into Houdini.


Sierpinski's Gasket - Houdini and Mantra


The purpose of this project was to write a Python script that would generate points in 3D space using the fractal algorithm for Sierpinski's Gasket. The intention was to make the script renderable in several 3D rendering software packages.


Sierpinski's gasket is created by defining the vertices of a tetrahedron, choosing a seed point within the tetrahedron, picking one of the tetrahedron's vertices at random, finding the midpoint between the seed point and that vertex, and then storing that midpoint in a list. This process is repeated over and over in order to create the fractal.

1. Define the four vertices (points) of the tetrahedron

2. Define a seed point

3. Pick one of the four vertices at random

4. Find the mid-point of the seed point and the chosen vertex

5. Save the mid-point and use it as the new seed point
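As a quick sanity check, the steps above can be sketched in plain Python outside of Houdini (the function and variable names here are my own):

```python
import random

def halfstep(p1, p2):
    # Midpoint between two 3D points
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]

def sierpinski_points(vertices, seed, n):
    # Chaos game: repeatedly jump halfway toward a randomly chosen vertex
    pts = []
    p = seed
    for _ in range(n):
        p = halfstep(random.choice(vertices), p)
        pts.append(p)
    return pts

# Tetrahedron vertices and seed point matching the Houdini script
tetra = [[0, 0, 1], [1, 0, -1], [-1, 0, -1], [0, 1.5, -0.2]]
points = sierpinski_points(tetra, [0, 0.5, 0], 25000)
```

Because every jump is a midpoint toward a vertex, all generated points stay inside the bounding box of the tetrahedron and the seed.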

The Sierpinski triangle, also called the Sierpinski gasket or the Sierpinski Sieve, is a fractal and attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles.
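One way to state this more formally: the midpoint rule is an iterated function system (IFS) of four contraction maps, one per vertex, and the chaos game samples its attractor. With vertices $v_1, \dots, v_4$:

```latex
f_i(p) = \frac{p + v_i}{2}, \qquad i \in \{1, 2, 3, 4\},
\qquad A = \bigcup_{i=1}^{4} f_i(A)
```

where the attractor $A$ is the gasket itself; iterating randomly chosen maps $f_i$ from any starting point converges onto $A$.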

The Following Code Was Modified For Houdini And Rendered With Mantra:

Python Code:

node = hou.pwd()
geo = node.geometry()

# Add code to modify contents of geo.
# Use drop down menu to select examples.

# math is used by the variations further below
import random, math

# halfstep returns the midpoint between two
# user-defined points
def halfstep(p1, p2):
    a = float(p1[0] + p2[0]) / 2
    b = float(p1[1] + p2[1]) / 2
    c = float(p1[2] + p2[2]) / 2
    return [a, b, c]

# pickpnt returns one of the four tetrahedron vertices at random
def pickpnt(pnts):
    return random.choice(pnts)

# Vertices of the tetrahedron and the starting seed point
triangle = [ [0,0,1], [1,0,-1], [-1,0,-1], [0,1.5,-0.2] ]
seed_pnt = [0,0.5,0]

for n in range(25000):
    vert = pickpnt(triangle)
    seed_pnt = halfstep(vert, seed_pnt)
    # Store each midpoint as a point in the SOP geometry so it renders
    pt = geo.createPoint()
    pt.setPosition(seed_pnt)


Modifications To Midpoint

def halfstep(p1, p2):
    a = float(p1[0] + p2[0]) / 2
    b = float(p1[1] + p2[1]) / 1.3
    c = float(p1[2] + p2[2]) / 2
    result = [a , b, c]
    return result

Through Manipulating The Initial Definition You Can Create Variations Of The Design.

def halfstep(p1, p2):
    n = random.randint(1, 10)
    a = float(p1[0] + p2[0]) / n
    b = float(p1[1] + p2[1]) / n
    c = float(p1[2] + p2[2]) / n
    result = [math.sin(a) , math.cos(b), c]
    return result

def pickpnt(pnts):
    result = random.choice(pnts)
    return result

The Mill Collaboration - Week 2 - Update 2

Early Test of the Fracture

New file with improved UVs


I now have a working Houdini file that writes out all the faces within the fracture and maintains decent UVs. UVs and textures are an area I'm working to improve, and Houdini has a different approach to the process. As I mentioned in the previous post, I experienced serious issues with grouping and UVs. I learned the hard way the importance of pipeline integration and have gotten a taste of how it works, unfortunately through mistakes and problems of my own making. Sometimes you need to make mistakes to truly understand and grow from the experience.

Houdini, and VFX in general, are very particular about integration. It's one thing to work in Houdini when you are using Mantra alone; when it comes to teamwork, you need to be open and flexible about all sorts of integration procedures and very aware of your team's needs. Learning to build within Houdini logically, to best support other artists down the line, is crucial. Below is a sequence of exploded views that should allow texture artists to easily understand the layers of the fracture and to paint and texture the effect.

The new Houdini file takes UV space into account and incorporates two separate fractures, based on the points of two spheres, in order to create the inside and outside geometry. Since the object is a cannonball, and the Sand Solver was beyond my capabilities, I wanted to create smaller chunks of "congealed, hardened" gunpowder that could be shaded.

New Node Tree

The following images show the Bullet solver and DOP network, including the Bullet settings. The rolling ball still has issues. It no longer floats like it did originally, which was caused by too much force and not enough friction. Now the problem is balancing the angular velocity against the other parameters and the Uniform Force node I am using to add more impulse.

The Python node is the key to separating out and controlling the geometry. A friend helped me work out the script and showed me the value of Python within Houdini. It was crucial for offsetting the inside and outside pieces of geometry within the cannonball. Until now my only scripting experience was inside Maya with MEL from Intro to Programming for Visual Effects. I'm looking forward to next quarter and advanced scripting to dive deeper into Python. The next photo includes the Uniform Force.
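As a rough, Houdini-free sketch of the offset idea (the names and structure here are hypothetical, not the actual script): each fractured piece gets pushed away from the cannonball's center along the direction from the center to the piece's own centroid.

```python
def offset_piece(piece, center, amount):
    # piece: list of [x, y, z] points; center: [x, y, z]; amount: distance
    n = len(piece)
    # Centroid of the piece
    cx = sum(p[0] for p in piece) / n
    cy = sum(p[1] for p in piece) / n
    cz = sum(p[2] for p in piece) / n
    # Direction from the ball's center to the centroid, normalized
    dx, dy, dz = cx - center[0], cy - center[1], cz - center[2]
    length = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1.0
    dx, dy, dz = dx / length, dy / length, dz / length
    # Move every point of the piece outward by the same offset
    return [[p[0] + dx * amount, p[1] + dy * amount, p[2] + dz * amount]
            for p in piece]

# Example: a piece sitting at x = 1 slides further out along +x
piece = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]]
moved = offset_piece(piece, [0.0, 0.5, 0.0], 0.5)
```

In Houdini the same logic would run over the points of each named fracture piece, but the geometry traversal details depend on how the pieces are grouped.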

Another issue that I need to address is the glue constraints. I have not been able to get them to cooperate, so for now I am releasing the glue based on frame count. This week I hope to solve that problem and ideally add some dust or debris to really sell the shot.

Top View of the New Fracture.

Week 2 Update 1 The Mill Collaboration

Week two has brought us progress and problems. The majority of my time is spent troubleshooting the fractures and trying to wrap my head around the Bullet solver and how it communicates and interacts with the rest of the solvers and nodes. The second version of the file, shown below, could only write out two types of Alembic files. The way I built it made it difficult to assign groups, and I couldn't find a workaround. The UVs were spaghetti, so I tried to use an Attribute Transfer of the UV projection past the fractures in an attempt to clean things up. After working so hard on creating primary and secondary fractures, it never occurred to me what the UVs would look like with so many broken parts.

Another difficulty I encountered was the concept of a VFX pipeline from Houdini into Maya. Until this point all of my Houdini work had been rendered in Mantra, so working out the details and building with the idea of handing off the project did not go well. One of the reasons I'm rebuilding the file is that I found I had painted myself into a corner in terms of exporting to the required format and making sure the UVs would not cause problems for my teammate. I was so proud of what I had accomplished given the steep learning curve that it was an abrupt wake-up call when I passed off the Alembic file and watched firsthand how messed up it truly was for future texturing and look development. I was also completely unaware that the force I used to propel the ball forward wasn't allowing it to spin. In Houdini everything looked fine to me, but once a texture was placed on the ball it was clear there was a major issue with the animation.

Updated Arnold Shader and Tracking

They say the third time is a charm. I re-shot the footage a third time and was finally able to pull a halfway decent track. I struggled with my original two tracks for many, many embarrassing hours. The drab colors, along with my inability to understand how to frame trackable backgrounds, worked really well together. It's all a learning experience, so they say. Let's start with some good news: the photo below is an updated render of the VTOL with Arnold shaders. It is starting to look much better but still needs work.

The next group of photos follows the usual setup. Notice that this time I managed to frame lots of detail to help Nuke better track the shot.

The next group of photos is a short series of screen captures that show the track and the point cloud.

Now that I have imported everything into Maya, there is a large mess to unravel. Any progress is good progress. I clearly screwed up the scaling along with the XYZ axes.

Footage and Preliminary Tracking

I shot footage on the top of an open parking garage. I used two different lenses, a 35mm f/1.8 and a 10-16mm f/2.8. The wider lens seems to have a lot of distortion, or possibly a rolling-shutter issue. I brought the wider footage into Nuke and started tracking the shot. The project presents many new difficulties to overcome; accounting for size and spacing is very different. In my situation I will be landing or launching a space shuttle/craft on the rooftop.

The next images are the clean plate, white balance, gray ball, and HDRi.

Early Nuke tracks as I attempt to build a point cloud to bring into Maya

Project 2 – Compositing Reflection, Refraction, and Translucency

Day 1 and 2

Shooting the background plates

The weekend I set out to shoot the background proved to be a challenging one. Friday and Saturday ended up being overcast, making it difficult to capture shadows and daylight information; the photos were washed out and otherwise uninteresting. Sunday proved to be a better day because the sun was out intermittently, though it was difficult chasing breaks in the clouds. I would set the white balance and then the sun would hide behind a cloud. The second obstacle was the wind. The trees were moving, causing shadows and close-up branches to shift, which made bracketing the HDR a challenge. It was a waiting game between the sun and the wind, all the while hoping that none of the equipment and props moved.

I had three possible background plates, but some of the exposures were off because the sun kept ducking behind the trees. Just before sunset I decided to head out to the Old Dairy Farm and the marsh to catch the last of the light, and was able to capture the background below.

Another challenge was shooting the background so close to the ground. I used a small flexible tripod that was light compared to my others; keeping it still was not easy when taking multiple exposures. The small tripod really came back to haunt me later when I started setting up the plates inside of Maya. Below was my last attempt to capture a background. This last batch of photos was going to be my background, only in my rush I forgot to shoot a clean plate... lesson learned! I decided to go with the marsh photos above, but at the last moment changed my mind.

Once I realized that the clean plate was missing, I decided to rush back to the location and re-shoot the plate from a new angle. I changed my direction due to the location of the sun and gathered the final photos below. Again the small lightweight tripod caused me issues, which I discovered while compositing the HDRs. I ended up aligning the photos in Photoshop, which helped keep all the backgrounds consistent. This is important when setting up cameras and verifying perspective. It also helps to dial in shadows and specular information.