Free MOCAP databases – Stage 1

I’ve recently started experimenting with MOCAP animations. For my first test, I used a free MOCAP database from Carnegie Mellon University.

First, I placed my 3D dino model in Blender.

Then I downloaded a version of the database that had been translated for use with Blender (see link below), selected a “walk” animation file (see the .xlsx database reference) and imported it directly as an armature.
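As a side note, these converted files are typically distributed as BVH, a plain-text format whose MOTION section records the clip length, so you can check an animation’s frame count before importing it. A minimal sketch in Python (the header fragment below is illustrative, not copied from an actual CMU file):

```python
def bvh_clip_info(bvh_text):
    """Read frame count and seconds-per-frame from a BVH file's MOTION section."""
    frames = frame_time = None
    for line in bvh_text.splitlines():
        line = line.strip()
        if line.startswith("Frames:"):
            frames = int(line.split(":")[1])
        elif line.startswith("Frame Time:"):
            frame_time = float(line.split(":")[1])
    if frames is None or frame_time is None:
        raise ValueError("no MOTION header found")
    return frames, frame_time, frames * frame_time  # count, step, duration in seconds

# Illustrative header fragment; real files also carry a full skeleton HIERARCHY.
sample = "MOTION\nFrames: 344\nFrame Time: 0.008333\n"
frames, step, duration = bvh_clip_info(sample)  # 344 frames, roughly 2.9 seconds
```

This is just a quick sanity check; Blender’s BVH importer reads the same header when it builds the armature.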

Then I attached it to my mesh. It is worth mentioning that this MOCAP file was created to fit a regular human body, so I had to try different ways of attaching it to my chubby character. Finally, after raising my model’s arms and rotating the armature bones, I was able to get the hands moving without colliding with the sides of the belly.

The initial MOCAP animation only lasted 344 frames. To get a longer animation, I decided to make four different renders, creating one camera for each take and rendering individual image sequences.
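The arithmetic behind the longer edit is straightforward: four back-to-back takes of the same 344-frame clip. A small sketch of how the frame ranges and total runtime work out (the 24 fps figure is an assumption, Blender’s default; the post doesn’t state the frame rate):

```python
def plan_takes(clip_frames, n_takes, fps=24):
    """Frame range each take occupies in the final edit, plus total runtime."""
    ranges = [(i * clip_frames + 1, (i + 1) * clip_frames) for i in range(n_takes)]
    total_frames = n_takes * clip_frames
    return ranges, total_frames, total_frames / fps  # runtime in seconds

ranges, total, seconds = plan_takes(344, 4)
# takes cover frames 1-344, 345-688, 689-1032 and 1033-1376: 1376 frames, ~57 s at 24 fps
```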

Finally, I imported all four image sequences into Adobe Premiere, added a soundtrack from the YouTube Audio Library (free) and rendered a video.

I have to say that, although it was exciting to use a MOCAP database, the result felt a bit glitchy, and my character made some strange movements at the end of the animation. I plan to keep experimenting to see if it is possible to create smooth transitions between animations. I also want to test the same database in Maya to see if I get a noticeably different result.

Useful links:
Carnegie Mellon University Motion Capture Database
http://mocap.cs.cmu.edu/

Translated files that can be used with Blender
https://sites.google.com/a/cgspeed.com/cgspeed/motion-capture

MOCAP Blender tutorial – importing MOCAP files as an armature
https://www.youtube.com/watch?v=mzWyO838C-0

YouTube Audio Library
https://www.youtube.com/audiolibrary/music

Micronarrative #3 “Home Alone”

This is my third short story. Again, I went through my image archive and found these photos of a quite interesting location; I was immediately attracted by both the reflections in the building’s glass and the big cardboard box blocking the entrance.

The model produced by Photoscan was quite defective, and I had to spend two entire days repairing the mesh and painting the textures, but in the end I was quite satisfied with the result.

Again, I used Google’s text-to-speech tool and experimented with the voice to add a more dramatic tone to both the speaker and the cat.

This is the story.

Micronarrative #2 “Stairs”

This is my second short story. I went through my photographic archive to find other collections of photos to build photogrammetry objects, and for this project I selected a quite interesting staircase I found whilst walking around Paris.

In this project I wanted to take things further by creating a quite different atmosphere around the 3D model.

Instead of going for a daylight setting, I wanted to create a night atmosphere using a black background and one orange light.

This was the text for the story:

I was walking down the street
I couldn’t help walking down those stairs
now I am away from everything

I decided to use Google’s text-to-speech app to record the character’s words, and I downloaded the street sound from a free-to-use source. This is the final video:

Micronarrative #1 “Cowboy boot in Paris”

Since arriving in London, I have been experimenting with new ways to create short audiovisual stories. For my first project, I created a small animation using the 3D model of a boot I once captured whilst walking around Paris.

My aim was to put the boot back into the city and take the viewer for a small ride ending right next to the boot.

I also incorporated a free-to-use sound file.

Piggy project – a commission from artist Jennet Thomas

In January 2018, artist Jennet Thomas hired me to help her create new 3D animated footage for her latest piece. The initial idea was to play with a 3D model of a pig she had bought from an asset store.

So I started making a series of animations using multiple instances of the pig model.

After reviewing this material, Jennet suggested that I take a different approach and provided me with some images of a specific location she wanted to use, so we agreed on creating several animations of different piggies walking on the branches of a fallen tree.

So I rigged the model and created 3D branches projected over a still image that would later be used as the background for the digital compositing process.

Afterwards, I made multiple animations of different instances of the model with varying sizes and speeds.


Finally, I used After Effects to place the individual animations on separate layers (including the original background image) and rendered the entire project. This footage was later incorporated into the final audiovisual piece.

This is a video of the final composited shot:

Smells like chicken – Collaboration with artist Jennet Thomas

Around June 2016 I started talking with artist Jennet Thomas, and we thought it could be interesting to explore some sort of collaboration involving animation and photogrammetry.

So I spent the rest of the summer learning how to rig and animate a 3D model, and I also experimented with creating faithful digital copies of existing objects through photogrammetry.

I worked on two different subjects. First, I made a quite basic model from a stuffed fox that belonged to my wife.

Then I added an armature and animated it.

After this first test, I wanted to work with another character, a rubber chicken that Jennet gave me as a present. This is what I managed to create:

From here, Jennet and I started to discuss how to bring to life the two central characters of a project that she was already working on.

The aim was to create two digital replicas of her partner wearing two different costumes. These replicas would then switch places with the actual actor performing in different digital scenarios.

This is an image showing how Jennet captured the real-life performance that she later digitally placed inside the virtual space.

So we found a suitable location and ran a photo session, capturing more than 100 photographs of each character.


Afterwards, I used Photoscan to create two 3D models. The results were quite unexpected.

However, after having a look at the resulting 3D models, we agreed that the unexpected results actually created new opportunities for experimentation, so we carried out a series of experiments using both characters, and some of the results were incorporated into the final video. This is a series of videos showing different creative approaches.
