Public transport VR

For the last few years, I have developed many different VR experiences that were displayed inside buildings and in controlled-access areas. Allowing the viewer to move through physical space has proven to be a challenging task, as you have to restrict the range of movement whilst constantly supervising it in order to prevent accidents.

Placing VR in public transport vehicles like buses, trains, boats and perhaps elevators seems to be a move in the right direction: these vehicles tend to be safe, and existing VR roller coasters show that controlled physical movement complements the immersive illusion by adding the haptic sensation of real motion and acceleration/deceleration.

The first stage of this research project will involve learning how to capture data from the accelerometer and the gyroscope of the VIVE FOCUS in order to sync the movement of the 3D camera (through digital space) with the physical movement of the viewer.

I have already found some tutorials for Unreal Engine, so I hope to develop my first prototype by the end of April 2019.
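
As a first sketch of the underlying idea, and only as an illustration (none of this is the VIVE FOCUS SDK or Unreal's API, and in practice raw integration drifts quickly without filtering or sensor fusion), naively double-integrating the accelerometer readings, once gravity has been removed, gives a rough estimate of the headset's displacement that could then be applied to the virtual camera:

# Conceptual sketch only: estimate displacement by integrating
# accelerometer samples. The data source is a placeholder, not a real SDK call.

def integrate_motion(samples, dt):
    """samples: list of (ax, ay, az) in m/s^2 with gravity already removed."""
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    trajectory = []
    for ax, ay, az in samples:
        for i, a in enumerate((ax, ay, az)):
            velocity[i] += a * dt            # acceleration -> velocity
            position[i] += velocity[i] * dt  # velocity -> position
        trajectory.append(tuple(position))
    return trajectory

# Example: one second of gentle forward acceleration sampled at 100 Hz
samples = [(0.5, 0.0, 0.0)] * 100
path = integrate_motion(samples, dt=0.01)
print(path[-1])  # approximate displacement after 1 s (~0.25 m forward)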

Free MOCAP databases – Stage 1

I’ve recently started experimenting with MOCAP animations. For my first test, I used the free MOCAP database from Carnegie Mellon University.

First, I placed my 3D dino model in Blender.

Then, I downloaded a version of the MOCAP database that had been converted so it could be used with Blender (see link below), selected a “walk” animation file (see the .xlsx database reference) and imported it directly as an armature.
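
For reference, the same import can also be scripted with Blender's Python API. This is only a minimal sketch, assuming the converted files are in BVH format and that the script is run from inside Blender; the file path is a placeholder:

import bpy

# Import a converted CMU motion capture file directly as an armature.
bpy.ops.import_anim.bvh(
    filepath="/path/to/cmu_walk.bvh",
    target='ARMATURE',
    global_scale=1.0,
)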

Then, I attached it to my mesh. It is worth mentioning that this MOCAP file was created to fit a regular human body, so I had to try different ways to attach it to my chubby character. Finally, after raising my model’s arms and rotating the armature bones, I was able to get the hands moving without colliding with the sides of the belly.

The initial MOCAP animation only lasted 344 frames, so in order to get a longer animation I decided to make four different renders, creating one camera for each take and rendering individual image sequences.
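
Rendering one image sequence per camera can also be driven from Blender's Python console. The following is only a rough sketch of how those four takes could be scripted; the camera names and output paths are placeholders, and it assumes the four cameras already exist in the scene:

import bpy

scene = bpy.context.scene
# Placeholder camera names; one render take per camera.
camera_names = ["Camera_1", "Camera_2", "Camera_3", "Camera_4"]

scene.frame_start = 1
scene.frame_end = 344  # length of the CMU walk clip

for name in camera_names:
    scene.camera = bpy.data.objects[name]          # make this camera the active one
    scene.render.filepath = "//renders/" + name + "/frame_"
    bpy.ops.render.render(animation=True)          # write the image sequence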

Finally, I imported all four image sequences into Adobe Premiere, added a soundtrack from the YouTube Audio Library (free) and rendered a video.

I have to say that, although it was exciting to use a MOCAP database, the result was a bit glitchy and my character did some strange movements at the end of the animation. I plan to keep experimenting to see if it is possible to create smooth transitions between animations, and I also want to test the same database in Maya to see if I get a noticeably different result.

Useful links:
Carnegie Mellon University Motion Capture Database
http://mocap.cs.cmu.edu/

Translated files that can be used with Blender
https://sites.google.com/a/cgspeed.com/cgspeed/motion-capture

MOCAP Blender tutorial – importing MOCAP files as an armature
https://www.youtube.com/watch?v=mzWyO838C-0

YouTube Audio Library
https://www.youtube.com/audiolibrary/music

Interactive AR project – Stage 1

This is my second Augmented Reality application. My aim for this project is to create an interactive app featuring a little dinosaur that has to defeat some creepy monsters by incinerating them with his fire breath.

Stage 1

My first goal was to learn how to move a character through space whilst switching between two different animations: one that plays when the character is waiting and another that plays every time the character moves around the space. I also wanted to incorporate an on-screen joystick to control the character’s movements.
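
In Unity this switching is normally handled by an Animator controller driven by the joystick input; the snippet below is just a language-agnostic sketch of the decision logic (the threshold and animation names are illustrative, not Unity's API):

# Illustrative logic only: pick the animation based on joystick input.
DEADZONE = 0.1  # ignore tiny joystick movements

def choose_animation(joystick_x, joystick_y):
    magnitude = (joystick_x ** 2 + joystick_y ** 2) ** 0.5
    return "walk" if magnitude > DEADZONE else "idle"

print(choose_animation(0.0, 0.0))   # -> idle
print(choose_animation(0.7, 0.2))   # -> walk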

So my first task was to make two separate animations of my 3D rigged model:

I found a great YouTube tutorial that had almost everything I needed. It was a bit out of date, so I had to look for a secondary source to understand how the Vuforia plugin is used nowadays; it is actually easier now, as Vuforia is already included in Unity from version 2017.2 onwards. I managed to replicate the Donald Trump app and it worked! My next step was to use my own 3D model and animations.

In order to create my own project, I had to spend a bit of time understanding how image targets work. One of the first things I learned is that Vuforia rates your images from zero to five stars. My first image got zero!!!

After doing a bit of research, I found a very good guide (on Vuforia’s website) and I went from zero to one and finally up to three stars!

It turns out that images with high contrast and non-repetitive patterns make much better targets!
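
Vuforia's exact star rating is computed on its servers, but a rough local sanity check, which is only my own proxy and not Vuforia's actual algorithm, is to count how many corner-like features OpenCV can find in a candidate image; flat or repetitive pictures yield very few:

import cv2

# Rough proxy for target quality: count corner features in the candidate image.
# 'target.jpg' is a placeholder for the image you plan to upload to Vuforia.
image = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)
corners = cv2.goodFeaturesToTrack(image, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
count = 0 if corners is None else len(corners)
print("detected features:", count)  # more, well-spread features usually means a better target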

So I used this image as my target image for my project:

I even took the risk of adding a flat terrain for the character to walk on top of, and it worked!

Finally, I compiled my project and uploaded it to my phone using an app called Android File Transfer (I had to do this as I was working on a Mac; Windows PCs allow you to connect directly to your Android phone).

And this is a video of me testing the first stage:

From here, I am planning to build a more complex environment for the character to walk on top of. So far I have made some tests, but the character just walks through the 3D meshes.

I did a bit of research and found out that, apparently, all colliders are deactivated by Vuforia’s script. Figuring this out is this project’s Stage 2.

Resources

First YouTube tutorial:
https://www.youtube.com/watch?v=khavGQ7Dy3c

Vuforia updated content:
https://library.vuforia.com/articles/Training/getting-started-with-vuforia-in-unity.html#about

Vuforia target features
https://library.vuforia.com/articles/Solution/Optimizing-Target-Detection-and-Tracking-Stability.html

Android file transfer
https://www.android.com/filetransfer/

Micronarrative #3 “Home Alone”

This is my third short story. Again, I went through my image archive and found these photos of a quite interesting location; I was immediately attracted by both the reflections on the building’s glass and the big cardboard box blocking the entrance.

The model produced by Photoscan was quite defective and I had to spend two entire days repairing the mesh and painting the textures, but in the end I was quite satisfied with the result.

Again, I used Google’s text-to-speech tool and experimented with the voice to add a more dramatic tone to both the speaker and the cat.
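
For anyone who wants to script this step, the gTTS Python package wraps Google's text-to-speech service (not necessarily the exact tool I used, just a comparable option); slowing the voice down is one simple way to push the delivery towards a more dramatic tone. The text below is only a placeholder:

from gtts import gTTS

# Placeholder line; the real recording uses the story's actual text.
line = "Replace this with a line from the story."

# slow=True stretches the delivery, which can read as more dramatic.
speech = gTTS(text=line, lang="en", slow=True)
speech.save("speaker_line.mp3")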

This is the story.

AR animated sculpture

For this project, I planned to create an animated projection of a digital sculpture. My aim was to learn the basic workflow for creating an AR application using Unity.

Right from the beginning of my research, I got pretty frustrated by the fact that ARCore, one of the leading frameworks used to create AR applications, was pretty restrictive regarding mobile devices: basically, I discovered that my phone wasn’t compatible with its content.

I was determined to find a workaround. I saw some references from people who claimed they had hacked ARCore to make it work with a broader variety of phones, and I also read some articles featuring the best AR apps on the market; that is when I started to notice one name: Vuforia.

Later I found a pretty interesting tutorial with this title: “Let’s Make an Augmented Reality App in 6 MINUTES!!!!”. I was intrigued, so I followed it. Soon I discovered that making a basic app with Vuforia was pretty straightforward; however, I could not find any reference suggesting that I could play it on my phone. At this point, I decided to give it a go, as I was willing to spend six minutes to find out whether my phone was compatible.

First, I learned that you could install Vuforia into Unity. Then I found that the video was from 2017 and a lot of things had changed since then; the biggest improvement was that Vuforia became included inside Unity from version 2017.2 onwards!

Quick tip:

I always try to avoid working on the latest version of a specific piece of software unless I really need to. Very often you will find that the features you need are already available in previous versions, with the added benefit of finding many more tutorials and forum references based on that specific version. I personally find it exhausting trying to keep up with the newest trending thing and usually end up using software that was released at least one year ago.

So I downloaded Unity 2017.2. My first task was to create a 3D animation of a sculpture that I later imported into Unity.

Using Vuforia is quite straightforward: first you have to create a free Vuforia account, then you have to create a license key, which is basically a reference string that links your apps to your Vuforia account. If you follow the tutorial I mentioned, at some point you will need to copy and paste your license string into your Unity project.

To create an image target, first you have to pick or create an image to be used as your target. For this project, I decided to use the same image I used as the texture for the sculpture.

Then you assign that image to a target database.

Finally, you download that database (Unity format), import it into your project and install it by double-clicking on the file. Now it will be available for you to activate.

In the end, it didn’t take me 6 minutes; I actually spent around 20 minutes, as the tutorial was a bit out of date and I ended up looking for updated instructions on Vuforia’s plugin (check the resources at the end of this page), but it was worth it! I compiled my AR app and transferred it to my Android device using the Android File Transfer software (I had to do this as I was working on a Mac; Windows PCs allow you to connect directly to your Android phone).

And this is a video of me testing the app.

Now that I understood the basic workflow, I decided to work on a more challenging project exploring character animation and user interaction. I called this project Interactive AR Project.

Resources

Unity download archive (previous versions)
https://unity3d.com/get-unity/download/archive

YouTube tutorial:
https://www.youtube.com/watch?v=khavGQ7Dy3c

Vuforia updated content:
https://library.vuforia.com/articles/Training/getting-started-with-vuforia-in-unity.html#about

Vuforia account
https://developer.vuforia.com/vui/auth/register

Android file transfer
https://www.android.com/filetransfer/

Micronarrative #2 “Stairs”

This is my second short story. I went through my photographic archive to find other collections of photos from which to build photogrammetry objects. For this project I selected a quite interesting staircase I found whilst walking around Paris.

In this project I wanted to take things further as I planned to create a quite different atmosphere surrounding the 3D model.

Instead of going for a daylight setting, I wanted to create a night atmosphere using a black background and one orange light.

This was the text for the story:

I was walking down the street
I couldn’t help to walk down those stairs
now I am away from everything

I decided to use Google’s text-to-speech tool to record the character’s words. I also downloaded the street sound from a free-to-use source, and this is the final video:

Micronarrative #1 “Cowboy boot in Paris”

Since I arrived in London, I started experimenting with new ways to create short audiovisual stories. For my first project, I created a small animation using the 3D model of a boot I once captured whilst walking around Paris.

My aim was to put the boot back into the city and take the viewer for a small ride ending right next to the boot.

I also incorporated a free-to-use sound file.

Immersive Splash

A few days ago I had a great opportunity to run a workshop as a visiting lecturer at Camberwell College of Arts. Eleven BA Photography students took part in this day-long activity, and at the end of the day everybody got a chance to test their own stereoscopic 360 content.

The following are examples of the type of content that we created during the workshop:

Photogrammetry Booth

I recently joined a group interested in photogrammetry; it is worth saying that I have been exploring this technology for the past three years. I believe there is a lot of ground yet to be covered and that it has huge potential for the VR, AR and game development industries.

A month ago, I had the opportunity to attend a quite interesting meeting held at CSM, where a research team shared their experience of building a portable photogrammetry booth that could be used by museums and similar organizations interested in documenting their collections.

I was really impressed by the array of 7 DSLR cameras, all reacting to the synchronized movement of a rotating platform holding the object being captured. This project inspired me to create a low-cost alternative using Raspberry Pis, as I already knew you could use them as a cheap alternative to DSLR cameras.

I started my research by looking at the Raspberry Pi camera features. I was lucky enough to have access to both the 5 MP and 8 MP camera modules, so it was easy for me to run a series of tests using both cameras.

First, I learned how to take a single image using a Python script executed on the Raspberry Pi. The first two lines import the libraries needed to take the photo, the third line creates the camera object and the fourth starts the preview; from there, the consecutive lines set different parameters for the camera. Next, camera.capture saves the image to a specific folder, and the last line stops the preview.

from picamera import PiCamera
from time import sleep

camera = PiCamera()

camera.start_preview()
sleep(3)
camera.iso = 100
camera.shutter_speed = 8000
camera.sharpness = 10
camera.resolution = camera.MAX_RESOLUTION
camera.capture('/home/pi/Desktop/test3.jpg')
camera.stop_preview()

It worked!

Next, I wanted to write a script that would let me take photographs of an object from eight different angles. As I did not have a rotating platform, I knew I needed enough time to manually rotate the object by 45 degrees each time in order to cover a full 360-degree rotation in eight steps.

And this is the code I managed to create; it is basically the single-image code duplicated eight times, separated by a small delay (sleep(5)). A more compact loop version is sketched after the listing.

from picamera import PiCamera
from time import sleep

camera = PiCamera()

camera.start_preview()
sleep(3)
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA1.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA2.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA3.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA4.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA5.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA6.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA7.jpg')
camera.stop_preview ()
sleep(5)
camera.start_preview()
camera.iso = 150
camera.shutter_speed =7000
camera.sharpness = 100
camera.capture('/home/pi/Desktop/imagesP4/imageA8.jpg')
camera.stop_preview ()
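
The duplicated version above works, but the same behaviour can be written as a short loop. This is just a tidier sketch of the same script, with the same settings and file names:

from picamera import PiCamera
from time import sleep

camera = PiCamera()

for i in range(1, 9):  # eight captures, one per 45-degree rotation
    camera.start_preview()
    if i == 1:
        sleep(3)  # let the sensor settle before the very first shot
    camera.iso = 150
    camera.shutter_speed = 7000
    camera.sharpness = 100
    camera.capture('/home/pi/Desktop/imagesP4/imageA{}.jpg'.format(i))
    camera.stop_preview()
    if i < 8:
        sleep(5)  # time to rotate the object by hand before the next shot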

I also managed to create a proper setup, using two sidelights and a lightbox to soften shadows as much as possible, and I built a quite practical camera tripod out of a flexible desk lamp I had around.

Continued on page two.

Marvellous Reality – 360 CGI+Real-life footage research project

In March 2018 I started leading a research project funded by the Debora Arango Higher Education School. The project aimed to create a bespoke process for the creation of an immersive story.

For six months I led, and worked side by side with, a team of professionals from the audiovisual industry: an audio specialist, a camera crew, a producer and actors.

The first plan was to create a 3D environment and place real-life footage of performing actors inside it. So we started shooting actors against a green screen and testing the resulting video inside very basic 3D environments.

After testing the chroma-keyed characters inside both Unity and Blender, I decided that, due to the complexity of this project, the most efficient course of action was to produce a monoscopic 360 video with ambisonic sound instead of building a VR app.

We spent a good number of working hours trying to figure out the best way to place characters in front of a moving camera. One of the biggest challenges was consistently matching lighting schemes on both ends (physical and digital). Another big challenge was figuring out how to place and reveal characters in the 3D environment without exposing their flatness once the viewer’s point of view changed.

We managed to overcome all these obstacles and ended up introducing a total of five characters, including two that directly interacted with the viewer at very close range.

Regarding the 3D environment, we discovered that buying an existing one was not an option: we wanted it to reflect specific architectural features of our culture, and the models available online have very different ones. Apparently, depicting a typical Colombian city hasn’t been of much interest to 3D artists.

So I decided to start building the entire fragment of the city from scratch.

The first models were quite basic, and this was actually pretty useful as the script evolved a lot and we ended up changing the story and characters more than three times.

I also started testing two different approaches to building creation. The first one explored the process of building an entire facade by extruding some features from a frontal photo of an existing building.

The process proved to add a quite realistic look to the scene but, at the same time, significantly increased the size of the textures used in the project and demanded a lot of retouching, e.g. removing cables from the facades or getting rid of shiny materials on doors and windows.

The second option was to build all the locations from scratch using conventional 3D modeling and texturing techniques.

In the end, I decided to make almost every asset from scratch in order to have enough flexibility to design the 3D environment. In total, it took me over three months to build a fragment of a typical Colombian city spread across a 5 by 5 array of blocks.

It is worth mentioning that I did use Google Maps to navigate the cities of Medellín and Envigado, both to get inspired and to gather the reference imagery I needed to replicate architectural features of two emblematic buildings.

Finally, to create the ambisonic sound for the video, I learned how to integrate Reaper (a digital audio production application) with the Facebook 360 Spatial Workstation; this allowed me to spatialize both sound effects and dialogue.

The following is a 2K version of the project. It took 8 days of continuous rendering on 16 27-inch iMacs to produce a total of 8,800 frames.

This is an ongoing project; at the moment I am looking for more funding to add more characters and 3D animations, enhance the sound FX and render it in at least 4K quality.