Actors and Performers
Over the last few decades, more and more actors have been directing their careers away from the traditional worlds of stage and live-action performance, and towards a new horizon: motion capture.
From big-name franchises such as Marvel and Lord of the Rings, to hugely popular videogames including Fortnite and Call of Duty, the demand for motion capture performances has never been higher – actors are finding artistic and commercial success by moving into this sphere.
Performing for Motion Capture: A Guide for Practitioners is the first all-encompassing resource for the next generation of performers in this exciting medium. Touching on themes of character creation, acting in the digital realm, and the realities of working on projects with a long pipeline of development, Performing for Motion Capture is the essential guide for practitioners.
Here, in this extract, authors John Dower and Pascal Langdale briefly break down the technical history of the art form, and impart some crucial initial information.
Motion capture as we know it today has roots going back over a hundred years to a process called rotoscoping, developed by animator Max Fleischer, in which filmed performances were traced over by animators to create more lifelike animation. Its most famous early use in a feature film was Walt Disney’s Snow White and the Seven Dwarfs in 1937.
By the late 1950s, animators were experimenting with potentiometers (adjustable resistors) to record an actor’s movement for display on a television screen, and by the 1980s they were using bodysuits lined with active markers, tracked by cameras. The same technique was used by scientists and medical experts to analyse the gait of athletes and war veterans, to further understand the effects of injuries on the body.
Even as late as the 1990s, motion capture technology was rudimentary, and animators faced far more work than they do now in cleaning up the data frame by frame. This painstaking process was streamlined and improved so that, by the turn of the century, motion capture had become increasingly sophisticated and was often used by the medical community for movement analysis. But it was the appearance of Gollum, played by Andy Serkis, in The Lord of the Rings: The Fellowship of the Ring in 2001 that really put motion capture as we know it on the map.
The important concept to get across here is that what all motion capture systems are looking to capture is the skeleton. Human skin, muscles, body shape, clothes – none of this is captured. Purely and simply, just the skeleton! The digital skin of a character is then applied to the skeleton, which in turn drives the character. So the performer is acting as a puppeteer: their movement is driving their digital avatar.
This is the moment in introductory classes when students often gape. Some of them might secretly be thinking: so, if the system only captures my skeleton and it can’t see me, my performance is anonymized, it’s no longer me – so does my performance really show? This book is going to prove to you that it really does show and that it really does matter. But before we do, let’s just be clear about how the technology works, because once you understand that, it will all begin to fall into place. What follows is a basic explanation. You may want to study the subject in more depth, but here we want to get the principles across.
There are three main modes of motion capture, all seeking to capture the skeleton:
Optical motion capture

This is likely to be the method you have seen most coverage of in publicity for the films and games that use it. It’s easily identified by the figure-hugging Lycra® suits covered in reflective markers. Infra-red cameras surrounding the studio capture the markers’ positions in space to a high degree of accuracy, and these are turned into a 3D cloud of dots that correlates with the skeleton and joints of the performer. This ‘cloud data’ moves through space as the performer moves, and this is what is recorded.
The space in which the actor’s movements can be captured is defined by the three-dimensional area the cameras can cover. It is called ‘the volume’ and is often split into a grid of 1-metre squares marked on the floor, so that performers, sets and props can be accurately placed in the space. Optical capture is the most accurate, and the most expensive, way of capturing mocap data. It is also possible to shoot outdoors using ‘active’ LED markers rather than the ‘passive’ reflective ones used in the studio, enabling shooting in daylight and in real locations. You may have seen videos of Andy Serkis in the recent Planet of the Apes movies, which use this technology. Well-known manufacturers include Vicon, Motion Analysis and OptiTrack.
Inertial motion capture

Most inertial systems use inertial measurement units (IMUs) – each combining a gyroscope, magnetometer and accelerometer – to measure rotation. These miniature sensors are placed on key joints and points of the skeleton, and their motion data is transmitted wirelessly to a computer, where it is recorded or viewed. The measured rotations are then applied to the skeleton in the software.
The benefit of this system is that it does not require line of sight, meaning that actors can wear costumes and do not have to be recorded in a studio. However, there are drawbacks: magnetic fields can interfere with the sensors, and the data is less accurate, since it describes not the performer’s position in space but their movements relative to their own body.
Well-known manufacturers include Xsens, Noitom and Rokoko. The HTC Vive can also be used to capture motion.
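The point about inertial data being relative to the body can be illustrated with a small forward-kinematics sketch. This is not drawn from any particular mocap system – the bone length, joint names and angles below are invented for illustration. Each IMU contributes only a joint rotation; joint positions are then reconstructed by composing those rotations down the skeleton chain, so every position is expressed relative to the body’s root rather than to the room.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis (a stand-in for one IMU's measured joint rotation)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical two-bone arm: each bone is 0.3 m long, resting along the x-axis.
BONE = np.array([0.3, 0.0, 0.0])

def arm_positions(shoulder_deg, elbow_deg):
    """Forward kinematics: compose per-joint rotations down the chain.
    Because only rotations are measured, the resulting positions are
    relative to the shoulder (the chain's root), not to the room."""
    r_shoulder = rot_z(shoulder_deg)
    elbow = r_shoulder @ BONE
    wrist = elbow + r_shoulder @ rot_z(elbow_deg) @ BONE
    return elbow, wrist

# Raise the upper arm 90 degrees, bend the elbow back 90 degrees:
elbow, wrist = arm_positions(90.0, -90.0)
```

Notice there is no camera and no world-space origin anywhere in the calculation – which is exactly why inertial systems can work without line of sight, and also why they cannot tell you where in the volume the performer is standing.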
Markerless motion capture

Markerless systems do not require performers to wear special equipment for tracking. Computer algorithms analyse optical input, identify human forms and break them down into constituent parts for tracking. Markerless systems tend to use both outline and depth sensors – early adopters used Xbox cameras for capture, for example. What’s exciting about new markerless systems is that capture needn’t cost much, reminiscent of how the advent of cheap, consumer-priced film and video equipment drove the independent sectors. Now indie games developers can create budget mocap solutions, which will diversify the mocap community and promote innovation. More on this in our final chapter.
Needless to say, the quality and fidelity are not as high, but the price is more affordable. Solutions include Qualisys, PS4 and HTC Vive.
The T-pose

You may have seen images of actors in mocap suits standing straight, legs slightly apart and arms stretched out wide at 90 degrees, and wondered what they are doing. This is called a T-pose, and actors are asked to stand in it at the beginning and end of every take. It allows animators to check that the system is seeing all of the markers and that the skeleton is aligned – a rough way of checking the calibration of the system.
There are systems that require less regular actor calibration, and occasionally we hear about other poses being used, but the T-pose has become emblematic of mocap, so it’s important to get used to performing it on every take.
Performing for Motion Capture: A Guide for Practitioners is now available on Bloomsbury.com