
26th September 2022


 

An interview with: Junichi Higuchi (Producer, Virtual Production Group, TOEI Digital Center / Zukun Lab), Kenshiro Yasui (Motion Capture Director, i-Pairs), Hiroki Yamada (General Manager, Amineworks).

 

Avataro Sentai Donbrothers is the latest entry in the Super Sentai metaseries, which has been produced for more than 40 years; it's no exaggeration to say that almost all Japanese kids grew up watching these shows. It is a justice-ranger series in which five main characters transform into heroes and face villains. Donbrothers marks the first time the franchise has made bold use of virtual production, and we spoke with the team about the technical background.

 

©TV ASAHI・TOEI AG・TOEI

Could you tell us why you chose Xsens motion capture for this work?

 

Higuchi:
In conventional works, human actors wear hero suits and continue to act even after transformation, but in this work there was a request from the beginning to use characters whose form after transformation is very different from a human's. The Sentai series has a custom of taking on new shooting technology, as in the previous series "Kikai Sentai Zenkaiger," where backgrounds were rendered in real time; here, virtual production would be used to create the full scene. When I first heard about it, I assumed the CG characters would be lined up and composited later, but there was a strong request to do it as live virtual production, so we decided on real-time motion capture with Xsens MVN, which can be used on location.

 

How is Xsens motion capture used in the field?

 

Yasui:
This time, we had two actors wearing Xsens MVN suits to capture the motions of the CG characters simultaneously. The data acquired in MVN is sent to the downstream software in real time using the network streamer.
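As a rough illustration of what "streamed in real time" looks like on the receiving side, below is a minimal Python sketch of a UDP listener for MVN network-streamer datagrams. The port number and the way the packet header is read are assumptions for illustration only; the exact datagram layout is defined in the MVN real-time network streaming documentation, not here.

```python
import socket

# Minimal sketch of a receiver for MVN network-streamer datagrams.
# The port number and header handling below are assumptions for
# illustration; consult the MVN real-time streaming documentation
# for the exact packet layout used on set.
MVN_STREAM_PORT = 9763  # hypothetical port

def listen_for_mvn_packets() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", MVN_STREAM_PORT))
    print(f"Listening for MVN stream on UDP port {MVN_STREAM_PORT}...")
    while True:
        data, addr = sock.recvfrom(4096)  # one datagram per pose sample
        packet_id = data[:6].decode("ascii", errors="replace")
        print(f"{addr[0]}: packet {packet_id}, {len(data)} bytes")

if __name__ == "__main__":
    listen_for_mvn_packets()
```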

 

Could you tell us what you spent the most time on during the shoot, and whether you have any tips for mocap shoots?

 

Yasui:
First of all, we spent plenty of time on preparation before going on location. Since the motion is transmitted in real time, we put a lot of effort into equipment selection and system verification so that there would be no delay in the streamed data.

 

On-site, the acting point changes for each cut, so for every cut we choose a position and direction and let the mocap actor check the camera angle as well as their position. Because the whole system has to move around a lot, it is set up on a cart so it can be relocated and deployed quickly. And because there are many battle scenes, we rearranged the position of the body pack and battery: they are attached on the chest side to reduce the impact when falling and to let the upper body move freely. Since the suit has no pocket on the chest side, the actors wear an additional harness.

 

What is your pipeline for live compositing during on-site shooting?

 

Yamada:
We send the motion capture data to MotionBuilder for final adjustments and then pass it on to Unity. In Unity we create the lighting, shadows, post-processing and so on, control the animation of movable set pieces and other small props, and link the movement of the live-action camera to the camera in Unity. To speed up this part of the workflow, we developed a dedicated in-house tool for shooting. The image created in Unity is sent to an Ultimatte and composited with the live-action video by chroma key in real time. We also adjust white balance and so on so that there is no inconsistency after compositing. Finally, a Ki Pro records the composited image as well as each pre-composite image and mask.
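To make the ordering of those stages explicit, here is a purely conceptual Python sketch of the per-frame data flow, with placeholder functions standing in for MotionBuilder, Unity, the Ultimatte keyer, and the Ki Pro recorder. None of these functions are real APIs; they only model the hand-offs described above.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class FrameInputs:
    mocap_pose: Any   # streamed from the Xsens MVN suits
    live_camera: Any  # tracked transform of the live-action camera
    live_image: Any   # camera feed in front of the green screen

def motionbuilder_adjust(pose):               # final touches / retarget offsets
    return pose

def unity_render(pose, camera):               # lighting, shadows, props, linked virtual camera
    return {"cg_image": (pose, camera), "mask": "alpha"}

def ultimatte_composite(live_image, cg):      # real-time chroma key
    return (live_image, cg["cg_image"])

def kipro_record(composite, live_image, cg):  # record composite plus pre-composite sources
    print("recorded:", composite, live_image, cg["mask"])

def process_frame(frame: FrameInputs) -> None:
    pose = motionbuilder_adjust(frame.mocap_pose)
    cg = unity_render(pose, camera=frame.live_camera)
    composite = ultimatte_composite(frame.live_image, cg)
    kipro_record(composite, frame.live_image, cg)

process_frame(FrameInputs(mocap_pose="pose_0", live_camera="cam_0", live_image="frame_0"))
```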

 

Could you tell us whether you had to change anything when retargeting onto the CG character models?

 

Yamada:
Since the shape of the models is so different from a real human skeleton, there are restrictions on what the motion actors can do. To reach poses that the actors cannot physically hit because of these skeletal differences, we also apply offsets in MotionBuilder.
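To illustrate the general idea (not the production setup itself, where the offsets were authored in MotionBuilder), here is a minimal Python sketch of composing a fixed correction rotation with a captured joint rotation before it is retargeted onto the non-human skeleton. The joint, axes, and angles are hypothetical.

```python
import math

# Compose a fixed correction rotation with a captured joint rotation
# (quaternions in (w, x, y, z) order) before retargeting.

def quat_multiply(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def axis_angle_quat(axis, angle_deg):
    x, y, z = axis
    half = math.radians(angle_deg) / 2.0
    s = math.sin(half)
    return (math.cos(half), x*s, y*s, z*s)

# e.g. rotate the captured shoulder a further 20 degrees about the X axis
# so a stubby-armed character can reach a pose the actor cannot.
shoulder_offset = axis_angle_quat((1.0, 0.0, 0.0), 20.0)
captured_shoulder = axis_angle_quat((0.0, 0.0, 1.0), 45.0)  # sample mocap rotation
corrected = quat_multiply(shoulder_offset, captured_shoulder)
print(corrected)
```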

©TV ASAHI・TOEI AG・TOEI

 

Could you tell us about your area of expertise?

 

Yasui:
We specialize in motion capture engineering for all kinds of applications: motion capture for video production and virtual character live performances, as well as fields such as sports engineering and traditional performing arts. Recently, we have been focusing on virtual live production using Unreal Engine, producing shows that combine motion capture and lighting.

 

Yamada:
Ours is a creative process that demands problem-solving. Rather than just making things, we give form to the ideals we are pursuing: modeling and setup, virtual production in recent years, and real-time graphics workflows such as video production using the metaverse and game engines, many of which are still taking shape. We are good at pioneering what comes next.

 

Are there any areas where Xsens could be used in other filming projects?

 

Yasui:
I think it can be used in any video work that combines live action and CG characters. We believe that capturing the motion on set at the same time as the other actors shortens the production period and makes the CG characters livelier.

 

Yamada:
I think the greatest advantage of Xsens MVN is that it lets us shoot on location, as we did with Donbrothers. I think there will be many more opportunities to take advantage of strengths of Xsens MVN that are difficult to match with optical systems. Xsens MVN and optical systems each have their own strengths, so for future virtual production shoots it will be a matter of choosing the best option for the situation at hand.

Thinking about getting your Xsens motion capture setup?

Choosing the best Xsens mocap setup is not always as straightforward as we'd like it to be. The options, requirements, and challenges depend on each individual situation.

This is why we've put together a blog to help you make the best decision.

Choosing the best Xsens mocap setup
