
A positive vision for AI: Illusionist Q&A

Written by Xsens | Dec 23, 2019 2:18:00 PM

Illusionist is a multi-award-winning digital art studio based in Istanbul; we last spoke with the team after a project they completed in Tajikistan. Using Xsens motion capture alongside a range of cutting-edge technologies, Illusionist envisions a future where creating art is limited not by human capabilities but only by an individual’s limitless creativity. Their latest work with Xsens is the very first test result of a self-described ‘AI-Enhanced, Cyber-Physical Art Project: Humachine.’ All of the visuals in the project are generated in real time from the drones’ spatial and motion analysis data.

Can you tell us about the idea behind the creation of this project and what you hoped to achieve?

Inspired by a prescient dream of computing devices coupled with human brains, we wanted to simulate a glimpse of a future where creating art is no longer bound by the limits of human capabilities, only by limitless creativity.

Watch: AI-Enhanced Cyber-Physical Drone Art: Humachine


Can you tell us more about this idea of a symbiotic partnership between man and machine that is such an inspiration for Illusionist?

Sixty years ago, American computer scientist J.C.R. Licklider proposed the idea of a complementary relationship between humans and machines. Our experiment is mainly inspired by Licklider’s vision of a symbiotic relationship between humans and machines; in other words, the idea of technology as an enabler of human capabilities. Rather than taking the pessimistic approach to AI that has so often been depicted over the past 60 years, we looked for an innovative new path. Instead of anticipating an ‘inevitable’ AI supremacy in the future, we think we can benefit from it and still operate normally as humans. In our case, it is through art.


Where did the idea of combining the micro-drones with motion capture technology come from?

The idea behind working with micro-drones is to adapt them and make them central to our future shows. We wanted to build an indoor drone swarm system that we could have full control over; when we started programming the drones we had no intention of controlling them as an extension of a performer to obtain visuals. As we got further into the programming, we realized that real-time interaction between humans and the drone swarm system was a challenge, and that our main approach had to be to use the drones indirectly for artistic expression. At that point, the only efficient way to connect human motion with the drones was to use Xsens MVN Animate.
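To make this concrete, here is a minimal sketch of how mocap data that is already driving a character in Unity could be turned into a position command for a single micro-drone. It assumes something like the Xsens MVN live plugin (or any comparable stream) is animating the hand transform in the scene; the hover offset and the SendSetpoint hook are hypothetical placeholders, not Illusionist’s actual swarm interface.

```csharp
using UnityEngine;

// Minimal sketch: map a mocap-driven hand transform to a target position for one
// micro-drone. Assumes something (e.g. the Xsens MVN live plugin or a network
// stream) is already animating `handBone`; SendSetpoint is a hypothetical hook
// for whatever drone swarm API is actually in use.
public class HandToDroneTarget : MonoBehaviour
{
    public Transform handBone;                              // mocap-driven joint, e.g. the right hand
    public Vector3 offset = new Vector3(0f, 0.5f, 0.3f);    // hover offset relative to the hand
    public float smoothing = 5f;                            // low-pass factor to keep the drone stable

    private Vector3 _target;

    void Update()
    {
        // Desired hover point: slightly above and in front of the tracked hand.
        Vector3 desired = handBone.position + handBone.rotation * offset;

        // Smooth the command so mocap jitter does not translate into drone jitter.
        _target = Vector3.Lerp(_target, desired, smoothing * Time.deltaTime);

        SendSetpoint(_target);
    }

    // Hypothetical hook: forward the setpoint to the drone swarm controller
    // (UDP, a ROS bridge, a vendor SDK, and so on).
    private void SendSetpoint(Vector3 worldPosition)
    {
        Debug.Log($"Drone setpoint: {worldPosition}");
    }
}
```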

Can you tell us about the interface you developed to connect the drones, performer, and computer and how you overcame challenges with positioning?

The micro-drones were interconnected by the drone swarm system; however, the real challenge was programming the drones to know their exact positions in relation to each other and to the performer. This was an important consideration since the project was held indoors and the use of GPS tracking was impossible. We trained the micro-drones’ tracking system in our studio using the Xsens suit and a local positioning system (HTC Vive), combining the spatial and motion analysis data with three different coding languages and a game engine. The human-machine symbiosis led to the creation of a previously unattained form of expression.
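One way to express the mocap data in the same room frame the drones fly in, given a setup like the one described above, is to rigidly align the suit’s coordinate frame with the frame of a room-tracked device. The sketch below assumes a Vive tracker mounted on the performer’s pelvis; the alignment math is generic and is not the studio’s actual calibration routine.

```csharp
using UnityEngine;

// Minimal sketch of aligning the mocap (suit) frame with the room frame used by
// the drones. Assumes a Vive tracker rigidly mounted on the performer gives the
// pelvis pose in room space, while the mocap system gives the same pelvis pose in
// suit space; the resulting transform re-expresses every other segment in room space.
public static class FrameAlignment
{
    // Rigid transform that maps suit space into room space, from the pelvis pose
    // observed in both frames at the same instant.
    public static Matrix4x4 SuitToRoom(Pose pelvisInSuit, Pose trackerInRoom)
    {
        Matrix4x4 suit = Matrix4x4.TRS(pelvisInSuit.position, pelvisInSuit.rotation, Vector3.one);
        Matrix4x4 room = Matrix4x4.TRS(trackerInRoom.position, trackerInRoom.rotation, Vector3.one);
        return room * suit.inverse;
    }

    // Re-express any mocap segment position in the drones' room frame.
    public static Vector3 ToRoom(Matrix4x4 suitToRoom, Vector3 segmentInSuit)
    {
        return suitToRoom.MultiplyPoint3x4(segmentInSuit);
    }
}
```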


How easy or difficult was it to have this level of interconnectivity between multiple systems?

We spent many sleepless days and nights and ran countless tests to complete this experiment with zero faults, but that wasn’t the most difficult part. All of the components have to work in perfect synchronization to achieve our final goal; even the slightest error in timing between the drones, the motion capture, the game engine, and the software we developed results in failure. The fact that there was no margin for error pushed us to finish a project that is challenging to build yet delightful to watch.
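As an illustration of the timing constraint, the sketch below shows a simple synchronization gate that only accepts a combined frame when the latest timestamps from the mocap, drone, and rendering streams fall within a small window. The 20 ms tolerance and the gating approach are assumptions made for illustration, not the team’s actual mechanism.

```csharp
using System;

// Minimal sketch of a synchronization gate between subsystems. Each stream
// (mocap, drone telemetry, rendering) stamps its latest frame; a combined frame
// is only used when all stamps fall within the tolerance window.
public static class SyncGate
{
    public static readonly TimeSpan Tolerance = TimeSpan.FromMilliseconds(20); // assumed window

    public static bool InSync(DateTime mocapStamp, DateTime droneStamp, DateTime renderStamp)
    {
        DateTime earliest = Min(mocapStamp, Min(droneStamp, renderStamp));
        DateTime latest = Max(mocapStamp, Max(droneStamp, renderStamp));
        return (latest - earliest) <= Tolerance;
    }

    private static DateTime Min(DateTime a, DateTime b) => a < b ? a : b;
    private static DateTime Max(DateTime a, DateTime b) => a > b ? a : b;
}
```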

Can you tell us more about the pipeline and any software you used?

The experiment was built around real-time spatial and motion analysis. The project had three parts: first, coding the process for real-time positioning; next, training the drones with Xsens; and lastly, designing the tools for the drones’ visual creation in the Unity game engine. We used Unity to combine all of these technologies together.
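For a rough picture of the step that combines everything inside Unity, the sketch below listens for drone positions over UDP and moves proxy objects in the scene. The port number and the simple ‘id,x,y,z’ text format are assumptions chosen for illustration; the interview does not describe the studio’s actual protocol.

```csharp
using System.Globalization;
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Minimal sketch: receive drone positions over UDP and mirror them onto proxy
// objects in the Unity scene. The port and the "id,x,y,z" message format are
// illustrative assumptions, not the project's real protocol.
public class DronePositionReceiver : MonoBehaviour
{
    public Transform[] droneProxies;   // one scene object per micro-drone
    public int port = 8000;            // assumed port

    private UdpClient _client;

    void Start()
    {
        _client = new UdpClient(port);
    }

    void Update()
    {
        // Drain every packet that arrived since the last frame.
        while (_client.Available > 0)
        {
            var sender = new IPEndPoint(IPAddress.Any, 0);
            string msg = Encoding.ASCII.GetString(_client.Receive(ref sender));

            string[] parts = msg.Split(',');   // "id,x,y,z"
            int id = int.Parse(parts[0], CultureInfo.InvariantCulture);
            var pos = new Vector3(
                float.Parse(parts[1], CultureInfo.InvariantCulture),
                float.Parse(parts[2], CultureInfo.InvariantCulture),
                float.Parse(parts[3], CultureInfo.InvariantCulture));

            if (id >= 0 && id < droneProxies.Length)
                droneProxies[id].position = pos;
        }
    }

    void OnDestroy()
    {
        _client?.Close();
    }
}
```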

The performer is first seen in a motion-tracking suit. As he swings his arm, a stream of data passes from his body to the micro-drones and then to the kinetic visuals. With each movement, a swarm of algorithmic patterns traces the performer's gestures and spreads across his body in waves of color and shape.

The performer's movements aren’t the only source shaping the visual arrangements. The lighting, color, and texture of the performer's visual representation are generated in real time from the positioning data of the drone swarm system, which is used as an extension of the human body.
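To illustrate how positioning data, rather than the performer’s raw motion, could drive the look of the visuals, the sketch below derives lighting parameters from the swarm’s centroid height and spread. The specific mappings (height to hue, spread to brightness) are invented for illustration and are not Illusionist’s aesthetic rules.

```csharp
using UnityEngine;

// Minimal sketch: derive visual parameters from drone positioning data instead of
// directly from the performer's movements. The height of the swarm centroid drives
// hue, and the average spread of the swarm drives brightness; both mappings are assumptions.
public class SwarmToVisuals : MonoBehaviour
{
    public Transform[] drones;     // drone proxies in the scene
    public Light stageLight;       // light whose color follows the swarm

    void Update()
    {
        if (drones.Length == 0) return;

        // Centroid of the swarm.
        Vector3 centroid = Vector3.zero;
        foreach (var d in drones) centroid += d.position;
        centroid /= drones.Length;

        // Average distance of the drones from the centroid.
        float spread = 0f;
        foreach (var d in drones) spread += Vector3.Distance(d.position, centroid);
        spread /= drones.Length;

        // Height drives hue, spread drives brightness (both clamped to sensible ranges).
        float hue = Mathf.Repeat(centroid.y * 0.25f, 1f);
        float value = Mathf.Clamp01(spread / 3f);

        stageLight.color = Color.HSVToRGB(hue, 1f, value);
    }
}
```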

How important was the performance and reliability of the motion capture technology in the project?

The reliability of the motion capture technology became the backbone of the whole project. The processing speed of Xsens, along with its clean data, gave us all the support we needed to accomplish our experiments without failure.

What does the future hold for the technology you have developed for this project?

It was very motivating to be able to use the already existing drone swarm system at a bespoke level for our specific needs. We attained a new form of artistic expression, along with enhanced human abilities, through mocap technology and micro-drones. As our first output, we managed to make human and machine complete partners in artistic creation through a symbiotic relationship, and we’re very excited to explore the potential opportunities ahead!

Where has this project been performed? Are there more plans for it to be performed?

Our first performance was done in our studio. We are currently working on new aspects of using this technology in order to adapt it to our shows. However, the final frontier is to carry human-machine symbiosis to a level where any person can use it as a tool for artistic self-expression.


Download a 15-day trial of MVN Animate

MVN Animate is Xsens’ proprietary software application for recording and viewing Xsens MVN motion capture data.

During the trial period you will have a fully functional version of the software, including pre-recorded data.