The Problem: What does it mean to create a virtual life? Our client asked us to build a live experience in which attendees could create their own avatars. We can already build realistic 3D models, change facial expressions, and add movement and voice to video footage, but the client also wanted us to explore the possibility of capturing someone’s mind and memories. Today, advances in storage and processing power allow us to feed a living thing’s massive digital record into a series of learning algorithms that can recognize and track individual subjects and capture their body language, facial expressions, and voice pitch.
The Inc Lab Solution: We used a gaming engine and built custom software and an interface that interacted in real time with badge data to personalize the experience and deliver a digital 3D selfie. After the live demo, we began thinking about the possibility of capturing someone’s mind and recreating it: in essence, building a series of algorithms that recognize and track individual subjects and capture their body language, facial expressions, and voice, paired with a machine learning tool that analyzes each gesture and response to understand how a living thing will respond or physically react in a given situation.
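The badge-driven personalization step can be sketched as a simple mapping from raw badge data to parameters a rendering engine could consume. This is an illustrative assumption of how such a pipeline might look, not the actual event software; all field names and defaults below are hypothetical.

```python
def personalize_avatar(badge: dict) -> dict:
    """Map raw badge data to avatar parameters for a rendering engine.

    Hypothetical sketch: field names ("hair_color", "voice_pitch") are
    assumed for illustration, with defaults for attendees whose badges
    carry no preference data.
    """
    defaults = {"hair_color": "brown", "voice_pitch": 1.0}
    return {
        "display_name": badge.get("name", "Guest"),
        "hair_color": badge.get("hair_color", defaults["hair_color"]),
        "voice_pitch": float(badge.get("voice_pitch", defaults["voice_pitch"])),
    }


if __name__ == "__main__":
    # Example: a badge scanned at the event kiosk.
    print(personalize_avatar({"name": "Ada", "hair_color": "black"}))
```

In a live deployment this mapping would run each time a badge is scanned, feeding the resulting parameters to the game engine to render the attendee’s 3D selfie.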
The current limitation isn’t the capability of the models but the availability of high-resolution datasets. As soon as a dataset of an Australian Shepherd dog became available, we started experimenting with an interface that lets a user submit a variety of content to our custom software. Even though the technology is still developmental, we were able to envision a path to a rough model that could be imported into a VR experience and “act” as the subject. While “Mason” runs on a very rudimentary AI-fueled engine, his digital “twin” was able to provide a credible experience in the virtual world.
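The submission interface described above can be imagined as an intake step that sorts user-provided content by media type before it reaches the model-building stages. This is a hedged sketch of that idea using only the Python standard library; it stands in for custom software that is not shown here.

```python
import mimetypes
from collections import defaultdict


def sort_submissions(paths: list) -> dict:
    """Group submitted files by media type (image, video, audio) so each
    bucket can feed the matching training stage.

    Illustrative only: real ingestion would also validate, deduplicate,
    and quality-check the content.
    """
    buckets = defaultdict(list)
    for path in paths:
        mime, _ = mimetypes.guess_type(path)
        kind = mime.split("/")[0] if mime else "unknown"
        buckets[kind].append(path)
    return dict(buckets)


if __name__ == "__main__":
    print(sort_submissions(["mason_run.mp4", "mason_face.jpg", "bark.wav"]))
```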
Why it Matters: How long until we can produce fully AI-generated, hyper-photorealistic 3D avatars? With AI and machine learning frameworks now available in cloud environments, there is a clear path to democratizing the technology. Combined with machine learning tools that analyze each gesture and response to understand how a model will react in a given situation, the end result could be a “virtually immortalized” 3D version of an individual or living thing that can generate new sentences, move, and express itself just like the original.