It’s hard to give a comprehensive overview of all the work being done right now on multimedia manipulation and Artificial Intelligence (AI). We can create realistic 3D models, change facial expressions, add movements and voice to video footage… What if we could capture somebody’s mind, memories and soul and recreate them? Since the inception of the digital camera we have collected and created memories like never before. It is estimated that, on average, a person can generate 2.0 petabytes of data over their lifetime. Today, advances in storage and processing power allow us to use a living thing’s massive digital record to feed a series of learning algorithms that can recognize and track individual characters and capture their body language, facial expressions and voice pitch.
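To put that 2.0-petabyte figure in perspective, a quick back-of-envelope calculation shows the daily data rate it implies. The lifespan and unit conventions here are illustrative assumptions, not figures from the estimate itself:

```python
# Back-of-envelope: what daily data rate does 2.0 PB over a lifetime imply?
# Assumptions (illustrative only): an 80-year lifespan, decimal units
# (1 PB = 10**15 bytes), and leap days ignored.
LIFETIME_BYTES = 2.0 * 10**15   # 2.0 petabytes
YEARS = 80
DAYS = YEARS * 365

bytes_per_day = LIFETIME_BYTES / DAYS
gb_per_day = bytes_per_day / 10**9

print(f"{gb_per_day:.1f} GB per day")  # roughly 68 GB of data per day
```

Even under these generous assumptions, that is tens of gigabytes per person per day, which is the scale driving the demand for transmission and storage described below.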
Not so long ago TheInc created a live experience for CES2017 where people could instantly create a 3D avatar of themselves and step into a virtual reality environment. The concept was simple: with so much AR and VR development going on… can you place yourself in a video game?
A Fortune 500 client commissioned a conceptual prototype to show how our digital lives will create an exponential demand for data transmission and storage. We used a gaming engine and built custom software and an interface that interacted in real time with badge data to personalize the experience and deliver a digital 3D selfie.
Virtual identity is a very active field: a dozen new companies emerged last year alone, claiming capabilities from digitization to semantic enablement. After the live demo at CES, we started to think about the possibility of capturing somebody’s mind and soul and recreating it. In essence, that means building a series of algorithms that recognize and track individual characters and capture their body language, facial expressions and voice, paired with a machine learning tool that analyzes each gesture and response to understand how a living thing would respond or physically react in a given situation.
The current limitation isn’t the capability of the models but the availability of high-resolution datasets. As soon as a dataset of an Australian Shepherd dog became available, we started experimenting with an interface that would allow a user to submit a variety of content to custom software.
The technology is still developmental, but we were able to envision a path to creating a rough model that could be imported into a VR experience and “act” like the subject. While “Mason” runs on a very rudimentary AI-fueled engine, its digital “twin” was able to provide a credible experience in the virtual world.
How long until we can produce fully AI-generated 3D avatars that are hyper-photorealistic? It’s anyone’s guess, but it’s probably on the order of years, if not a decade. With AI and machine learning frameworks now available in cloud environments, there is a path to democratizing the technology. Combined with machine learning tools that analyze each gesture and response to understand how a model will react in a given situation, the end result could be a “virtually immortalized” 3D version of the individual or living thing that can generate new sentences, move and express itself just like the model.
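The “generate new sentences” idea can at least be hinted at with a toy example. Production avatar engines use far richer models, but a minimal word-level Markov chain (the sample corpus below is invented for illustration) shows the core pattern: learn statistics from a subject’s recorded output, then generate new output in a roughly similar style:

```python
import random
from collections import defaultdict

# Toy sketch: learn word-transition statistics from a subject's recorded
# speech, then generate new word sequences in a (very roughly) similar style.
# This corpus is a made-up stand-in for real recorded data.
corpus = (
    "the dog runs in the park . "
    "the dog sleeps in the sun . "
    "the dog runs in the sun ."
)

# First-order Markov model: each word maps to the list of words observed after it.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start="the", max_words=12, seed=0):
    """Random-walk the transition table to produce a new word sequence."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words:
        choices = transitions.get(out[-1])
        if not choices:
            break
        word = rng.choice(choices)
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate())
```

Every word the generator emits was observed in the training data, yet the sequences themselves can be new, which is the essence of the “just like the model” behavior, scaled down to a few lines.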