Doug Roble, a computer graphics software researcher, delivers a TED talk on digital humans while demonstrating the technology his team is developing. The experience is striking: the speaker draws the audience's attention to how machine learning has reached a point of astonishing realism.
Roble comes on stage wearing an inertial motion-capture suit that captures his movements in real time and digitizes them. The talk on digital humans is delivered in physical and digital form simultaneously. The computer graphics specialist introduces the on-screen version of himself as 'DigiDoug', "a 3D character that [he is] controlling live in real time."
The quest to produce believable, computer-generated characters in film has been a long one, but the industry is now at a point where computers have become "seriously fast" and machine learning algorithms have become far more sophisticated. The process of creating DigiDoug was lengthy, of course, but the result is astonishing. First, the team collaborated with ICT to reconstruct Roble's face in great detail, from the pores and wrinkles to the way his skin responds to different light intensities. The mapping was so precise that it even took blood flow into consideration. Afterward, the researchers captured an enormous amount of data about how Roble's face moved and expressed itself. This information was used to train a deep neural network so that "the network can look at [Roble's] image and figure out everything about [his] face in 16 milliseconds."
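The core idea, as described, is a network trained on captured face data so that a single image maps to a full set of facial parameters in one fast forward pass. Digital Domain's actual system is proprietary and far larger; the toy sketch below only illustrates that image-to-parameters regression idea, with made-up dimensions, synthetic data, and a tiny two-layer network trained by plain gradient descent (all assumptions, not details from the talk).

```python
import numpy as np

# Hypothetical illustration, NOT Digital Domain's actual system:
# a tiny network that regresses facial-expression parameters
# (think blendshape weights) directly from a flattened face image.

rng = np.random.default_rng(0)

IMG_PIXELS = 8 * 8   # toy "face image" resolution (real systems use far more)
N_PARAMS = 4         # toy number of expression parameters

# Synthetic stand-in for the captured training data: "images" of the
# face plus the ground-truth parameters that explain each image.
true_map = rng.normal(size=(IMG_PIXELS, N_PARAMS)) / np.sqrt(IMG_PIXELS)
X = rng.normal(size=(200, IMG_PIXELS))   # 200 training "images"
Y = X @ true_map                          # their ground-truth parameters

# One-hidden-layer network, trained with basic gradient descent on MSE.
W1 = rng.normal(size=(IMG_PIXELS, 32)) * 0.1
W2 = rng.normal(size=(32, N_PARAMS)) * 0.1

def forward(x):
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    return h, h @ W2              # hidden activations, predicted parameters

for _ in range(1000):
    h, pred = forward(X)
    err = pred - Y
    grad_W2 = h.T @ err / len(X)            # backprop through output layer
    grad_h = (err @ W2.T) * (h > 0)         # backprop through ReLU
    grad_W1 = X.T @ grad_h / len(X)
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1

# At run time, one image -> one cheap forward pass -> face parameters.
_, params = forward(X[:1])
mse = float(np.mean((forward(X)[1] - Y) ** 2))
```

The point of the sketch is the shape of the run-time path: once trained, producing the parameters for a new frame is a single matrix pipeline, which is why a production-scale version can fit inside a 16-millisecond frame budget.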
The talk on digital humans notes that the technology used here is similar to, but more sophisticated than, what powers deepfake videos. The possible uses for this type of innovation extend to film, new forms of communication, more user-friendly virtual assistants, and more.