A few months ago, millions of viewers in South Korea were tuned to the MBN channel to get the latest news on Covid-19.
The host of the show, Kim Joo-Ha, started talking about the day’s topics.
Yet this particular bulletin was anything but normal: Kim Joo-Ha was not “real”. Her image had been replaced by a “deepfake” version, a computer-generated copy designed to faithfully reproduce her voice, gestures and facial expressions.
Viewers had been informed in advance that this would happen. While some people were amazed at how realistic everything was, others said they were concerned that the real Kim Joo-Ha might lose her job.
Despite the negative connotations surrounding the colloquial term “deepfake” (people usually don’t want to be associated with the word “fake”), the technology is increasingly used commercially.
More commonly called AI-generated video, or synthetic media, the technology is being adopted rapidly in industries such as news, entertainment and education, and is becoming ever more sophisticated.
One of the first commercial users was Synthesia, a London-based company that creates AI-based corporate training videos, for companies such as global advertising firm WPP and business consultancy Accenture.
“This is the future of content creation,” said Victor Riparbelli, CEO and co-founder of Synthesia.
To make an AI-generated video with the Synthesia system, you simply choose an avatar and type the words you want it to communicate.
Riparbelli says this means that global companies can easily make videos in different languages, for example for internal training courses.
“Let’s say you have 3,000 warehouse workers in North America,” he says. “Some of them speak English, but some may be more familiar with Spanish. If you have to communicate complex information to them, a four-page PDF is not a great way. It would be much better to make a two- or three-minute video, in English and Spanish. If you were to record every one of those videos, it would be a huge job. Now we can do it with lower production costs and in less time. This more or less exemplifies how the technology is used today.”
However, some remain skeptical, and many companies prefer not to invest in deepfakes, opting instead for more “human” content.
Deepfakes, in fact, are part of the larger problem of disinformation, which undermines trust both in institutions and in visual evidence, to the point that many people no longer trust what they see and hear online.
However, the “good” side of this technology should not be overlooked, such as translating films into different languages or creating engaging educational videos.
One such educational use of AI-generated videos is at the University of Southern California’s Shoah Foundation, which hosts more than 55,000 video testimonies of Holocaust survivors.
This project allows visitors to ask questions that solicit real-time answers from survivors in pre-recorded video interviews.
In the future, will this technology allow grandchildren to converse with AI versions of deceased elderly relatives?