In the modern era, a growing menace has emerged in the form of deepfake technologies, which make it possible to fabricate evidence of scenes that never happened. Celebrities have found themselves the unwitting stars of pornography, and politicians have turned up in videos appearing to speak words they never really said. This technology could potentially be used to incite political violence, sabotage elections, unsettle diplomatic relations and spread misinformation. It can also be used to humiliate and blackmail people, or to attack organisations by presenting false evidence against leaders and public figures.
Deepfakes use deep learning techniques, such as generative adversarial networks, to digitally alter or simulate a real person. Malicious examples have included mimicking a manager’s instructions to employees, generating a fake distress message to a family, and distributing false embarrassing photos of individuals.
A risk to humanity?
A Deepfake is multimedia content (an image or a video) in which a person’s face or body is modified to appear as someone else’s. Deepfakes have been around since 2017 and refer to videos, audio or images created using a form of artificial intelligence called deep learning. In its early stages, Deepfake AI could generate only a generic representation of a person. More recently, Deepfakes have used synthesised voices and videos of specific individuals to launch cyber attacks, create fake news and harm reputations.
Researchers have observed a 230% increase in Deepfake usage by cybercriminals and scammers. The technology involves modifying or creating images and videos using a machine learning technique called a generative adversarial network (GAN). The AI-driven software detects and learns the subject’s movements and facial expressions from the source material and then duplicates them in another video or image.
To make a Deepfake as close to real as possible, creators use a large database of source images, which is why more Deepfake videos are made of public figures, celebrities and politicians. One piece of software (the generator) uses this dataset to create a fake video, while a second (the discriminator) looks for signs of forgery in it. Through the interplay of the two, the fake video is refined until the discriminator can no longer detect the forgery. Because the machine-learning models effectively teach themselves in this way, the process is a form of “unsupervised learning”, and it makes the resulting Deepfakes difficult for other software to identify.
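The generator-versus-discriminator loop described above can be sketched as a toy adversarial training loop. This is a deliberately simplified stand-in for a real GAN, not an implementation of one: the “generator” is a one-parameter shift of random noise, the “discriminator” a single threshold, and all names and numbers are illustrative.

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # the "real data" distribution the generator must imitate

def real_samples(n):
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

def fake_samples(n, theta):
    # Generator: shift random noise z ~ N(0, 1) by its learned parameter theta.
    return [random.gauss(0.0, 1.0) + theta for _ in range(n)]

def accuracy(real, fake, tau):
    # Discriminator labels a sample "real" if it exceeds the threshold tau.
    correct = sum(x > tau for x in real) + sum(x <= tau for x in fake)
    return correct / (len(real) + len(fake))

theta = 0.0  # generator starts far from the real distribution
tau = 0.0
for step in range(50):
    real = real_samples(200)
    fake = fake_samples(200, theta)
    # Discriminator step: pick the best threshold between the two sample means.
    mean_real = sum(real) / len(real)
    mean_fake = sum(fake) / len(fake)
    tau = (mean_real + mean_fake) / 2
    # Generator step: nudge theta so fake samples drift past the threshold,
    # i.e. into the region the discriminator labels "real".
    theta += 0.4 * (tau - mean_fake)

print(theta)  # ends up close to REAL_MEAN
print(accuracy(real_samples(500), fake_samples(500, theta), tau))  # near 0.5
```

After the loop, the fakes are statistically close to the real data, so the discriminator’s accuracy falls to roughly chance (about 0.5): the point at which, in the article’s terms, the forgery can no longer be detected.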
How to curb this menace?
There are laws that can be invoked to deter people from creating Deepfake videos. India’s IT Rules, 2021 require intermediary platforms to take down any content reported as fake or produced using Deepfake technology within 36 hours.
Since Deepfake videos of the film actress Rashmika Mandanna went viral, the Indian IT ministry has also issued notices to social media platforms stating that online impersonation is illegal under Section 66D of the Information Technology Act, 2000. The IT Rules, 2021 also prohibit hosting any content that impersonates another person and require social media firms to take down artificially morphed images when alerted.
Are there positive uses?
However, as is the case with all new technology, Deepfakes have positive uses as well. The ALS Association, in collaboration with a technology company, has used voice cloning to help people with ALS digitally recreate their voices for future use.
Deepfakes are a fact of modern life, so consumers will have to be more careful about verifying the source of content. AI detection tools, digital stamps and tamper-proofing of original content will help, but it is up to each of us to stop their spread. Ensure that employees and family members know how Deepfakes work and the challenges they can pose. Educate yourself and others on how to spot a Deepfake, stay media-literate and rely on good-quality news sources.
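The “digital stamps” idea above amounts to a tamper-evident signature: a publisher stamps the original file, and anyone can later check whether the content still matches the stamp. A minimal sketch in Python, using an HMAC over a SHA-256 hash as a stand-in for a real public-key signature scheme (the key and function names here are illustrative, not any real standard):

```python
import hashlib
import hmac

# Illustrative shared secret; real provenance schemes use public-key
# signatures so verifiers do not need the publisher's private key.
PUBLISHER_KEY = b"publisher-secret-key"

def stamp(content: bytes) -> str:
    """Produce a tamper-evident stamp for the original content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, stamp_value: str) -> bool:
    """Check that the content still matches its stamp."""
    return hmac.compare_digest(stamp(content), stamp_value)

original = b"frame data of the genuine video"
s = stamp(original)
print(verify(original, s))                    # True: content untouched
print(verify(b"digitally altered frame", s))  # False: content was tampered with
```

Even a one-bit change to the content produces a completely different hash, so the verification fails; this is what makes stamped originals tamper-evident, though it cannot by itself reveal who altered the file.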