Deepfake videos are becoming increasingly easy to make, and harder than ever to detect, experts warn.
As fake videos are used to spread misinformation quickly around the world, there are fears that more effort is going into developing deepfake-generating tools than into detection.
Eline Cheviot, EU tech policy analyst at the Centre for Data Innovation, warned of a growing imbalance between the technologies for producing deepfakes and those for detecting them, meaning there is a shortage of the tools needed to tackle the problem efficiently.
And she warned humans are no longer able to spot the difference between deepfakes and the real thing, and so are unable to stop weaponised fake news from spreading.
"Debunking disinformation is increasingly difficult, and deepfakes cannot be detected by other algorithms easily yet," she said.
"As they get better, it becomes harder to tell if something is real or not.
"You can do some statistical analysis, but it takes time, for instance, it may take 24 hours before one realises that video was a fake one, and in the meantime, the video could have gone viral."
She added that simply bringing in laws banning or regulating deepfakes is unlikely to be enough and that more understanding is needed by politicians of the technology.
"Partnerships should be developed with industry, including social media companies, university researchers, innovators, scientists and startups, to build better manipulation detection and ensure these systems are integrated into online platforms," she went on.
Social media apps should have better deepfake detection software, tech CEOs have claimed (Image: Getty)
Tech entrepreneur Fernando Bruccoleri said tech platforms need to make it easier for people to work out what is real and what is fake.
"I think it will not be as simple as it seems to pass legislation in the short term," he said.
"Surely any platform will design tools to detect whether a video is fake or not, in response."
But Shamir Allibhai, CEO of video verification site Amber, which specialises in detecting fakes, said that it would be impossible to regulate the creation of deepfakes.
Instead, he said, platforms should work to tackle the distribution of such videos, in the same way that they already work to stop the spread of revenge porn.
And he also warned that deepfakes are here to stay, adding: "I think we are going to see significantly more of it in the run-up to the US presidential elections in 2020."
It comes after a study revealed almost all deepfake videos are porn.
Despite the fears about the political use of such videos, around 96% of AI-generated clips online were adult fakes.