Can A.I. Impersonate a Human?

James Bore

Given the recent controversy around Google's new AI, LaMDA, being claimed to be sentient, I thought it was a good time to talk about AI again.
If you've not come across it, a Google engineer (since suspended) claimed he was convinced the AI was 'sentient'. Worth bearing in mind when judging that claim: the chat log he offered as 'proof' was heavily edited for readability, and was compiled from nine separate conversations with the chatbot. Conscious, sentient AI is still a long way off.

I've spoken about Deep Fakes and AI-generated content before. While so far there have been a few incidents of faked voice calls used for scams and faked videos made for humour, Russia's activities in Ukraine have highlighted how these techniques can be weaponised for propaganda purposes.

In May, the Ukrainian government released a series of audio recordings in which Russian officials were allegedly discussing the situation in Ukraine. The twist? The people speaking on the recordings had been replaced by AI-generated voice clones.
This is far from the first time AI has been used to create fake content. In 2017, a company called Lyrebird released a platform that allowed anyone to create an AI-generated voice clone from just a minute of audio, with impressively realistic results.
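Voice cloning is no longer confined to specialist platforms like Lyrebird, either. As a rough illustration (not Lyrebird's own system), here's a minimal sketch using the open-source Coqui TTS library; the model name is real, but the file paths and spoken text are placeholders:

```python
from TTS.api import TTS

# XTTS v2 is an open-source multilingual voice-cloning model; a short
# reference recording of the target speaker is enough for it to produce
# speech in a convincing imitation of their voice.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This sentence was never actually spoken by the person you hear.",
    speaker_wav="reference_clip.wav",  # placeholder: ~1 minute of the target's audio
    language="en",
    file_path="cloned_voice.wav",      # placeholder output path
)
```

A minute of clean audio and a few lines of code, and the output is ready to drop into a scam call or a fabricated 'leaked recording'.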

In the wrong hands, AI-generated content is already being used to create fake news stories, to spread misinformation and propaganda, and even to generate fake reviews and testimonials.
As AI gets better at generating realistic content, it's becoming more difficult to tell what's real and what's fake. This poses a serious threat to our ability to distinguish between reliable information and disinformation.

Deep fakes are a particular concern because they can produce realistic, believable audio and video content that is very difficult to distinguish from the real thing. That makes them well suited to spreading misinformation, or even to impersonating people for fraudulent purposes.
Can you tell the difference between AI and human content?

Now, this is a bit of an experimental post, and I'll explain why in a bit, but I want to ask how sure you are that you can tell the difference between something created by an AI and something created by a human.

To help with this, I've got two pieces of content. One is AI-generated, and one is human-generated. Can you tell which is which?

One is a photograph of a dog on a beach from Meg Sanchez on Unsplash, and the other is a completely artificially generated picture of a dog on a beach from an AI tool called DALL-E 2. To avoid spoiling the surprise, I'll tell you which is which at the end of this article.
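For context, generating that kind of image is trivially easy. Below is a sketch of what it looks like through OpenAI's Python SDK; the prompt is my guess at one that would produce a comparable picture, and the surrounding details are illustrative:

```python
from openai import OpenAI

# Assumes an OPENAI_API_KEY environment variable is set.
client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt="a golden retriever running along a sunny beach",  # illustrative prompt
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the freshly generated image
```

One plausible prompt, one API call, and you have a photorealistic scene that never happened.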

All fairly light-hearted so far, but this has real consequences for security on individual and national levels. Let's try another one.

One of the two pictures above is a frame from a genuine video. The other is taken from a faked video of President Zelenskyy announcing peace and demanding that Ukrainian troops lay down their arms. The video was quickly established as a fake, and was generally ridiculed for its poor quality, but there are much more sophisticated fakes out there.

Deep Fake technology has been used for years to create fake pornography, for sale or for extortion, and we're now well past that stage. With technologies like DALL-E 2, photographic 'proof' of guilt can be created from nothing more than a description, and it won't be long before the same applies to video. Deep Faked audio is already sophisticated enough to be difficult to distinguish from reality, as you can find out if you try Lyrebird.

Of course, it doesn't stop there.
Can you tell the difference between AI and human writing?

There's a second experiment at play in this article. Roughly half of it is written by an AI system, while the other half is written by hand (well, typewriter). See if you can tell which is which.

The reason I'm doing this experiment is to show just how good AI systems have become at imitating human communication. In particular, I wanted to test whether an AI system could successfully impersonate a human in any context.

So far, the results have been mixed. On the one hand, the AI system has been able to fool some of you into thinking it is human. On the other, some people have been able to tell that it is not. Of course, that means some of you have decided that I'm not human either (unless that was written by the AI, of course; this could get a bit confusing).

Regardless of the outcome of this experiment, one thing is certain: Deep Fake technologies are getting better and better, and they are a genuine threat at both individual and societal levels through misinformation, potential extortion, impersonation, and myriad other attack vectors we haven't even conceived of yet. Tools to recognise generated content are far less developed, and far less common, than tools to generate it. For now, the only viable approach is to maintain suspicion of anything that seems out of character and to validate content through other means.
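As one crude illustration of 'validating through other means': where a trusted original exists, a perceptual hash comparison can flag whether a circulating copy has been altered. The sketch below uses the open-source imagehash library; the file names and threshold are illustrative, and this does nothing against content generated from scratch.

```python
import imagehash
from PIL import Image

# Perceptual hashes change little under resizing or recompression, so a
# large Hamming distance between a trusted original and a suspect copy
# suggests the copy has been edited or regenerated.
original = imagehash.phash(Image.open("trusted_original.jpg"))   # placeholder paths
suspect = imagehash.phash(Image.open("circulating_copy.jpg"))

distance = original - suspect  # Hamming distance between the two hashes
if distance > 10:  # illustrative threshold, not a calibrated one
    print(f"Copies differ substantially (distance {distance}): treat with suspicion.")
else:
    print(f"Copies are perceptually similar (distance {distance}).")
```

It's a narrow check, which rather proves the point: detection tooling lags far behind generation tooling.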

Incidentally, with both sets of pictures above, it was the left-hand one that was the fake.
James is a cyber security consultant, speaker, author, and Chartered Security Professional with over a decade of experience. He provides training in business continuity, disaster recovery, and threat modelling as well as broader cyber security consultancy.
He can be reached at james@bores.com
or on LinkedIn at https://linkedin.com/in/jbore