Every now and again, Son Number One sends short videos: some of them are of me, saying mad stuff on the radio. Some are of people talking about his siblings, saying equally mad stuff.
The last one showed him walking through what appeared to be an extremely dodgy area, complete with heavily tattooed gang members glaring at him. And if he hadn’t told me, I would have thought the video was real. It was, of course, all made with AI.
Son Number One is fairly adept at making videos that he pops online (it is, as the young people say, one of his side hustles), yet he’s not possessed of advanced programming skills. Any of us could do this. We’re close to the point now where AI isn’t a feature of the internet. AI is the internet.
When I looked back at the video – on a desktop computer – it was (just about) possible to tell that it was fake. On a phone, it appeared completely realistic – and it’s largely on phones that we encounter this kind of thing.
I’m hardly the first person to point this out. Millions of words have been written on the subject, many of them by AI itself (by one current estimate, more than half of new articles published online weren’t written by a human).
Our Government has several schemes to promote “digital literacy”: schooling people in the idea of being sceptical about what they might see on social media. But as AI gets better at producing fakes, it becomes increasingly difficult to tell the difference: everything has to be treated with suspicion, to the point where you will, despite your best efforts, believe something that isn’t true and disbelieve something that is.
A real-world analogue: if you have a friend who is always interesting and entertaining, but you know that a proportion, perhaps most, of what they tell you is a lie, would you want to maintain that friendship?
You might tolerate it if they’re fond of spinning the odd tall tale, but the problem with this friend is that anything they tell you is potentially false. Are they really ill? Have they lost their job? Did they see George Clooney in Tesco? It’s too much.
Yet, increasingly, every time we open up a device, this is what we are supposed to be doing, while the device itself is actively working to degrade our critical thinking skills.
Not enough research has been done in the area yet, but early studies of students using AI to write essays found far less brain activity than in those who wrote the essays under their own steam. Anecdotally, teachers all over the world have voiced concerns that the use of AI and screens doesn’t help students learn and, crucially, doesn’t help them develop critical thinking skills – the very thing they need when they go online.
Yes, call me a Luddite. Point out that Socrates disapproved of writing (which we only know because Plato wrote it down). But Socrates never had to contend with a digital-industrial complex that, like so much else in our world, is singularly motivated by a ravenous need for money and power.
So-called “cognitive offloading” doesn’t mean that the device in your pocket doesn’t want you to think. It wants you to think – in short, 30-second clumps – but what it doesn’t want is for you to think about what you’re thinking.
It doesn’t want you to conclude that your friend is a congenital liar and it’s unhealthy to maintain a relationship with them.
Of course, artificial intelligence is proving to be revolutionary and positive in many areas, such as medicine. But that doesn’t make it positive in all situations.
For Daughter Number Four, use of a screen is now a standard part of her school day, yet there’s no evidence that it’s better than an old-fashioned book. There’s every chance that it might be worse.