Norse Horse
True AI doesn't exist yet
By F. N.
Friday 6 September 2019

Some say AI is our guardian angel: choosing fun social media posts, suggesting products to make us happiest, restocking our fridges and driving our cars. But others disagree: the Asian man whose phone recognised the faces of all his European friends but not his own, the woman whose family discovered her pregnancy because an algorithm sent her baby-related adverts, or the Governor of Massachusetts who claimed the state’s medical records were private and anonymous – and promptly had his own records found by an algorithm and publicly released.

We are all familiar with the dark side of artificial intelligence: villains like HAL, who locked Dave Bowman on the cold side of the pod bay doors in 2001: A Space Odyssey, or Skynet’s attempt to terminate Sarah Connor in The Terminator. But the myth of evil AI goes back much further. In 1872, Samuel Butler’s novel Erewhon suggested for the first time that machines could evolve conscious thought. And in 1920, the Czech writer Karel Čapek introduced the word “robot” in his play R.U.R. – whose artificial workers promptly took over the world.

These stories show the power and the danger of human-level intelligences – smart minds with their own agendas – in tough, fast machine bodies. This is what the term “artificial intelligence” used to mean: a machine which looks at you with a human gaze, which knows you, and which has clever plans that may not have your well-being at heart.

Today, however, the term “AI” is applied to everything from your smartphone to your fridge. “AI” has lost its value. It is now used to describe algorithms which may be smart at one task, but don’t have the ability to adapt, to switch between different tasks, and to understand the world as humans do. To refer to machines with human-level intelligence, researchers now say artificial general intelligence, or AGI. Those AI villains are actually AGI villains.

But how far are we from being oppressed, harmed, or exploited by AGI? Oxford philosopher Nick Bostrom has catalogued a variety of unhappy endings which AGI could bring us, from tiny robots replicating out of control until the surface of the world is covered in “grey goo”, to machines deciding that we humans can’t look after ourselves and keeping us in a kind of technological zoo.

But I’m not worried about any of these theories. AGI doesn’t yet exist, AI can only just play Jenga on its own, and only a handful of robots can recover from being pushed over. I think we’re safe for a while. What worries me most about the present and the future is not artificial intelligence, but an evil which has always walked beside us and always will: normal people who have swapped their moral compass for the power of algorithms.

I began to worry ten years ago, when online shopping sites started to use recommendation algorithms to suggest products. If we all follow their suggestions, we’ll all end up buying the same things and losing the diversity of individual taste – the “long tail”. I worried a bit more when social media platforms decided that their algorithms, rather than I, should control my news feed. These social media sites aren’t services designed to help us: they are addictive experiences designed to sell our attention. Their real customers are advertisers, from whom some sites are reported to earn £20 per year per user in the US.

Things got even more concerning in 2015, when Michal Kosinski at the University of Cambridge developed a simple model which, given 250 of your Facebook likes, could guess your personality traits better than your own family could. And the world was recently shocked when Cambridge Analytica copied this model, allegedly using it to promote political campaigns for President Trump and for the Vote Leave and Leave.EU groups.

Here’s the bombshell. None of these methods use artificial general intelligence. They all use methods from a field called machine learning: algorithms which learn from data and predict the outcome of new events. The term machine learning does cover powerful approaches like “neural networks”, algorithms modelled loosely on the human brain which recognise objects and faces.

But it also includes very simple techniques which can be used off-the-shelf by any programmer. Cambridge Analytica’s personality prediction model uses regression, a method that asks which variables (such as your social media preferences) have the most influence on an outcome (such as your personality traits). Regression is so simple that first-year psychology students can do it by drawing a line on a graph.
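
To see just how simple, here is a minimal sketch of that kind of regression in Python, using scikit-learn and entirely invented numbers – not Cambridge Analytica’s actual model, just the off-the-shelf technique described above. Each row is a person, each column records whether they liked a hypothetical page, and the fitted model predicts a trait score from likes alone.

import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data: six people, four pages. A 1 means the person
# liked that page; a 0 means they didn't.
likes = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
])

# Invented scores for one personality trait (say, extraversion on a 1-5 scale).
extraversion = np.array([4.2, 4.8, 2.1, 1.9, 3.9, 1.5])

# Fit the line: which likes push the predicted trait up or down?
model = LinearRegression().fit(likes, extraversion)
print(model.coef_)  # one "influence" weight per page

# Predict the trait of a new person from their likes alone.
print(model.predict(np.array([[1, 0, 1, 1]])))

A real system differs mainly in scale – millions of users and tens of thousands of pages rather than six rows and four columns – but the arithmetic is the same line-fitting that the students draw by hand.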

"AI" is such a vague term that it’s wise to avoid using it yourself and to mistrust it wherever you see it. Nine times out of ten, you’ll see it in an advert for a service which actually uses simple machine learning methods - definitely not as exciting. If you mean human-level minds in metal bodies, talk about artificial general intelligence. We don’t need to fear AGI: it’s very far in the future. But we should be very worried about the harm that a small team of humans can do by gathering large amounts of user data, spinning up a few simple algorithms, and switching off their conscience.