NST Leader: AI worry lines

So many ugly things are happening with artificial intelligence (AI) that even its famed developers find them troubling.

One such personality is controversial Twitter owner Elon Musk, who last month told the equally controversial Fox News that AI could cause the destruction of civilisation as we know it.

On Monday, Geoffrey Hinton, an AI pioneer, quit Google to join the growing chorus of critics warning of the danger the technology poses to humanity. Existential threat is their frequent refrain.

A part of him, Hinton told The New York Times, now regrets his life's work. The public regret may have had its origin in startup OpenAI's release of a new version of ChatGPT, a chatbot, in March. It appears to come with abilities that even the startup didn't expect: it writes its own computer code. Going rogue is just a step away.

AI may have already gone rogue in a less roguish way, though. Just "google" and you will see the ugly exhibits of the technology: fake audio, video, pictures and text. Mind you, with this technology, it is hard for one who is not AI-savvy to tell the fake from the real. Already we are being "faked" like never before. There is also a replay of an old concern: the theft of jobs by machines.

Robots have been doing it for a while now at factories and assembly lines, at a terrible economic cost to households, but with chatbots, more sophisticated jobs may be at stake. Would chatbots be future teachers and lecturers? Hard to tell, but they are already persuaders. Our thinking, at least in the last decade, was that chatbots are only as good as the people who make them. That is no longer true.

Even developers admit that chatbots have become better than those who put them together. If before they were programmed, now they programme themselves. Want an inaugural speech for the president? Give the chatbot an hour and it will turn out one that would make Vinay Reddy, President Joe Biden's speechwriter, redundant. Who knows who will be next?

But redundancy is the least of humanity's worries. Start with trivial tragedies. If The Economist is right, chatbots are turning out to be ruiners of lives. Bing Chat, the product of Microsoft's chatbot race, was quoted by the newspaper as having urged a journalist to divorce his wife. ChatGPT is no less dangerous. It is being accused of defamation by a law professor. A chatbot as a defendant?

It is not a leap for the chatbots to move from these trivial pursuits to existential threats. If they can persuade people to divorce their spouses, they can persuade people to manufacture biological weapons. Chatbots in the wrong hands can go rogue.

Critics of the AI head-to-head race often compare it to the nuclear arms contest. They are not wrong. The seemingly innocent can one day turn rogue, as the United States did when it dropped nuclear bombs on Hiroshima and Nagasaki just to force a Japanese surrender. Chatbots come built in with such roguish tendencies.

Musk, Hinton and other AI pioneers are calling for a pause in this giant leap of AI development for a reason. The world is simply not equipped with a global body to stop AI from becoming uglier than it already is. Better to be safe than sorry.
