If you watch sci-fi movies, robots with Artificial Intelligence (AI) usually exhibit some form of sentience. They have emotions and can think independently. It’s pretty much a given that in science fiction AI can have consciousness.
In the real world, this is still very much a debatable point. What’s obvious is that machines can become very smart, if by “smart” you mean performing complex calculations at blinding speed. There was a time when it was uncertain whether machines could actually beat humans in cerebral, strategy-based games like chess or Go. Today, there’s no question that they can. Computers can now even generate horror stories and compose music.
All this is done through massive and rapid calculations. There’s no thinking, creativity or imagination involved, and no intention or emotion either. To have such qualities, a machine would need to possess a mind of its own, or what scientists refer to as “Strong AI”. A machine with Strong AI would have all the features of human cognition and be genuinely self-aware.
Such a system doesn’t exist, or at least not yet. What we have instead is what’s dubbed “Weak AI”. That doesn’t mean it’s not powerful. Weak AI can perform massive calculations, solve complex problems and carry out specific tasks with remarkable speed and precision. But it’s completely unaware and has no emotion. Google’s AlphaGo programme might be able to beat the best human Go player, but it doesn’t even know it’s playing Go and feels no joy when it wins.
None of today’s AI systems can experience the world qualitatively; they can only process it quantitatively. And although machines can be programmed to mimic or exhibit signs of consciousness, that’s just a simulation and not the real thing.
ILLUSION OF CONSCIOUSNESS
The Turing Test, proposed by Alan Turing in 1950, is a way to assess whether a machine can pass itself off as human in conversation. If the machine can fool a human, it has passed the test. Some programmes, beginning with the 1966 chatbot ELIZA, have seemingly passed that test, but American philosopher John Searle has argued that the Turing Test doesn’t accurately measure a machine’s ability to “think”: although the machine can give the right response, it doesn’t necessarily know what it’s saying. It’s simply designed to provide answers that can fool a human interviewer.
Searle devised a “Chinese Room” thought experiment to illustrate this point. In 1980, he published a paper arguing that an AI system that takes in Chinese characters as input and produces appropriate Chinese characters as output could fool someone into thinking it understands Chinese, even though it has no understanding at all. It’s merely simulating an ability to understand Chinese.
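To see just how little understanding such a system needs, consider this toy sketch in Python. The phrasebook entries are invented for illustration and stand in for whatever rulebook a real system might use; the point is that fluent-looking replies come from blind symbol lookup, exactly the kind of shuffling Searle had in mind.

    # A toy "Chinese Room": fluent-looking replies produced by pure
    # symbol lookup, with no understanding of Chinese anywhere.
    # The phrasebook entries are invented for illustration.
    PHRASEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(question):
        # Match the incoming symbols, emit the paired outgoing symbols.
        # Nothing here "knows" what any character means.
        return PHRASEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # Looks like understanding; it's only lookup.

From the outside, the exchange is flawless; on the inside, there is nothing but a table.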
As AI advances, there’s no question that machines can be programmed to accurately mimic consciousness. A robot could, for example, be programmed to display a sense of dismay when it detects something bad has happened. Or a sense of joy when something good has happened. But even if it’s fine-tuned to a very high level, so much so that it appears to be truly conscious, that doesn’t mean it actually is. The illusion of consciousness isn’t the same thing as actual consciousness.
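How shallow such mimicry can be is easy to show. The hypothetical Python snippet below maps sensor events to canned emotional displays; the event names and responses are invented for illustration. A robot running rules like these would “show dismay” on cue without feeling anything at all.

    # Hypothetical rule-based "emotions": sensor events are mapped to
    # canned displays. The event names are invented; no feeling involved.
    RESPONSES = {
        "collision_detected": "display: dismay",  # looks like distress
        "task_completed": "display: joy",         # looks like happiness
    }

    def react(event):
        # Return the scripted display for a given event, if any.
        return RESPONSES.get(event, "display: neutral")

    print(react("collision_detected"))  # -> display: dismay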
In the movie Ex Machina, a female AI robot convinces the protagonist that she is in love with him, only to use him to escape the laboratory where she is confined. Her deviousness could be seen as proof that she is conscious. But what if she was simply programmed to find different ways to escape, and tricking the human was just part of her programme? Then she’s clearly not conscious.
As mentioned earlier, simulating consciousness isn’t the same as duplicating it. Some AI systems today can simulate consciousness but none are even remotely close to duplicating it. But will that forever be the case?
DEBATE CONTINUES
Three top scientists recently made the case that machine consciousness is possible. Cognitive scientists Stanislas Dehaene, Hakwan Lau and Sid Kouider published an article in the October edition of the prestigious journal Science arguing that “empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.”
In other words, these scientists believe that consciousness fundamentally involves information processing, albeit a very complex form of it. If that is indeed the case, what’s required to elevate artificial intelligence to artificial consciousness is to map the way the brain works and then generate computer algorithms that replicate that process.
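To give a flavour of what “replicating that process” might mean at the smallest scale, here is a standard leaky integrate-and-fire neuron model, sketched in Python with arbitrary illustrative constants rather than measurements from any real brain. The neuron is reduced to a simple update rule, which is precisely the “specific computations” view; whether billions of such computations stacked together could ever amount to consciousness is the point under dispute.

    # Minimal leaky integrate-and-fire neuron: the membrane potential v
    # decays toward rest and the neuron "spikes" when v crosses a
    # threshold. The constants are arbitrary, not brain measurements.
    def simulate(inputs, leak=0.9, threshold=1.0):
        v, spikes = 0.0, []
        for current in inputs:
            v = leak * v + current    # integrate the input, with leak
            if v >= threshold:        # threshold crossed: fire a spike
                spikes.append(1)
                v = 0.0               # reset after firing
            else:
                spikes.append(0)
        return spikes

    print(simulate([0.3, 0.4, 0.5, 0.1]))  # -> [0, 0, 1, 0]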
Of course this is easier said than done. The human brain is a complex biological organ. To map out its neural architecture and then emulate the way neurons interact is something that’s currently beyond what neuroscience or computer science is capable of. But even if that were possible someday, would replicating brain processes in a computer algorithm actually result in consciousness? That again is a heavily debated point.
Christof Koch, the chief scientific officer at the Allen Institute for Brain Science in Seattle, Washington, argues that replicating the processes digitally will not result in consciousness. “I think consciousness, like mass, is a fundamental property of the universe,” Koch says, adding: “The analogy, and it’s a very good one, is that you can make pretty good weather predictions these days. You can predict the inside of a storm. But it’s never wet inside the computer.”
Koch says that today’s computers, which are made of transistors, have a very different “cause-and-effect” structure from what we have in the brain, where a single neuron may receive input from 10,000 other neurons. But he believes that a sufficiently complex device that replicates how the brain works, what he calls a “neuromorphic computer”, could potentially have a form of consciousness.
In other words, according to Koch, if one were to build a device that physically (not digitally) replicates the electro-chemical processes of the brain, that might actually do the trick. That’s still a matter of conjecture, of course, because we really don’t know whether non-biological machines can support consciousness.
PRECIOUS CONSCIOUSNESS
It’s tempting to treat the human brain as a kind of biological computer, but that analogy is misleading. If the brain were really just an organic information-processing system, then all mental functions would be mere computations and we could therefore find ways to duplicate them.
This is the reason some people believe it will one day be possible to upload the contents of your brain to a computer. But this may be impossible no matter how powerful our computers become, because of consciousness, which may be a biological phenomenon that can’t be replicated in silicon-based systems. It could very well be that consciousness is, by its nature, restricted to carbon-based substrates.
The rapid and impressive developments in AI make it clear that, even within our lifetime, the greatest intelligence on earth will be silicon-based. Computers will be able to solve a host of human problems in transportation, medicine, nutrition and so on. There’s no way human brains can out-calculate computers. But the one thing that keeps us superior is that we have consciousness. And that’s something computers will probably never have.
Oon Yeoh is a consultant with experience in print, online and mobile media. Reach him at oonyeoh@gmail.com.