LETTERS: Artificial intelligence (AI) has become the key technology behind self-driving vehicles, automatic translation systems, speech and text analysis, image processing, and diagnosis and recognition systems.
In many cases, AI can surpass the best human performance levels at specific tasks. It would seem that there are no areas that are beyond improvement by AI, no tasks that cannot be automated, no problems that can't at least be helped by an AI application.
But is this true? Theoretical studies of computation have shown there are some things that are not computable.
The mathematician Alan Turing proved that some computations can never be completed, while others, though possible in principle, would take years or even centuries to finish.
Early AI research often produced good results on problems with small numbers of combinations, such as noughts and crosses (known as toy problems), but could not scale up to larger ones such as chess (real-life problems).
Modern AI has developed alternative ways of dealing with such problems.
These can beat the world's best human minds, not by looking at all possible moves ahead, but by looking much further ahead than the human mind can manage, using methods involving approximations, probability estimates, large neural networks and other machine-learning techniques.
It is expected that future AI systems will communicate with and assist humans in friendly, fully interactive social exchanges.
What is needed is proper social interaction, involving free-flowing conversation over the long term, during which AI systems remember the person and their past conversations.
AI will have to understand intentions, beliefs and the meaning of what people are saying.
This requires what is known in psychology as a theory of mind: an understanding that the person you are engaged with has a way of thinking and sees the world in roughly the same way as you do.
So when people talk about their experiences, you can identify and appreciate what they describe and how it relates to yourself, giving meaning to their comments.
Social interaction makes sense only if the parties involved have a sense of self and can maintain a model of the self of others.
This means social AI will need to be realised in robots with bodies.
A designer cannot effectively build a sense of self into a robot's software. If a subjective viewpoint were designed in from the outset, it would be the designer's viewpoint rather than the robot's own, and the robot would still have to learn from and cope with experiences unknown to the designer.
So what we need to design is a framework that supports learning: how to move and control the body, how to perceive and experience objects and environments, how to act, and what the consequences of actions and interactions are.
Research in the new field of developmental robotics is exploring how robots can learn from scratch, like infants.
The first stages involve discovering the properties of passive objects and the "physics" of the robot's world.
Later on, robots note and copy interactions with agents (carers), followed by gradually more complex modelling of the self in context.
Future research with robot bodies may one day create lasting, empathetic, social interactions between AI and humans.
Benjamin Irfan Mohd Ilahi
Mechatronics Engineering Student, International Islamic University Malaysia, Gombak, Selangor
The views expressed in this article are the author's own and do not necessarily reflect those of the New Straits Times