How to manage risks and abuses of artificial intelligence

'PREMATURE optimisation is the root of all evil' — the concluding chapter of Brian Christian's book, 'The Alignment Problem: How Can Machines Learn Human Values?', opens with this quote attributed to Donald Knuth.

The quote is prescient for technological innovation. We increasingly rely on artificial intelligence (AI) as the salve for humanity's problems — healthcare, the environment, education, agriculture, the pandemic — and as a means of elevating the quality of life, society and nation.

We cannot, however, do this at the peril of undervaluing human values. AI tools may threaten values such as fairness, due process, justice, equality, the right to be treated with dignity, and non-discrimination. In Malaysia, the conversation around these values is exiguous, even as it gathers momentum elsewhere.

To ensure that these values are embedded in the life cycle of an AI tool, from design to use, ethical frameworks and laws have become prominent. Is being cautious about the risks of AI premature? Or is its optimisation premature? The answer is neither.

New technologies follow a trajectory of invention, approval and adoption, exploitation, and finally, regulation. The point in this life cycle at which a particular technology becomes legally regulated has varied over time.

Historically, there is evidence of early intervention, as in the case of radio communication. With the rapid increase in the use of technological innovations, risks must be assessed and minimised.

Given the risks associated with the use of AI, regulators such as the European Commission (EC) have proposed a classification system under which a particular use of AI may be deemed high risk.

Is the EC the village idiot in recommending an AI law that will regulate high-risk AI and prohibit the use of AI in certain activities that present unacceptable risks?

Are reports recommending oversight of high-risk AI systems, such as the most recent one by the Australian Human Rights Commission (AHRC), a case of getting ahead of ourselves?

Such laws aim to ensure that designers and users of AI tools carry out risk assessments before placing these tools on the market. Bodies with regulatory oversight will play a key role.

For example, the AHRC's report proposes an AI safety commissioner. The classification of AI as high risk or unacceptable, and the obligations imposed by the regulatory framework, will differ from country to country.

The EC's proposal classifies as high risk the use of AI tools in employment, biometric identification, educational and vocational training, and the management and operation of critical infrastructure.

Additionally, "unacceptable" use of AI tools will be prohibited until there is proper legislative protection.

An example of such technology is facial recognition software used in policing and surveillance, which may lead to invasion of privacy, excessive monitoring and even discrimination.

Further, the EC proposal prohibits practices by public authorities such as AI-based social scoring and the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement.

The proposals attracted strong criticism for regulatory vagueness and an oversimplified classification of risks.

Concerns raised include the fundamental shift such proposals would require in the design of AI tools, their impact on countries' competitiveness in AI innovation, and the cost of compliance.

These are all valid concerns. However, the call for legal intervention is not premature in the face of the threatened erosion of human values as a consequence of using AI.

Every technological innovation carries with it manifold risks and the potential for abuse.

These can be managed by recognising the risks, and by strengthening AI innovation with laws that build trust. The latter need not require the adoption of an extensive model, such as the one proposed by the EC.

We may opt for a more pliable regulation that accommodates the embryonic stage of AI innovation in Malaysia — one that prioritises averting the harms these risks precipitate and, most importantly, promotes innovation while preserving human values.

The alignment of AI with human values is central to the spirit of the EC's proposal — that "AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human wellbeing".

The writer is an associate professor at the Faculty of Law & Government, HELP University
