LETTERS: Amidst the surge of enthusiasm in artificial intelligence (AI), a concerning trend known as "AI washing" has emerged.
This involves companies overstating or misrepresenting the AI capabilities of their products or services, deceiving consumers in the process.
This marketing tactic also risks escalating into outright fraud, undermining trust and stalling genuine technological progress.
It is reminiscent of "greenwashing" from the early 2000s, when companies exaggerated their environmental efforts to attract eco-conscious consumers.
This time, it involves labelling products as "AI-powered" or "AI-driven" without substantial evidence.
This not only misleads consumers, but also distorts the competitive landscape, disadvantaging businesses that genuinely invest in AI research and development.
A recent high-profile case exemplifies the dangers of AI washing: a device developed by an overseas-based start-up.
Marketed as a revolutionary AI-powered gadget, the device promised unprecedented levels of interaction and automation.
However, investigations revealed that the AI capabilities of the device were significantly overstated. Its functions were found to be no more advanced than basic automation.
The fiasco highlights several key issues.
The company claimed that its AI device could perform complex tasks autonomously, but users quickly discovered its limitations. Many reviewers found its functionality to be no different from that of a smartphone.
The promotional materials for the device were filled with buzzwords and vague descriptions, creating an illusion of advanced AI abilities.
This misleading marketing strategy is a textbook example of AI washing.
The fallout of this marketing tactic included significant financial losses for investors and a tarnished reputation for the company.
The disparity between marketing claims and actual performance has potentially eroded consumer trust in AI-powered products across the board.
In the worst-case scenario, when AI washing crosses the line into blatant fraud, the implications are severe.
Fraudulent AI claims involve intentional deceit, where companies knowingly present false information about their AI capabilities to gain financial or strategic advantages.
This not only constitutes ethical and legal violations, but also poses significant risks to stakeholders.
AI systems are often deployed in critical applications such as healthcare, finance and security.
Fraudulent AI claims in these sectors can lead to catastrophic consequences, including compromised data integrity, privacy breaches and even endangerment of human lives.
Blatant fraud distorts market dynamics, allowing dishonest entities to thrive while genuine innovators struggle.
To combat AI washing and prevent it from escalating into fraud, governments and regulatory bodies need to establish clear guidelines and standards for AI claims.
This includes defining what constitutes an "AI system" and setting benchmarks for performance and transparency. Independent audits by third-party organisations can help verify AI claims.
These audits should assess the validity of AI capabilities, performance metrics and the transparency of AI processes.
Educating consumers about AI technologies will help them make informed decisions and recognise deceptive claims.
By addressing AI washing and preventing it from escalating into fraud, stakeholders can foster a trustworthy and innovative AI ecosystem that delivers on its transformative potential.
Only then can we fully harness the benefits of AI, safe from the risks of deception and fraud.
RAYMON RAM
Certified Fraud Examiner,
Anti-Money Laundering Specialist,
Graymatter Forensic Advisory
Kuala Lumpur
The views expressed in this article are the author's own and do not necessarily reflect those of the New Straits Times