
NST Leader: Good computing, bad computing

STARTUP OpenAI has a new version of its chatbot, powered by the GPT-4o model, that acts uncannily like a human.

Other chatbot-makers are rushing to bring out their own human-like bots that run on generative artificial intelligence (AI). Therein lies the danger.

It is a race to incorporate human prejudice, wittingly or unwittingly, at machine speed, as one Stanford Law School publication disclosed on March 4.

The paper found not just prejudice, but race and gender bias of a pernicious kind across the large language models (LLMs) that power the chatbots it reviewed.

Prompting the LLMs for advice involving individual names across a variety of scenarios in the United States — car purchase negotiations and election predictions are two examples — the authors found that the models systematically disadvantage "racial minorities and women". The paper opens with a question: "What's in a name?" Plenty, it turns out: pernicious prejudice produced at the speed of light. Well, almost.

Understandably not wanting to deny the promise of AI to humanity, the paper suggests that AI-makers audit the LLMs for potential harm before they are deployed.

Though it sounds like a last-ditch effort to prevent toxicity, better late than never. Starting at the beginning would be more effective still: the AI-makers must not programme prejudice into their machines in the first place.

And that place is the LLMs' training data sets. The models are indiscriminate feeders, and the Internet, as we have come to know, is not a good "grazing" ground for them. They just can't tell the difference between good and evil. Like the humans who make them, they, too, must be trained well.

LLMs know neither how to keep themselves up to date nor how to fix the errors they have made, as Stanford researchers discovered in another study in February last year, just over three months after OpenAI launched ChatGPT.

Tweaking won't work because there are billions of parameters to work on. Retraining is an option, but an expensive and time-consuming one. The researchers' cure? Virtual surgery to remove errant neurons in the LLMs' neural networks.

Far harder to cure is the insatiable appetite of AI developers, even though no small number of them have warned against the headlong race to build the fastest and biggest LLM first.

Speed raises every chance of machines going rogue. The more ultra-intelligent the machine, the greater its roguery. Yes, the world needs a revolution in computing, but not at a speed that ignores risk. Of course, it would be wrong to blame the machines for all the faults of humans.

Let's not forget that the LLMs are trained on data sets built on human values. If those data sets are a pile of prejudice, then the models will turn out to be rogues. AI enthusiasts say governments mustn't rush to regulate.

Well, if the AI-makers are not able to take the technology to a good place, then governments must make them go there. AI may have started with a noble aim — to make the technology as human-friendly as possible — but today it is anything but.

AI developers have had a decade to tame the technology, time enough to keep regulators at bay. By failing to do so, they have, paradoxically, made a good case for government intervention.
