ARTIFICIAL intelligence (AI) has no doubt brought a lot of good things to mankind. But the bad guys are also benefiting from it.
The popular ChatGPT tool is now being exploited by hackers to write malicious code.
According to cybersecurity company Palo Alto Networks, cyberattackers can now instruct ChatGPT to write malicious code, with worryingly effective results.
Its Asia Pacific and Japan regional vice-president and chief security officer, Sean Duca, said AI has always been a double-edged sword.
"AI tools can be trained to mimic human behaviour based on its input learning program. Coding and curating malicious code, if there's no implicated rule to prevent AI from doing so, can be one of the fallouts of the technology," he said.
"Malicious code written by AI tools can potentially be more damaging than code written by humans. While the makers of ChatGPT have clearly suggested that the AI-driven tool has the in-built ability to challenge incorrect premises and reject inappropriate requests, it is expected to have some false negatives and positives for now.
"For criminals who intend to violate the rules, they can find a way to overcome the loopholes, playing between the gaps of AI's judgments."
More efficient tool
One of the main dangers of AI tools that can create malicious code is the efficiency they bring to the process. Duca said even the most experienced hackers can spend up to an hour developing a script that exploits a software vulnerability to infiltrate a target.
"However, this can be done in a couple of seconds with OpenAI's ChatGPT. With that, just as other automation, it can potentially increase the number of attacks by these threat actors."
Industry's current position
Almost all industries have increased their adoption of AI to automate their software supply chain.
"While cybersecurity providers are using AI to identify and filter malicious codes/phishing links, threat actors are also using similar technologies to increase their efficiency and ensure their 'business' is profiting," said Duca.
He said that, because of this ease of malware creation, the cybersecurity sector could be disrupted in a number of ways.
"Its increasing ability to generate and disseminate malware is one of the key issues it poses. It gets harder for cybersecurity experts to find and stop these attacks as more people are able to produce and transmit malware. This might result in more effective cyberattacks, raising the price of responding to and recovering from such incidents.
"Users now have AI-powered security tools and products that tackle large volumes of cybersecurity incidents with minimum human interference. However, it can also allow amateur hackers to leverage the same technology to develop intelligent malware programs and execute stealth attacks.
"This trend is only expected to rise as the availability of tools on the dark web for only less than RM30 for ransomware-as-a-service models and AI-based tools such as ChatGPT lower the barrier of entry for cybercriminals."
Overcoming threats
As hackers become smarter and more advanced, the cybersecurity industry must be resourceful and keep up in the fight against AI-powered exploitation.
"In the long run, the industry's vision cannot be that a swarm of human threat hunters try to sporadically fix this with guesswork.
"The need of the hour is to take intelligent action to neutralise these evolving threats," said Duca.
On the positive side, autonomous response technology is already neutralising many threats without human intervention.
"However, as AI-powered attacks become a part of everyday life, businesses, governments and individuals impacted by such automated malware must increasingly rely on emerging technologies, such as AI and maching learning, to generate their own automated responses."