ARTIFICIAL intelligence (AI) has dramatically changed our lives for the better, be it at work, school or play.
But as AI gets 'menial' tasks done with ridiculous ease, bad actors see it as a means to develop malicious bots for financial gain.
And given AI's major contribution to society, industry stakeholders need a good understanding of AI, of cybersecurity, and of how the two intersect.
Monash University's School of IT deputy head Professor Raphaël Phan explained the varying ways in which the two fields interact.
MALICIOUS BOTS
One of the most popular abuses of AI is the development of malicious bots, which are autonomous programmes on the Internet that can interact with systems or users.
Phan said bad actors have been exploiting such technology ever since computers became commonplace in day-to-day tasks.
"And with AI capabilities, malicious bots have human-like intelligence and can surpass what their designers programmed them to do," he said.
AI can also learn from experience and adapt.
"The difference between AI-powered malicious bots and conventional malicious bots with no AI is the ability to continuously learn by observing and adapting. This makes them stealthy enough to avoid detection better, while stealing your data when you're unaware of them," explained Phan.
"As long as the software can learn from experience, these abuses can be easily done," he added.
Benign examples of AI-enhanced software include Facebook, which used to offer facial recognition on social media photos, and Adobe, which uses deep learning for dynamic photo editing.
AI AND CYBERSECURITY
With AI doing most of the legwork, where does cybersecurity come into the picture? Despite their differing aims, AI and cybersecurity are locked in something of a love-hate relationship.
Phan explained that AI has traditionally been used to solve cybersecurity problems, such as anomaly detection that flags suspicious activities, but the field has recently come to realise that the relationship cuts both ways.
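As a rough illustration of that traditional role, the sketch below flags suspicious login activity with an isolation forest, a common anomaly-detection technique. The scikit-learn library, the made-up "login" features and every number in it are our own illustrative assumptions, not Phan's actual system.

    # A minimal anomaly-detection sketch, assuming scikit-learn is available.
    # The "login activity" features and all values are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Normal behaviour: logins around office hours, from a few addresses a day.
    normal = np.column_stack([
        rng.normal(13, 2, 500),   # hour of day
        rng.normal(1, 0.3, 500),  # distinct IP addresses used per day
    ])

    # Suspicious events: 3am-4am logins from many different addresses.
    suspicious = np.array([[3.0, 9.0], [4.0, 7.5]])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    print(model.predict(suspicious))  # -1 flags an anomaly
    print(model.predict(normal[:5]))  # mostly +1, i.e. normal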
"People have been assuming that data is always benign. Yet if cybersecurity is a concern that won't go away, then there's no reason why AI should exist in a malice-free world. Data or AI models could be intentionally modified to have a bias on the AI prediction results. And this line of research is known as adversarial machine learning, which focuses on how AI can be attacked as the victim," he added.
However, AI can also be the attacker, as with deepfakes, in which AI is used to produce manipulated images, videos and speech so convincing that humans cannot tell them from the real thing. AI and cybersecurity therefore need to co-exist.
But how can they co-exist? Phan said cybersecurity can make AI more robust, while AI in turn can be used to enhance resilience to attacks.
In cryptography, for instance, AI has been replacing the manual, laborious work of characterising the desired security properties of the building blocks underlying security systems.
RESEARCH ON AI ATTACKS
To bridge the gap between the AI and cybersecurity domains, Phan and his team of researchers have been pursuing initiatives to ensure that techniques in both fields remain robust.
"At our Australian campus, there are reputable research groups in AI and cybersecurity. In Monash Malaysia, our forte is AI and cybersecurity. We have leading researchers looking into how AI attacks cybersecurity (deepfakes), how cybersecurity attacks affect AI (adversarial machine learning), and how security modelling applies to AI (generative adversarial networks or GANs )," he said.
He added that the team also has PhD students who have published in top conferences, such as the Conference on Computer Vision and Pattern Recognition (CVPR) and the International Conference on Acoustics, Speech and Signal Processing (ICASSP).
The team is funded by government and international grants, and is developing the techniques with French, Australian and Chinese collaborators.
Beyond the funded projects on adversarial machine learning for hidden emotions and generative deep-learning techniques for synthesising new images, the team has also proposed techniques for chemistry research problems, such as discovering new types of inorganic materials.
"The various researches ensure that society is not at a disadvantage despite the burgeoning number of AI techniques. They are not only AI-advancing but are robust to cybersecurity threats from malicious parties. This way, no individual rights are violated and we can continue benefiting from AI," Phan explained.
He expects solutions from these initiatives to become available soon.