LAS VEGAS: Amazon Web Services (AWS) today announced that Mesolitica, a Malaysian startup specialising in training large language models (LLMs), has built a Bahasa Malaysia generative artificial intelligence (gen-AI) LLM called MaLLaM on its cloud services.
MaLLaM can understand local nuances, such as slang and colloquialisms that blend different dialects, Bahasa Melayu, and 16 other regional languages, for use in AI assistants across industries.
AI assistants built on MaLLaM can provide quick, accurate responses to citizens' inquiries in multiple languages, including dialects from various Malaysian states such as Johor, Kedah, Sarawak, Selangor, and Terengganu.
This ultimately aims to improve citizen communication and data-processing capabilities across the culturally diverse country, AWS said.
AWS ASEAN managing director Jeff Johnson said Malaysian enterprises using MaLLaM can improve operations with generative AI in regional languages to assist underserved audiences, such as farmers in rural areas, enabling them to make data-driven decisions using real-time parameters.
"The Malaysian government is also exploring the integration of MaLLaM into its operations, which aligns with the country's broader goal of AI sovereignty and local data governance," he said during a media briefing for ASEAN countries at AWS re:Invent 2024 here, on Monday.
According to AWS, Mesolitica is part of the AWS Asia-Pacific and Japan Generative AI Spotlight Programme, a four-week accelerator programme aimed at supporting early-stage startups in the region that are developing generative AI applications.
The startup is also one of two Malaysian companies to receive AWS credits from the AWS Activate Programme, a comprehensive initiative that gives startups access to credits, technical support, training, and tools tailored to help them build, launch, and scale their applications on AWS.
"Mesolitica has joined the AWS Partner Network, a global programme designed by AWS to assist businesses in leveraging AWS for growth and success," according to a statement by AWS.
The company has significantly enhanced its machine learning operations by leveraging AWS services, migrating its model training workloads to Amazon Elastic Compute Cloud (Amazon EC2), and deploying inference workloads on Amazon EC2 G5 instances, which provide cost-effective GPU (graphics processing unit) acceleration for AI models.
AWS re:Invent 2024, held on Dec 2-6, 2024 at multiple venues across Las Vegas, is a learning conference hosted by AWS for the global cloud computing community.
The in-person event features keynote announcements, training and certification opportunities, access to more than 2,000 technical sessions, the Expo, and after-hours events, and is expected to gather around 60,000 attendees from across the globe.
– BERNAMA