CHILDREN aren't safe in the cyberworld, especially Malaysian ones. A 2020 Unicef study points to the scale of their exposure: nine out of 10 Malaysians aged 7 to 17 used the Internet.
In the same year, 92 per cent of Malaysian students aged 13 to 17 were on social media. For sure, one can't deny the role of social media in providing virtual learning opportunities for our children. But that is not all that is on the platforms' menu.
Dark and harmful content often targets impressionable young minds. In fact, 70 per cent of Malaysian minors have reported being exposed to disturbing content on social media.
The onus is on social media platforms to ensure that harmful content is not targeted at children. But would they? It seems they have no reason to do so.
In fact, they have at least two incentives to do the opposite. One, almost all social media platforms are from the United States, where they are shielded from liability for content provided by others.
Two, the money the tech titans earn from ads targeted at minors is too good for them to act against content providers. How good? According to a Harvard University study, the top social media platforms — almost all in the US — earned US$11 billion in ad revenue in 2022 from users younger than 18.
It is in this context we must read the Australian law banning minors from social media and calls for similar bans elsewhere.
The ban wasn't easy for Canberra to adopt, with child rights groups and parents' groups on opposite ends of an emotive debate. Still, come late 2025, Australians must be over 16 to use social media platforms such as X, TikTok, Snapchat, Instagram, Facebook and more.
The law — the Social Media Minimum Age Act — compels social media platform owners to stop minors from logging in or face fines of up to A$49.5 million. Bans are what many will call extreme measures.
But why do the tech titans who run the social media platforms force governments around the world to go down such a forceful path? Their response: policing cyberspace isn't as easy as it may seem. That excuse is a hard sell. There are many apps around that can not only detect whether minors are using a platform but also block unsolicited content.
Lack of robust self-policing means increased scrutiny by governments. Texas is doing precisely this, with the state's Attorney-General Ken Paxton launching investigations on Dec 12 into more than a dozen social media platforms over their privacy and safety practices, reported Reuters. Such investigations are becoming a staple diet of many governments.
Malaysia, for its part, has several laws that protect minors, such as the Child Act 2001, the Sexual Offences Against Children Act 2017 and the Communications and Multimedia Act 1998 (CMA).
Section 211 of the CMA prohibits social media content providers from sharing content that is indecent, obscene, false, menacing, or offensive.
Is the CMA, together with our other laws, enough to keep an Australian-style ban at bay? The coming year may provide the answer, when Section 211 starts biting.