31 July


China’s internet regulation model is going to be the future for the whole world

Experts say the West will regulate social media and delete false information

Photo: EPA

/NOVOSTIVL/ In 2018, UN investigators concluded that the Myanmar military had for years used the social media platform Facebook as a tool to enable ethnic cleansing of the Muslim Rohingya. Hundreds of Facebook pages had been set up and populated with content aimed at inciting hatred against the Rohingya, and hundreds of thousands were forced to flee the ensuing genocide. This article appeared in the South China Morning Post.

However, the tipping point for what is considered acceptable internet discourse and content appeared to come last month, when an Australian gunman live-streamed a mass shooting at two Christchurch mosques. Three weeks later, sweeping legislation conceived and passed in five days was enacted in Australia to punish social media companies that fail to remove “abhorrent, violent material” from their platforms “expeditiously”.

Under Australia’s new legislation, employees of social media sites face up to three years in prison, and the companies involved can be fined up to 10 per cent of their annual turnover.

Australia is not alone. Governments around the world, including the UK and Singapore, have moved to take a more active role in deciding what constitutes acceptable content in an era where social media content is king and anything can be shared at the push of a button.

“It’s dystopian, but I think China’s [regulation method] is going to be the future,” said Aram Sinnreich, an associate professor at American University's School of Communication, referring to the “Great Firewall” that blocks foreign social media platforms and censors content deemed politically sensitive or disruptive to public order.

For the West and many other countries, government regulation of social media is uncharted territory. Proposals to regulate for the sake of public interest have been met with fears that it would impede freedom of expression, even as platforms like Facebook – which eventually removed the anti-Rohingya sites – agree that internet companies should be “accountable for enforcing standards on harmful content”.

Experts say that the core of the issue is that while internet companies like Facebook and some governments want to do the right thing in the name of public interest, nobody has come up with a solution that would do so without potentially affecting rights like free speech.

The dangers of allowing a platform to self-regulate were highlighted by a recent Bloomberg report, which found that top executives at the video-streaming platform YouTube, focused on increasing the amount of time users spent watching videos, allowed its recommendation algorithm to promote videos bordering on hate speech or pushing conspiracy theories, because such videos tended to attract large numbers of views.

“At the beginning, governments intended for platforms to self-regulate, but obviously this has not worked,” said Fu King-wa, an associate professor at the University of Hong Kong’s Journalism and Media Studies Centre.