Following criticism of its Online Safety Bill, the UK government softens the legislation

On Tuesday, the UK defended its choice to weaken the Online Safety Bill, an ambitious but divisive attempt to combat online bullying, fraud, racism, and other harmful content.

After receiving harsh criticism from politicians and civil rights organisations, the British government dropped a plan to require technology companies to remove legal but harmful online content.

The European Union and the United States are also engaged in similar initiatives, although the U.K.’s was one of the most comprehensive. The measure originally offered regulators broad authority to impose sanctions on social media and digital media giants like Google, Facebook, Twitter, and TikTok.

A demand that the largest platforms remove “legal but harmful” content had raised concerns that it would result in censorship and compromise the right to free speech.


That portion of the plan has now been dropped by the Conservative government of Prime Minister Rishi Sunak, who took office last month, on the grounds that it could “over-criminalize” online content. The change is intended to enable the bill, which has been stuck in Parliament for the past 18 months, to pass by the middle of 2023.

According to Digital Secretary Michelle Donelan, the change eliminated the risk that “tech businesses or future governments could exploit the legislation as a licence to restrict valid ideas.”

She told Sky News, “It was the formation of a quasi-legal category between unlawful and legal. A government shouldn’t be acting in that way. It’s perplexing. It would have created different sets of laws online and offline.”

Instead, the bill mandates that businesses define and uphold unambiguous terms of service. As long as it’s not against the law, businesses are free to permit adults to post and view objectionable or harmful content. However, platforms that commit to prohibit racist, homophobic, or other offensive content but later break that commitment risk being fined up to 10% of their yearly revenue.


The law also mandates that businesses help users avoid legal but potentially harmful content, such as material glorifying eating disorders, misogyny, and other forms of abuse, by providing warnings, content moderation, or other tools.

Companies must also demonstrate how they enforce user age restrictions meant to prevent children from viewing harmful content.

The bill continues to make certain online behaviours illegal, such as cyber flashing, which is the act of sending someone unwelcome graphic photos, and epilepsy trolling, which is the act of sending flashing images that might cause seizures. Additionally, it criminalises helping or encouraging self-harm. This action was taken in response to a campaign by the family of Molly Russell, a 14-year-old who committed suicide in 2017 after watching internet material encouraging self-harm and suicide.

Her father, Ian Russell, expressed his relief that the bill was at least moving forward. However, he said it was “extremely difficult to understand” why safeguards against dangerous content had been reduced.

Donelan emphasised that children would still be protected and that “legal but harmful” content would only be allowed for adults.

She declared that, as a result of this bill, the material that Molly Russell saw would not be permitted.
