Global body pushes for regulation of social media

Image Courtesy: The Centre for Countering Digital Hate

Sourced Content
Washington DC, March 2, 2023

The Centre for Countering Digital Hate (CCDH) recently released its STAR Framework, a Global Standard for Regulating Social Media, with recommendations for countering hate and disinformation on the internet.

The STAR Framework Report said that the objective of the Centre is to inform stakeholders and governments advocating for and designing legislative reform.

Global standards help to ensure the efficiency, effectiveness and impact of national efforts and are best supported by a strong relationship with independent civil society and researchers.

“Through the CCDH STAR Framework, we aim to establish key global standards for social media reform to ensure effectiveness, connectedness and consistency for a sector whose reach impacts people globally,” the Report said.

It noted that a handful of technology billionaires control internet content.

Systematic bias

“The tech companies and their executives know the real harms their products can cause, which one would expect would lead to their curation of these environments, but the imperative for growth and acquisition of a market share of eyeballs to sell to advertisers is their only concern,” the Report said.

The result of this singular focus is that the companies’ algorithms contain a systematic bias towards hate and misinformation, with a damaging impact on our information ecosystem.

The above was sourced from the Transparency Times of Transparency International, based in Wellington.

The following is from the STAR website.

“We need to reset our relationship with technology companies and collectively legislate to address the systems that amplify hate and dangerous misinformation around the globe. The STAR Framework draws on the most important elements for achieving this: Safety by Design, Transparency, Accountability and Responsibility,” the Report said.

The Big Tech billionaires

A handful of companies, owned by a small coterie of Big Tech billionaires, dominate Internet content. This elite owns the technology that connects 4.5 billion people around the world and creates a platform on which individuals can share information, make new relationships, create communities, develop their brands, and transact business. Platforms produce little content themselves but have built business models in which they monetise the content created by billions of people, selling both viewers and data on the psychology of those viewers to those seeking to sell their own products, services, brands and ideas.

The communities on these online platforms, the behaviours and beliefs, and the values emerging from those spaces increasingly touch every aspect of offline society.

Sometimes that can be good. Sometimes it can be bad.

The tech companies and their executives know it can sometimes be bad; one would expect that knowledge to lead them to curate these environments, but the imperative of growth and of acquiring a market share of eyeballs to sell to advertisers is their only concern.

This was most pithily explained by the Chief Technology Officer of Meta, Andrew Bosworth, when he wrote in an internal memo: “So, we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated with our tools. And still, we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect with more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”

Utopian myths

For many years, this elite of billionaire owners has postured itself as operating under the utopian charter myth of neutrality and virtuous contribution to the growth of human understanding through social media.

At its core, their proposition is an old-fashioned advertising business that reshapes cheaply-acquired content by promoting the most salacious, titillating, controversial, and therefore “engaging” material. This posturing was designed to stave off the moment that regulators might turn their eye to the industry.

‘Social media’ is not a synonym for the Internet, or even for technology; yet, by hiding behind the techno-utopian halo of online innovation, these companies have both obscured the banal atavism of their core business models and avoided real scrutiny of the harms they cause. Surely, many opine, this is inadvertent or unavoidable. In fact, neither claim is true.

The laws that seek to regulate this enormous industry, which directly affects billions of people, were, for the most part, created before social media companies existed.

In the United States, they were codified in Section 230 of the Communications Decency Act of 1996, which sought to shield bulletin boards and newspaper comment sections from liability for third-party content in order to foster innovation and growth in a fledgling industry.

This led to decades of regulatory ambivalence, with the international community adopting a ‘hands-off’ or, at best, an individual content-based approach to regulating online harm in some jurisdictions, and with technology companies seen as neutral actors in this environment.

This permissive regulatory environment, functioning without checks and balances, encouraged tech companies to adopt aggressive, profit-driven business strategies following the ‘move fast and break things’ maxim that Mark Zuckerberg outlined in his 2012 letter to investors.

The online harm landscape

Things are, indeed, broken.

The Centre for Countering Digital Hate has developed a deep understanding of the online harm landscape. Since 2016, it has researched the rise of online hate and disinformation and has shown how easily nefarious actors can exploit digital platforms and search engines, which promote and profit from their content.

CCDH has studied the way anti-vaccine extremists, hate actors, climate change deniers, and misogynists weaponise platforms to spread lies and attack marginalised groups. It has seen the depth and breadth of harm that tech companies profit from on a daily basis, including:

Hate and extremism: racism and hate content targeting women, the LGBTQ+ community, and faith communities (e.g. anti-Jewish hate and anti-Muslim hate).

Mis/disinformation on critical issues like Covid-19, climate change and elections.

What has remained consistent, across all types of harmful content, is an absence of proper transparency and a failure of platforms and search engines to act.

CCDH’s research and advocacy work shows repeated failures by social media companies to act on harmful content or on the actors and networks sharing it.

The Centre has demonstrated how the companies’ algorithms, with a systematic bias towards hate and misinformation, have had a damaging impact on our information ecosystem.

The failure of social media companies to act on known harmful content connected with terrorism, racism, misogyny and online hate violates their own terms and conditions, the pledges made to the international community when the cameras were rolling, and the inherent dignity to which the victims of tragedies like Buffalo, Christchurch and Myanmar were entitled: the right to live safely in their communities, free from extremist, racist terrorism.

This failure to act is the reality of the self-regulation environment.

Self-regulation means no regulation.


The status quo cannot stand. It has a damaging impact on individuals, communities and our democracies. CCDH research has demonstrated the need for legislation that changes the fundamental business models, and therefore the behaviour, of the platforms that profit from the spread of misinformation, disinformation, conspiracy theories and online hate, whether spread by bad actors or by the platforms’ own systems. The Centre has advised the UN, the UK, the US, and other governments on disinformation, violent extremism and how conspiracy theories can overwhelm fact-checking countermeasures and cause considerable real-world harm.

The CCDH Global Summit in May 2022 identified the need for a values-based, research-driven framework to support global efforts to regulate social media and search engine companies.

The document sets out the core elements of the STAR Framework, with explanations and examples drawn from the Centre’s research.

The impact is real, on people, communities and democracy.

“We cannot continue on the current trajectory with bad actors creating a muddy and dangerous information ecosystem and a broken business model from Big Tech that drives offline harm. We need to reset our relationship with technology companies and collectively legislate to address the systems that amplify hate and dangerous misinformation around the globe.”
