‘Alapasita Teu
Auckland, August 28, 2022
This past fortnight saw the birth of the Aotearoa New Zealand Code of Practice for Online Safety and Harms. Five of the world’s Big Tech companies (Meta, Google, TikTok, Twitch and Twitter), in a joint effort to reduce harmful online content, are now signatories to this industry code, which sets the benchmark for online safety in the Asia-Pacific region.
The Code is a framework outlining principles and voluntary commitments to safer online practices on digital platforms.
With the internet being a borderless terrain, this industry code can be seen as a step in the regulatory direction. On the other hand, it raises questions: can we realistically regulate online content and harm? As presented in a previous column, what constitutes “harm”? And who gets to decide what that is?
Big tech companies dictate and curate the online world of modern society. We get our news, our schooling, and our communication with one another, and with the wider world, through their platforms.
We interact with technology and digital systems daily, which means we do so on these platforms’ terms.
Tech companies constantly shape our understanding of the world, so much so that we often forget they operate in a largely unregulated environment. So, when tech companies, which hold vast digital power and face little accountability for it, lead the charge on any form of regulation, societies need to pay attention.
Rules of engagement
Big tech companies decide the rules of engagement within their platforms. Whether we’re passive or active users, we are beholden to their rules, terms, and conditions. Digital platforms can enforce their laws in a way that any king could only dream of. These companies also collect data on us, further consolidating their digital power. Knowing we are watched, we self-regulate our behaviour without anyone needing to tell us yes or no.
In 2020, Meta CEO Mark Zuckerberg called for governments to collaborate with online platforms to develop and adopt new regulations for online content, noting, “It is impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services, all with their own policies and processes, we need a more standardized approach.”
Industry codes are a step in that direction. However, one question is often missing from the discussion: as long as people are still creating and using digital systems and platforms, is regulating online content enough?
Should people be regulated?
In addition to the traditional regulation, governance, certification, and rules applied to digital platforms and products, should we also regulate people?
For policymakers, it is time to step up and assess which roles and functions within this influential industry need oversight; tech companies cannot be left to decide for themselves what’s right and wrong. For the public, it may be a matter of regulating our own online behaviour: is the right to freedom of speech a licence to be unwise and unruly with our words online? When engaging online, it is healthy to check in every now and then and ask yourself, what sort of person am I becoming as I’m immersed in this digital world?
‘Alapasita Teu is a Researcher at Maxim Institute, an independent think tank working to promote the dignity of every person in New Zealand by standing for freedom, justice, and compassion.