Misinformation
Between true and false, there's ten million shades of grey
Key points
- I do not support the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024, and I will not be voting for it.
- I think people believe some crazy things, and some of it comes from what they see online. But the real problem is how readily people believe it, and that has to be the focus of any solution.
The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 represents a well-intentioned but fundamentally flawed approach to addressing misinformation and disinformation in Australia. I don’t think it’s a grand conspiracy, a Big Brother-style attempt to establish thought police, or an attempt to suppress dissent.
I also don’t think it’s a good idea. It misunderstands the problem, and it’s a massive overreach as a solution.
While the spread of false information online presents real challenges to our democracy, attempting to regulate it through government oversight and platform obligations misunderstands both the nature of the problem and the mechanisms that perpetuate it. The solution to misinformation lies not in regulatory control but in rebuilding trust, fostering critical thinking, and strengthening community connections.
The problem with regulatory solutions
The proposed legislation would empower ACMA to oversee digital platforms’ efforts to combat misinformation, with penalties of up to 5% of global turnover for serious breaches.
(I’m not opposed to this policy because I love Facebook or Google; they do deserve to be held to some minimum standard of accountability. But this is just silly. Penalties of 5% of turnover are absurd and would bankrupt these companies, because turnover and profit aren’t the same thing: if a company runs a 10% profit margin and you take 5% of its global turnover, you’ve wiped out half its profit, per serious breach. In practice there’d be no chance of this ever occurring, because they’d just pull out of Australia straight away. Would you risk your entire company on a market that represents 0.3% of the global population?)
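For scale, here is the rough arithmetic behind that claim, using the illustrative 10% margin above (the numbers are for scale only, not drawn from any company’s accounts):

\[
\text{profit} = 0.10 \times \text{turnover}, \qquad \text{penalty} = 0.05 \times \text{turnover} \quad\Rightarrow\quad \frac{\text{penalty}}{\text{profit}} = \frac{0.05}{0.10} = 50\%.
\]

On those assumptions, a single maximum penalty consumes half a year’s profit; two would consume all of it.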
Aside from the penalties, this approach faces several fundamental challenges that make it both impractical and potentially harmful to democratic debate. It is anti-democratic to empower a bureaucracy to define the parameters of tolerable discourse without ever having to face the voters whose speech it is regulating.
First, defining “misinformation” in a consistent and objective way is virtually impossible. Information exists on a spectrum from verified fact to complete fiction, with vast grey areas of interpretation, context, and evolving understanding in between.
Today’s “misinformation” might be tomorrow’s accepted wisdom – we need only look at how rapidly scientific consensus evolved during the COVID-19 pandemic to see how truth can be both urgent and uncertain simultaneously. We need to be able to talk about things that aren’t necessarily true, in order to test their truthfulness against the expertise of others.
Second, proving intent in the spread of false information is extraordinarily difficult. Without a “smoking gun” admission of deliberate deception, distinguishing between sincere belief, honest mistake, and malicious disinformation becomes a matter of judgment rather than fact. This creates a dangerous situation where government agencies might be tasked with assessing the sincerity of beliefs rather than their accuracy.
The trust deficit
Misinformation is an issue, genuinely. It isn’t a made-up fever dream (no pun intended). It’s really harmful, and it’s really happening. The issue with the government’s response isn’t that they’re trying to do something — it’s that they’re trying to do this.
At its core, the spread of misinformation is not primarily a problem of access to accurate information – it’s a crisis of trust. When people lose faith in traditional institutions, experts, and mainstream media, they seek alternative sources of information that align with their existing beliefs and communities. This creates self-reinforcing information silos that no amount of content moderation or platform regulation can effectively penetrate.
The proposed legislation risks blowing this trust deficit even wider. By establishing government authority over what constitutes acceptable information, it feeds into existing narratives about institutional control and censorship. This could drive more people toward alternative platforms and deeper into communities that resist “official” sources of information.
The social nature of information
Information spreads through social networks, both online and offline. People are more likely to believe information that comes from trusted sources within their community than from official channels, regardless of its accuracy. It’s because we want to be a part of a group. As social creatures, we long for communities. And communities have to have something in common.
Rejecting ideas that are shared within a community you see yourself as part of is scary, because severing a tie you have in common with that community risks alienating you from your ‘tribe’. You have a strong social incentive to believe the things your community believes, because failing to do so risks ostracism, rejection, alienation and loneliness. Faced with that threat, you have an inbuilt bias to believe what your friends believe.
This means that addressing misinformation requires engaging with these social networks — and, within them, those communities — rather than trying to regulate them from above.
The bill’s approach of targeting platforms and content creators misunderstands this fundamental aspect of how information spreads. You cannot regulate trust into existence, nor can you force communities to accept information from sources they’ve decided not to trust.
If anything, an institution that people do not trust, telling people that some speech is not to be trusted, is going to have the opposite effect.
A better approach: Building digital resilience
Instead of attempting to regulate misinformation through platform oversight, we should focus on building society’s resilience to false information. This involves several key strategies:
Digital literacy education
Rather than relying on platforms to identify and suppress misinformation, we should equip people with the skills to evaluate information critically. We want to harden people against false, dangerous and extreme ideas and theories, not by preventing exposure to them, but by ensuring that when people encounter such ideas, they have the tools they need to decide how to respond.
This includes understanding how digital platforms work, recognising common manipulation tactics, and developing healthy skepticism without falling into cynicism.
Community engagement
Breaking down information silos requires rebuilding bridges between communities. This means creating spaces for genuine dialogue and understanding across ideological divides, rather than forcing compliance through regulation.
Regulating mis- and dis-information via the blunt instrument of government will simply push speech deeper into these silos, where it is already echoing, unchallenged and unexposed to the scrutiny that comes from dispassionate analysis.
Transparency through education
While the bill seeks to force transparency through regulation, a better approach is to help people understand how information spreads online. This includes education about algorithmic content promotion, the economics of digital platforms, and the role of engagement in spreading information.
Rebuilding institutional trust
The long-term solution to misinformation requires rebuilding trust in institutions, including government, media, and expertise. This can’t be achieved through regulation – it requires demonstrable commitment to transparency, accountability, and genuine engagement with community concerns.
The impulse to regulate misinformation through platform oversight is understandable but misguided. It attempts to solve a social and psychological problem through technical and regulatory means, an approach doomed to failure. Instead, we must address the root causes of misinformation: the breakdown of trust in institutions, the fragmentation of communities, and the lack of digital literacy skills.
People believing misinformation is the problem — not misinformation itself. If people were better equipped to tell true from false, we’d have a society that didn’t need to regulate what people can and can’t say in the first place.
By focusing on education, community building, and trust restoration, we can create a more resilient society that is naturally resistant to misinformation without compromising the fundamental principles of free expression and open debate that are essential to democracy.
Proposed solutions
- Reject the current legislation in favour of a comprehensive digital literacy strategy that emphasises education and community engagement over regulation.
- Establish a bottom-up digital literacy program: a national program operating through schools, libraries, and community organisations rather than through platform regulation.
- Rebuild institutional trust: support independent research into the social and psychological factors that drive the spread of misinformation, focusing on trust and community dynamics rather than content regulation, and focus government on delivery, to rebuild faith that government is not necessarily an ideological threat but a tool to support people.