Author: Neil Quarmby, CEO
My views are based on extensive experience leading regulators in Federal and State governments, and on recognized expertise in regulatory reform and the use of intelligence in regulation. In addition, I have extensive experience in information warfare and countering subversion. The proposed law has a number of critical weaknesses that would not allow the regulator to meet the regulatory principles of proportionality, fairness/equity, targeted effort, flexibility/consistency, and transparency. In overview, based on the way the law is framed, the regulator is more likely to become a tool of misinformation than a harm prevention agency for the people.
Specific Issue – the lack of an ‘object’ in the law
The intent of the proposed Law, from a regulator's view, is unclear, and hence there is no outcome expectation or public value expectation for the regulator. Successful regulation, in this case, will be measured by the number of controls enforced, not by a directed preventative outcome.
On reading, the intent appears to be to assist and require platforms to silence misinformation/disinformation where the message has originated from an individual.
Sources of messaging exempt under the proposed Law include Australian governments, media outlets and universities (although it is unclear whether this relates only to Australian entities). It is unclear why these very broad exemptions are made, or how misleading and false messages arising from these domains but replicated by an individual would be managed jurisdictionally.
The art of counter-subversion (learned over many centuries) is to promote truth and enquiry to reduce false narratives. It is unclear why the intent of this Law appears to operate contrary to this central tenet of information warfare and selects instead the more problematic path of silencing voices.
Normally there are two types of regulatory intent statements in an object clause. One is related to harm prevention and the other often relates to creating a level playing field for participants (a sense of fairness or equity for participants in the regulated field).
Schedule 9.1.1 includes a comment that may be taken as the object of the law, although it appears to apply to the regulator only in special circumstances: “…to provide adequate protection for the community from misinformation or disinformation on digital platform services”. This is a harm prevention statement. The fairness/equity statement would relate to ensuring that regulatory work encourages fair treatment in how platforms manage free speech, and also protects everyone’s right to free speech.
Neither of these two intent statements comes through in the wording of the Act – especially as key voices are unfairly exempt from the fairness and equity equation, which in itself generates community harm.
Specific Issue – uninformed explanation of harm
A clear understanding of harm and threat drives regulatory targeting and focuses prevention effort. The poor explanation of harm in the proposed Law misdirects and misleads regulatory effort. It does not support the founding of an intelligence system supporting good regulatory decisions.
The definitions of harm on page 6 offer nothing that could be used to target regulatory effort. Hence, they read more as a ‘jurisdictional interest’ list to consider when assessing a complaint. Harm in the proposed Law is defined as:
Harm is … hatred against a group
Harm is … disruption of public order
Harm is … harm (four times)
Of the definitions, only disruption of public order could be considered a statement of harm, and only if the reader assumes this definition refers to messages that incite violence. As read, this harm statement could imply that every rallying call for a protest would need to be cancelled, thereby removing the ability of unions, green movements, Get-Up, women’s groups and others to organize efforts that disrupt public movement or commerce.
Hatred against a group is not a harm. Victims can hate a group that has caused them harm. Scale, intent and social acceptance are important in defining ‘harm to a group’. Individuals self-representing a minority group can use hate messaging against a ‘majority’ group. Hence, viewed from a neo-Marxist perspective, the Law may provide ‘minority’ groups protection from oppression; however, it can and should be equally applied to small-group hate speech against larger groups.
Is misrepresentation of Australian colonization hate speech against the British race? Or just bad historiography?
This skewed view of harm means the regulator will continually chase unnecessary and ill-formed group victim complaints.
The environment is not harmed by messages. This definition appears to have been included as a current political issue, slanted – one assumes – at “climate-deniers”. As shown during COVID, and in the ongoing scientific debate on climate change and the merits of certain responses, disinformation/misinformation can only be corrected by maximizing outlets for all competing viewpoints in a contested space. Much of the activist voice that is restated in the press, in universities or politically needs to be countered by open dialogue rather than stymied – as shown in the widespread harm caused by the recycling of foreign false messages on COVID harm mitigation through our politicians, the media and the universities. The same subversive/corruptive pathway serves the green industry lobby and the fossil fuels lobby (often funded by foreign state actors with financial interests in poor policies; refer to Russian influence operations on the green lobby in Europe and Chinese influence operations on Australian energy responses).
One definition implies that government institutions and the will of the people can be harmed by messages. This is a core truth in information warfare. The institutions of government in a democratic society are always under threat from those offering a different form of institutional approach: foreign actors, anarchists, neo-Marxists, communists, criminal elements, and some religious movements. All of these movements are already reflected in voices within government, the press and the universities – as are the voices of those who wish to retain and sustain the current forms of institutions. So, it remains very unclear how the regulator can counter false information peddled by these entities through individuals if government, media and academic entities are exempt. The proposed Act does not seek to counter this foundational harm and is therefore illogically contrived.
Specific Issue – the lack of threat behaviour
The main behaviour being targeted appears to be failure of the platforms to conform to the directions of the regulator or to achieve appropriate codes for users. Hence, there is a disconnect between the intent of the Act and the powers conferred. While the intent is unclear, it appears the Act seeks to silence misinformation/disinformation spread by (unauthorized) individuals by forcing another entity to regulate that behaviour in a vacuum.
This intent appears naïve as to the primary sources of influence operations and provides no real assistance for platforms to address and counter influence operations.
There is no clear recognition of the role of states, organisations and institutions in conducting influence operations. (There is a minor reference on page 12: “Disinformation includes disinformation by or on behalf of a foreign power.”) Is the view that disinformation originating from one country but perpetuated through the Australian media, or by government officials on the same or other digital services, is exempt – yet, if perpetuated by an individual (often under a false identity), it is targeted?
There is no recognition of the assistance the Australian state could provide to help platforms target false handles and bot-messaging systems. There is no recognition that misinformation is not an individual-tweet issue; rather, it is a collective targeting problem that platforms cannot manage without intelligence support. So, instead of the law being designed to support platforms as part of the solution, the proposed law appears to stigmatize them as part of the problem – needing enforcement.
The exclusions of entities on pages 5 and 6 display this naivety. Can a commentator for a news outlet state a racist opinion on air and then restate that opinion online under their own or an assumed name? If a politician then re-messages that presenter’s opinion in an online forum, does the perpetuation of the disinformation then become exempt?
Specific Issue - Reactive in nature
Without applied contemporary thinking in regulatory design, the Act appears to enable a purely reactive regulatory system, dependent on allegations and complaints being received. Enforcement action will then be taken based on an assessment of the complaint, or the volume of complaints. In certain circumstances, the regulator will use its powers to force a set of conduct rules to be put in place. There is no sense of what is being prevented, or through what proactive regulatory system.
This is a very traditional enforcement approach to regulation and does not lend itself to the preventative action supposedly envisaged in the perceived intent of the Act.
Specific Issue – Independence from government policy and how to fix it
Currently, the media regulatory system stewards an unequal regulatory domain over media because publicly funded media are not covered. This new Act reinforces the public perception of a greater shift towards government control of ideas and policy-capture of regulators. Instead, the new Act should include all the entities that are currently exempt. With a better statement of harm and threat, and with a more contemporary approach to preventative controls, the regulator will then be able to retain its independence from government policy makers, avoid being swayed by activists and lobbyists, and provide a more coherent intelligence support framework to platforms – and in doing so, deliver more preventative outcomes for the free thought of the Australian people.