The US government’s recent ban on Israeli technology firm NSO Group’s Pegasus spyware has significant implications for Australian efforts to regulate digital technologies in the face of new online national security threats.
Putting human rights and democratic freedoms at the centre of US foreign policy was one of Joe Biden’s key election promises. His administration has honoured that promise by blacklisting NSO Group for selling its Pegasus software to governments that used it to abuse those principles. This move puts the growing challenge states face in regulating cyber and digital technologies squarely at the heart of US policy and strategy.
This is a strong statement to the international community, especially given the US’s historical support for Israel. For Australia, it’s particularly significant given Canberra and Washington’s renewed commitment to working together to maintain security in the Indo-Pacific region through the AUKUS agreement, plus their longstanding cooperation as members of the Five Eyes intelligence-sharing arrangement. Australia’s new cross-border communications act, or ‘Cloud Act’, which enables data sharing with partners like the US based on common values, highlights the importance of Australia and the US seeing eye to eye on ethical regulation of digital technology.
Israel is already lobbying the US to remove the ban, arguing that Pegasus is critical to its foreign policy. NSO Group maintains that Pegasus is a national security tool for governments to stop transnational organised crime and violent extremist groups from using the ‘dark’ parts of the internet to conduct business.
The Australian government is not an existing or prospective client for Pegasus. But it used the same justification as NSO Group to pass a swathe of bills significantly increasing the powers of police and intelligence agencies to spy on Australians. In addition to the Cloud Act, legislation was enacted that enables agencies to access encrypted data and to alter data. The speed with which these bills passed the parliament, the uncertain safeguards against scope creep and the rushed consultation with industry sparked serious concerns. The Australian Federal Police’s use of the encryption law in Operation Ironside also raised concerns that the expanded powers, and the undemocratic overreach the legislation allows, could make Australia a policing partner of choice. The AFP’s refusal to say how the law’s powers were used further damaged public trust.
Certain conditions and warrants are necessary for Australian agencies to use these laws; it’s not an authoritarian free-for-all in the way Viktor Orban’s government used Pegasus to ‘wage war’ on the media in Hungary. But by overriding Australians’ civil liberties based on arguments about countering unprecedented threats to national security, the government is building a legal framework to enable policing of the internet that’s disturbingly similar to how Pegasus enables spying through the ability to access, decrypt and even alter data in online accounts and apps on devices.
Australia’s laws don’t allow devices to be remotely activated for audio recording. But given the justifications for the powers that have been granted—that new, exceptional threats justify new, exceptional measures—that may yet come.
The problem is not finding new strategies and tools to police cyberspace; those are needed. But when we legalise new security powers based on the argument that a threat landscape is ‘unprecedented’ and ‘exceptional’, it’s difficult to then define what other threats are similarly ‘exceptional’ and what is justified by ‘exceptional circumstances’. We saw this with the pseudo-legal framework built by the George W. Bush administration to allow widespread use of torture during the ‘war on terror’. That policy has been widely condemned as unethical, as contrary to international and US domestic law, and even as a failure at yielding actionable intelligence.
If the Morrison government is as serious as the Biden administration is about protecting civil liberties in the digital age, it should spend as much effort on building legal frameworks to regulate and govern the fourth industrial revolution according to democratic principles as it does on policing it. Where is Australia’s equivalent of the European Union’s regulations on data protection and privacy for digital tech, given how quickly we’ve passed these policing bills?
And why is the new artificial intelligence ethics framework for government and business entirely voluntary, when AI products are already being used by Queensland Police to risk-profile possible domestic violence offenders? That’s despite the well-known limitations of available algorithms, which struggle to accurately screen police data without exacerbating human bias and discrimination. Given the threat this poses to already overpoliced communities, where are the hastily passed bills to protect democratic rights in an age when emerging technologies offer flashy ‘solutions’ that carry their own ethical problems on implementation?
It’s easy to applaud Washington’s decision to ban Pegasus. But are we not hurtling down a similar path, to the same place, propelled by the same ‘exceptional threat’ argument?