A “kill switch” that would stop artificial intelligence from running amok could form part of Australia’s defences against the emerging technology.
Mandatory restrictions around high-risk AI uses like facial recognition, medical devices and recruitment are under development.
In a discussion paper released on Thursday, the federal government outlines proposals, including an AI-specific law and a regulatory framework, to ensure organisations that develop AI models assess and manage the risks their technology poses.
Human intervention will be key for the safe and responsible use of AI, according to Industry and Science Minister Ed Husic.
“It’s not just us … the kill switch – that is the on-off switch – is under contemplation in different parts of the world,” he told reporters in Canberra on Thursday.
“If the AI that is being deployed is operating in a way that is not in line with what was expected, you’ve got to have a way to intervene.”
Though mandatory guardrails are under consultation, the federal government on Thursday released a set of voluntary guidelines around the use of AI.
They cover establishing an AI strategy and training, identifying risks, protecting privacy, testing and monitoring AI tools, disclosing the use of AI to customers and identifying potential bias.
The guidelines also recommend organisations provide a way to “contest decisions, outcomes or interactions that involve AI,” so those affected by decisions can appeal them.
AI tools have, for example, been caught making biased decisions in employee recruitment and wrongly identifying criminal suspects based on their ethnicity.
“Put simply, we can’t allow AI to undermine basic human rights like that,” Mr Husic said.
“Australians know AI can do great things but people want to know there are protections in place if things go off the rails.”
While the voluntary guidelines take effect immediately, the proposed mandatory guardrails will go through a four-week public consultation.
Australia has already been slow to act on AI reform and the government should commit to introducing legislation by 2025 at the latest, UTS Human Technology Institute co-director Edward Santow said.
Australian Academy of Technological Sciences and Engineering chief executive Kylie Walker said greater adoption of AI could add $200 billion annually to Australia’s economy, but noted safeguards were necessary.
AI is not all bad, Mr Husic insisted, and the guidelines will help Australians build the confidence they need to take advantage of the technology.
“I don’t want coming out of this people thinking that all AI is bad and therefore should not be used,” he said.
“It has some benefits, it does have some risks too.”
The government is moving too slowly on AI, the opposition’s communications spokesman David Coleman and digital economy spokesman Paul Fletcher said in a joint statement.
“Australia is being left behind by other countries on developing serious policies related to AI,” they said.
“Of course we need to be alive to the risks associated with this technology and its implications on legislation and regulations, but the Albanese government must also provide leadership and start making decisions.”
A Senate inquiry into AI is expected to report to the parliament by September 19.