<?xml version="1.0" encoding="ISO-8859-1"?> <docID>328408</docID> <postdate>2024-09-05 08:45:06</postdate> <headline>AI laws for high-risk uses coming as guidelines arrive</headline> <body><p><img class="size-full wp-image-328409" src="https://citynews.com.au/wp-content/uploads/2024/09/20070430000034598451-original-resized.jpg" alt="" width="900" height="634" /></p> <caption>Restrictions on high-risk AI uses in Australia, such as facial recognition, are still weeks away. (Andrew Brownbill/AAP PHOTOS)</caption> <p><span class="kicker-line">By <strong>Jennifer Dudley-Nicholson</strong></span></p> <p><strong>Businesses will be encouraged to warn customers when they are using AI, identify risks from the technology, and establish ways for people to appeal decisions made by AI under voluntary guidelines.</strong></p> <p>Federal Industry and Science Minister Ed Husic revealed the voluntary artificial intelligence standards on Thursday following their development by an expert advisory group.</p> <p>But mandatory restrictions on high-risk AI uses in Australia, such as facial recognition, medical devices and recruitment processes, are still weeks away, with the government revealing it would hold a four-week consultation on their development.</p> <p>The AI guidelines come more than a year after the government launched a discussion paper on the technology, and as a Senate inquiry into adopting AI continues in Canberra.</p> <p>Mr Husic said the consultation and inquiry had shown businesses and citizens wanted regulations placed on the use of generative AI tools.</p> <p>"Australians want stronger protections on AI - we've heard that, we've listened," he said.</p> <p>"Australians know AI can do great things but people want to know there are protections in place if things go off the rails."</p> <p>The Voluntary AI Safety Standard features 10 guidelines for organisations, including those that develop the software and those that deploy it within their operations.</p> <p>The rules range from establishing an AI 
strategy and training, identifying risks and protecting privacy, to testing and monitoring AI tools, disclosing the use of AI to customers and identifying potential bias.</p> <p>The guidelines also recommend organisations provide a way to "contest decisions, outcomes or interactions that involve AI," so those affected by decisions can appeal them.</p> <p>AI tools have, for example, been caught making biased decisions in employee recruitment and wrongly identifying criminal suspects based on their ethnicity.</p> <p>Australians will have to wait longer for mandatory restrictions on the use of AI technology, however, as the government released a discussion paper detailing potential approaches to regulation.</p> <p>The paper outlined three options for restricting high-risk uses of the technology: an AI-specific law, changes to existing laws, or a framework for government decisions.</p> <p>Mr Husic said the proposals would be considered in a four-week public consultation, which would also establish the highest-risk uses of AI technology.</p> <p>"This important work will protect Australians from potential harms," he said.</p> <p>"For instance, the potential for AI algorithms to create bias based on a person's race, gender or age... in the use of algorithms to vet CVs in the recruitment industry."</p> <p>Risky uses of the technology identified in other countries, the paper noted, included using AI to determine admission to educational programs, to influence an individual's emotions, or to profile individuals by law enforcement.</p> <p>The Senate inquiry into AI is expected to sit again on September 11 before reporting to the parliament by September 19.</p> </body>