AI Regulation in Australia: Debating Bans on High-Risk Technologies and Criteria for Assessment


In a surprising move, the Australian government has launched an eight-week consultation to assess whether any “high-risk” artificial intelligence (AI) tools should be banned. This appears to be in line with recent actions taken in jurisdictions such as the United States, the European Union, and China, which are attempting to understand and mitigate the potential risks associated with rapid AI development.

On June 1, Industry and Science Minister Ed Husic unveiled two papers: a discussion paper on “Safe and Responsible AI in Australia” and a report on generative AI by the National Science and Technology Council (NSTC). The consultation, open until July 26, aims to gather feedback on how to promote the “safe and responsible use of AI.” The government is contemplating whether to adopt voluntary ethical frameworks, enforce specific regulations, or combine both approaches.

A key question in the consultation is whether certain high-risk AI applications or technologies should be prohibited, and what criteria should be employed to identify such tools. A draft risk matrix for AI models was provided for feedback. Self-driving cars, for example, were categorized as “high risk,” while a generative AI tool for creating medical patient records was deemed “medium risk.”
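The idea of a risk matrix that maps AI applications to risk levels can be sketched in code. The sketch below is purely illustrative, assuming hypothetical category names and a simple lookup; it uses only the two example classifications the paper mentions and is not the paper's actual schema.

```python
# Illustrative sketch of a draft risk matrix for AI applications.
# Category names and the lookup structure are hypothetical assumptions;
# only the two example mappings come from the discussion paper.
DRAFT_RISK_MATRIX = {
    "self-driving cars": "high",
    "generative AI for medical patient records": "medium",
}

def assess_risk(application: str) -> str:
    """Return the draft risk level for an application, or 'unclassified'."""
    return DRAFT_RISK_MATRIX.get(application, "unclassified")
```

In practice, a real framework would classify by criteria (e.g. potential for harm, scale of deployment) rather than by a fixed list of applications, which is exactly what the consultation asks respondents to help define.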

The paper cited both positive and harmful uses of AI, such as its applications in the medical, engineering, and legal sectors, and its use in deepfake tools, the creation of fake news, and cases where AI bots have encouraged self-harm. The bias of AI models and the generation of nonsensical or false information (referred to as “hallucinations”) by AI systems were also acknowledged as problems.

According to the discussion paper, AI adoption in Australia is relatively low due to low levels of public trust. The paper also referenced AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT. The NSTC report emphasized that Australia has promising AI capabilities in robotics and computer vision. However, it also conceded that the country’s “core fundamental capacity in large language models and related areas is relatively weak.”

The report further noted that “the concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potential risks to Australia.” It also examined global AI regulation and outlined examples of generative AI models, which could have a significant impact on industries such as banking, finance, public services, education, and creative sectors.

As the Australian government seeks public opinion on AI regulation, the future of AI in the country remains uncertain. However, the consultation does signal a growing global awareness of the potential risks and benefits associated with AI development.

Source: Cointelegraph
