Australia's New AI Regulations Framework


Over the past few weeks and months we have seen more and more countries addressing the potential issues and challenges posed by Artificial Intelligence (and its components of Statistical Analysis, Machine Learning, Deep Learning, etc.). Each country has adopted, or is moving to adopt, laws controlling how and where these new technologies can be used. Many of these legal frameworks have implications beyond their geographic boundaries, which makes working with such technology an ever-increasing and very difficult challenge.

In this post, I'll have a look at the new AI Regulations Framework recently published in Australia.

[I've written posts on what other countries have done. Make sure to check those out.]

The Australian AI Regulations Framework, available from tech.humanrights.gov.au, is a 240-page report giving 38 different recommendations. The framework does not present any new laws, but provides a set of recommendations for the government to address and enact new legislation.

It should be noted that a large part of this framework is focused on Accessible Technology, and it is great to see such recommendations. Apart from the section relating to Accessibility, the report contains two main sections: one addressing the use of Artificial Intelligence (AI), and one on supporting the implementation and regulation of any new laws through the appointment of an AI Safety Commissioner.

Focusing on the section covering the use of Artificial Intelligence, the following is a summary of the 20 recommendations:

Chapter 5 – Legal Accountability for Government use of AI

Introduce legislation to require that a human rights impact assessment (HRIA) be undertaken before any department or agency uses an AI-informed decision-making system to make administrative decisions. Where such decisions are made, measures are needed to improve transparency, including notifying people when AI is used, strengthening the right to reasons or an explanation for AI-informed administrative decisions, and providing independent review of all AI-informed administrative decisions.

Chapter 6 – Legal Accountability for Private use of AI

In a similar manner to governmental use of AI, human rights and accountability are also important when corporations and other non-government entities use AI to make decisions. These bodies are encouraged to undertake HRIAs before using AI-informed decision-making systems, and individuals should be notified about AI-informed decisions affecting them.

Chapter 7 – Encouraging Better AI Informed Decision Making

Complement self-regulation with legal regulation to create better AI-informed decision-making systems. This includes standards and certification for the use of AI in decision making, 'regulatory sandboxes' that allow for experimentation and innovation, and rules for government procurement of decision-making tools and systems.

Chapter 8 – AI, Equality and Non-Discrimination (Bias)

Bias occurs when AI decision making produces outputs that result in unfairness or discrimination. Examples of AI bias have arisen in the criminal justice system, advertising, recruitment, healthcare, policing and elsewhere. The recommendation is to provide guidance for government and non-government bodies on complying with anti-discrimination law in the context of AI-informed decision making.

Chapter 9 – Biometric Surveillance, Facial Recognition and Privacy

There is a lot of concern around the use of biometric technology, especially Facial Recognition. The recommendations include law reform to provide better human rights and privacy protection regarding the development and use of these technologies through the regulation of facial and biometric technology (Recommendations 19, 21), and a moratorium on the use of biometric technologies in high-risk decision making until such protections are in place (Recommendation 20).

In addition to the recommendations on the use of AI technologies, the framework also recommends the establishment of an AI Safety Commissioner to support the ongoing work of building capacity and implementing regulations, to monitor and investigate the use of AI, and to help the government and the private sector comply with laws and ethical requirements around the use of AI.
