Over the past 18 months there has been a widespread push by many countries and geographic regions to examine how the creation and use of Artificial Intelligence (AI) can be regulated. I’ve written many blog posts about these. But it isn’t just governments or political alliances that are doing this; other types of organisations are too.
NATO, the political and (mainly) military alliance, has also joined the club. They have released a summary version of their AI Strategy. It might seem a little strange for this type of organisation to do something like this, but if you look a little closer NATO also says they work together in other areas such as Standardisation Agreements, Crisis Management, Disarmament, Energy Security, Climate/Environment Change, Gender and Human Security, and Science and Technology.
In October/November 2021, NATO formally adopted their Artificial Intelligence (AI) Strategy (for defence). Their AI Strategy outlines how AI can be applied to defence and security in a protected and ethical way (interesting wording). Their aim is to position NATO as a leader in AI adoption, and it provides a common policy basis to support the adoption of AI systems in order to achieve the Alliance’s three core tasks of Collective Defence, Crisis Management and Cooperative Security. An important element of the AI Strategy is to ensure interoperability and standardisation. This is a little bit more interesting and perhaps has a lesser focus on ethical use.
NATO’s AI Strategy contains the following principles of Responsible use of AI (in defence):
- Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
- Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.
- Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.
- Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures.
- Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.
- Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.
By acting collectively, members of NATO will ensure a continued focus on interoperability and the development of common standards.
Some points of interest:
- Bias Mitigation efforts will be adopted with the aim of minimising discrimination against traits such as gender, ethnicity or personal attributes. However, the strategy does not say how bias will be tackled, which requires structural changes that go well beyond the use of appropriate training data.
- The strategy also recognises that in due course AI technologies are likely to become widely available, and may be put to malicious uses by both state and non-state actors. NATO’s strategy states that the alliance will aim to identify and safeguard against the threats from malicious use of AI, although again no detail is given on how this will be done.
- Running through the strategy is the idea of interoperability – the desire for different systems to be able to work with each other across NATO’s different forces and nations without any restrictions.
- What about autonomous weapon systems? Some members do not support a ban on this technology.
- The principles use similar wording to those adopted by the US Department of Defense for the ethical use of AI.
- NATO wants to make defence and security more attractive to private-sector and academic AI developers/researchers.
- The NATO principles have no coherent means of implementation or enforcement.
In May this year (2021) the EU released a draft version of their EU Artificial Intelligence (AI) Regulations. It was released in May to give all countries some time to consider it before more detailed discussions on refinements towards the end of 2021, with planned enactment during 2022.
The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. One of the primary aims is to ensure people can trust AI, and to provide a framework covering the categorization of AI systems and the controls needed for their safe use.
The draft EU AI Regulations consist of 81 pages, including 18 pages of introduction and background material, 69 Articles and 92 Recitals. (Recitals are the introductory statements in a written agreement or deed, generally appearing at the beginning, similar to a preamble. They set out a précis of the parties’ intentions: what the contract is for, who the parties are, and so on.) It isn’t an easy read.
One of the interesting things about the EU AI Regulations, and this will have the biggest and widest impact, is their definition of Artificial Intelligence.
‘Artificial Intelligence System’ or ‘AI system’ means software that is developed with one or more of the approaches and techniques listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the real or virtual environments they interact with. AI systems are designed to operate with varying levels of autonomy. An AI system can be used as a component of a product, also when not embedded therein, or on a stand-alone basis and its outputs may serve to partially or fully automate certain activities, including the provision of a service, the management of a process, the making of a decision or the taking of an action;
When you examine each part of this definition you will start to see how far-reaching these regulations are. Most people assume they only affect the IT or AI industry, but they go much further than that. They affect nearly all industries. This becomes clearer when you look at the techniques listed in Annex I.
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES
- (a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference/deductive engines, (symbolic) reasoning and expert systems;
- (c) Statistical approaches, Bayesian estimation, search and optimization methods.
It is (c) that will cause the widest application of the regulations: statistical approaches to making decisions. Part of the problem is what they mean by “statistical approaches”. Could adding two numbers together be considered statistical? Or performing a simple comparison? This part of the definition will need some clarification, and the regulations do say this list may be expanded over time without needing to update the Articles. This needs to be carefully watched and monitored by all.
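To see how broad item (c) could be, here is a deliberately trivial, hypothetical illustration in Python (the loan-screening scenario and numbers are my own, not from the regulations): a pre-screening rule that does nothing more than compare an applicant’s score against a historical mean. Whether such simple arithmetic counts as a “statistical approach” generating a “decision influencing a real environment” is exactly the kind of question that will need clarification.

```python
# Hypothetical example: might this fall under Annex I(c)?
# It only computes a mean and performs a comparison, yet it generates
# a decision influencing a real environment (a loan application).

from statistics import mean

# Illustrative historical data, invented for this sketch
historical_scores = [620, 710, 580, 690, 740, 655]

def pre_screen(applicant_score: float) -> str:
    """Refer the applicant for review if they beat the historical average."""
    threshold = mean(historical_scores)  # a simple statistical estimate
    return "refer for review" if applicant_score >= threshold else "decline"

print(pre_screen(700))  # refer for review
print(pre_screen(600))  # decline
```

Nothing here resembles machine learning, yet because the output is a decision derived from a statistical quantity, an expansive reading of Annex I(c) could arguably bring it into scope.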
At a simple level the regulations give a framework for defining or categorizing AI systems and what controls need to be put in place to support this. The image below is typically used to represent these categories.
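The draft’s pyramid of categories (unacceptable risk, high risk, limited risk with transparency duties, and minimal risk) can be sketched as a simple lookup. This is only an illustrative sketch: the tier names reflect the public draft, but the example use cases and their mapping below are simplified placeholders, not the legal text.

```python
# Illustrative sketch of the EU draft's four risk tiers.
# The use cases listed here are simplified examples, not the legal wording.

RISK_TIERS = {
    "unacceptable": {"social scoring by public authorities",
                     "subliminal manipulation causing harm"},
    "high": {"credit scoring", "recruitment screening",
             "biometric identification"},
    "limited": {"chatbot", "deepfake generation"},  # transparency duties only
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted is 'minimal'."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "minimal"

print(risk_tier("credit scoring"))  # high
print(risk_tier("spam filtering"))  # minimal
```

The point of the tiers is that the compliance burden (conformity assessments, documentation, monitoring) scales with the tier, and unacceptable-risk uses are prohibited outright.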
The regulations will require companies to invest a lot of time and money in ensuring compliance. This will involve training, support, audits, oversight, assessments, etc., not just initially but also on an annual basis, with some reports estimating an annual cost of several tens of thousands of euro per model per year. Again we can expect some clarification of this, as the cost of compliance may far exceed the usefulness or financial benefit of using the AI.
At the same time many other countries are looking at introducing similar regulations or laws. Many of these are complementary to each other, and perhaps there is a degree of watching what each other is doing, to ensure there is a common playing field around the globe. This in turn will make it easier for companies to assess their compliance, reduce their workload and ensure they are complying with all requirements.
Most countries within the EU are creating their own AI Strategies, to support development and job creation, all within the boundaries set by the EU AI Regulations. Here are details of Ireland’s AI Strategy.
Watch this space for more posts and details about the EU AI Regulations.
Over the past few weeks/months we have seen more and more countries addressing the potential issues and challenges with Artificial Intelligence (and its components of Statistical Analysis, Machine Learning, Deep Learning, etc). Many countries have adopted, or are adopting, laws controlling how and where these new technologies can be used. Many of these legal frameworks have implications beyond their geographic boundaries. This makes working with such technologies an ever-increasing and very difficult challenge.
In this post, I’ll have a look at the new AI Regulations Framework recently published in Australia.
[I’ve written posts on what other countries had done. Make sure to check those out]
The Australian AI Regulations Framework, available from tech.humanrights.gov.au, is a 240-page report giving 38 different recommendations. The framework does not present any new laws, but provides a set of recommendations for the government to address and enact in new legislation.
It should be noted that a large part of this framework is focused on Accessible Technology, and it is great to see such recommendations. Apart from the section relating to Accessibility, the report contains two main sections: one addressing the use of Artificial Intelligence (AI), and one on how to support the implementation and regulation of any new laws through the appointment of an AI Safety Commissioner.
Focusing on the section on the use of Artificial Intelligence, the following is a summary of the 20 recommendations:
Chapter 5 – Legal Accountability for Government use of AI
Introduce legislation to require that a human rights impact assessment (HRIA) be undertaken before any department or agency uses an AI-informed decision-making system to make administrative decisions. When an AI-informed decision is made, measures are needed to improve transparency, including notification of the use of AI, strengthening the right to reasons or an explanation for AI-informed administrative decisions, and independent review of all AI-informed administrative decisions.
Chapter 6 – Legal Accountability for Private use of AI
In a similar manner to governmental use of AI, human rights and accountability are also important when corporations and other non-government entities use AI to make decisions. Corporations and other non-government bodies are encouraged to undertake HRIAs before using AI-informed decision-making systems, and individuals should be notified about AI-informed decisions affecting them.
Chapter 7 – Encouraging Better AI Informed Decision Making
Complement self-regulation with legal regulation to create better AI-informed decision-making systems: standards and certification for the use of AI in decision making, ‘regulatory sandboxes’ that allow for experimentation and innovation, and rules for government procurement of decision-making tools and systems.
Chapter 8 – AI, Equality and Non-Discrimination (Bias)
Bias occurs when AI decision making produces outputs that result in unfairness or discrimination. Examples of AI bias have arisen in the criminal justice system, advertising, recruitment, healthcare, policing and elsewhere. The recommendation is to provide guidance for government and non-government bodies on complying with anti-discrimination law in the context of AI-informed decision making.
Chapter 9 – Biometric Surveillance, Facial Recognition and Privacy
There is a lot of concern around the use of biometric technology, especially Facial Recognition. The recommendations include law reform to provide better human rights and privacy protection regarding the development and use of these technologies, through regulating facial and biometric technology (Recommendations 19, 21), and a moratorium on the use of biometric technologies in high-risk decision making until such protections are in place (Recommendation 20).
In addition to the recommendations on the use of AI technologies, the framework also recommends the establishment of an AI Safety Commissioner to support ongoing efforts to build capacity and implement regulations, to monitor and investigate the use of AI, and to support the government and private sector in complying with laws and ethical requirements on the use of AI.