White House guidance for regulating Artificial Intelligence applications | futureTEKnow by Raffaella Aghemo

On 17 November 2020, the White House published a memorandum of guidelines for government agencies on the regulation of applications of Artificial Intelligence. The document is addressed to the heads of federal agencies and departments, urging them to observe certain cautions when applying new technologies that make use of AI (the full document can be found at this link https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf). The memorandum provides guidance to all federal agencies “to guide the development of regulatory and non-regulatory approaches to technologies and industries that are enhanced or enabled by artificial intelligence (AI) and consider ways to reduce barriers to the development and adoption of AI technologies.”

“Agencies should continue to promote advances in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including principles of liberty, human rights, the rule of law, and respect for intellectual property,” reads the introductory part of the document. The memorandum focuses, however, on ‘narrow’ (also known as ‘weak’) AI, which goes beyond advanced conventional computing to learn how to perform domain-specific or specialized tasks by extracting information from data sets, or from other structured or unstructured sources of information.

Precisely for this reason, the importance of AI development and deployment calls for a regulatory approach that promotes innovation and growth, builds trust, and protects core American values through both regulatory and non-regulatory actions, while reducing unnecessary barriers to AI development and deployment.

Consistent with the law, agencies should consider the following principles when formulating approaches to regulating AI:

  1. Public trust in AI: since the adoption and acceptance of AI will depend significantly on public trust and validation, the government’s regulatory and non-regulatory approaches to AI should help nurture and foster that trust by promoting secure, robust and reliable applications, reducing incidents and protecting user privacy.
  2. Public participation: agencies are encouraged, to the extent possible, to inform the public and promote awareness and widespread availability of voluntary frameworks or standards and the creation of other informational documents.
  3. Scientific integrity and quality of information: when an agency regulates AI applications, it should, as appropriate, transparently articulate the strengths and weaknesses of the applications, expected optimizations or outcomes, risk biases and mitigations, potential impacts on competition, privacy, and personal decision-making, any implications for national security, and appropriate uses of the results of the AI application.
  4. Risk assessment and management: it will not be necessary to mitigate every foreseeable risk; indeed, a fundamental principle of regulatory policy is that all activities involve trade-offs.
  5. Benefits and costs: this consideration will include the potential benefits and costs of using AI, when compared with systems that the AI is designed to complement or replace; whether implementing AI will change the type of errors created by the system; and comparison with the degree of risk tolerated in other existing systems. In cases where a comparison with a current system or process is not available, the risks and costs of not implementing the system should also be assessed.
  6. Flexibility: agencies should pursue performance-based and flexible approaches that are technology-neutral and do not impose mandates on companies that would harm innovation.
  7. Fairness and non-discrimination: agencies should transparently consider the impacts that AI applications may have on discrimination.
  8. Disclosure and transparency: these two elements can increase public trust in AI applications by allowing (a) non-experts to understand how an AI application works and (b) technical experts to understand the process by which the AI made a particular decision. Such disclosures, when required, should be written in a format that is easy for the public to understand.
  9. Safety and security: agencies should encourage attention to safety and security throughout the AI design, development, deployment, and operation process, including consideration of methods to provide systemic resilience and to prevent bad actors from exploiting these systems.
  10. Inter-agency coordination: agencies should coordinate with each other to share experiences to ensure consistency and predictability of AI-related policies that promote AI innovation and adoption in America, while appropriately protecting privacy, civil liberties, national security, and American values, and enabling sector- and application-specific approaches.

The promulgation of this document confirms what is now apparent almost everywhere: Artificial Intelligence, once an obscure domain of computer science, has increasingly become an instrument of economic, political and social power for states around the world, and even more so for the superpowers.

Originally published at https://futureteknow.com on January 12, 2021.

Lawyer and consultant in innovation technology, IP, copyright, communication & marketing; likes movies and books; writes legal features and book reviews.