Artificial Intelligence and machine learning

Raffaella Aghemo
4 min read · Mar 4, 2020


S.A.R.I., an acronym that stands for Automatic Image Recognition System, is a new tool available to the Italian State Police to counter criminal activity. By exploiting A.F.I.S. (Automated Fingerprint Identification System), which collects the fingerprints, personal data, photographs and biometric annotations of the subjects under investigation, law enforcement agencies can count on an identification system with a database of more than 10 million records; in this way, those who commit a crime can be identified more quickly and efficiently.
In the United States, another system is used: C.O.M.P.A.S. (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by judges to estimate the probability that an offender will reoffend within two years of a crime.

These are two concrete and recent examples of Artificial Intelligence.
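
To make the idea of such risk scoring concrete, here is a minimal, purely hypothetical sketch in Python. The real COMPAS model is proprietary, so the record fields, weights and output below are invented for illustration only; they are not the actual algorithm.

```python
# A deliberately simplified, hypothetical risk-scoring sketch.
# The real COMPAS model is proprietary; this only illustrates the idea of
# turning an offender's record into a probability-like risk score.

from dataclasses import dataclass


@dataclass
class OffenderRecord:
    age: int
    prior_convictions: int
    months_since_last_offence: int


def recidivism_score(record: OffenderRecord) -> float:
    """Return an illustrative score between 0 and 1 (higher = higher estimated risk)."""
    # Hand-picked weights, purely for illustration.
    score = 0.5
    score += 0.05 * record.prior_convictions
    score -= 0.01 * max(record.age - 25, 0)
    score -= 0.005 * record.months_since_last_offence
    return min(max(score, 0.0), 1.0)


print(recidivism_score(OffenderRecord(age=30, prior_convictions=2, months_since_last_offence=12)))
```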

For Stuart Russell and Peter Norvig, authors of “Artificial Intelligence: A Modern Approach, Global Edition”, Artificial Intelligence is a “field of study in which intelligent agents are designed and built”. In what sense? What is the literal meaning of the term? An “agent” is “anything that can perceive its environment through sensors and act on it through actuators”; “intelligent” identifies a “rational” agent, one which “for each possible sequence of perceptions is able to select, among the various possible actions in response, the one presumed to maximize its performance measure, on the basis of the data deriving from that sequence of perceptions and the knowledge built into it”.
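
In code, that definition can be sketched roughly as follows; the names and the toy “performance measure” are assumptions made for illustration, not part of Russell and Norvig’s text.

```python
# Minimal sketch of the Russell-Norvig notion of a rational agent:
# map a percept sequence to the action expected to maximise a performance measure.
# The names (Percept, Action, evaluate) are illustrative, not from any specific library.

from typing import Callable, List, Sequence

Percept = str
Action = str


def rational_agent(
    percepts: Sequence[Percept],
    possible_actions: List[Action],
    evaluate: Callable[[Sequence[Percept], Action], float],
) -> Action:
    """Select the action presumed to maximise the performance measure,
    given the percept sequence and the built-in knowledge encoded in `evaluate`."""
    return max(possible_actions, key=lambda action: evaluate(percepts, action))


# Toy usage: a thermostat-like agent choosing between heating and waiting.
def comfort(percepts: Sequence[Percept], action: Action) -> float:
    cold = percepts[-1] == "cold"
    return 1.0 if (cold and action == "heat") or (not cold and action == "wait") else 0.0


print(rational_agent(["warm", "cold"], ["heat", "wait"], comfort))  # -> "heat"
```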
What matters most in this pairing is the “form of knowledge” against which actions are measured: a mere application of the data, according to the programmed logic, is replaced by a vision that includes ethics as part of the operation. It is no coincidence that an appeal was brought before the American criminal courts by a Black defendant, after a sentence based on a recidivism assessment, on the grounds that the system favoured white defendants over African Americans. Rather than discarding the entire system, the courts reinforced the principle of close collaboration between the human element, capable of judgement, and the technological one. Where differences of gender, ethnicity or social origin can produce different evaluation biases (bias being defined by Wikipedia as “a form of distortion of evaluation caused by prejudice”), the data on which Artificial Intelligence rests must not induce it to make evaluations, or take actions, that vary according to the category of population concerned, as sketched below.
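
As a purely illustrative sketch of what “not varying by category of population” could mean in practice, one might compare a system’s average scores across groups; the group labels, scores and threshold below are assumptions for the example, not a legal or technical standard.

```python
# Illustrative check (not a legal standard): compare a model's average
# predicted risk across population groups to spot evaluation bias.
# Group labels, scores and the 0.1 threshold are assumptions chosen for the example.

from collections import defaultdict
from typing import Dict, List, Tuple


def average_score_by_group(predictions: List[Tuple[str, float]]) -> Dict[str, float]:
    """predictions: list of (group_label, predicted_score) pairs."""
    sums: Dict[str, float] = defaultdict(float)
    counts: Dict[str, int] = defaultdict(int)
    for group, score in predictions:
        sums[group] += score
        counts[group] += 1
    return {group: sums[group] / counts[group] for group in sums}


scores = [("group_a", 0.62), ("group_a", 0.58), ("group_b", 0.41), ("group_b", 0.39)]
averages = average_score_by_group(scores)
gap = max(averages.values()) - min(averages.values())
print(averages, "gap:", round(gap, 2))
if gap > 0.1:  # arbitrary illustrative threshold
    print("Warning: scores differ noticeably between groups; the training data should be reviewed.")
```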
From an ethical point of view, then, Artificial Intelligence poses problems of transparency and openness, since it is often impossible to determine either the data on which it bases its operation or the architecture of its algorithms, both of which are covered by trade secrecy.
It is no coincidence that on February 16, 2017, the South Korean Government ordered its Ministries and Government Agencies to present new legal standards to define the legal status, responsibilities and ethical profiles of the industries involved in the production and study of artificial intelligence.
The primary objective of all governments, more or less advanced, is to start thinking about how to use AI, but also, and above all, how to regulate it, so that it does not govern us but instead becomes an aid in the management of everyday life. At the end of 2017, representatives of the United States government presented a bill, the “Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017”, to promote and govern AI.

DARPA (Defense Advanced Research Projects Agency), the U.S. Pentagon’s technology research agency, announced in September 2018 an investment of two billion dollars in the development of new Artificial Intelligence solutions. China has not stood idly by, promoting investments of more than two billion dollars to stimulate the development of new technologies.
By now the well-known contest among the superpowers, Russia included (which is investing in war machines capable of learning on their own), is being fought not only on the economic front but also on the technological one.

David Weinberger, an American thinker and philosopher, writes: “We should all be concerned that artificial intelligence (AI) systems based on “machine learning”, to which we entrust decisions, may make serious mistakes, harming individuals and whole categories of people. Sometimes the mistakes are relatively harmless. […] It is devastating when, in the USA, an AI recommends that black men should be kept in prison longer than white men simply because of their race.”
What in an old 1983 film, War Games, seemed an indefinite and distant future is now a concrete reality of our times, one with which everyone, sooner or later, will have to come to terms. The ability of a computer to learn from its mistakes and correct itself, although it represents the dawn of a new era, should not be underestimated: it must be understood and “educated”, so that we are not subjugated by it.

All Rights Reserved

Raffaella Aghemo, Lawyer
