How to defend yourself against AI, according to the American FTC

Raffaella Aghemo


In these dark times of extreme emergency, finding effective solutions is the right push towards a rebirth, and new technologies, which can best express their supporting power, take centre stage. Artificial intelligence, deep learning and machine learning can significantly increase well-being and productivity, but they also bring many unknowns with them.

The Federal Trade Commission, aware of the current state and strategic role of these new technologies, published a post on April 8, through Andrew Smith, Director of the FTC’s Bureau of Consumer Protection, precisely to offer “important lessons on how companies can manage the consumer protection risks of AI and algorithms”.

In his blog post for the FTC, Smith writes: “FTC law enforcement actions, studies and guidance stress that the use of artificial intelligence tools should be transparent, explainable, fair and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons on how companies can manage the consumer protection risks of AI and algorithms”.

With those five terms, deliberately highlighted in bold, Smith lays down the principles that should guide the use of AI.

The first warning is: BE TRANSPARENT. Companies often use AI systems in the background, for example through chatbots; here Smith urges companies not to mislead consumers about the use of chatbots, but to make their boundaries and interactions clear. Following numerous consumer complaints about their improper use on dating sites as a means of “solicitation”, or as a “social” tool to inflate follower counts, the FTC wants tighter controls on those who do not clearly disclose that they are using automated systems!
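By way of illustration only (the wording and handler below are made up, not taken from the FTC post), a chatbot can meet this transparency principle simply by disclosing its automated nature at the start of a conversation:

```python
# Minimal illustrative sketch: a chatbot handler that discloses its automated
# nature up front rather than passing itself off as a human.

def reply(user_message: str, is_first_message: bool) -> str:
    disclosure = "Hi, I'm an automated assistant, not a human. "
    answer = f"You asked: {user_message!r}. Here is what I can tell you..."
    # Prepend the disclosure on the first exchange of the conversation.
    return (disclosure + answer) if is_first_message else answer

print(reply("Can I change my subscription?", is_first_message=True))
```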

Equally important is how data sets are collected: collection must respect privacy by design, on pain of enforcement action, like the one taken against Facebook for leaving facial recognition on by default!

Moreover, if a consumer is denied, for example, a credit application on the basis of automated checks on their creditworthiness, they must be given a so-called “adverse action” notice: “The adverse action notice tells consumers of their right to see the information reported about them and to correct inaccurate information.”

The second warning is: EXPLAIN YOUR DECISION TO THE CONSUMER. Following on from the previous paragraph, if a consumer’s credit application is rejected, they must know not only what data the automated decision was based on, but also how that decision was reached: the factors that led to a negative score, and therefore to the refusal of the loan, should be set out fully and in detail, in order of importance (at least four should be indicated). If a company changes the terms of a contract on the basis of automated decisions, this should be disclosed to the consumer: this warning stems from a precedent in which a credit company lowered the credit limit of users who used their card for certain discretionary expenses, relying on an automated behavioural score.
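To make the idea concrete, here is a minimal sketch, not drawn from the FTC post, of how a lender using a simple linear scoring model might rank the factors that pushed a score down and pick the top ones to cite in an adverse action notice (all feature names, weights and values are hypothetical):

```python
# Hypothetical linear scoring model: each feature contributes weight * value to
# the score. Negative contributions are candidate adverse action reasons.

FEATURE_WEIGHTS = {
    "payment_history": 3.0,       # higher is better
    "credit_utilization": -2.5,   # higher utilization lowers the score
    "account_age_years": 1.2,
    "recent_inquiries": -1.8,
    "open_accounts": 0.4,
}

def adverse_action_reasons(applicant: dict, top_n: int = 4) -> list[str]:
    """Return up to top_n factors that contributed most negatively to the score."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS
    }
    # Sort from most negative to least negative contribution.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return [name for name, value in ranked[:top_n] if value < 0]

applicant = {
    "payment_history": 0.4,
    "credit_utilization": 0.9,
    "account_age_years": 1.0,
    "recent_inquiries": 3.0,
    "open_accounts": 2.0,
}
print(adverse_action_reasons(applicant))
# -> ['recent_inquiries', 'credit_utilization']: the factors to cite in the notice
```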

The third warning says: ENSURE THAT YOUR DECISIONS ARE FAIR. As has happened before, “discrimination” should be avoided by knowing in depth not only the model’s inputs but also its outcomes; if unfair treatment emerges, the company should examine why the model behaves that way and consider whether the same results can be achieved with a fairer alternative.
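One common way to look at outcomes rather than just inputs is to compare decision rates across groups. The sketch below is illustrative only, with made-up data; the 0.8 threshold is a widely used heuristic (the “four-fifths rule”), not an FTC requirement:

```python
# Outcome-level fairness check: compare approval rates across groups and flag
# a large gap for human review.

from collections import defaultdict

decisions = [
    # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += int(ok)

rates = {g: approved[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                      # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                   # illustrative threshold only
    print("Warning: outcomes differ markedly across groups; review the model.")
```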

From this, the fourth warning follows directly: ENSURE THAT YOUR DATA AND MODELS ARE ROBUST AND EMPIRICALLY SOUND. If you compile or sell consumer data used for automated decisions about the services to be provided, you are also subject to the US federal Fair Credit Reporting Act (FCRA), enacted to promote the accuracy, fairness and privacy of consumer information held in the files of consumer reporting agencies. These methodologies should always be updated, reviewed and revalidated in order to maintain their predictive power.
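In practice, “updated, reviewed and revalidated” can be as simple as periodically rescoring the model on fresh, labelled outcomes and comparing against the accuracy measured at validation time. A minimal sketch, with hypothetical numbers:

```python
# Periodic revalidation: flag the model for review if accuracy on fresh data
# drops below the baseline by more than an agreed tolerance.

BASELINE_ACCURACY = 0.82          # measured when the model was first validated
MAX_ALLOWED_DROP = 0.05           # illustrative tolerance before retraining

def revalidate(predictions: list[bool], outcomes: list[bool]) -> bool:
    """Return True if the model still meets the baseline, False if it needs review."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    accuracy = correct / len(outcomes)
    print(f"current accuracy: {accuracy:.2f} (baseline {BASELINE_ACCURACY:.2f})")
    return accuracy >= BASELINE_ACCURACY - MAX_ALLOWED_DROP

# Fresh outcomes collected since the last review (made up for illustration).
preds  = [True, True, False, True, False, True, True, False, True, True]
actual = [True, False, False, True, False, False, True, False, True, False]
if not revalidate(preds, actual):
    print("Model performance has slipped; schedule a review and possible retraining.")
```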

The fifth warning is perhaps the most ethically delicate, as it calls for a high sense of responsibility in those who act through AI and algorithms, echoing a concept dear to our GDPR as well, accountability: HOLD YOURSELF ACCOUNTABLE FOR COMPLIANCE, ETHICS, FAIRNESS, AND NONDISCRIMINATION. To guide companies towards the responsible use of data, the FTC suggests four questions that should be answered (a small sketch of such a check follows the list).

- How representative are your data sets?

- Does your data model take bias into account?

- How accurate are your predictions based on big data?

- Does your reliance on big data raise ethical or fairness concerns?
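As a purely illustrative sketch of the first two questions, one can compare how each group is represented in the training data with its share of a reference population and flag large gaps (all figures below are made up):

```python
# Representativeness check: large under-representation in the training data is
# a warning sign that the model may carry bias against that group.

training_counts = {"group_A": 800, "group_B": 150, "group_C": 50}
population_share = {"group_A": 0.60, "group_B": 0.25, "group_C": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    data_share = count / total
    gap = data_share - population_share[group]
    flag = "  <-- under-represented" if gap < -0.05 else ""
    print(f"{group}: {data_share:.0%} of data vs {population_share[group]:.0%} of population{flag}")
```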


Last but not least is the issue of misuse: unauthorized use of an algorithm could cause serious harm (for example, voice-cloning technologies), and therefore needs to be accompanied by strict access controls and additional safeguarding technologies, in order to prevent distorted and malicious use.
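A minimal sketch of what “strict access controls” can mean in practice, assuming a hypothetical voice-synthesis function and key store (neither is a real API): only allow-listed keys may call the model, and every attempt is logged for audit:

```python
# Gate a sensitive model behind an allow-list and an audit log.

import logging

logging.basicConfig(level=logging.INFO)
AUTHORISED_KEYS = {"key-research-01", "key-prod-voice"}   # hypothetical key store

def synthesize_voice(api_key: str, text: str) -> str:
    """Run the sensitive model only for authorised callers, logging every attempt."""
    if api_key not in AUTHORISED_KEYS:
        logging.warning("Blocked unauthorised call with key %r", api_key)
        raise PermissionError("This key is not authorised to use the voice model.")
    logging.info("Authorised call with key %r (%d chars of text)", api_key, len(text))
    return f"<audio for: {text}>"   # placeholder for the actual model output

print(synthesize_voice("key-prod-voice", "Hello"))
```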

All Rights Reserved

Raffaella Aghemo, Lawyer
