Filter Bubble and Regulatory Proposals

When we browse the web, our behavior, our searches, and our habits are profiled by algorithms, which then shape the path of our future navigation. This is the phenomenon known as the “filter bubble”, a term taken from the title of Eli Pariser’s book “The Filter Bubble: What the Internet Is Hiding from You”.
Through algorithmic selection, the searches made by an Internet user influence the results he will receive in the future, with the consequence that he is shown the answers he likes most, cut off from any genuine confrontation with different and opposing points of view. This phenomenon relegates the user to an ideological and cultural “bubble”, transforming the Web, in the words of its inventor Tim Berners-Lee, into a “closed content warehouse”.
These ranking systems, which operate through searches, recommendations, and social media posts, fall into two categories: “opaque algorithms” and “input-transparent algorithms”.
The former are ranking systems capable of determining the order or manner in which information is presented to a user on the network, based on user-specific data that the user did not expressly provide for that purpose.
“Input-transparent algorithms”, on the other hand, are ranking systems that do not use user-specific data to determine how information is presented, unless the user expressly provides that data for the purpose. An example is Twitter’s “sparkle button”, which lets an account see the most recent tweets at the top of the timeline instead of those chosen by the platform.
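To make the distinction concrete, here is a minimal Python sketch; the data structures and the affinity score are hypothetical illustrations, not any platform’s actual code. The input-transparent feed orders posts using only data the user expressly provided (the accounts followed) plus recency, while the opaque feed also weighs an engagement score inferred from data the user never supplied for that purpose.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def transparent_feed(posts, followed_accounts):
    # "Input-transparent": relies only on expressly provided data
    # (the follow list) and recency -- the ordering Twitter's
    # "sparkle button" switches to.
    visible = [p for p in posts if p.author in followed_accounts]
    return sorted(visible, key=lambda p: p.posted_at, reverse=True)

def opaque_feed(posts, followed_accounts, inferred_affinity):
    # "Opaque": also weighs an affinity score inferred from browsing
    # history and similar data the user never expressly provided.
    visible = [p for p in posts if p.author in followed_accounts]
    return sorted(visible,
                  key=lambda p: inferred_affinity.get(p.author, 0.0),
                  reverse=True)
```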

On October 31, 2019, a group of American senators introduced the Filter Bubble Transparency Act (FBTA), a bill that would require large online platforms to be more transparent in their use of algorithms driven by user-specific data, prohibiting the use of opaque algorithms without prior notice: the notice must be clear and conspicuous, and shown when a user interacts with the opaque algorithm for the first time. Moreover, the user must be offered the choice of an unfiltered version of the platform, driven by input-transparent algorithms. These prohibitions would come into force one year after the enactment of the FBTA.
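A rough Python sketch of that notice-and-choice flow might look like the following; the function and field names are illustrative assumptions, not drawn from the bill’s text.

```python
class UserSession:
    def __init__(self):
        self.notice_shown = False
        self.use_opaque_algorithm = True  # the platform's default ranking

def show_clear_and_conspicuous_notice():
    # Hypothetical UI helper standing in for whatever notice the bill requires.
    print("This feed is ranked using data about you that you did not "
          "expressly provide. You may switch to an unfiltered version.")

def on_first_interaction(session: UserSession):
    # The FBTA would require the notice on first contact with the
    # opaque algorithm.
    if not session.notice_shown:
        show_clear_and_conspicuous_notice()
        session.notice_shown = True

def choose_unfiltered(session: UserSession, unfiltered: bool):
    # The user may opt into the input-transparent version at any time.
    session.use_opaque_algorithm = not unfiltered
```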
Under the proposed law, data “expressly provided” by the user would include:

- search terms supplied by the user, filters, speech patterns, saved preferences, and the user’s current geographic location;
- data supplied by the user that expresses a desire to receive particular information, such as the social media accounts or content the user subscribes to or chooses to follow.

Expressly provided user data would not include the following (a simple classification of both categories is sketched after this list):
- the history of the user’s connected device, including web browsing and search history, geolocation, physical activity, device interactions, and financial transactions;
- inferences about the user or the user’s device, regardless of whether those inferences are based on data the user expressly provided.
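The two categories might be tagged as follows in a minimal sketch; the labels paraphrase the bill and are not statutory text.

```python
# Data sources the bill treats as "expressly provided" by the user.
EXPRESSLY_PROVIDED = {
    "search_terms", "filters", "speech_patterns",
    "saved_preferences", "current_location",
    "followed_accounts",  # expresses a desire to receive information
}

# Data sources excluded from that definition in every case.
NOT_EXPRESSLY_PROVIDED = {
    "web_browsing_history", "search_history", "geolocation_history",
    "physical_activity", "device_interactions", "financial_transactions",
    "inferences_about_user",
}

def may_drive_transparent_ranking(data_source: str) -> bool:
    """True if this input could feed an 'input-transparent' algorithm."""
    return data_source in EXPRESSLY_PROVIDED
```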

The FBTA would apply to so-called “covered internet platforms”, i.e. to any public-facing website, web application or mobile application, excluding platforms operated solely for research purposes (and not for profit), and excluding platforms belonging to companies with fewer than 500 employees, annual gross revenue of less than fifty million dollars, and annual collection or processing of the data of fewer than one million users.
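Those thresholds combine with “and”, so a platform escapes the definition only if it stays under all three. A minimal sketch of the test, with parameter names of my own choosing:

```python
def is_covered_platform(employees: int,
                        annual_revenue_usd: int,
                        users_data_per_year: int) -> bool:
    # The small-business carve-out applies only when all three
    # thresholds are met; otherwise the platform is covered.
    exempt = (employees < 500
              and annual_revenue_usd < 50_000_000
              and users_data_per_year < 1_000_000)
    return not exempt

# e.g. a 600-employee platform is covered even with modest revenue:
# is_covered_platform(600, 10_000_000, 500_000) -> True
```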

The FBTA, if approved, would be enforced by the Federal Trade Commission (FTC) under the Federal Trade Commission Act, and is clearly oriented toward regulating the digital giants.
Presenting the bill, one of its sponsors, Senator Blumenthal, said: “Much of what we do online is increasingly shaped, without our knowledge or consent, by the companies that own these platforms, which use our personal information to choose what information consumers see or do not see. Our bipartisan bill will allow consumers to regain some control over their online experience, allowing them simply to opt out of the filter bubble”.

This bill follows April’s Algorithmic Accountability Act, which received mixed judgments as a possible obstacle to technological progress, but was born with the intention of pushing the FTC to develop regulations requiring large companies to conduct impact assessments of “high-risk automated decision-making systems”: systems capable of affecting reality with a “significant risk” to the privacy or security of personal data, of producing biased or unfair decision-making processes, or of involving sensitive personal data such as race, political and religious beliefs, gender identity and sexual orientation, and genetic information. In essence, companies would be required to assess whether the algorithms feeding their automated systems are biased or discriminatory, and whether they pose a risk to consumers’ privacy or security.
It goes without saying that the intellectual and regulatory efforts of the American Congress, and not only, are all oriented toward containing the Big Tech companies and curbing their rampant power.

All Rights Reserved

Lawyer Raffaella Aghemo

Lawyer and consultant in innovation technology, IP, copyright, communication & marketing; likes movies and books, writes legal features and book reviews