How to Govern Algorithms



How can we control algorithms and govern artificial intelligence before it governs us? This article sketches the main lines of the current and future landscape.

The debate on the biases and perverse effects of algorithms has grown steadily since 2018. That was when the case of the British firm Cambridge Analytica, which used private data obtained from Facebook to try to influence elections, came to light. The case made us reflect on the risks of applying personalized algorithms that exploit individuals' psychological traits for political ends.

In the words of Chris Wylie, the former employee of the company who uncovered the case:

"Instead of being in the public square, saying what you think and then letting people come, listen to you and have that shared experience of what your narrative is, you are whispering in the ears of each and every one of the voters. And you can whisper one thing to one and another different to another. "

This experience, generalized but not shared, is known as the filter bubble or echo chamber. The term captures how the same technologies that connect us also isolate us in informational bubbles that reinforce certain opinions and leave us ever more vulnerable to manipulation.
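
To make the feedback loop concrete, here is a toy sketch in Python (purely hypothetical, not any real platform's recommender) of how repeatedly serving the content closest to a user's current profile narrows the range of opinions that user sees:

```python
# Toy model of a filter bubble: items carry an "opinion score" in [-1, 1],
# and the recommender always serves the unread item nearest the user's
# profile. All values here are illustrative assumptions.
import random
import statistics

items = [i / 50 - 1 for i in range(101)]  # opinion scores from -1.0 to 1.0
profile = 0.3                             # the user's initial leaning
consumed = []

for _ in range(20):
    # Personalization step: recommend the unread item closest to the profile.
    pick = min((x for x in items if x not in consumed),
               key=lambda x: abs(x - profile))
    consumed.append(pick)
    profile = statistics.mean(consumed)   # the profile drifts toward past reads

print("personalized diet:", round(min(consumed), 2), "to", round(max(consumed), 2))

# Baseline without personalization: the same user sampling items at random.
random_diet = random.sample(items, 20)
print("random diet:      ", round(min(random_diet), 2), "to", round(max(random_diet), 2))
```

The personalized diet stays confined to a narrow band around the user's starting leaning, while the random baseline spans almost the whole spectrum. It is this self-reinforcing loop, at the scale of a real platform, that the term describes.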

The Cambridge Analytica case is part of a broader set of attempts to manipulate public opinion en masse through social engineering, which Facebook calls information operations: actions undertaken by organized actors (governments or non-state organizations) to distort the political sentiment of a population in order to achieve a specific strategic or geopolitical outcome.

Surveillance capitalism

The social psychologist Shoshana Zuboff, in her book The Age of Surveillance Capitalism, articulates something we had already intuited: that the manipulation of opinions and behaviors is an integral part of a capitalism built on digital surveillance.

In this context, data, especially personal data, plays a key role for two reasons. First, it forms part of the foundation of the digital economy, the most promising model of economic development we have. Second, its analysis makes it possible to steer collective behavior, at ever larger scale and at ever greater speed.

Unfortunately, the very capacity to scale that we demand of every computer system also amplifies, exponentially, the risk of any failure. The mathematician Cathy O'Neil spoke in 2017 of "weapons of math destruction" (a play on weapons of mass destruction) to emphasize the scale, potential harm and opacity of decision-making systems based on machine-learning algorithms.

Law professor and artificial intelligence expert Frank Pasquale addresses the problem of introducing accountability mechanisms (which seek fairness and the attribution of responsibility) into automated processes, rather than treating those processes as black boxes that hide behind the proprietary rights of the private companies that developed them.

Solon Barocas, who studies the ethical and policy issues of artificial intelligence, spoke in 2013 of the "governance of algorithms" and of the need to question these systems and analyze their effects from a legal and public-policy perspective.

The FAT* conference, whose third edition will be held in Barcelona in January 2020, has become the reference meeting for those who want to address the fairness, accountability and transparency of automated systems.

Europe vs. USA

In Europe, Article 22 of the General Data Protection Regulation (GDPR) requires human oversight of, and a right to contest, decisions made by automated decision-making and profiling systems.

In the US, failures such as the inability of algorithms trained on databases with an over-representation of photos of white people to identify people of color have also motivated efforts to open up and audit these systems.
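
A minimal sketch of that mechanism, using entirely synthetic data and a generic classifier rather than any real facial-recognition system, shows how over-representing one group in the training data produces unequal error rates at test time:

```python
# Minimal sketch of training-set imbalance: 90% of the training data comes
# from group A, 10% from group B, whose samples follow a different
# distribution. All data, groups and parameters are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n two-feature samples and labels for one synthetic group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

Xa, ya = make_group(9000, shift=0.0)  # over-represented group A
Xb, yb = make_group(1000, shift=2.0)  # under-represented group B

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples: accuracy is far lower for group B,
# because the single decision boundary is fitted mostly to group A.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(5000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Real systems fail in subtler ways, but the lesson is the same one the audits mentioned above pursue: error rates must be examined per group, not only in the aggregate.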

These cases, along with other disturbing ones in the legal and policing domains, have been taken as examples of a lack of sensitivity to racial and gender issues among programmers. This is what Sarah Myers West, Meredith Whittaker and Kate Crawford argue in the white paper Discriminating Systems: Gender, Race, and Power in AI.

The publication is in line with authors such as Andrew Selbst, who investigates the legal effects of technological change. It aims to draw attention to the need to contextualize any discourse on technological development and to analyze it as a socio-technical system, as science, technology and society studies already do.

The automation of decision processes poses a major technical, legal, managerial and moral challenge. It reaches every domain, from the detection of fake news and fraud to medical diagnosis and the incarceration of suspects. Creating a space for multidisciplinary dialogue is therefore necessary.

Artificial intelligence systems will always be the product of their programmers' biases, heuristics and blind spots. Opening a debate on which values we want to encode in our systems so that humanity can flourish is everyone's responsibility.