A robot may not injure a human being or, through inaction, allow a human being to come to harm.

"A robot injuring a human being" by Picasso

"A robot injuring a human being" by Picasso (DALL-E)

Guaranteeing impartiality is a big challenge for AI technology.

Three Laws of Robotics

The title of this note is the first of the Three Laws of Robotics formulated by the science fiction writer Isaac Asimov. The laws first appeared in the short story "Runaround", published in 1942 in the American magazine "Astounding Science Fiction" (later renamed "Analog Science Fiction and Fact") and, later, in Asimov's book "I, Robot". Although Asimov, a Russian-Jewish naturalized American, was the first to publish them, he acknowledged that the laws grew out of conversations with John W. Campbell, the magazine's editor. The three laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three rules were perhaps the first expression of concern about the harm autonomous intelligent agents could inflict on humans. Leaving aside the psychic toll of dealing with a customer service chatbot, or the occasional bump from a robotic vacuum cleaner, we are still far from being hurt by a robot. However, we can already suffer unintended consequences of questionable decisions made by AI algorithms because of the data they learn from.

Biased, incomplete, or inadequate data sets

As an example, an AI system could recommend higher prices for supermarkets around a disaster area after a hurricane; rank resumes favoring certain types of candidates over others (e.g., men, simply because there were more male cases to learn from); or predict the distribution of health resources based on historical medical expenses, ignoring inequalities in access to these services between population groups (due to affordability or geographic context). A recent article on eWeek collects real-world cases of biased AI solutions.

These decisions are generally made by poorly designed algorithms trained on biased, incomplete, or inadequate data sets. Examples of the first kind include algorithms trained with data belonging mostly to the male, white, or young population; examples of the last include algorithms that receive input variables such as gender or race in tasks where the biological profile is not relevant.
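As a hedged illustration (all column names and values below are invented for the example), the following Python sketch shows both pitfalls on a toy hiring data set: a sample skewed toward one group, and a sensitive attribute included among the input features.

```python
# Illustrative sketch only: column names and values are invented.
import pandas as pd

applicants = pd.DataFrame({
    "years_experience": [5, 3, 8, 2, 6, 4],
    "skills_score":     [82, 75, 90, 60, 88, 70],
    "gender":           ["M", "M", "M", "F", "M", "F"],  # skewed sample
    "hired":            [1, 0, 1, 0, 1, 0],
})

# Pitfall 1: a skewed sample. Check group proportions before training.
print(applicants["gender"].value_counts(normalize=True))  # M: 0.67, F: 0.33

# Pitfall 2: a sensitive attribute used as an input feature. Drop it when
# the task does not genuinely require a biological profile.
features = applicants.drop(columns=["gender", "hired"])
labels = applicants["hired"]
```

Note that dropping the sensitive column is only a first step: other features can act as proxies for it, which is why the skew in the sample matters just as much.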

Ethical challenges

Let's see an example. If a candidate selection algorithm learns from a sample of resumes that is predominantly white and male then, statistically speaking, the majority of suitable candidates will belong to that social group. Our originally naive and unbiased algorithm might therefore learn to associate white male profiles with suitable candidates, to the detriment of female or non-white profiles with higher scores. Once trained, the algorithm will frequently make wrong decisions and reject the best qualified candidate. Similar situations can arise in diagnostic systems based on medical imaging if the training data contains very unequal proportions of the two genders and/or of different age groups. In these cases, the system will always make better diagnoses for people from the best represented population groups, who will generally coincide with those who have better access to health systems.
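A minimal simulation of this effect (a sketch with invented numbers, assuming a single "skill" score and scikit-learn at hand) shows how a classifier trained on a male-dominated historical sample ends up scoring the male profile higher even when qualifications are identical:

```python
# Sketch with synthetic data: all numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
male = rng.random(n) < 0.85             # 85% of the historical sample is male
skill = rng.normal(60, 10, n)           # same skill distribution in both groups
# Recorded outcomes: driven by skill, but past hiring also favored men,
# so the labels themselves are biased.
hired = skill + 8 * male + rng.normal(0, 5, n) > 65

X = np.column_stack([skill, male])      # gender leaks into the features
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two equally qualified candidates, differing only in gender:
probe = np.array([[70.0, 1.0], [70.0, 0.0]])
print(model.predict_proba(probe)[:, 1])  # the male profile gets a higher score
```

The point is not the specific numbers but the mechanism: the model faithfully reproduces the bias baked into its training labels.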

As the reader will have seen, one of the biggest ethical challenges in creating AI technology is guaranteeing the impartiality of its decisions. If a system cannot provide impartial and transparent decisions, in which the cause-effect relationship between the input data and the proposed solution can be identified and explained, AI systems will continue to be met with mistrust by the very professionals whose decisions could be dramatically improved with the support of smart solutions (a higher success rate in significantly shorter times).
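One simple way to make such impartiality measurable, sketched below with illustrative names and toy data, is to compare positive-decision rates across groups, the so-called demographic parity gap:

```python
# Sketch of a basic impartiality audit; names and data are illustrative.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-prediction rates between group 1 and group 0."""
    return predictions[group == 1].mean() - predictions[group == 0].mean()

# Toy decisions: group 1 is selected at a rate of 0.8, group 0 at 0.2.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(demographic_parity_gap(preds, group))  # ~0.6 -- far from impartial
```

A gap near zero is a necessary (though not sufficient) signal of impartiality; audits like this do not replace the explainability requirement, they complement it.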

"Robotic vacuum cleaner at the beach" by Andy Warhol

"Robotic vacuum cleaner at the beach" by Andy Warhol (DALL-E)

For example, in the case of assisted medical diagnosis, AI solutions need not be intended to replace medical specialists; they could instead be seen as a first or second diagnostic opinion, including an explanation of possible causes. This information could increase, in fractions of a second, the quantity and quality of the evidence available for a more informed final decision. In addition, correctly designed intelligent systems for assisted diagnosis could reduce the subjectivity of diagnostic judgments, since they could learn from decisions agreed upon by multiple specialists.

"Robots fighting a battle versus XX Roman army with horses in the Danubio river" by Sandro
Botticelli

"Robots fighting a battle versus XX Roman army with horses in the Danubio river" by Sandro Botticelli (DALL-E)

In November 2021, the UNESCO General Conference agreed on a catalogue of ethical recommendations for AI that are expected to guide the design of intelligent solutions. In addition to protecting and promoting human rights and human dignity, these recommendations should provide a normative basis for building the rule of law in the digital world.