Human Rights and AI

Artificial Intelligence (AI) is an umbrella term for algorithms that allow machines to understand the world and make predictions. It is ubiquitous, and you interact with an AI algorithm on a regular basis when you:

  1. Type an email or message and take advantage of auto-suggest to complete a sentence (Natural Language Processing)
  2. Use language translation services (Natural Language Processing)
  3. Ask for directions on your smartphone or ask your voice assistant to play a song (Automatic Speech Recognition)
  4. Check your social media feed or browse suggestions for movies, songs and shopping items (Recommender System)
  5. Use smartphone applications that apply camera filters to your images, or ride in an autonomous vehicle (Computer Vision)
  6. Play games against a bot (Reinforcement Learning)

You may also unknowingly interact with algorithms used for:

  1. Automated evaluations of radiology images (Computer Vision) or epidemiology studies (Causal Inference) in healthcare
  2. Facial Recognition systems for mass surveillance (Computer Vision)
  3. Smart cities, personalized ads, affect recognition, behavior modification, social credit scoring, insurance pricing, job hiring, recidivism prediction in judicial decision making, and autonomous weapons used by the military

Technology: Issues

AI and algorithms in general are automating decision-making in all spheres of our lives [1]. These decisions affect our quality of life, our choices and our future. Algorithms decide whether we can afford health insurance, access public utilities, post bail, get a job, receive the right healthcare treatment, or travel globally. Algorithms encode the biases of the historical data on which they are built. Algorithms, and the choices made in building technologies, have been shown to be discriminatory, misogynistic, and racist, and to affect minority groups disproportionately [2–7]. For example, one type of predictive policing algorithm tries to predict where crime is likely to happen so that those areas can be policed proactively. In the U.S., the data used to build these algorithms contains a disproportionate number of data points from neighborhoods with minority communities that have historically been over-policed due to racist policies. The algorithm therefore skews its predictions toward over-policing those same neighborhoods, effectively reinforcing racial biases.
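The feedback loop described above can be sketched as a toy simulation. This is an illustrative model of the dynamic, not any deployed system: the two neighborhoods, the crime rates, the starting record counts and the patrol rule are all hypothetical. Both neighborhoods have the same true crime rate, but the biased historical record keeps directing patrols, and hence new records, to the historically over-policed one:

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true crime rate; the historical
# record is biased toward "A", which was over-policed in the past.
true_crime_rate = {"A": 0.1, "B": 0.1}
recorded = {"A": 100, "B": 10}  # biased historical incident counts

for day in range(1000):
    total = recorded["A"] + recorded["B"]
    # "Predictive" allocation: patrols follow past records.
    patrols = {n: recorded[n] / total for n in recorded}
    # Crime is only *recorded* where police are present to observe it,
    # so the skewed allocation generates more records in "A", which in
    # turn draws even more patrols there on the next iteration.
    for n in recorded:
        if random.random() < true_crime_rate[n] * patrols[n] * 10:
            recorded[n] += 1

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Share of recorded incidents in A: {share_A:.2f}")
```

Even though the underlying crime rates are identical, the recorded data stays heavily skewed toward the historically over-policed neighborhood: the system never gets a chance to unlearn its bias, because it controls where new data is collected.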

Most importantly, technologies and the algorithms they are built upon have the power to shape the kind of government we have. Left unchecked, they can be enablers of authoritarianism, totalitarianism and potentially fascism [8–11]. For example, social media is being used to spread misinformation in order to gain political power and undermine democracy. Social media companies could curb the spread of misinformation with a single design decision: make sharing posts more cumbersome. This would force people to reflect on what they are sharing and share only the posts they deem important. Instead, social media companies strive to make their apps “frictionless”, minimizing the number of clicks it takes to complete a task. This design decision amplifies the spread of misinformation.

How can we ensure that technology is ethical?

Discussions surrounding the definition of ethics may not prove fruitful, given differences in individual values. However, we do have a list of agreed-upon, enforceable shared human values: the United Nations Universal Declaration of Human Rights (UDHR). These include equal rights for all without discrimination in any form, the right to privacy, freedom from inhumane treatment, and just and favorable working conditions. Researchers have proposed using the UDHR as a guide for building ethical technology [8, 12].

Asking the right questions

What questions should we ask ourselves when deciding what to build and how to build it, and equally what not to build? These questions will enable us to make choices when designing and building ethical technology, that is, technology in line with the shared values of the UDHR. A good starting point is the Assessment List produced by the High-Level Expert Group on Artificial Intelligence, set up by the European Commission.

We will look at that in the next post!

Further Resources

  1. Crawford et al., “AI Now 2019 Report.”
  2. Algorithmic bias – Wikipedia
  3. Barocas, Hardt, and Narayanan, Fairness and Machine Learning.
  4. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
  5. Benjamin, Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life.
  6. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism.
  7. Criticism of Facebook – Wikipedia
  8. The Dictator's Playbook Revisited – Podcast
  9. Surveillance capitalism – Wikipedia
  10. Yuval Noah Harari: Why fascism is so tempting — and how your data could power it | TED Talk
  11. Misinformation – Wikipedia
  12. Hippocratic License