Biased and Harmful AI Projects in Latam

Governments across Latin America are testing and piloting a wide variety of artificial intelligence systems to support the delivery of public services. In this initial mapping exercise, we identified five trending areas: Education, the Judicial System, Policing, Public Health, and Social Benefits.

But what are the feminist and human rights implications of using algorithmic decision-making to determine the provision of social benefits and other public services? This mapping is part of an ongoing effort to build case-based, anti-colonial feminist critiques of AI systems.

The map below illustrates projects where the deployment of artificial intelligence systems is likely to cause or propagate harm and discrimination based on gender and its intersections with race, class, sexuality, age, territory, and other factors. Although this is a non-exhaustive, ongoing mapping that portrays these projects through their proponents' own narratives, it already reveals worrisome trends that should concern feminist movements. However, further information and deeper investigation are needed for better impact assessments, something we have done for two of these projects. Have you heard of other projects? Help us track them here.