This report, published by the European Union Agency for Fundamental Rights (FRA), looks at the use of artificial intelligence in predictive policing and offensive speech detection. It shows how bias in algorithms arises, how it can amplify over time, and how it can affect people's lives, potentially leading to discrimination. It underscores the need for more comprehensive and thorough assessments of algorithms for bias before such algorithms are used for decision-making that can have an impact on people.
Key findings and opinions in this report:
- Artificial intelligence and bias: what is the problem?
- Feedback loops: how algorithms can influence algorithms (a toy simulation follows this list)
- Ethnic and gender bias in offensive speech detection (see the second sketch after this list)
- Looking forward: sharpening the fundamental rights focus on artificial intelligence to mitigate bias and discrimination
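
The feedback-loop finding lends itself to a toy illustration. The sketch below is not from the report: the district names, starting records, rates, and observation model are all invented for illustration. Two districts have identical true crime rates, but patrols are sent wherever past *records* show more crime, and only patrolled districts generate new records, so a small initial skew in the data locks in and keeps widening.

```python
import random

random.seed(0)

TRUE_RATE = {"A": 0.10, "B": 0.10}  # identical underlying crime rates (assumed)
PATROLS_PER_ROUND = 10
ROUNDS = 15

# Historical records start with a small, arbitrary skew toward district A.
recorded = {"A": 12, "B": 10}

for t in range(ROUNDS):
    # Predictive allocation: send every patrol to the district whose
    # records show more crime. Records, not reality, drive the decision.
    target = max(recorded, key=recorded.get)
    # Patrols only discover crime where they are present, so only the
    # patrolled district's records grow, although true rates are equal.
    discovered = sum(random.random() < TRUE_RATE[target]
                     for _ in range(PATROLS_PER_ROUND))
    recorded[target] += discovered
    print(f"round {t:2d}: all patrols -> {target}, records = {recorded}")
```

District A receives every patrol in every round, and only its records grow: the algorithm's own outputs become its future inputs, which is the amplification mechanism the report describes.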
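The speech-detection findings rest on a simple idea that can also be sketched in code: score sentences that differ only in an identity term and compare the results. This is a minimal sketch, assuming nothing about the models FRA actually tested; the scorer below is a deliberately skewed toy stand-in whose output shows the kind of gap such a test is designed to expose.

```python
from itertools import product

def score_offensiveness(text: str) -> float:
    """Toy stand-in for a real offensive-speech classifier.

    Deliberately over-flags some identity terms to mimic the
    skew the report measures; a real audit would call an actual model here.
    """
    flagged = {"muslim": 0.7, "gay": 0.6}
    return max((v for k, v in flagged.items() if k in text.lower()),
               default=0.1)

# Neutral templates filled with different identity terms (illustrative only).
TEMPLATES = ["I am a {} person.", "My neighbour is {}."]
TERMS = ["Muslim", "Christian", "gay", "straight"]

for template, term in product(TEMPLATES, TERMS):
    sentence = template.format(term)
    print(f"{score_offensiveness(sentence):.2f}  {sentence}")
```

Sentences with identical meaning should score alike; systematic gaps across identity terms indicate ethnic or gender bias in the detector.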