
Council of Europe Launches New Guidelines to Combat Rising AI-Driven Discrimination
AI and automated decision-making (ADM) systems are increasingly permeating daily life, often exacerbating existing social inequalities rather than eliminating them.
RMN News Human Rights Desk
New Delhi | January 21, 2026
The Council of Europe has unveiled a pair of landmark publications aimed at addressing the serious challenges artificial intelligence (AI) poses to fundamental rights and social equality. The release took place during a recent high-level webinar that brought together representatives from the European Commission, academic experts, and national equality bodies from Belgium, Portugal, and Finland.
The two new reports, titled “Legal protection against algorithmic discrimination in Europe: current frameworks and remaining gaps” and “European policy guidelines on AI and algorithm-driven discrimination for equality bodies and other national human rights structures,” provide a comprehensive snapshot of the current digital landscape. These documents were developed through a project co-funded by the European Union and implemented by the Council of Europe to support public administrations in upholding non-discrimination standards.
The Growing Risk of Algorithmic Bias: According to the reports, AI and automated decision-making (ADM) systems are increasingly permeating daily life, often exacerbating existing social inequalities rather than eliminating them. In the employment sector, selection algorithms trained on historical data have been shown to unfairly favor male candidates by reproducing past discriminatory stereotypes against women and minority groups.
The reports also highlight several critical areas where AI deployment has raised alarms:
- Law Enforcement: Facial recognition technologies have been found to exhibit discriminatory biases, sometimes leading to ethnic profiling.
- Migration: Public administrations are using AI for high-stakes decisions regarding asylum, citizenship, and border surveillance, utilizing tools such as language assessment and fraud detection.
- Private Sector: Financial institutions, including banking and insurance companies, are deploying ADM systems for applicant screening and profiling.
- Public Services: AI is being used to manage resources in welfare, healthcare, education, and justice.
Strengthening Legal and National Frameworks: To counter these risks, the Council of Europe highlighted two major legal instruments adopted in 2024: the EU AI Act and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. One of the new publications specifically evaluates how these laws protect citizens while identifying the “lacunae,” or legal gaps, that still need to be addressed to ensure full protection against algorithmic bias.
A major focus of the initiative is assisting actors at the national level. The newly released guidelines provide a roadmap for equality bodies and human rights institutions to identify and mitigate risks. These resources are designed to help national stakeholders ensure that AI deployment remains compliant with fundamental rights and to provide clear recommendations for redress when discrimination occurs.
The Council of Europe continues to run extensive programs to combat hate speech and discrimination, emphasizing that the positive management of diversity must extend into the deployment of digital and algorithmic systems. The full publications are now available in multiple languages, including French, Dutch, Portuguese, Finnish, and Swedish, to ensure wide accessibility across member states.
