The Community of Madrid, through the Regional Ministry of Family, Youth and Social Affairs, and with European Union funds from the Recovery and Resilience Facility, has awarded a grant of €999,370.91 to IA+IGUAL, a social innovation pilot project developed by CVA, IN2 and ORH. The project aims to analyze and verify Artificial Intelligence (AI) algorithms used in the Human Resources sector. Officially launched on July 1, 2023, it will run for two years, until June 2025.
The IA+IGUAL project will be developed in line with the AI regulation proposed by the European Commission, to which member states must adapt within two years. Accordingly, IA+IGUAL aims to lay the groundwork for the future development of AI in talent management processes, preventing its algorithms from reflecting social stereotypes and leading to unfair, discriminatory decision-making.
The IA+IGUAL project has an Advisory Board made up of renowned professionals from different fields of knowledge: training, jurisprudence, ethics and philosophy, computational linguistics, diversity management, employment policy and new technologies. It also counts on the scientific collaboration of the Universidad Politécnica de Madrid and a group of companies that use or develop AI-supported human resources tools. In addition, the project is scalable and aims to become the starting point for an ethical and highly efficient algorithmic model in the field of AI applied to HR.
IA+IGUAL therefore has a strongly empirical character: through a system for verifying algorithmic biases, it studies real cases in organizations that are already using AI in their recruitment processes. Its action plan is structured around a training itinerary, a space for dissemination and awareness-raising, and the development of a quality seal that helps foster a more egalitarian AI within the business ecosystem of the Community of Madrid.
In a context in which AI is already implemented across all sectors of society, it is vitally important to detect and analyze the risks its use may entail. Human Resources is one of the areas where its presence has grown exponentially: AI is used in recruitment and selection processes, administrative tasks and attendance monitoring, as well as in occupational health and salary matters. The biases introduced when this AI is programmed are invisible, while their effects on people are evident. It is therefore necessary to analyze the learning models of the AI algorithms applied in the labor market.
For more information:
IN2 & IA+IGUAL