December 3, 2025
Swiss-Made 'Transparent' AI Reveals Ingrained Societal Biases
Despite efforts to create a more open and transparent public AI, Switzerland's Apertus model has been shown to reproduce common gender and ethnic stereotypes. This raises critical questions about the challenge of achieving fairness in AI and the potential for automated systems to amplify discrimination.

Key Takeaways
- Apertus, a Swiss large language model, produced a stereotypical profile of a 30-year-old male from Zurich for an engineer who plays video games.
- When prompted to describe a cleaner with three children who loves to cook, Apertus generated a profile of a 40-year-old Puerto Rican woman named Maria Rodriguez.
- AI is already being used in hiring, healthcare, and law enforcement, raising concerns about entrenching inequalities.
- Switzerland is working on new legal measures to address AI discrimination, as current laws have gaps.
- Research indicates that algorithms systematically discriminate against women, people of colour, and minorities.
By The Numbers
- 85%: percentage of the time AI job screening tools favor male names
- 100%: percentage of tests where men of colour were penalized by AI job screening tools
- Over 5,000: number of AI-generated images analyzed in a Bloomberg study on bias
They Said
"Users tend to trust the results of AI, because they perceive it as an objective and neutral technology, even though it is not."
"Itâs not a question of demonising algorithms but of recognising that decisions made in an opaque manner by AI can have political, legal and economic consequences."