Artificial intelligence is often described as neutral, but every system is built on human choices. The data it is trained on, the way it is designed, and the goals it is built to serve all reflect the perspectives of its creators. AI does not just mirror the world; it mirrors the world through a particular lens.
Most large AI models are trained on text scraped from the internet, books, and other sources. This gives them a wide range of knowledge, but it also means they absorb the stereotypes, biases, and cultural assumptions embedded in that material. If some groups or perspectives are underrepresented, AI will reflect those imbalances in its outputs.

This is not a theoretical problem. The Gender Shades study of commercial facial recognition systems found that they made significantly more errors on women and people with darker skin tones than on lighter-skinned men (Buolamwini & Gebru, 2018). These disparities emerged because the datasets used to train the systems contained far fewer examples from the affected groups. The outcome was not just technical inaccuracy but social harm: people were misidentified at higher rates simply because of how the data was collected.
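Disparities like these only become visible when a model's errors are measured separately for each group rather than as a single overall score. The sketch below illustrates that idea in Python; the group labels and evaluation records are made up for illustration, not real benchmark data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, predicted_label, true_label).
# The groups and values are illustrative, not real benchmark results.
records = [
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 1, 1),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 0, 1),
    ("darker-skinned women", 1, 1), ("darker-skinned women", 0, 1),
]

# Count errors and totals per group instead of one aggregate score.
errors = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# Reporting per-group rates reveals gaps that a single overall
# accuracy number would hide.
for group, total in totals.items():
    print(f"{group}: error rate {errors[group] / total:.0%}")
```

Nothing about this arithmetic is sophisticated; the point is that the disparity disappears entirely if the evaluation is never broken down by group.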
Language models show similar issues. They can generate biased or stereotypical responses about gender, race, or professions, not because the developers intended it, but because those patterns exist in the data they learned from. In other cases, biases are subtle and harder to detect. A hiring algorithm, for example, might seem neutral on the surface but still favor résumés that resemble those of historically advantaged groups.

The values embedded in AI are not always obvious. They may show up only in certain contexts, and sometimes in ways that surprise even the developers. This makes accountability difficult. If a system produces a biased outcome, who is responsible? The company that built it? The engineers who selected the data? The users who applied it? Without clear answers, responsibility gets diffused, and those affected may have little recourse.

Researchers have suggested ways to make these values more visible. One approach is the use of "model cards" and "datasheets for datasets" (Mitchell et al., 2019). These documents describe what data went into a model, highlight known limitations, and outline appropriate uses. They do not solve every problem, but they push developers to be transparent about the assumptions and risks that come with a model.
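To make the idea concrete, here is a minimal sketch of the kind of information a model card records, written as a plain Python dictionary. The field names follow the spirit of Mitchell et al. (2019) but are illustrative rather than an official schema, and all of the details are hypothetical.

```python
# A minimal, illustrative model card as a plain dictionary.
# Field names echo the spirit of Mitchell et al. (2019); they are
# not an official schema, and every detail here is hypothetical.
model_card = {
    "model_details": {
        "name": "example-face-classifier",  # hypothetical model
        "version": "1.0",
        "intended_use": "Research on classification accuracy only.",
        "out_of_scope_uses": ["Law enforcement identification"],
    },
    "training_data": {
        "sources": ["publicly scraped face images"],
        "known_gaps": "Darker skin tones and women are underrepresented.",
    },
    "evaluation": {
        # Metrics reported per group, not just in aggregate.
        "error_rate_by_group": {
            "lighter-skinned men": 0.01,    # illustrative numbers
            "darker-skinned women": 0.35,
        },
    },
    "limitations": "Accuracy varies sharply across demographic groups.",
}

for section, contents in model_card.items():
    print(section, "->", contents)
```

The value of the exercise is less the format than the discipline: writing down known gaps and out-of-scope uses forces choices that would otherwise stay implicit.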
AI is not value-free. It reflects the priorities and blind spots of its creators and of the societies whose data it is trained on. Recognizing this is the first step toward responsible use. If we assume AI is neutral, we risk accepting inherited biases without question. If we acknowledge that it carries values, we can begin to ask harder questions: whose values are they, and whose voices are missing?

Sources:
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research.
Mitchell, M., et al. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency.