The impact of AI bias on social justice: challenges and solutions
DOI: https://doi.org/10.54097/wz9v0f43

Keywords: Artificial Intelligence Bias, Social Justice, Data Bias, Human Bias, Algorithmic Bias

Abstract
This paper analyzes the wide-ranging impact of artificial intelligence (AI) bias on social equity, especially in key areas such as healthcare, education, and employment. As AI technology has developed rapidly, its potential for bias has been progressively exposed and has attracted widespread attention. The paper therefore examines both the manifestations and the causes of AI bias and proposes specific mitigation strategies, such as improving data diversity, enhancing algorithmic transparency, promoting interdisciplinary cooperation, and establishing a sound regulatory framework, with the aim of reducing the negative impact of AI bias on social equity and promoting the fair application of AI technologies across all areas of society.
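To make the idea of an algorithmic-transparency or bias-audit check more concrete, the following minimal sketch computes a demographic-parity gap over a model's predictions. It is an illustrative assumption rather than the paper's own method: the record fields ("group", "prediction"), the sample data, and the flagging threshold mentioned in the comment are all hypothetical.

# Minimal sketch of a demographic-parity audit (illustrative only).
# Assumes each record has a protected-attribute field "group" and a
# binary "prediction" field; these names are assumptions, not the
# paper's methodology.

from collections import defaultdict

def demographic_parity_gap(records):
    """Return (gap, rates): the largest difference in positive-prediction
    rates between any two groups, and the per-group rates themselves.
    A gap of 0.0 means all groups receive positive predictions at the
    same rate."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["prediction"] == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Fabricated-for-demo records, not real data.
sample = [
    {"group": "A", "prediction": 1},
    {"group": "A", "prediction": 1},
    {"group": "A", "prediction": 0},
    {"group": "B", "prediction": 1},
    {"group": "B", "prediction": 0},
    {"group": "B", "prediction": 0},
]

gap, rates = demographic_parity_gap(sample)
print("Positive-prediction rates by group:", rates)
print(f"Demographic parity gap: {gap:.2f}")  # e.g. flag for review if gap > 0.10

A check like this is only one possible transparency measure; in practice an audit would also consider other fairness criteria (equalized odds, calibration) and the representativeness of the underlying data, in line with the data-diversity strategies the paper discusses.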
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.