Research Article


DOI: 10.26650/SJ.2024.44.1.0001    Full Text (PDF)

Participatory Management Can Help AI Ethics Adhere to the Social Consensus

Mahmut Özer, Matjaz Perc, Hayri Eren Suna

Artificial Intelligence (AI) is increasingly pervasive, significantly altering social structures, cultural dynamics, and labor markets. The rapid growth of this ecosystem has sparked worldwide debates about AI’s challenges, including its role in reinforcing biases and social inequalities, ignoring societal values, and affecting diverse sectors such as genetics, drug production, defense, and democratic processes. This study examines AI ethics through the social consensus framework, proposing participatory management as a crucial approach to addressing these challenges. The methodology spans the entire AI lifecycle, advocating inclusive practices from the design stage through implementation, monitoring, and control. The participatory management model is structured into three phases: Stakeholder Engagement, which involves active participation by diverse stakeholders in developing AI systems, ensuring a range of perspectives in design, modeling, and implementation; Monitoring and Alignment, which focuses on continuous observation of AI systems’ interaction with their environments; and Macro-level Impact Analysis, which examines the broader societal impacts of the AI ecosystem, assessing its influence on sectors such as education, culture, health, and safety. The study underscores the importance of a collaborative, inclusive approach to AI development and management, emphasizing the need to align AI advancements with ethical principles and societal well-being.
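To make the three-phase lifecycle concrete, the sketch below models the phases as simple record-keeping steps. It is a minimal, hypothetical illustration rather than an implementation described in the article; all class, method, and field names (StakeholderEngagement, log_observation, assess_sector, and so on) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class StakeholderEngagement:
    """Phase 1: diverse stakeholders co-shape design, modeling, and implementation."""
    stakeholders: List[str] = field(default_factory=list)
    design_inputs: List[str] = field(default_factory=list)

    def record_input(self, stakeholder: str, concern: str) -> None:
        # Track who participated and what perspective they contributed.
        if stakeholder not in self.stakeholders:
            self.stakeholders.append(stakeholder)
        self.design_inputs.append(f"{stakeholder}: {concern}")


@dataclass
class MonitoringAndAlignment:
    """Phase 2: continuous observation of a deployed system's interaction with its environment."""
    observations: List[str] = field(default_factory=list)
    misalignments: List[str] = field(default_factory=list)

    def log_observation(self, note: str, aligned: bool) -> None:
        # Every observation is kept; misaligned ones are flagged for follow-up.
        self.observations.append(note)
        if not aligned:
            self.misalignments.append(note)


@dataclass
class MacroImpactAnalysis:
    """Phase 3: sector-level review of the broader AI ecosystem (education, culture, health, safety)."""
    sector_reports: Dict[str, str] = field(default_factory=dict)

    def assess_sector(self, sector: str, finding: str) -> None:
        self.sector_reports[sector] = finding


if __name__ == "__main__":
    # Hypothetical walk through the lifecycle described in the abstract.
    engagement = StakeholderEngagement()
    engagement.record_input("labor union", "risk of biased hiring recommendations")

    monitoring = MonitoringAndAlignment()
    monitoring.log_observation("model output drifted for a minority subgroup", aligned=False)

    macro = MacroImpactAnalysis()
    macro.assess_sector("education", "tracking decisions increasingly automated")

    print(engagement.design_inputs, monitoring.misalignments, macro.sector_reports)
```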




Citations


APA

Özer, M., Perc, M., & Suna, H.E. (2024). Participatory Management Can Help AI Ethics Adhere to the Social Consensus. İstanbul University Journal of Sociology, 44(1), 221-238. https://doi.org/10.26650/SJ.2024.44.1.0001


AMA

Özer M, Perc M, Suna HE. Participatory Management Can Help AI Ethics Adhere to the Social Consensus. İstanbul University Journal of Sociology. 2024;44(1):221-238. https://doi.org/10.26650/SJ.2024.44.1.0001


ABNT

Özer, M.; Perc, M.; Suna, H.E. Participatory Management Can Help AI Ethics Adhere to the Social Consensus. İstanbul University Journal of Sociology, [Publisher Location], v. 44, n. 1, p. 221-238, 2024.


Chicago: Author-Date Style

Özer, Mahmut, Matjaz Perc, and Hayri Eren Suna. 2024. “Participatory Management Can Help AI Ethics Adhere to the Social Consensus.” İstanbul University Journal of Sociology 44, no. 1: 221-238. https://doi.org/10.26650/SJ.2024.44.1.0001


Chicago: Humanities Style

Özer, Mahmut, Matjaz Perc, and Hayri Eren Suna. “Participatory Management Can Help AI Ethics Adhere to the Social Consensus.” İstanbul University Journal of Sociology 44, no. 1 (Oct. 2024): 221-238. https://doi.org/10.26650/SJ.2024.44.1.0001


Harvard: Australian Style

Özer, M & Perc, M & Suna, HE 2024, 'Participatory Management Can Help AI Ethics Adhere to the Social Consensus', İstanbul University Journal of Sociology, vol. 44, no. 1, pp. 221-238, viewed 11 Oct. 2024, https://doi.org/10.26650/SJ.2024.44.1.0001


Harvard: Author-Date Style

Özer, M. and Perc, M. and Suna, H.E. (2024) ‘Participatory Management Can Help AI Ethics Adhere to the Social Consensus’, İstanbul University Journal of Sociology, 44(1), pp. 221-238. https://doi.org/10.26650/SJ.2024.44.1.0001 (11 Oct. 2024).


MLA

Özer, Mahmut, Matjaz Perc, and Hayri Eren Suna. “Participatory Management Can Help AI Ethics Adhere to the Social Consensus.” İstanbul University Journal of Sociology, vol. 44, no. 1, 2024, pp. 221-238. [Database Container], https://doi.org/10.26650/SJ.2024.44.1.0001


Vancouver

Özer M, Perc M, Suna HE. Participatory Management Can Help AI Ethics Adhere to the Social Consensus. İstanbul University Journal of Sociology [Internet]. 11 Oct. 2024 [cited 11 Oct. 2024];44(1):221-238. Available from: https://doi.org/10.26650/SJ.2024.44.1.0001 doi: 10.26650/SJ.2024.44.1.0001


ISNAD

Özer, Mahmut - Perc, Matjaz - Suna, Hayri Eren. “Participatory Management Can Help AI Ethics Adhere to the Social Consensus”. İstanbul University Journal of Sociology 44/1 (Oct. 2024): 221-238. https://doi.org/10.26650/SJ.2024.44.1.0001



TIMELINE


Submitted: 05.01.2024
Accepted: 25.03.2024
Published Online: 04.04.2024

LICENCE


Attribution-NonCommercial (CC BY-NC)

This license lets others remix, tweak, and build upon your work non-commercially, and although their new works must also acknowledge you and be non-commercial, they don’t have to license their derivative works on the same terms.






Istanbul University Press aims to contribute to the dissemination of ever-growing scientific knowledge through the publication of high-quality scientific journals and books in accordance with international publishing standards and ethics. Istanbul University Press follows an open-access, non-commercial model of scholarly publishing.