The Impact of AI Risk Scores on Business Managers’ Ethical Decision-Making
Keywords: leadership, accountability, ethics, artificial intelligence, risk, risk scoring, ethical decision-making, business managers

Abstract
Artificial intelligence (AI) offers significant benefits to organizations and to the individuals, environments, and stakeholders they affect. However, AI systems can also pose a risk of harm. An AI risk score summarizing a system's potential for harm may assist business managers with the ethical decision of whether to deploy that system. While the quantification of risks associated with AI has received attention from researchers, little research has examined summarized AI risk scores and their impact on decision-making in practice. Building on integrated and behavioral theories of ethical decision-making, this quantitative experimental study found that the presence of an AI risk score can reduce the likelihood of an unethical decision and may therefore positively influence business managers faced with an ethical decision. The study also explored whether the AI system's use case influences decisions when an AI risk score is present; however, no significant influence was identified in the scenarios tested. The findings have implications for practice in organizations that develop, deploy, and use AI systems.