Authors :
Rashik K. Badgami
Volume/Issue :
Volume 10 - 2025, Issue 12 - December
Google Scholar :
https://tinyurl.com/ywxywpde
Scribd :
https://tinyurl.com/4rv4nd4v
DOI :
https://doi.org/10.38124/ijisrt/25dec1016
Abstract :
Artificial intelligence (AI) systems are increasingly embedded in organizational decision-making processes,
providing recommendations that shape strategic, operational, and administrative outcomes. While human actors retain
formal authority, AI systems often exert substantial implicit influence over decisions without corresponding accountability
or governance. This paper introduces the concept of AI as a Shadow Executive, defined as a non-human system that
materially influences organizational decisions without formal decision rights or fiduciary responsibility. The study
proposes a novel conceptual and analytical framework to quantify AI’s implicit executive power, examines behaviour
shifts in human leadership resulting from prolonged AI exposure, and explores governance mechanisms to rebalance
human–AI authority. Through simulation-based modelling and behaviour analysis, this research advances understanding
of AI’s role as an organizational actor rather than a neutral tool. The findings have significant implications for ethical AI
governance, leadership accountability, workforce decision-making, and public-sector institutions, supporting the need for
proactive regulatory and organizational frameworks.
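The abstract proposes quantifying an AI system's implicit executive power. As a purely illustrative sketch (the function name, metric, and data below are hypothetical, not taken from the paper), one simple proxy is the lift in human–AI agreement over the agreement rate expected without AI exposure:

```python
# Hypothetical sketch: measure an AI system's implicit influence as the
# fraction of decisions where the final human choice matched the AI
# recommendation, minus a baseline agreement rate expected without the AI.
# The metric and all names here are illustrative assumptions.

def implicit_influence(decisions, baseline_agreement):
    """decisions: list of (human_choice, ai_recommendation) pairs.
    Returns observed agreement minus the no-AI baseline rate."""
    if not decisions:
        return 0.0
    matches = sum(1 for human, ai in decisions if human == ai)
    return matches / len(decisions) - baseline_agreement

# Example: humans followed the AI in 8 of 10 decisions; absent the AI,
# they would have made the same call about 50% of the time.
log = [("approve", "approve")] * 8 + [("reject", "approve")] * 2
print(round(implicit_influence(log, 0.5), 2))  # → 0.3
```

A positive score suggests the AI is shifting outcomes beyond what human judgment alone would produce; a real framework would of course need counterfactual baselines rather than a fixed constant.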
Keywords :
Artificial Intelligence Governance, Decision Support Systems, Organizational Leadership, Ethical AI, Automation, Accountability.