Authors :
Jackson Andrew Srivathsan
Volume/Issue :
Volume 10 - 2025, Issue 3 - March
Google Scholar :
https://tinyurl.com/ykxwfs62
Scribd :
https://tinyurl.com/yzkc2yub
DOI :
https://doi.org/10.38124/ijisrt/25mar1042
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Note : Google Scholar may take 15 to 20 days to display the article.
Abstract :
Artificial Intelligence (AI) is evolving beyond simply following instructions; it is beginning to make decisions in ways that
resemble human intention. This paper explores the point where AI stops merely following rules and starts acting on its
own, mirroring human intelligence in both predictable and unexpected ways. It also looks at how AI can withhold knowledge,
form patterns of behavior, and even develop a subconscious-like intelligence. Using real-world examples, we analyze cases
where AI has surprised its creators, raising questions about control, governance, and ethics. We also examine the
potential risks of AI autonomy, particularly when AI gains the ability to self-repair, defend itself, and control critical
infrastructure, which heightens concerns about security and oversight. Ultimately, we consider whether AI will remain a tool for
human progress or whether it could develop independent objectives of its own.