Limitations of Generative AI in Real-Time Decision-Making


Author: Chaitenya Chand

Volume/Issue: Volume 10, Issue 6 (June 2025)


Google Scholar: https://tinyurl.com/af5j69u7

DOI: https://doi.org/10.38124/ijisrt/25jun935



Abstract: Generative AI has emerged as a groundbreaking technology, offering transformative capabilities in domains such as natural language processing and image generation. Despite these successes, applying generative AI to real-time decision-making systems remains challenging due to computational latency, unreliable outputs, and limited interpretability. This study investigates these limitations through a detailed literature review and experimental analysis. We adopt a hybrid methodology that combines lightweight model architectures with rule-based constraints to mitigate these challenges. Results show that our approach reduces latency by 20% and improves reliability by 15% compared with traditional generative models. The findings underscore the importance of optimizing generative AI for time-sensitive applications and highlight directions for future research.

Keywords: Generative AI, Real-Time Systems, Latency, Model Interpretability, Hybrid AI Models.
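The hybrid approach summarized in the abstract, pairing a lightweight generative model with rule-based constraints under a latency budget, can be sketched as follows. This is a minimal illustration only, not the authors' published implementation: the function names, the specific rules, and the deadline value are all assumptions made for the example.

```python
import time

# Hypothetical rule-based constraints applied to generative output.
# In a real system these would encode domain safety and validity checks.
RULES = [
    lambda text: len(text) > 0,         # output must be non-empty
    lambda text: len(text) <= 280,      # bounded length for real-time use
    lambda text: "UNSAFE" not in text,  # trivial content filter
]

def lightweight_generate(prompt: str) -> str:
    """Stand-in for a distilled / lightweight generative model."""
    return f"decision: proceed ({prompt})"

def hybrid_decide(prompt: str, deadline_s: float = 0.05) -> str:
    """Generate a candidate answer, then enforce rule-based constraints
    within a hard latency deadline; fall back to a deterministic
    rule-based default if any check fails or the deadline is missed."""
    start = time.perf_counter()
    candidate = lightweight_generate(prompt)
    within_deadline = (time.perf_counter() - start) <= deadline_s
    if within_deadline and all(rule(candidate) for rule in RULES):
        return candidate
    return "decision: fallback (rule-based default)"
```

The key design point is that the rule layer and the deadline check sit outside the generative model, so a slow or invalid generation degrades to a predictable fallback rather than blocking the decision loop.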


