Authors :
Vishrutha S. N.; Dr. J. Vimala Devi
Volume/Issue :
Volume 10 - 2025, Issue 11 - November
Google Scholar :
https://tinyurl.com/22yp4nn8
Scribd :
https://tinyurl.com/4w7p2kau
DOI :
https://doi.org/10.38124/ijisrt/25nov1206
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
In today’s online world, cyberbullying is an escalating problem that can trigger deep emotional pain and social
isolation. There is an increasing necessity to regulate content shared on online platforms. Cyberbullying material can
spread quickly across online platforms and may remain visible for a long time, sometimes without ever being removed. Such
persistent exposure to harmful content can take a serious toll on the mental, emotional, and psychological health of young
people. In extreme situations, ongoing harassment in digital spaces can contribute to thoughts of self-harm or suicide. The
proposed system utilizes Long Short-Term Memory (LSTM) networks to assess user-generated text and compute a bullying
likelihood for every sentence. This system monitors user activity in real time and decreases a reputation score whenever
harmful or bullying content is identified. Once the score falls below a set limit, the user is automatically prevented from
continuing interactions on the platform. By merging deep learning with a reputation-based penalty method, the design
works to curb cyberbullying while keeping moderation both fair and proactive. The approach is adaptable,
efficient, and capable of supporting healthier online spaces.
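To make the pipeline described above concrete, the sketch below pairs an LSTM sentence classifier with a reputation-score penalty and a blocking threshold. It is a minimal illustration, not the authors' implementation: the model layout, the Keras/TensorFlow APIs, and all names and values (VOCAB_SIZE, BULLYING_THRESHOLD, PENALTY, BLOCK_LIMIT, START_REPUTATION) are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed design, not the paper's code): an LSTM classifier
# that outputs a bullying likelihood per sentence, plus a reputation score
# that is decremented on flagged content and blocks the user below a limit.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Illustrative constants -- none of these values come from the paper.
VOCAB_SIZE = 20000          # tokenizer vocabulary size
MAX_LEN = 50                # maximum sentence length in tokens
BULLYING_THRESHOLD = 0.5    # probability above which a sentence is flagged
PENALTY = 10                # reputation points deducted per flagged sentence
BLOCK_LIMIT = 50            # user is blocked once reputation falls below this
START_REPUTATION = 100      # initial reputation score for every user

def build_model():
    """LSTM binary classifier producing a per-sentence bullying probability."""
    model = Sequential([
        Embedding(VOCAB_SIZE, 128),
        LSTM(64),
        Dropout(0.5),
        Dense(1, activation="sigmoid"),   # bullying likelihood in [0, 1]
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

class ReputationModerator:
    """Tracks per-user reputation and blocks users whose score drops too low."""

    def __init__(self, model, tokenizer):
        self.model = model            # trained LSTM classifier
        self.tokenizer = tokenizer    # fitted Keras Tokenizer (assumed)
        self.reputation = {}          # user_id -> current score
        self.blocked = set()

    def score(self, sentence):
        """Return the model's bullying likelihood for one sentence."""
        seq = self.tokenizer.texts_to_sequences([sentence])
        padded = pad_sequences(seq, maxlen=MAX_LEN)
        return float(self.model.predict(padded, verbose=0)[0][0])

    def moderate(self, user_id, sentence):
        """Return True if the message is allowed, False if the user is blocked."""
        if user_id in self.blocked:
            return False
        rep = self.reputation.setdefault(user_id, START_REPUTATION)
        if self.score(sentence) >= BULLYING_THRESHOLD:
            rep -= PENALTY
            self.reputation[user_id] = rep
            if rep < BLOCK_LIMIT:
                self.blocked.add(user_id)
                return False
        return True
```

In this reading, classification and penalty logic are kept separate: the LSTM only produces a probability, while the moderator applies the reputation decrement and the blocking rule, so thresholds and penalties can be tuned without retraining the model.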
Keywords :
Cyberbullying; Deep Learning; LSTM.