Authors :
Jebaraj Vasudevan
Volume/Issue :
Volume 10 - 2025, Issue 3 - March
Google Scholar :
https://tinyurl.com/3x3p86xk
Scribd :
https://tinyurl.com/53x4mxx2
DOI :
https://doi.org/10.38124/ijisrt/25mar416
Abstract :
This study compares the classification performance of Gradient Boosting (XGBoost) and a Transformer-based
model with multi-head self-attention on tabular data. While the two methods exhibit broadly similar overall
performance, the Transformer model achieves roughly 8% higher recall, suggesting it is better suited to applications
such as fraud detection in payment processing and medical diagnostics.
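The sketch below is a minimal, illustrative comparison in the spirit of the abstract, not the authors' code: it trains an XGBoost classifier and a small Transformer encoder with multi-head self-attention on the same tabular split and reports recall for each. Synthetic features stand in for the Telco Customer Churn data cited in the references, and every hyperparameter and model detail shown is an assumption made only for illustration.

```python
# Hypothetical sketch: recall comparison of XGBoost vs. a small Transformer
# encoder (multi-head self-attention) on tabular data. Synthetic data stands
# in for the Telco churn dataset; hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score
from xgboost import XGBClassifier

# Synthetic, imbalanced binary-classification table with 20 numeric features.
X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

# --- Gradient boosting baseline ---
xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
xgb.fit(X_tr, y_tr)
xgb_recall = recall_score(y_te, xgb.predict(X_te))

# --- Transformer encoder over per-feature tokens ---
class TabularTransformer(nn.Module):
    def __init__(self, n_features, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        # Each scalar feature becomes one token: learned feature embedding
        # plus a projection of the feature value.
        self.feature_embed = nn.Parameter(torch.randn(n_features, d_model) * 0.02)
        self.value_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                          # x: (batch, n_features)
        tokens = self.value_proj(x.unsqueeze(-1)) + self.feature_embed
        h = self.encoder(tokens).mean(dim=1)       # mean-pool token outputs
        return self.head(h).squeeze(-1)            # logits

model = TabularTransformer(X.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
Xt = torch.tensor(X_tr, dtype=torch.float32)
yt = torch.tensor(y_tr, dtype=torch.float32)
for _ in range(30):                                # brief full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()

with torch.no_grad():
    probs = torch.sigmoid(model(torch.tensor(X_te, dtype=torch.float32)))
tf_recall = recall_score(y_te, (probs > 0.5).int().numpy())

print(f"XGBoost recall:     {xgb_recall:.3f}")
print(f"Transformer recall: {tf_recall:.3f}")
```

In practice the comparison would be run on the actual churn table (with categorical columns embedded, as in TabTransformer) rather than synthetic numeric features; this sketch only shows the shape of such an experiment.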
Keywords :
Transformer, Gradient Boosting, XGBoost, Tabular Data.
References :
- T. Chen and C. Guestrin, "XGBoost: A Scalable Tree Boosting System," 2016.
- X. Huang, A. Khetan, M. Cvitkovic and Z. Karnin, "TabTransformer: Tabular Data Modeling Using Contextual Embeddings," 2020.
- "Kaggle," [Online]. Available: https://www.kaggle.com/datasets/blastchar/telco-customer-churn/data.
- A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones and A. Gomez, "Attention is all you need," 2017.