Comparative Analysis of Gradient Boosting and Transformer-Based Models for Binary Classification in Tabular Data
Authors : Jebaraj Vasudevan
Volume/Issue : Volume 10 - 2025, Issue 3 - March
Google Scholar : https://tinyurl.com/3x3p86xk
Scribd : https://tinyurl.com/53x4mxx2
DOI : https://doi.org/10.38124/ijisrt/25mar416
Abstract : This study compares the classification performance of a gradient boosting model (XGBoost) and a Transformer-based model with multi-head self-attention on tabular data. While the two methods exhibit broadly similar overall performance, the Transformer model excels in recall, outperforming XGBoost by about 8%, which suggests it is better suited to applications such as fraud detection in payment processing and medical diagnostics.
Keywords : Transformer, Gradient Boosting, XGBoost, Tabular Data.
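The sketch below illustrates the kind of comparison the abstract describes: an XGBoost classifier versus a small Transformer-style classifier that applies multi-head self-attention over per-feature tokens, with recall reported for each. It is a minimal illustration only; the dataset, architecture, and hyperparameters are assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): compare XGBoost and a small
# Transformer-style classifier with multi-head self-attention on a synthetic
# binary tabular dataset, reporting recall for each model.
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score
from xgboost import XGBClassifier

# Synthetic, mildly imbalanced binary tabular data (placeholder for a real dataset).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# --- Gradient boosting baseline ---
xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss")
xgb.fit(X_tr, y_tr)
print("XGBoost recall:", recall_score(y_te, xgb.predict(X_te)))

# --- Transformer-based classifier: each scalar feature is embedded as a token
# --- and passed through encoder layers using multi-head self-attention. ---
class TabTransformer(nn.Module):
    def __init__(self, n_features, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # per-feature scalar -> token embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=64,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model * n_features, 1)

    def forward(self, x):                           # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))        # (batch, n_features, d_model)
        h = self.encoder(tokens)                    # multi-head self-attention
        return self.head(h.flatten(1)).squeeze(-1)  # raw logit per sample

model = TabTransformer(n_features=X.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
Xt = torch.tensor(X_tr, dtype=torch.float32)
yt = torch.tensor(y_tr, dtype=torch.float32)
for _ in range(20):                                 # short full-batch loop for illustration
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()

with torch.no_grad():
    logits = model(torch.tensor(X_te, dtype=torch.float32))
    preds = (torch.sigmoid(logits) > 0.5).int().numpy()
print("Transformer recall:", recall_score(y_te, preds))
```

Recall is the metric highlighted because, in the use cases the abstract mentions (fraud detection, medical diagnostics), missing a positive case is typically costlier than a false alarm.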