Comparison of the Adaboost Method and the Extreme Learning Machine Method in Predicting Heart Failure

Keywords: Adaboost, Extreme Learning Machine, Heart Failure

Abstract

Heart disease, classified as a non-communicable disease, is a leading cause of death every year. Expert involvement is considered essential in diagnosing heart disease, given its complex nature and potential severity. Machine learning algorithms have emerged as powerful tools capable of effectively predicting and detecting heart disease, thereby reducing the challenges associated with its diagnosis. Notable examples include the Extreme Learning Machine and Adaptive Boosting (AdaBoost) algorithms, both machine learning techniques suited to classification tasks. This research introduces an approach based on tuning a single parameter: careful optimization of algorithm parameters yields a marked improvement in prediction accuracy, underscoring the importance of parameter tuning in this domain. The Heart Failure dataset serves as the focal point, with the aim of demonstrating the optimal level of accuracy achievable with these algorithms. The results show an average accuracy of 0.83±0.02 for the Extreme Learning Machine and 0.87±0.03 for Adaptive Boosting, highlighting the efficacy of both algorithms for heart disease prediction. In particular, introducing the learning rate parameter into AdaBoost yields better results than the baseline algorithm. Our findings underline the strength of the Extreme Learning Machine and Adaptive Boosting algorithms when combined with the introduction of a single parameter: adding this parameter increases accuracy compared to previous research that used the standard methods alone.
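The comparison described above can be sketched in code. The following is a minimal, hedged illustration only — not the authors' actual pipeline: it uses scikit-learn's `AdaBoostClassifier` with its `learning_rate` parameter (the single tuned parameter the abstract highlights), a simple hand-rolled single-hidden-layer ELM (random input weights, closed-form output weights via pseudoinverse), and a synthetic stand-in for the Heart Failure dataset, since the real data and chosen hyperparameter values are not given here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


class SimpleELM:
    """Minimal Extreme Learning Machine: random hidden layer,
    output weights solved in closed form with the pseudoinverse."""

    def __init__(self, n_hidden=100, random_state=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Input weights and biases are random and never trained
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y     # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta >= 0.5).astype(int)


# Synthetic stand-in for the Heart Failure dataset (12 clinical features)
X, y = make_classification(n_samples=500, n_features=12, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# AdaBoost with the learning_rate parameter set explicitly,
# mirroring the single-parameter tuning idea from the abstract
ada = AdaBoostClassifier(n_estimators=100, learning_rate=0.5,
                         random_state=42).fit(X_tr, y_tr)
ada_acc = (ada.predict(X_te) == y_te).mean()

elm = SimpleELM(n_hidden=100).fit(X_tr, y_tr)
elm_acc = (elm.predict(X_te) == y_te).mean()

print(f"AdaBoost accuracy: {ada_acc:.2f}")
print(f"ELM accuracy:      {elm_acc:.2f}")
```

In practice the `learning_rate` (and the ELM hidden-layer size) would be chosen by cross-validated search rather than fixed as here.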



Published
2024-07-08
How to Cite
[1]
Muhammad Nadim Mubaarok, Triando Hamonangan Saragih, Muliadi, Fatma Indriani, Andi Farmadi, and A. Rizal, “Comparison of the Adaboost Method and the Extreme Learning Machine Method in Predicting Heart Failure”, j.electron.electromedical.eng.med.inform, vol. 6, no. 3, pp. 253-263, Jul. 2024.
Section
Research Paper