Unlocking Early Detection and Intervention Potential: Analyzing Visual Evoked Potentials (VEPs) in Adolescents/Teenagers with Narcotics Abuse Tendencies from the TelUnisba Neuropsychology EEG Dataset (TUNDA)
Abstract
Narcotics abuse has extensive negative impacts on individuals, families, and society, including physical harm to organs and mental health disorders. Narcotics increasingly target teenagers, making prevention and treatment at the adolescent level a serious issue that demands close collaboration among educational institutions, families, and the community, including psychologists. Emphasizing the importance of early detection and prevention, this study proposes a method to detect the possibility of narcotics abuse in adolescents using the Go/No-Go Association Task (GNAT) test designed by psychologists. The study introduces the TelUnisba Neuropsychology EEG Dataset (TUNDA), an open EEG dataset covering the emotional and habitual aspects of drug abuse in Indonesia, with respondents classified by psychologists as "normal" or "risk". The processed EEG signal is the visual evoked potential (VEP) within 1000 milliseconds following visual stimulus onset. The data are classified as "slow" or "fast" based on respondents' response times using the MobileNetV2 architecture. MobileNetV2 achieved the highest accuracy for both the normal and risk categories, with accuracies of 0.86 and 0.85, respectively. This study obtained ethical clearance and received funding support from Telkom University and Universitas Islam Bandung, with technical assistance from the Smart Data Sensing Laboratory. The authors declare no conflicts of interest related to this study.
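The preprocessing described above (slicing a 1000 ms VEP epoch after each visual stimulus and labeling trials "slow" or "fast" from the respondent's response time) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function names, the sampling rate, and the response-time threshold are all assumptions for the example.

```python
import numpy as np

def extract_vep_epochs(eeg, onsets, fs, window_ms=1000):
    """Slice a post-stimulus window from continuous EEG.

    eeg    : array of shape (n_channels, n_samples)
    onsets : sample indices of visual stimulus onsets
    fs     : sampling rate in Hz (assumed value for this sketch)
    """
    n = int(fs * window_ms / 1000)
    # Keep only onsets whose full window fits inside the recording.
    return np.stack([eeg[:, o:o + n] for o in onsets if o + n <= eeg.shape[1]])

def label_by_response_time(rts_ms, threshold_ms):
    """Label each trial 'fast' or 'slow' relative to a response-time threshold."""
    return ["fast" if rt <= threshold_ms else "slow" for rt in rts_ms]

fs = 250                                   # hypothetical sampling rate (Hz)
eeg = np.random.randn(8, fs * 10)          # 8 channels, 10 s of synthetic data
onsets = [fs * 1, fs * 4, fs * 7]          # three stimulus onsets
epochs = extract_vep_epochs(eeg, onsets, fs)
labels = label_by_response_time([420, 780, 510], threshold_ms=600)
print(epochs.shape)   # (3, 8, 250): 3 trials x 8 channels x 250 samples (1000 ms)
print(labels)         # ['fast', 'slow', 'fast']
```

Each epoch (one trial's channels-by-samples matrix) could then be rendered as a 2-D input image for a MobileNetV2-style classifier, as the abstract describes.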
Copyright (c) 2024 Inung Wijayanto, Tobias Mikha Sulistyo, Yohanes Juan Nur Pratama, Ayu Sekar Safitri, Thalita Dewi Rahmaniar, Sofia Sa’idah, Sugondo Hadiyoso, Raiyan Adi Wibowo, Rima Ananda Kurnia Ismanto, Athaliqa Ananda Putri, Andhita Nurul Khasanah, Faizza Haya Diliana, Salwa Azzahra, Melsan Gadama, Ayu Tuty Utami
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).