Analysis of Academic Success Using Machine Learning: Addiction and ChatGPT
DOI: https://doi.org/10.26439/interfases2024.n020.7390
Keywords: ChatGPT, addiction, machine learning
Abstract
This paper analyzes the impact of the variables phone addiction, pornography addiction, number of phone unlocks per hour, and level of confidence in ChatGPT on the academic success of 4278 students from eight universities in Ecuador. Decision tree (DT), random forest (RF), and support vector machine (SVM) classifiers are used. The results indicate that the three algorithms achieve similar levels of precision; in terms of accuracy, DT performs best when the data are balanced with SMOTE (accuracy = 0.64), while SVM performs best when they are balanced with RandomOverSampler (accuracy = 0.59).
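The comparison described in the abstract can be reproduced in outline with scikit-learn and imbalanced-learn. The sketch below is illustrative only: the input file name, the column names (phone_addiction, porn_addiction, unlocks_per_hour, chatgpt_confidence, academic_success), and the train/test protocol are assumptions, not the authors' actual pipeline.

# Minimal sketch of the DT/RF/SVM comparison under SMOTE and RandomOverSampler.
# File name, column names, and split settings are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE, RandomOverSampler

df = pd.read_csv("students.csv")  # hypothetical survey data file
X = df[["phone_addiction", "porn_addiction", "unlocks_per_hour", "chatgpt_confidence"]]
y = df["academic_success"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

samplers = {
    "SMOTE": SMOTE(random_state=42),
    "RandomOverSampler": RandomOverSampler(random_state=42),
}
models = {
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(random_state=42),
}

for sampler_name, sampler in samplers.items():
    # Oversample only the training split so no synthetic samples leak into the test set.
    X_res, y_res = sampler.fit_resample(X_train, y_train)
    for model_name, model in models.items():
        model.fit(X_res, y_res)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{sampler_name} + {model_name}: accuracy = {acc:.2f}")

Applying the oversampler to the training partition only is the usual precaution with SMOTE or random oversampling, since it keeps synthetic observations out of the evaluation set.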
References
Albalooshi, F., AlObaidy, H., & Ghanim, A. (2019). Mining students outcomes: An empirical study. International Journal of Computing and Digital Systems, 8(3), 229-241. https://doi.org/10.12785/ijcds/080303
Alghamdi, A. S., & Rahman, A. (2023). Data mining approach to predict success of secondary school students: A Saudi Arabian case study. Education Sciences, 13(3), 293. https://doi.org/10.3390/educsci13030293
Batool, S., Rashid, J., Nisar, M. W., Kim, J., Kwon, H. Y., & Hussain, A. (2023). Educational data mining to predict students’ academic performance: A survey study. Education and Information Technologies, 28, 905-971. https://doi.org/10.1007/s10639-022-11152-y
Beaulac, C., & Rosenthal, J. S. (2019). Predicting university students’ academic success and major using random forests. Research in Higher Education, 60, 1048-1064. https://doi.org/10.1007/s11162-019-09546-y
Blagus, R., & Lusa, L. (2013). SMOTE for high-dimensional class-imbalanced data. BMC Bioinformatics, 14(106), 1-16. https://doi.org/10.1186/1471-2105-14-106
Cai, Q., Lin, Y., & Yu, Z. (2023). Factors influencing learner attitudes towards ChatGPT-assisted language learning in higher education. International Journal of Human–Computer Interaction. Advance online publication. https://doi.org/10.1080/10447318.2023.2261725
Chaudhury, P., & Tripathy, H. K. (2018). A study on impact of smartphone addiction on academic performance. International Journal of Engineering and Technology, 7(2.6), 50-53. https://doi.org/10.14419/ijet.v7i2.6.10066
Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357. https://doi.org/10.1613/jair.953
Chen, Y.-C. (2006). A study of comparing the use of augmented reality and physical models in chemistry education. VRCIA’06: Virtual Reality Continuum and Its Applications 2006, 1, 369-372. https://doi.org/10.1145/1128923.1128990
Chen, Y., & Zhai, L. (2023). A comparative study on student performance prediction using machine learning. Education and Information Technologies, 28, 12039-12057. https://doi.org/10.1007/s10639-023-11672-1
Cui, W., Sangsongfar, A., & Amdee, N. (2024). A comparative study of the applicability of regression models in predicting student academic performance. Naresuan University Engineering Journal, 19(1), 39-49. https://ph01.tci-thaijo.org/index.php/nuej/article/view/255799
Elkhodr, M., Gide, E., Wu, R., & Darwish, O. (2023). ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis. STEM Education, 3(2), 70-88. https://doi.org/10.3934/steme.2023006
ElSharkawy, G., Helmy, Y., & Yehia, E. (2022). Employability prediction of information technology graduates using machine learning algorithms. International Journal of Advanced Computer Science and Applications, 13(10), 359-367. https://doi.org/10.14569/IJACSA.2022.0131043
Estabrooks, A., Jo, T., & Japkowicz, N. (2004). A multiple resampling method for learning from imbalanced data sets. Computational Intelligence, 20(1), 18-36. https://doi.org/10.1111/j.0824-7935.2004.t01-1-00228.x
García, V., Sánchez, J. S., Marqués, A. I., Florencia, R., & Rivera, G. (2020). Understanding the apparent superiority of over-sampling through an analysis of local information for class-imbalanced data. Expert Systems with Applications, 158, Article 113026. https://doi.org/10.1016/j.eswa.2019.113026
Gutiérrez-Aguilar, O., Huarsaya-Rodriguez, E., & Duche-Pérez, A. (2024). The mediating effect of academic performance on ChatGPT satisfaction in university students. In G. F. Olmedo Cifuentes, D. G. Arcos Avilés, & H. V. Lara Padilla (Eds.), Emerging research in intelligent systems – Proceedings of the CIT 2023 (Vol. 2, pp. 353-365). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-52258-1_26
Hellas, A., Ihantola, P., Petersen, A., Ajanovski, V. V., Gutica, M., Hynninen, T., Knutas, A., Leinonen, J., Messom, C., & Liao, S. N. (2018). Predicting academic performance: A systematic literature review. In G. Rößling & B. Scharlau (Eds.), ITiCSE 2018 companion: Proceedings companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (pp. 175-199). Association for Computing Machinery. https://doi.org/10.1145/3293881.3295783
Hong, Y., Rong, X., & Liu, W. (2024). Construction of influencing factor segmentation and intelligent prediction model of college students’ cell phone addiction model based on machine learning algorithm. Heliyon, 10(8), e29245. https://doi.org/10.1016/j.heliyon.2024.e29245
Kovács, G. (2019). An empirical comparison and evaluation of minority oversampling techniques on a large number of imbalanced datasets. Applied Soft Computing, 83, Article 105662. https://doi.org/10.1016/j.asoc.2019.105662
Musso, M. F., Rodríguez, C. F., & Cascallar, E. C. (2020). Predicting key educational outcomes in academic trajectories: A machine-learning approach. Higher Education, 80, 875-894. https://doi.org/10.1007/s10734-020-00520-7
Nachouki, M., Mohamed, E. A., Mehdi, R., & Abou Naaj, M. (2023). Student course grade prediction using the random forest algorithm: Analysis of predictors’ importance. Trends in Neuroscience and Education, 33, 100214. https://doi.org/10.1016/j.tine.2023.100214
Nayak, P., Vaheed, S., Gupta, S., & Mohan, N. (2023). Predicting students’ academic performance by mining the educational data through machine learning-based classification model. Education and Information Technologies, 28, 14611-14637. https://doi.org/10.1007/s10639-023-11706-8
Newaz, A., Hassan, S., & Haq, F. S. (2022). An empirical analysis of the efficacy of different sampling techniques for imbalanced classification. arXiv. Advance online publication. https://doi.org/10.48550/arXiv.2208.11852
Sharma, N., Appukutti, S., Garg, U., Mukherjee, J., & Mishra, S. (2023). Analysis of student’s academic performance based on their time spent on extra-curricular activities using machine learning techniques. International Journal of Modern Education and Computer Science, 15(1), 46-57. https://doi.org/10.5815/ijmecs.2023.01.04
Imbalanced-learn. (2014). RandomOverSampler. https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.RandomOverSampler.html#imblearn.over_sampling.RandomOverSampler
Wainer, J. (2024). An empirical evaluation of imbalanced data strategies from a practitioner’s point of view. Expert Systems with Applications, 256, 124863. https://doi.org/10.1016/j.eswa.2024.124863
Wongvorachan, T., He, S., & Bulut, O. (2023). A comparison of undersampling, oversampling, and SMOTE methods for dealing with imbalanced classification in educational data mining. Information, 14(1), 54. https://doi.org/10.3390/info14010054
License
Authors who publish with this journal agree to the following terms:
Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under an Attribution 4.0 International (CC BY 4.0) License, which allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).