Improving Ensemble Learning through the Reinforcement Learning Idea

Authors

1 Faculty of Computer and Information Technology Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran

2 Department of Computer Engineering, Alzahra University, Tehran, Iran

Abstract

Ensemble learning builds a strong classifier by integrating base classifiers so that the strengths of all of them are combined. Weighting the classifiers in an ensemble, however, remains a major challenge. The challenge arises because ensemble learning typically treats all constituent classifiers as having the same discriminative ability, whereas in different problem settings, and especially in dynamic environments, the performance of the base learners depends on the problem space and on data behavior. Solutions in the literature assume that the problem space is fixed and static, while in reality the situation changes with every input, creating a fully dynamic environment. In this paper, a method based on the reinforcement learning idea is proposed to adjust the weights of the base learners in the ensemble dynamically, according to the problem space. The proposed method receives feedback from the environment and can therefore adapt to the problem space; learning automata are used to receive this feedback and to take the appropriate actions. Sentiment analysis is chosen as a case study because the diversity of data behavior in sentiment analysis is very high, creating an environment with dynamic data behavior. Evaluation on six different datasets, together with a ranking of different values of the learning automata parameters, reveals a significant performance difference between the proposed method and methods from the ensemble learning literature.
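The weighting idea summarized in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration only, assuming a linear reward-inaction (L_R-I) automaton update and a simple weighted vote; `LearningAutomaton`, `reward_step`, and the toy classifiers are assumptions for illustration, not the paper's actual implementation.

```python
import random

class LearningAutomaton:
    """L_R-I-style update over the base classifiers.

    Hypothetical sketch: the paper's exact automaton and reward scheme are
    not reproduced here; this only illustrates reward-driven weight
    adaptation. `reward_step` is an assumed parameter name.
    """
    def __init__(self, n_actions, reward_step=0.05):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities = weights
        self.a = reward_step

    def reward(self, i):
        # Reinforce classifier i and shrink the others; the sum stays 1.
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.a * (1.0 - self.p[j])
            else:
                self.p[j] *= 1.0 - self.a

def ensemble_predict(classifiers, weights, x):
    # Weighted vote over the base classifiers' predicted labels.
    scores = {}
    for clf, w in zip(classifiers, weights):
        y = clf(x)
        scores[y] = scores.get(y, 0.0) + w
    return max(scores, key=scores.get)

# Toy base classifiers on a binary task: one reliable, two random guessers.
clf_good = lambda x: x % 2
clf_noisy1 = lambda x: random.randint(0, 1)
clf_noisy2 = lambda x: random.randint(0, 1)
clfs = [clf_good, clf_noisy1, clf_noisy2]

random.seed(0)
la = LearningAutomaton(len(clfs))
for x in range(500):                   # stream of labelled inputs
    true_y = x % 2
    for i, clf in enumerate(clfs):     # environment feedback per base learner
        if clf(x) == true_y:
            la.reward(i)

print(la.p)  # the reliable classifier tends to accumulate the largest weight
```

Because the automaton keeps updating from environment feedback, the weights track changes in data behavior instead of being fixed once at training time, which is the dynamic-environment property the abstract describes.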

Keywords

