
Multimodal Sentiment Analysis of Arabic Videos

Hassan Najadat 1 and Ftoon Abushaqra 2
1. Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, Jordan
2. Department of Computer Science, University of Science and Technology, Irbid, Jordan

Abstract—A huge number of videos are posted online every day, and these videos contain an enormous amount of information about people's reactions and opinions. Processing this data requires an effective method. In this paper we propose a multimodal sentiment analysis classifier that uses the voice and facial features of the speaker. In our study, a new dataset of Arabic videos collected from YouTube is presented. Several features were extracted from the videos, including linguistic, audio, and visual features. Each video in the Arabic dataset carries a class label of positive, negative, or neutral. We evaluated several classifiers, including Decision Tree, k-Nearest Neighbor (KNN), naive Bayes, Support Vector Machine (SVM), and neural network, reaching an overall accuracy of 76%.
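The evaluation described in the abstract, comparing several classifier families on one fused feature matrix, can be sketched as follows. This is a hedged illustration only, not the authors' code: the feature dimensions, dataset size, and labels below are synthetic placeholders standing in for the linguistic, audio, and visual features extracted from the Arabic videos.

```python
# Hedged sketch: comparing the classifier families named in the abstract
# on a single fused feature matrix, using scikit-learn.
# The data here is random placeholder data, NOT the authors' dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder for fused linguistic + audio + visual features (40 dims assumed)
X = rng.normal(size=(90, 40))
y = rng.integers(0, 3, size=90)  # 0 = negative, 1 = neutral, 2 = positive

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Neural Network": MLPClassifier(max_iter=500, random_state=0),
}

# Mean 3-fold cross-validation accuracy per classifier
scores = {name: cross_val_score(model, X, y, cv=3).mean()
          for name, model in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

With the authors' real labeled features in place of the random arrays, the same loop would reproduce the kind of per-classifier accuracy comparison the abstract summarizes.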

Index Terms—sentiment analysis, multimodal sentiment analysis, Arabic dataset, features extraction

Cite: Hassan Najadat and Ftoon Abushaqra, "Multimodal Sentiment Analysis of Arabic Videos," Journal of Image and Graphics, Vol. 6, No. 1, pp. 39-43, June 2018. doi: 10.18178/joig.6.1.39-43