Abstract—A huge number of videos is posted online every day, and these videos contain an enormous amount of information about people's reactions and opinions. Processing this data requires an effective method. In this paper we propose a multimodal sentiment analysis classifier that uses the voice and facial features of the speaker. In our study, a new dataset of Arabic videos collected from YouTube is presented. Several features, including linguistic, audio, and visual features, were extracted from the videos. Each video in the Arabic dataset carries a class label of positive, negative, or neutral. We evaluated several classifiers, including Decision Tree, k-Nearest Neighbor (KNN), Naive Bayes, Support Vector Machine (SVM), and a neural network, achieving an overall accuracy of 76%.
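The pipeline the abstract describes, extracting features from each modality, fusing them, and comparing several classifiers, can be sketched as follows. This is a minimal illustration only: the feature dimensions, synthetic data, and label rule below are placeholders, not the paper's actual Arabic-video features or results.

```python
# Sketch of early-fusion multimodal sentiment classification.
# All feature values here are synthetic stand-ins; the paper's real
# linguistic/audio/visual features and dataset are not reproduced.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 300
# Placeholder feature blocks standing in for the three modalities.
linguistic = rng.normal(size=(n, 10))
audio = rng.normal(size=(n, 8))
visual = rng.normal(size=(n, 12))
# Early fusion: concatenate per-modality features into one vector per video.
X = np.hstack([linguistic, audio, visual])
# Synthetic 3-class labels: 0 = negative, 1 = neutral, 2 = positive.
y = np.digitize(linguistic[:, 0] + audio[:, 0] + visual[:, 0], [-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
results = {}
for name, clf in [("DecisionTree", DecisionTreeClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier()),
                  ("NaiveBayes", GaussianNB()),
                  ("SVM", SVC())]:
    results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {results[name]:.2f}")
```

A neural network could be added to the comparison in the same loop (e.g. scikit-learn's `MLPClassifier`); the design point is simply that early fusion lets all the standard classifiers named in the abstract consume the combined feature vector unchanged.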
Index Terms—sentiment analysis, multimodal sentiment analysis, Arabic dataset, feature extraction
Cite: Hassan Najadat and Ftoon Abushaqra, "Multimodal Sentiment Analysis of Arabic Videos," Journal of Image and Graphics, Vol. 6, No. 1, pp. 39-43, June 2018. doi: 10.18178/joig.6.1.39-43