The rapid growth of user-generated unstructured data through social media has raised several challenges and research opportunities. These data constitute a rich source of information for sentiment analysis and help the understanding of spontaneously expressed opinions. In the past few years, many scientific proposals have addressed sentiment analysis issues. However, most of them do not take into account both the spatial and temporal dimensions, which would enable a more accurate analysis. To the best of our knowledge, this combined approach has not received much attention in the literature. In this article, we formalize a spatiotemporal sentiment analysis technique and apply it to a case study of tweets about the FIFA 2014 World Cup. Our approach summarizes sentiment analysis results along the spatial and temporal dimensions and automatically generates opinion change flow maps. The results enable the tracking of opinion changes through combined spatial and temporal analysis.

1. Introduction

The growing interaction between Web services and users generates large volumes of information. The latest generation of the Web does not simply involve browsing it; users actively contribute content through applications, resulting in collective intelligence (O’Reilly, 2007). As a consequence, there has been a proliferation of unstructured information such as blogs, discussion forums, online product evaluation Web sites, microblogs and several types of social networks, bringing new challenges and opportunities to information retrieval (IR). This collective intelligence has spread to several areas, especially those related to everyday life, such as commerce, tourism, education and health, causing an exponential expansion of the social web (Appelquist, 2010). Thus, the ability to understand what people are thinking is fundamental to decision-making processes, especially in a context in which people express themselves in a voluntary way to cooperate with and influence each other (Cambria, et al., 2013; Malinen and Koivula, 2020).

Currently, a field of study that focuses on understanding people’s moods is sentiment analysis. Sentiment analysis aims to identify opinions about a specific topic using polarity detection. Polarity detection categorizes opinions as negative, neutral or positive. Several studies address sentiment polarity detection techniques (Pang and Lee, 2008; Ravi and Ravi, 2015; Sharma and Dey, 2012). Sentiment analysis may be defined as a method for identifying opinions, evaluations, attitudes and emotions regarding many different entities or subjects such as products, services, organizations, individuals, questions, events, and topics, among others (Liu, 2015; Pang and Lee, 2008; Medhat, et al., 2014; Yun, et al., 2018).

Sentiment analysis has prompted the interest of the scientific community and the business world (Zhang and Liu, 2014; Cambria, 2016). According to Feldman (2013), sentiment analysis has been one of the most active research areas of natural language processing (NLP), which is a research field that helps computers communicate with humans in their own language.

In order to provide a better comprehension of a given opinion, we need to consider the spatial and temporal dimensions since, for instance, an opinion about a specific product can change over time or at different geographic locations. A given mobile phone model can be considered a top choice in the current year and obsolete next year. An electronic music festival may be viewed favorably in Belgium and at the same time negatively in Hawaii.

As opinions can be influenced by time and geographic location, sentiment analysis has to consider the spatial and temporal dimensions when assessing an opinion change according to both space and time. In this paper, polarity detection techniques are analyzed in terms of spatial and temporal dimensions. The temporal dimension is used to track changes in opinions over time. For example, political campaigns alter their marketing strategies based on opinion changes prior to an election. The spatial dimension can be very useful in applications that need to understand the sentiments in segmented opinions according to geographic regions. For example, a political party may adapt an electoral campaign to focus on specific regions according to a planned strategy.

In order to demonstrate how the proposed spatiotemporal approach of social media sentiment works, we used a dataset of approximately 200,000 tweets (Twitter messages) related to the 2014 World Cup held in Brazil. We then adopted existing techniques for the polarity recognition of tweets and geographic information retrieval (GIR) techniques to identify geographic references mentioned in the tweet body.

Our main contribution is the proposal of an innovative spatiotemporal social media sentiment analysis approach, which includes spatial, temporal, and spatiotemporal analytics. To the best of our knowledge, this is the first work to address these dimensions together.

The remainder of this article is organized as follows. Section 2 discusses the related work. Section 3 highlights the main concepts related to spatiotemporal sentiment analysis and the formalization of problems addressed in this research work. Section 4 focuses on the description of the proposed approach. Section 5 addresses the proposed evaluation. Section 6 discusses the results. Finally, Section 7 presents the conclusions and highlights further work to be undertaken.

2. Related work

Sentiment analysis has been applied to several applications and purposes, such as identifying the market moods of stock exchange companies based on expert opinions (Koppel and Shtrimberg, 2004; O’Hare, et al., 2009), market prediction (Khadjeh Nassirtoussi, et al., 2015; Zhenkun, 2016), analysis of consumer reviews of products or services (Eirinaki, et al., 2012; Hu and Liu, 2004; Yun, et al., 2018), analysis of places or tourist regions (Bjørkelund, et al., 2012), analysis of politicians (Awadallah, et al., 2012; Tumasjan, et al., 2010), and topics related to politics (Fang, 2012), among others.

Activities related to sentiment analysis involve the detection of subjective or opinionated content, the classification of content polarity, and sentiment summarization. Text sentiment detection occurs at different granularities: the document, sentence, entity, or aspect level (Schouten and Frasincar, 2016; Pang and Lee, 2008). The main approaches for the classification of content polarity have been based on machine learning, semantic analysis and statistical techniques, and lexical analysis or dictionaries. Most state-of-the-art proposals have used techniques based on dictionaries or machine learning, with the latter obtaining better results (Feldman, 2013; Sharma and Dey, 2013), especially with deep learning techniques (Zhang, et al., 2018).

Most of the published work related to sentiment analysis addressed methods to detect sentiment polarity, which is simply sentiment detection. A major limitation in the use of supervised learning is the need for labeled data in order to perform both training and testing tasks. There are several available labeled datasets on movie comments (Sharma and Dey, 2013), products (Hu and Liu, 2004; Pontiki, et al., 2015; Yun, et al., 2018) and hotels (Bjørkelund, et al., 2012) using English vocabulary. These works can be used for training and testing in these specific domains. However, other languages and domains lack labeled data for training classification models.

Some research used automatic techniques utilizing known terms expressing positive and negative feelings as a starting point to collect labeled data (Calais Guerra, 2011). Emoticons and emojis play key roles in online communication on Twitter (Roele, et al., 2020). In microtexts, e.g., tweets, some approaches used known hashtags (#) (Wang, et al., 2011) or emoticons (Pak and Paroubek, 2010; Read, 2005) to collect labeled data automatically, reducing the dependency on manual labeling. Hashtags are used by Twitter users to group topics that are discussed by many users, while emoticons are tokens that convey emotions. Li and Li (2013) found that 87 percent of the tweets containing emoticons have the same sentiments represented by the emoticons in the text. Although emoticons have been shown to be strongly correlated with the sentiments expressed in tweets, they are present in less than 10 percent of the tweets worldwide (Gonçalves, et al., 2013). Hence, a sentiment analysis technique that relies solely on emoticons to determine the polarity of sentiments would considerably limit tweet coverage, as many tweets would be ignored.

Pak and Paroubek (2010) used an emoticon-labeled dataset to train a Naïve Bayes classifier that categorizes opinionated tweets as either positive or negative. Using part-of-speech taggers (POS taggers) and n-grams, the authors studied the distribution of grammatical classes contained in texts to differentiate objective and subjective sentences. Experimental evaluations showed that the proposed techniques were efficient.

Saif, et al. (2016) presented SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. This approach takes into account the co-occurrence of word patterns in different contexts to capture their semantics. Furthermore, the sentiment polarity of each term is not static or predefined, and it is updated according to its context. The authors’ main contribution was a solution based on lexical analysis independent of the domain, not needing a training set.

As seen, many related works performed sentiment analysis in specific domains focused solely on the topic of discussion. However, a more robust sentiment analysis approach needs to include two other dimensions naturally related to almost every topic of discussion: space (Where does the topic/subject apply?) and time (When does the topic/subject apply?). In this context, there were few proposed solutions for sentiment analysis that address both spatial and temporal dimensions. Indeed, some research only used the spatial dimension to visualize sentiment demographic density in different geographic regions (Bjørkelund, et al., 2012; Pino, et al., 2016; Dias, 2012; Cho, et al., 2014; Agarwal, et al., 2018).

Cho, et al. (2014) built a sentiment polarity dictionary for the Korean language to develop a sentiment classifier for tweets that considered geocoded messages to explore temporal and spatial views of the summarization of sentiment on a brand’s reputation. Temporal analysis was performed considering the variation in the numbers of positive and negative posts. This spatial analysis divided South Korea into six geographical areas, analyzing spatial trends in sentiment analysis. The results presented temporal and spatial sentiment changes toward the brand, but failed to address spatiotemporal sentiment changes.

Agarwal, et al. (2018) used the Brexit (British Exit) referendum event to perform spatial sentiment analysis based on the location of the event versus geospatial tweet distribution at a global level. This analysis highlighted both the positive and negative sentiments in many regions around the world. However, the temporal dimension that would allow for analyzing eventual sentiment changes over time was not explored.

Bjørkelund, et al. (2012) conducted spatial and temporal sentiment analysis of online traveler feedback, aiming to provide a better experience when choosing hotels on the Web. The sentiment analysis process was applied to extracted features from various hotels and their geographic regions according to the guests’ point of view. Their main contribution was the use of the temporal dimension to assist the identification of opinion changes and visualization using maps of regional sentiment analysis. The comments on the hotels were collected from the TripAdvisor and Booking.com Web sites, which were already geocoded according to the geographical locations of hotels. The authors used two approaches for sentiment classification: SentiWordNet to obtain a binary classification and the Naïve Bayes classifier to obtain degrees of classification related to five categories. A system was developed to enable users to see hotels according to the sentiment analysis performed on comments since the method easily identified geographic regions that contain the best or worst hotels. Additionally, by clicking on a particular hotel on the map, users could view the history of the sentiment detected on it. However, the spatial dimension was used only to provide a geographical view of the current state of the sentiment analysis; maps of opinion changes over time, which would require spatiotemporal analysis, were not supported.

In order to better address the spatial dimension of sentiment analysis, GIR techniques must be taken into account. GIR — geographic information retrieval — is an ongoing research field that is part of the broader research stream on information retrieval (IR). GIR comprises many proposals addressing geographical information extraction from semistructured documents using natural language processing (NLP) and geoprocessing techniques. GIR algorithms are designed to process text from Web resources (e.g., Web pages, blogs, and social networks) and then assign specific geographic locations to them (Purves and Jones, 2011). Within this context, gazetteers, which are large geographical knowledge databases, have been fundamental for GIR tasks, connecting place names (also known as toponyms) to geographical features or footprints (Keßler, et al., 2009).

A geoparser system (or just geoparser) generally identifies geographical locations in two stages: toponym recognition (geoparsing) and toponym resolution (georeferencing). Geoparsing identifies candidate terms that might refer to a geographical location. Georeferencing assigns real-world coordinates to candidate terms. The key topic concerning GIR in sentiment analysis is the automated identification of the geographical locations mentioned in texts such as tweets, which can enrich data with latitude and longitude coordinates. This kind of spatial data is useful for the spatial visualization and analysis powered by dynamic maps.
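The two stages can be illustrated with a minimal Python sketch. The toy gazetteer entries and the naive capitalized-word recognition below are illustrative assumptions; a real GIR system such as GeoSEn relies on a full geographic database and far more robust toponym recognition:

```python
import re

# Toy gazetteer mapping toponyms to (lat, lon) footprints; a real GIR
# system uses a full geographic knowledge database.
GAZETTEER = {
    "recife": (-8.05, -34.88),
    "salvador": (-12.97, -38.50),
}

def geoparse(text):
    """Stage 1, toponym recognition: collect candidate place-name tokens
    (here, naively, any capitalized word)."""
    return re.findall(r"[A-Z][a-z]+", text)

def georeference(candidates):
    """Stage 2, toponym resolution: map candidates to real-world coordinates."""
    return {c: GAZETTEER[c.lower()] for c in candidates if c.lower() in GAZETTEER}

locations = georeference(geoparse("Great match today in Recife!"))
# locations == {"Recife": (-8.05, -34.88)}
```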

Dias (2012) used GIR techniques to georeference documents from textual evidence, to perform spatial sentiment analysis, and then to generate thematic maps through the summarization of results. The detection of sentiment polarity is performed through a text classification model provided by the LingPipe tool using two polarity scales: binary (positive and negative) and Likert-based, where 1 means a very negative sentiment, 2 means a negative sentiment, 3 means a neutral sentiment, 4 means a positive sentiment, and 5 means a very positive sentiment. To validate the polarity detection techniques, comments from the Yelp.com Web site were collected in several areas including restaurants, hotels and several shops near some universities in the United States. The method achieved an accuracy of 80 percent using the binary scale and an accuracy of 50 percent using the Likert-based scale.

To improve spatial sentiment polarity classification, some works consider only the tweets that have been geocoded (Yang, 2014; Bjørkelund, et al., 2012; Cho, et al., 2014). However, although Twitter enables users to share geocoded messages, several related studies (Oliveira, et al., 2015) demonstrated that only a few tweets contain geographic information. Moreover, there is no guarantee that the text of the message has any relation to the tweet’s geocode. Therefore, to increase the rate of geocoded tweets, it is necessary to use GIR techniques to identify geographical locations mentioned in a given tweet.

Finally, our research work differs from related work by proposing a spatiotemporal approach for social media sentiment. The work proposed in this paper enriches the spatial visualization of sentiment through the automatic generation of sentiment heat maps and opinion change flow maps. This level of sentiment summarization enables a powerful and robust sentiment analysis for each topic.

3. Problem statement

This work uses social media data related to predefined subjects. The proposed sentiment analysis approach considers the document level so that any detected opinion is necessarily related to a predefined subject. For this reason, named entity recognition (NER) techniques, for instance, were not applied because the proposed approach does not perform entity detection.

Although we have focused on Twitter, this research can be applied to other kinds of social media. Furthermore, we consider that a Twitter message presents only one predominant sentiment, although some studies have considered multiple emotions associated with a message (Liu and Chen, 2015).

Once the NER problem is discarded and the entities or themes of the microtexts are known, we use state-of-the-art techniques for the sentiment polarity detection and geocoding of collected microtexts. Therefore, in this section, we formalize the problems addressed in this work and provide a definition of opinion to detect sentiment considering both spatial and temporal dimensions.

The sentiment polarity detection problem is a text classification task. Formally, a classification task seeks to find a function that approximates a classification function $$f : T \rightarrow C$$ where $$f(t_i) = c_j$$ such that $$T$$ represents the texts and $$C=\{c_1, \ldots, c_n\}$$ represents a set of $$n$$ predefined classes for classification. The function $$f$$ associates a text $$t_i \in T$$ with a class $$c_j \in C$$. In this work, the set $$T$$ represents all social media texts, and $$c_j \in C=\{positive, neutral, negative\}$$ represents the predominant polarity (semantic orientation of sentiment) classes. Thus, we want to find a function that generalizes the classification of a comment as expressing a positive, neutral, or negative sentiment.

The text preprocessing of social media texts is performed using NLP techniques to improve polarity detection. These include slang word and abbreviation identification, eventual spelling correction, stop word removal, emoticon and emoji detection, and lemmatization (Rathan, 2018).
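A minimal Python sketch of this preprocessing step follows; the stop word and emoticon lists are tiny illustrative samples, and slang normalization, spelling correction, and lemmatization are omitted for brevity:

```python
import re

STOPWORDS = {"a", "o", "de", "que", "e"}          # tiny illustrative Portuguese sample
EMOTICONS = {"=)": "positive", "=(": "negative"}  # tokens kept as sentiment cues

def preprocess(tweet):
    """Strip URLs and user mentions, detect emoticons, remove stop words,
    and lowercase the remaining tokens."""
    found = [e for e in EMOTICONS if e in tweet]
    text = re.sub(r"https?://\S+|@\w+", "", tweet)
    tokens = [t.lower() for t in re.findall(r"\w+", text)
              if t.lower() not in STOPWORDS]
    return tokens, found

tokens, emoticons = preprocess("Que jogo! =) http://t.co/x @amigo")
# tokens == ["jogo"], emoticons == ["=)"]
```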

The geocoding process uses geographical information retrieval (GIR) techniques in order to identify geographical locations in text. Basically, GIR techniques seek to identify place names within phrases and translate them into geographical coordinates. This is a challenging task with known open issues, since textual references to place names may be ambiguous, misspelled, or too vague, leading to incorrect place name interpretation.

We extended Liu’s (2015) definition of opinion by adding geographic context. Hence, our definition of opinion is represented by the following 6-tuple:

$O_{t_i} = (e_{t_i}, a_{j}^{t_i}, c_j, h_{t_i}, d, l_m)$ (1)

where

• $$e_{t_i}$$ represents an entity evaluated, e.g., products (photography, camera, or smartphone) or events (World Cup or Brexit).

• $$a_{j}^{t_i}$$ is an aspect (feature) of entity $$e_{t_i}$$, with $$a_{j}^{t_i} \in A = \{a_{1}^{t_i}, \ldots, a_{m}^{t_i}\}$$, where $$A$$ represents a set of features of entity $$e_{t_i}$$ (e.g., “battery”, “display”, or “smartphone camera”). This element is used only when a greater level of detail of entities is required through an aspect level approach.

• $$c_j$$ is a class that defines the sentiment polarity in entity $$e_{t_i}$$, or the aspect $$a_{j}^{t_i}$$ when it is considered. This class is obtained through the function $$f$$ described previously.

• $$h_{t_i}$$ is the opinion holder (or source) of message $$t_i$$. For example, in the sentence “I hate this game”, the opinion holder is the text’s author. Named entity recognition (NER) techniques can be applied to correctly identify the opinion holder in a sentence. In our context, the opinion holder is the Twitter user.

• $$d$$ is the date and time (instant) that the opinion was expressed by $$h_{t_i}$$.

• $$l_m$$ represents the geographic location associated with $${t_i}$$, and it is obtained through GIR techniques. The geoparsing system performs the automated identification of the locations mentioned in the text.

$$O_{t_i} = (e_{t_i}, a_{j}^{t_i}, c_j, h_{t_i}, d, l_m)$$ gathers all necessary features for the spatiotemporal approach of social media sentiment analysis. In this work, we consider that the geographical location related to an opinion ($$l_m$$) from the text ($$t_i$$) comes from the output of GIR systems and the aspects $$a_{j}^{t_i}$$ were not considered.
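As an illustration, the 6-tuple of Equation 1 can be represented as a simple data structure; the field names below are our own, chosen for readability, and are not part of the formalization:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class Opinion:
    """The 6-tuple of Equation 1; field names are illustrative."""
    entity: str                              # e: evaluated entity, e.g. "World Cup"
    aspect: Optional[str]                    # a: entity aspect (not used in this work)
    polarity: str                            # c: "positive" | "neutral" | "negative"
    holder: str                              # h: opinion holder (the Twitter user)
    date: datetime                           # d: instant the opinion was posted
    location: Optional[Tuple[float, float]]  # l: (lat, lon) from the geoparser

o = Opinion("World Cup", None, "positive", "@fan",
            datetime(2014, 6, 12, 17, 0), (-23.55, -46.63))
```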

Hence, time and geographic location are also important dimensions for the sentiment analysis problem definition. As time passes or geographic locations change, people may maintain or change their viewpoints about the same subjects (themes).

4. Our proposed approach

This section presents our proposed approach based on the formally defined spatiotemporal sentiment analysis and social media data collected from Twitter. We address the details of the identification of sentiment polarity, how the temporal dimension is considered and, finally, how spatiotemporal summarization of sentiment is performed.

4.1. The spatiotemporal sentiment analysis process

This paper uses sentiment analysis techniques to determine the opinion polarity (positive or negative) expressed in social media and GIR techniques to infer geographic locations from textual evidence. Then, we provide information summarization mechanisms that present a spatiotemporal view of the sentiment in several geographic regions and provide sentiment variation analysis over space and time. The summarization proposed here aims to improve the data analysis task since it gathers both geographical and temporal information regarding detected opinions.

The approach proposed in this study is composed of four steps: Extraction, Classification, Geocoding and Summarization. This approach is shown in Figure 1.

 Figure 1: Overview of the proposed spatial-temporal sentiment analysis.

In the Extraction stage, tweet data are stored in the database to be analyzed by both the sentiment polarity Classification process and the Geocoding process, which identifies geographic regions associated with tweet content. The temporal information consists of the date and time when the tweet was posted. Finally, using the results of these processes, the Summarization stage provides the mechanisms of spatial-temporal sentiment analysis, including the detection of opinion changes.

4.2. Sentiment polarity classification and text geocoding

As shown in Figure 1, our proposed approach has a step for sentiment polarity classification and another step for geocoding spatial information from text.

Neural networks have emerged as a powerful machine learning technique and have produced relevant results in many application domains, including computer vision, speech recognition, and NLP; deep learning in particular has recently become very popular in sentiment analysis (Zhang, et al., 2018; R. Sharma, et al., 2018). Following this trend, we evaluated and compared the performance of neural network-based and SVM-based classifier models in sentiment polarity classification, automatically identifying positive, negative, or neutral sentiment in tweets, as described in the following:

• We developed a sentiment classifier based on a multilayer perceptron (MLP) (Marsland, 2014). In this article, we used an MLP with three layers: input, hidden, and output. The input layer contains three nodes, the hidden layer contains 36 nodes, and the output layer contains three nodes, each corresponding to a sentiment of interest: positive, neutral, or negative. We used the sigmoid as an activation function in addition to backpropagation and a limited-memory BFGS (L-BFGS) as a convergence optimization algorithm.

• The SVM-based sentiment classifier comes from a previous work (Alves, et al., 2014), where it presented good results for tweets written in Portuguese (accuracy up to 88 percent), especially when applied to a dataset extracted from social media that presents several challenges (Fersini, et al., 2016) in the NLP and sentiment analysis contexts. Our sentiment classifier uses lexical analysis and supervised machine learning to provide an output regarding the sentiment polarity of a text (e.g., tweet): +1 indicates a positive sentiment, -1 indicates a negative sentiment, and 0 indicates that the inferred sentiment is neutral. The lexical analysis comprises special token analysis (e.g., emoticons), analysis based on the term frequency-inverse document frequency (TF-IDF), and part-of-speech tagging (POS tagging). The supervised machine learning model combines the bag-of-words approach with the SVM classifier and manual labeling to determine the sentiment polarity in each text.
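As an illustration of the MLP topology described above (3 input nodes, 36 hidden nodes, 3 output nodes, sigmoid activation), the following NumPy sketch performs a single forward pass. The weights here are randomly initialized purely for demonstration, whereas in the actual classifier they are learned through backpropagation with L-BFGS; the three input features are hypothetical lexicon-style scores:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights for the 3-36-3 topology, randomly initialized for illustration;
# the real classifier learns them via backpropagation and L-BFGS.
W1, b1 = rng.normal(size=(3, 36)), np.zeros(36)
W2, b2 = rng.normal(size=(36, 3)), np.zeros(3)
CLASSES = ["positive", "neutral", "negative"]

def classify(features):
    """One forward pass through the 3-36-3 sigmoid network."""
    hidden = sigmoid(features @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    return CLASSES[int(np.argmax(output))]

label = classify(np.array([0.9, 0.1, 0.0]))  # hypothetical lexicon scores
```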

We adopted GIR techniques in the step that geocodes spatial information from text to increase the rate of geocoded tweets. We decided to adopt the GeoSEn (geographic search engine) system (Campelo and de Souza Baptista, 2009) mainly due to its good results when geocoding Web-based documents and, recently, tweets (Oliveira, et al., 2015). The GeoSEn system performs text geoparsing in documents by identifying the textual expressions that reference geospatial locations and then returning their respective geographical coordinates, which, for instance, is useful for showing information on a dynamic map. Oliveira, et al. (2015) incorporated some adjustments into the GeoSEn system that enabled it to geoparse microtexts, such as tweets, at spatial granularities below the city level, such as districts, streets, and points-of-interest (POIs). Moreover, the GeoSEn system uses a geographic database powered by OpenStreetMap (OSM) with several locations in Brazil (geo database module) at the spatial granularity of cities, comprising all Brazilian cities. These facts supported our decision to use the GeoSEn geoparsing module to make geographical inferences in our proposal. However, other geoparsers can be used in place of the GeoSEn system. The choice will mainly depend on the spatial context (geographic database and spatial granularity) related to the data to be classified.

4.3. Capturing spatiotemporal opinion change

A person’s opinion about an event, object, or another person is directly associated with his or her interests. These personal interests are subjective and particular and can change over time. Thus, a person’s opinion about something can also change as a consequence of a change in his or her personal interests. This change in opinion can occur over time and space. For example, before a match starts, a crowd may extol the way a team plays and, by the end of that match, criticize that same style of play. A change can also occur in relation to space, for example, when a fan praises the World Cup event in one city and criticizes the same event in another city.

An opinion change occurs when the same user holds more than one opinion on a given theme (the World Cup, for example) and these opinions have different polarities. Thus, to verify whether a user has changed his or her opinion, the user must have expressed at least two opinions. There is a change in opinion when the polarities differ; otherwise, the opinion does not change. Situations where a user (opinion holder) issued a positive (or negative) opinion and then issued a neutral opinion (an objective text) were not regarded as opinion changes, because texts classified as objective (not opinionated) are discarded in the sentiment summarization process. Texts containing no expressions of sentiment or opinion are not interesting for sentiment analysis purposes.

Let $$u$$ be any user and $$O_u$$ be the set of opinions emitted by user $$u$$ on the same subject in a given geographic location $$l_i$$. The algorithm (CSTOC — capture of spatiotemporal opinion change) in Table 1 shows how an opinion change is detected. The CSTOC algorithm starts by ordering all user opinions by the dates on which they were posted (Line 1). In Line 4, a set $$OC_u$$ that stores the opinion changes of user $$u$$ in different locations is defined. Then, in Lines 6 to 13, the collection of user opinions is iterated to check whether the current opinion is valid (positive or negative) and whether the polarity and location $$l_i$$ have changed. When the condition in Line 7 is satisfied, two pairs, one containing both the last location and opinion polarity and another containing both the new location and opinion polarity, are added to the $$OC_u$$ list. At the end of the algorithm, all opinion changes have been identified.

 Table 1: Algorithm CSTOC (capture of spatiotemporal opinion change).
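A Python sketch of the CSTOC algorithm follows, assuming each opinion is a (date, location, polarity) tuple and that a change is recorded only when both the polarity and the location differ, per the Line 7 condition:

```python
def cstoc(opinions):
    """Sketch of the CSTOC algorithm (Table 1) for one user. Each opinion
    is a (date, location, polarity) tuple; neutral (objective) texts are
    discarded before change detection."""
    ordered = sorted(opinions)                     # Line 1: order by posting date
    valid = [(loc, pol) for _, loc, pol in ordered
             if pol in ("positive", "negative")]   # discard neutral opinions
    changes = []                                   # the OC_u set
    for (prev_loc, prev_pol), (loc, pol) in zip(valid, valid[1:]):
        if pol != prev_pol and loc != prev_loc:    # Line 7 condition
            changes.append(((prev_loc, prev_pol), (loc, pol)))
    return changes

opinions = [("2014-06-12", "Recife", "positive"),
            ("2014-06-13", "Recife", "neutral"),
            ("2014-06-14", "Salvador", "negative")]
changes = cstoc(opinions)
# changes == [(("Recife", "positive"), ("Salvador", "negative"))]
```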

4.4. Opinion summarization

Once the opinion polarity detection and geocoding processes are performed, the results are indexed in the database for the opinion summarization stage, which is performed with the Data Summarization module. The summarization proposal of this study explores the following aspects:

• Temporal analysis of sentiment: Associates positive and negative sentiments over time;
• Spatial visualization of sentiment: Generates maps of detected sentiments using heat maps; and,
• Spatiotemporal visualization of sentiment: Exploits both spatial and temporal dimensions to generate opinion change flow maps, which is the novel approach proposed by this work.

We performed temporal analysis using temporal distribution graphs. These graphs group the number of messages by date, enabling us to trace the behavior of sentiment and assisting in the detection of time intervals containing opinion changes. An alternative way to visualize the variation in the positive and negative polarities of tweets is to use the fractions (or percentages) of positive and negative tweets per day. This alternative allows one to analyze sentiment variation regardless of the number of tweets with sentiments, revealing the sentiment orientation per day throughout the observation period. Another way to obtain the semantic orientation of the general sentiment expressed in microtexts is to subtract the number of messages with negative sentiment from the number of messages with positive sentiment.
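The per-day fraction computation can be sketched as follows; the input format (day string, polarity) is an illustrative assumption:

```python
from collections import Counter, defaultdict

def daily_fractions(messages):
    """Group classified messages by day and compute the fraction of
    positive and negative posts, independent of the daily volume."""
    by_day = defaultdict(Counter)
    for day, polarity in messages:
        if polarity in ("positive", "negative"):   # neutral posts are ignored
            by_day[day][polarity] += 1
    return {day: {pol: count / sum(counts.values())
                  for pol, count in counts.items()}
            for day, counts in sorted(by_day.items())}

fractions = daily_fractions([("2014-06-12", "positive"),
                             ("2014-06-12", "positive"),
                             ("2014-06-12", "negative")])
# fractions["2014-06-12"]["positive"] == 2/3
```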

We used heat maps to generate sentiment maps. A heat map shows the density or magnitude of the analyzed information in relation to geographic locations. Heat maps enable spatial sentiment analysis to detect behavior in different regions, which in our case are Brazilian states. To illustrate the density distribution (heat) on the map, we applied rendering functions according to defined color styles and the georeferenced sentiment dataset. A typical transformation calculates the aggregation to be performed according to the input data, thereby rendering the map. In addition, we used a rendering function that takes a set of weighted geographical points to create an area of heat on the map. We obtained the weight for each geographic region from the proportion of positive and negative sentiment in each detected location, and we set the style that illustrates the density distribution according to the color scale presented in Figure 2. The color scale is applied by a rasterization function that uses the weight (number of tweets) of each geographical region and the geographical groupings on the map. Thus, our tool obtains the heat map through database queries that return the geographic locations and, once the groupings are computed, visualizes the density.

 Figure 2: Color scale defined by the proportion of positive and negative sentiment used to weigh geographic data in the sentiment heat map.
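A possible per-region weighting function matching this description is sketched below; the exact signed-proportion formula is an illustrative assumption, not the precise function used by our tool:

```python
def region_weight(positive, negative):
    """Signed proportion of positive vs. negative tweets in a region,
    in [-1, 1]: -1 = all negative, 0 = balanced, +1 = all positive.
    This exact formula is an illustrative assumption."""
    total = positive + negative
    return (positive - negative) / total if total else 0.0

weight = region_weight(30, 10)  # 0.5: the region leans positive
```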

Our new spatiotemporal sentiment analysis approach uses the algorithm from Table 1 to generate opinion change maps. The main idea is to create maps in which opinion changes are portrayed as colored arrows. The color shows the polarity of the new sentiment associated with the issuer. The direction shows the chronological change between the regions associated with the opinions. The width represents the number of opinions: the wider the arrow, the greater the number of opinions related to the context it represents. By grouping the opinion changes of individual users, we can visualize the changes in the spatial and temporal information of a group of people. Thus, it comprises an important tool for understanding facts and supporting decision-making. All approaches described in this section are used in Section 6.
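The arrow aggregation can be sketched as follows, taking as input pairs of ((location, polarity), (location, polarity)) changes such as those produced by the CSTOC algorithm (Table 1); the keying scheme is an illustrative assumption:

```python
from collections import Counter

def arrows(changes):
    """Aggregate individual opinion changes into flow-map arrows. Each
    arrow is keyed by (origin, destination, new polarity); its count
    drives the arrow width, and the new polarity drives its color."""
    return Counter((src_loc, dst_loc, dst_pol)
                   for (src_loc, _), (dst_loc, dst_pol) in changes)

widths = arrows([(("Recife", "positive"), ("Salvador", "negative")),
                 (("Recife", "positive"), ("Salvador", "negative"))])
# widths[("Recife", "Salvador", "negative")] == 2
```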

5. Evaluation

This section addresses experiments performed to evaluate the proposed approach, including the characteristics of the tweets dataset used and metrics adopted to analyze results.

5.1. Tweet dataset

For the experiments related to this paper, the Twitter API was used to create a tweet crawler. The crawler collected approximately 200,000 tweets in Portuguese related to the FIFA World Cup held in Brazil in 2014. It was set to run every day and collected the previous day's tweets containing a set of predefined terms. Figure 3 shows the number of tweets obtained for each term used in the collection.

 Figure 3: Number of tweets by searched terms in our dataset.

The data collection period was from 29 April to 13 July 2014, covering the period of the competition, which took place in Brazil from 12 June to 13 July 2014. The data period is important to understand the behaviour of the sentiment expressed by users. Figure 4 illustrates the daily number of tweets in our dataset over the collection period.

 Figure 4: Daily distribution of tweets in our dataset.

The preprocessing of the texts was performed using NLP techniques to identify and eliminate terms that do not contribute to sentiment polarity identification, such as stopwords, links (URLs), and user mentions. To support this process, we used Apache OpenNLP, a machine learning-based toolkit for natural language processing.
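
The paper performed this step with Apache OpenNLP; a minimal regex-based sketch of the same cleanup (with a tiny, purely illustrative stopword list) is:

```python
import re

# Tiny illustrative Portuguese stopword list; the real list is larger.
STOPWORDS = {"a", "o", "de", "que", "e"}

def preprocess(text):
    """Strip URLs and user mentions, lower-case, tokenize, and drop
    stopwords -- a sketch of the preprocessing described above."""
    text = re.sub(r"https?://\S+", "", text)  # links (URLs)
    text = re.sub(r"@\w+", "", text)          # user mentions
    tokens = re.findall(r"\w+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]
```

For example, `preprocess("RT @user Copa do Mundo http://t.co/x e o Brasil")` yields `["rt", "copa", "do", "mundo", "brasil"]`.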

In order to train and evaluate the supervised machine learning algorithms for detecting sentiment polarity, we needed a set of tweets already labeled with sentiment polarities. The tweet sentiment labeling task was performed using two distinct methods: automatic labeling using emoticons, and manual labeling, in which human volunteers identified the sentiment of tweets. For automatic labeling, we selected only tweets containing emoticons, because they make tweet polarity easy to identify, e.g., tweets with “=)” indicate a positive opinion, and tweets with “=(” indicate a negative opinion. The manual labeling task used a total of 1,500 randomly selected tweets. Sixteen people performed the labeling, assigning a value of 0 for neutral opinions, 1 for positive opinions, and -1 for negative opinions. After labeling, we kept only tweets that were reviewed by at least three people. Both labeling methods were used in the training and testing steps. Table 2 presents the number of tweets labeled with each sentiment polarity according to the labeling method. We used only positive and negative polarity in automatic labeling because no emoticon reliably represents neutral polarity. In manual labeling, however, all tweets were labeled by humans, which gave us better control over labeling neutral polarity.
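
The emoticon-based automatic labeling described above can be sketched as follows (the emoticon sets are illustrative, not the paper's exact lists):

```python
POSITIVE = {"=)", ":)", ":-)"}
NEGATIVE = {"=(", ":(", ":-("}

def auto_label(text):
    """Label a tweet 1 (positive) or -1 (negative) from its emoticons;
    tweets with no emoticon, or with conflicting emoticons, are skipped
    (None), matching the automatic-labeling policy above."""
    pos = any(e in text for e in POSITIVE)
    neg = any(e in text for e in NEGATIVE)
    if pos and not neg:
        return 1
    if neg and not pos:
        return -1
    return None
```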

 Table 2: Labelling task results.
 Method                Positive   Negative   Neutral   Total
 Automatic labelling   1,468      492        —         1,960
 Manual labelling      461        333        353       1,227

In manual labelling, of the 1,500 tweets randomly assigned to the dataset, 80 tweets were not labelled by volunteers and 193 were discarded under majority vote, due to divergence among volunteers in sentiment polarity identification. Hence, only 1,227 manually labelled tweets were actually considered; among these, considering only opinionated tweets (positive and negative), approximately 58 percent of tweets expressed positive feelings. Compared to automatic labelling, manual labelling clearly showed a better balance among the sentiment classes detected by the volunteers.
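
The majority-vote consolidation described above can be sketched as follows (an illustrative helper, not the authors' code):

```python
from collections import Counter

def majority_label(votes, min_reviewers=3):
    """Consolidate manual labels (-1, 0, 1) by majority vote.  Tweets
    reviewed by fewer than three people, or with a tied vote (divergence
    among volunteers), are discarded (None), mirroring the filtering
    described above."""
    if len(votes) < min_reviewers:
        return None
    (top, n_top), *rest = Counter(votes).most_common()
    if rest and rest[0][1] == n_top:  # tie -> divergence, discard
        return None
    return top
```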

We processed the entire tweet dataset by applying the GeoSEn geoparser to the tweets, following the methodology described by de Oliveira, et al. (2014). It was possible to automatically identify geographic locations in 28,787 tweets, with 74.08 percent accuracy, 92.30 percent precision, and 52.55 percent recall. To obtain these metrics, we performed a supervised evaluation including manual validation based on a Likert scale, considering a random sample of these tweets validated by volunteers. The Likert scale is useful in this kind of application because an inferred location may not exactly match the location expressed in the tweet (an exact match would receive five stars) but may be geographically near the precise location. For instance, a tweet may mention a specific city, but the geoparser correctly identifies only the state or county where the city is geographically located.

The maps in Figure 5(a) and Figure 5(b) highlight the Brazilian federal states that clustered more tweets. The heat map in Figure 5(b) highlights a concentration of tweets situated in the southeast region, mainly due to the number of tweets related to the state of São Paulo. This tweet distribution can be explained as the cities that hosted most of the matches in the FIFA 2014 World Cup are located in the southeast and northeast regions of Brazil.

 Figure 5: a) Distribution of tweets by Brazilian Federal States; b) Heat map of the tweets dataset.

5.2. Evaluation metrics

Metrics such as accuracy, precision, recall, and F-measure are frequently used in the information retrieval literature (Egghe, 2008). In this article, we used these metrics to evaluate the results of the polarity detection algorithms. To evaluate the algorithms that use supervised machine learning techniques to identify sentiment polarity, a fraction of the labelled tweets is reserved to train the model, while the remaining fraction is used to test the sentiment classifier and compare its output against the assigned labels. To evaluate the generalization capacity of the classification models, we used $$k$$-fold cross-validation with $$k=10$$ (10-fold).
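
For a single class, these metrics follow the standard definitions (this sketch is not the authors' evaluation code): precision = TP/(TP+FP), recall = TP/(TP+FN), and the F-measure is their harmonic mean.

```python
def precision_recall_f1(gold, predicted, positive=1):
    """Standard single-class IR metrics over paired gold/predicted labels."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, predicted))
    fp = sum(g != positive and p == positive for g, p in zip(gold, predicted))
    fn = sum(g == positive and p != positive for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Under 10-fold cross-validation, these scores are computed on each held-out fold and averaged.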

6. Results

In this section, we present the results obtained in the sentiment identification of tweets, and explain temporal and spatial summarization.

6.1. Sentiment identification

Table 3 presents the results obtained from the SVM and MLP classification models. In the $$k$$-fold cross-validation method, the dataset used for training and testing comprises all of the labeled tweets.

 Table 3: Sentiment classifier results.
 Classifier                    Accuracy   Class              Precision   Recall   F-score
 MLP — Multilayer perceptron   0.901      Positive           0.943       0.967    0.956
                                          Negative           0.804       0.727    0.765
                                          Weighted average   0.898       0.888    0.893
 SVM                           0.805      Positive           0.839       0.873    0.873
                                          Negative           0.715       0.657    0.685
                                          Weighted average   0.799       0.802    0.800

After comparing the results, we decided to use an MLP classifier model to set the sentiment polarity for each gathered tweet.

6.2. Summarization

After indexing the results of the sentiment polarity of tweets, the next step was to perform the summarization of the results to understand the sentiment of the population during the period of competition. This summarization was made considering the temporal, spatial, and spatiotemporal dimensions.

Temporal distribution graphics, which group the number of messages by date, make it possible to trace the behaviour of the general sentiment detected. Figure 6 illustrates a temporal distribution graphic of the positive and negative feelings detected over the analyzed period.

 Figure 6: Number of positive and negative tweets.

Another way to visualize the variation of tweets between positive and negative polarities is through the fractions (percentages) of positive and negative tweets per day. This makes it possible to analyze the variation of sentiment regardless of the number of sentiment-bearing tweets, while still perceiving the prevailing direction of sentiment on each day of the observed period. Figure 7 illustrates the proportions of positive and negative tweets per day. For example, we found that on 24 June 2014, approximately 90 percent of the daily tweets showed positive sentiments, and only 10 percent showed negative sentiments.

 Figure 7: Fraction of positive and negative tweets.

It can be inferred from Figure 7 that, in general, the predominant feeling detected through sentiment analysis is positive during the entire period, with an average of approximately 80 percent of tweets expressing positive feelings and 20 percent expressing negative feelings. This indicates that Brazilians were satisfied with the 2014 World Cup being hosted in Brazil. However, based on the graphic in Figure 7, it is clear that on 8 July 2014 there was an atypical growth of tweets with negative sentiments compared to the average for the whole period: on this day, approximately 38 percent of the day’s tweets showed negative polarity.
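
The per-day fractions plotted in Figure 7 can be computed as in this sketch, assuming a hypothetical (date, polarity) tuple layout with polarity encoded as 1 or -1:

```python
from collections import defaultdict

def daily_fractions(tweets):
    """Return, for each day, the (positive, negative) fractions among
    sentiment-bearing tweets, independent of the daily tweet volume."""
    counts = defaultdict(lambda: [0, 0])  # date -> [n_positive, n_negative]
    for date, polarity in tweets:
        counts[date][0 if polarity == 1 else 1] += 1
    return {d: (p / (p + n), n / (p + n)) for d, (p, n) in counts.items()}
```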

Using the semantic orientation, we can summarize the sentiment expressed in the micro-texts, generating the graphic shown in Figure 8, which illustrates the semantic orientation of the overall sentiment.

 Figure 8: Semantic orientation.
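
The article does not spell out its semantic orientation formula; one common choice, shown here purely as an assumption, is the smoothed log-odds of positive versus negative counts, where a value above zero indicates overall positive orientation:

```python
import math

def semantic_orientation(n_pos, n_neg):
    """Smoothed log-odds of positive vs. negative tweet counts
    (add-one smoothing avoids division by zero and log of zero).
    This is an assumed definition, not necessarily the paper's."""
    return math.log((n_pos + 1) / (n_neg + 1))
```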

Given the geographical inferences in tweets with sentiments (positive or negative), the maps in Figure 9 were generated to assist spatial sentiment analysis at the state and regional levels. These maps help end users to understand the sentiment detected in the various geographic locations regardless of the number of tweets in each state or region. For example, it is clear that sentiment was positive in all regions of Brazil, in accordance with the graphics of Figure 10 and Figure 11. However, at the state level, it can be seen that one state had a higher prevalence of tweets with negative sentiments. It is also apparent from the color tones that some Brazilian states had a higher fraction of positive tweets than negative tweets.

 Figure 9: a) Sentiment polarity by state; b) Sentiment polarity by region.

 Figure 10: Spatial distribution of sentiment polarity.

The heat map in Figure 10 illustrates the concentration of tweets in regions, considering both the number of tweets and the sentiment variation detected near those regions. Some regions are disregarded in the analysis because their number of tweets is insignificant compared to other regions.

In Figure 11(a), the regions in blue represent the density of opinion changes detected in all tweets. There is a concentration of opinion change in the northeast and southeast regions. Figure 11(b) shows the density of opinion changes by state; the predominance of opinion changes in the states of São Paulo and Minas Gerais is evident. The areas and regions in blue are those where opinion changes are concentrated.

 Figure 11: a) Opinion change concentration; b) Opinion change concentration by Brazilian states.

6.2.1. Spatiotemporal summarization

Figure 12 presents an opinion change flow map produced by the algorithm from Table 1. This new approach summarizes opinion change flows over time, considering geographical location, by using directional, variable-width arrows and intuitive colors on a map; it is described in more detail in Section 4.3. In the opinion change flow map, the center point of each of the five Brazilian regions is used, as we analyzed Brazil's macro-regions. For example, there was a change in opinion polarity from positive in the south region (S) to negative in the north region (N). Similarly, there was a change in opinion polarity from the north region, where opinions were negative, to the northeast region, where opinions became positive. To better visualize spatiotemporal opinion change, we produced two maps (Figure 13): the first contains only the negative polarity flows (Figure 13(a)), and the second contains only the positive polarity flows (Figure 13(b)).

 Figure 12: Opinion change flow map.

 Figure 13: Opinion change flow maps: a) Negative; b) Positive.

For example, the line that connects the north and northeast Brazilian regions is green, indicating a change of opinion to positive polarity. As the arrow points to the northeast, the change of opinion occurred toward this region: users who had expressed negative opinions on the World Cup in the north later expressed positive opinions in the northeast. However, the number of opinions relating to these regions (north and northeast) is much lower than the number relating to the northeast and southeast regions, which are linked by a wider, green, bidirectional arrow. This indicates a change of opinion to positive polarity in both directions; i.e., users issued negative opinions about both regions and later issued positive opinions related to them.

6.2.2. Historical match: Brazil 1 x 7 Germany

This paper also analyzed tweets from 8 July 2014, the day on which the Brazilian team suffered one of the biggest defeats in its history, losing 1x7 to the German team.

Figure 14 shows two sentiment maps at two different periods of that match: a) tweets sent during the first half of the match; and, b) tweets sent during the second half, at which point the score was already 5x0 in favour of Germany. Comparing the map of Figure 14(b) with Figure 14(a), it can be seen that tweets with negative polarity increased. This can be explained by the fractional growth in negative tweets, as shown in Figure 7.

 Figure 14: Spatial sentiment during a) first half and b) second half of that match.

It is important to keep in mind that the sentiment maps presented in this paper include only tweets with geographical references inferred by the GeoSEn system.

We also performed a study of the opinion changes that occurred during the game. The maps in Figure 15 present the opinion changes: a) regions with a larger concentration of opinion changes; b) regions in which opinion changed from negative to positive; and, c) regions in which opinion changed from positive to negative.

 Figure 15: The concentration of opinion changes: a) positive and negative; b) only positive; and, c) only negative.

7. Conclusion

The growing volume of subjective content provided by Web 2.0 as a result of social media has made sentiment analysis an increasingly attractive research field. Sentiment analysis offers organizations the ability to monitor opinions from social media in real time, thus providing support for their decision-making processes. In this article, we proposed an approach to the spatiotemporal sentiment analysis of social media. Through the application of sentiment polarity techniques and GIR techniques, opportunities for sentiment summarization and visualization considering the spatial, temporal, and spatiotemporal dimensions can be offered.

In addition to the implementation and comparison of two sentiment classification models, our proposal includes GIR techniques to infer geographic locations from textual evidence contained in tweets and to explore sentiment summarization in a spatiotemporal scenario. In the temporal sentiment analysis, the goal was to generate graphics that make it possible to trace sentiment over the chosen period by quantifying texts with positive and negative sentiments. These graphics allow us to understand the semantic orientation of the general sentiment expressed by the population. In the spatial sentiment analysis, this study addressed the generation of geographical sentiment maps, whereby the amount of sentiment detected in the geographic locations is reflected by the intensity indicators in the heat map. Finally, considering the spatial, temporal, and spatiotemporal dimensions, we could generate sentiment flow maps, in which users who issued different opinions over time in different geographical locations are identified. With this approach, it is possible to graphically visualize, through maps, the changes in opinion that have occurred in geographical locations through time.

Based on the results obtained in this research, in future work we plan to explore temporal series to predict sentiments in various geographic regions using a sentiment analysis approach at the aspect level. Thus, through named entity recognition (NER), polarity detection at the level of the analyzed entities' characteristics will be enabled. In addition, we plan to insert a spatiotemporal dimension into sentiment analysis to better capture sentiments in the presence of slang and of terms subject to temporal influence. We also consider improvements based on the recognition of figurative language, such as sarcasm and irony.

André Luiz Firmino Alves is a Professor at the Federal Institute of Paraiba and a Ph.D. candidate at the University of Campina Grande, Brazil. He holds a MSc. and BSc. in computer science from the University of Campina Grande, Brazil. His research interests are in natural language processing with an emphasis on the analysis of sentiment and emotions.
E-mail: andre [dot] alves [at] ifpb [dot] edu [dot] br

Cláudio de Souza Baptista received the B.S. and M.S. degrees in computer science from the Federal University of Paraiba, Brazil and the Ph.D. degree in computer science from the University of Kent at Canterbury. He is a Professor in the Computer Science Department at the Federal University of Campina Grande, Brazil, where he leads the Information Systems Laboratory (LSI). He is the author of more than 130 articles. His research interests include text mining, databases, GIS, decision support systems and multimedia.
E-mail: baptista [at] computacao [dot] ufcg [dot] edu [dot] br

Davi Oliveira Serrano de Andrade received the B.S. (2012) and M.S. (2015) degrees in computer science from the University of Campina Grande, Campina Grande, Paraíba, Brazil. His research interests include machine learning applications, computer science applied to finance, and natural language processing.
E-mail: davi [dot] teife [at] gmail [dot] com

Maxwell Guimarães de Oliveira is Lecturer and Researcher at the Systems and Computing Department of the Federal University of Campina Grande (UFCG), Brazil. He holds a Ph.D. in Computer Science from UFCG, with a sandwich period at University College Dublin (UCD), Ireland, as well as master’s and bachelor’s degrees in computer science from UFCG and the Federal University of Alagoas (UFAL), respectively. He is currently a member of SINBAD (Information Systems and Database Group). His research focuses on geographic data and data science, with contributions involving artificial intelligence, machine learning, natural language processing, information retrieval and social media mining in several application domains.
E-mail: maxwell [at] computacao [dot] ufcg [dot] edu [dot] br

Aillkeen Bezerra de Oliveira received a degree in computer science from the Federal University of Campina Grande (UFCG), Campina Grande, Paraíba, PB, Brazil. He works as a developer and researcher at the Information Systems Laboratory (LSI/UFCG). He has experience in databases, information systems, artificial intelligence, data mining and systems for mobile devices. His research interests include artificial intelligence, natural language processing and data mining.
E-mail: aillkeenoliveira [at] gmail [dot] com

References

A. Agarwal, R. Singh, and D. Toshniwal, 2018. “Geospatial sentiment analysis using twitter data for UK-EU referendum,” Journal of Information and Optimization Sciences, volume 39, number 1, pp. 303–317.
doi: https://doi.org/10.1080/02522667.2017.1374735, accessed 25 July 2021.

D. Appelquist, D. Brickley, M. Carvahlo, R. Iannella, A. Passant, C. Perey, and H. Story, 2010. “A standards-based, open and privacy-aware social Web,” W3C Incubator Group Report (6 December), at https://www.w3.org/2005/Incubator/socialweb/XGR-socialweb-20101206/, accessed 25 July 2021.

R. Awadallah, M. Ramanath, and G. Weikum, 2012. “PolariCQ: Polarity classification of political quotations,” CIKM ’12: Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pp. 1,945–1,949.
doi: https://doi.org/10.1145/2396761.2398549, accessed 25 July 2021.

E. Bjørkelund, T.H. Burnett, and K. Nørvåg, 2012. “A study of opinion mining and visualization of hotel reviews,” IIWAS ’12: Proceedings of the 14th International Conference on Information Integration and Web-Based Applications & Services, pp. 229–238.
doi: https://doi.org/10.1145/2428736.2428773, accessed 25 July 2021.

P.H. Calais Guerra, A. Veloso, W. Meira, and V. Almeida, 2011. “From bias to opinion: A transfer-learning approach to real-time sentiment analysis,” KDD ’11: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 150–158.
doi: https://doi.org/10.1145/2020408.2020438, accessed 25 July 2021.

E. Cambria, 2016. “Affective computing and sentiment analysis,” IEEE Intelligent Systems, volume 31, number 2, pp. 102–107.
doi: https://doi.org/10.1109/MIS.2016.31, accessed 25 July 2021.

E. Cambria, B. Schuller, Y. Xia, and C. Havasi, 2013. “New avenues in opinion mining and sentiment analysis,” IEEE Intelligent Systems, volume 28, number 2, pp. 15–21.
doi: https://doi.org/10.1109/MIS.2013.30, accessed 25 July 2021.

C.E.C. Campelo and C. de Souza Baptista, 2009. “A model for geographic knowledge extraction on Web documents,” In: C.A. Heuser and G. Pernul (editors). Advances in conceptual modeling — Challenging perspectives. Lecture Notes in Computer Science, volume 5833. Berlin: Springer, pp. 317–326.
doi: https://doi.org/10.1007/978-3-642-04947-7_38, accessed 25 July 2021.

S.W. Cho, M.S. Cha, S.Y. Kim, J.C. Song, and K.-A. Sohn, 2014. “Investigating temporal and spatial trends of brand images using Twitter opinion mining,” 2014 International Conference on Information Science & Applications (ICISA).
doi: https://doi.org/10.1109/ICISA.2014.6847417, accessed 25 July 2021.

D.C. Dias, 2012. “Text mining methods for mapping opinions from georeferenced documents,” Master’s thesis in information systems and computer engineering, Universidade Técnica de Lisboa, at https://fenix.tecnico.ulisboa.pt/downloadFile/395144612095/extended_abstract.pdf, accessed 25 July 2021.

L. Egghe, 2008. “The measures precision, recall, fallout and miss as a function of the number of retrieved documents and their mutual interrelations,” Information Processing & Management, volume 44, number 2, pp. 856–876.
doi: https://doi.org/10.1016/j.ipm.2007.03.014, accessed 25 July 2021.

M. Eirinaki, S. Pisal, and J. Singh, 2012. “Feature-based opinion mining and ranking,” Journal of Computer and System Sciences, volume 78, number 4, pp. 1,175–1,184.
doi: https://doi.org/10.1016/j.jcss.2011.10.007, accessed 25 July 2021.

Y. Fang, L. Si, N. Somasundaram, and Z. Yu, 2012. “Mining contrastive opinions on political texts using cross-perspective topic model,” WSDM ’12: Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pp. 63–72.
doi: https://doi.org/10.1145/2124295.2124306, accessed 25 July 2021.

R. Feldman, 2013. “Techniques and applications for sentiment analysis,” Communications of the ACM, volume 56, number 4, pp. 82–89.
doi: https://doi.org/10.1145/2436256.2436274, accessed 25 July 2021.

E. Fersini, E. Messina, and F. Pozzi, 2016. “Expressive signals in social media languages to improve polarity detection,” Information Processing & Management, volume 52, number 1, pp. 20–35.
doi: https://doi.org/10.1016/j.ipm.2015.04.004, accessed 25 July 2021.

P. Gonçalves, M. Araújo, F. Benevenuto, and M. Cha, 2013. “Comparing and combining sentiment analysis methods,” COSN ’13: Proceedings of the First ACM Conference on Online Social Networks, pp. 27–38.
doi: https://doi.org/10.1145/2512938.2512951, accessed 25 July 2021.

M. Hu and B. Liu, 2004. “Mining and summarizing customer reviews,” KDD ’04: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 168–177.
doi: https://doi.org/10.1145/1014052.1014073, accessed 25 July 2021.

C. Keßler, K. Janowicz, and M. Bishr, 2009. “An agenda for the next generation gazetteer: Geographic information contribution and retrieval,” GIS ’09: Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 91–100.
doi: https://doi.org/10.1145/1653771.1653787, accessed 25 July 2021.

M. Koppel and I. Shtrimberg, 2004. “Good news or bad news? Let the market decide,” AAAI Spring Symposium on Exploring Attitude and Affect in Text, at https://aaai.org/Library/Symposia/Spring/2004/ss04-07-016.php, accessed 25 July 2021.

Y.-M. Li and T.-Y. Li, 2013. “Deriving marketing intelligence over microblogs,” Decision Support Systems, volume 55, number 1, pp. 206–217.
doi: https://doi.org/10.1016/j.dss.2013.01.023, accessed 25 July 2021.

B. Liu, 2015. Sentiment analysis: Mining opinions, sentiments, and emotions. Cambridge: Cambridge University Press.
doi: https://doi.org/10.1017/CBO9781139084789, accessed 25 July 2021.

S.M. Liu and J.-H. Chen, 2015. “A multi-label classification based approach for sentiment classification,” Expert Systems with Applications, volume 42, number 3, pp. 1,083–1,093.
doi: https://doi.org/10.1016/j.eswa.2014.08.036, accessed 25 July 2021.

S. Malinen and A. Koivula, 2020. “Influencers and targets on social media: Investigating the impact of network homogeneity and group identification on online influence,” First Monday, volume 25, number 4, at https://firstmonday.org/article/view/10453/9412, accessed 25 July 2021.
doi: https://doi.org/10.5210/fm.v25i4.10453, accessed 25 July 2021.

S. Marsland, 2014. Machine learning: An algorithmic perspective. Second edition. Boca Raton, Fla.: Chapman & Hall/CRC.
doi: https://doi.org/10.1201/b17476, accessed 25 July 2021.

W. Medhat, A. Hassan, and H. Korashy, 2014. “Sentiment analysis algorithms and applications: A survey,” Ain Shams Engineering Journal, volume 5, number 4, pp. 1,093–1,113.
doi: https://doi.org/10.1016/j.asej.2014.04.011, accessed 25 July 2021.

A.K. Nassirtoussi, S. Aghabozorgi, T.Y. Wah, and D.C.L. Ngo, 2015. “Text mining of news-headlines for FOREX market prediction: A multi-layer dimension reduction algorithm with semantics and sentiment,” Expert Systems with Applications, volume 42, number 1, pp. 306–324.
doi: https://doi.org/10.1016/j.eswa.2014.08.004, accessed 25 July 2021.

N. O’Hare, M. Davy, A. Bermingham, P. Ferguson, P. Sheridan, C. Gurrin, and A.F. Smeaton, 2009. “Topic-dependent sentiment analysis of financial blogs,” TSA ’09: Proceedings of the First International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion, pp. 9–16.
doi: https://doi.org/10.1145/1651461.1651464, accessed 25 July 2021.

M.G. de Oliveira, C.E.C. Campelo, C. de Souza Baptista, and M. Bertolotto, 2015. “Leveraging VGI for gazetteer enrichment: A case study for geoparsing Twitter messages,” In: J. Gensel and M. Tomko (editors). Web and wireless geographical information systems. Lecture Notes in Computer Science, volume 9080. Cham, Switzerland: Springer, pp. 20–36.
doi: https://doi.org/10.1007/978-3-319-18251-3_2, accessed 25 July 2021.

M.G. de Oliveira, C. de Souza Baptista, C.E.C. Campelo, J.A.M. Acioli Filho, and A.G.R. Falcão, 2014. “Automated production of volunteered geographic information from social media,” Proceedings of the Brazilian Symposium on GeoInformatics, Campos do Jordão, Brazil, pp. 118–129.

T. O’Reilly, 2007. “What is Web 2.0: Design patterns and business models for the next generation of software,” Communications & Strategies, number 65, pp. 17–37.

A. Pak and P. Paroubek, 2010. “Twitter as a corpus for sentiment analysis and opinion mining,” Proceedings of the Seventh International Conference on Language Resources and Evaluation, pp. 1,320–1,326, and at http://www.lrec-conf.org/proceedings/lrec2010/pdf/385_Paper.pdf, accessed 25 July 2021.

B. Pang and L. Lee, 2008. “Opinion mining and sentiment analysis,” Foundations and Trends in Information Retrieval, volume 2, numbers 1–2, pp. 1–135.
doi: https://doi.org/10.1561/1500000011, accessed 25 July 2021.

C. Pino, I. Kavasidis, and C. Spampinato, 2016. “Assessment and visualization of geographically distributed event-related sentiments by mining social networks and news,” 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), pp. 354–358.
doi: https://doi.org/10.1109/CCNC.2016.7444806, accessed 25 July 2021.

M. Pontiki, D. Galanis, H. Papageorgiou, S. Manandhar, and I. Androutsopoulos, 2015. “SemEval-2015 Task 12: Aspect based sentiment analysis,” Proceedings of the Ninth International Workshop on Semantic Evaluation (SemEval 2015), pp. 486–495, and at https://aclanthology.org/S15-2082.pdf, accessed 25 July 2021.

R. Purves and C. Jones, 2011. “Geographic information retrieval,” SIGSPATIAL Special, volume 3, number 2, pp. 2–4.
doi: https://doi.org/10.1145/2047296.2047297, accessed 25 July 2021.

K. Ravi and V. Ravi, 2015. “A survey on opinion mining and sentiment analysis: Tasks, approaches and applications,” Knowledge-Based Systems, volume 89, pp. 14–46.
doi: https://doi.org/10.1016/j.knosys.2015.06.015, accessed 25 July 2021.

J. Read, 2005. “Using emoticons to reduce dependency in machine learning techniques for sentiment classification,” ACLstudent ’05: Proceedings of the ACL Student Research Workshop, pp. 43–48.

M. Roele, J. Ward, and M. van Duijn, 2020. “Tweet with a smile: The selection and use of emoji on Twitter in the Netherlands and England,” First Monday, volume 25, number 4, at https://firstmonday.org/article/view/9373/9406, accessed 25 July 2021.
doi: https://doi.org/10.5210/fm.v25i4.9373, accessed 25 July 2021.

H. Saif, Y. He, M. Fernandez, and H. Alani, 2016. “Contextual semantics for sentiment analysis of Twitter,” Information Processing & Management, volume 52, number 1, pp. 5–19.
doi: https://doi.org/10.1016/j.ipm.2015.01.005, accessed 25 July 2021.

K. Schouten and F. Frasincar, 2016. “Survey on aspect-level sentiment analysis,” IEEE Transactions on Knowledge and Data Engineering, volume 28, number 3, pp. 813–830.
doi: https://doi.org/10.1109/TKDE.2015.2485209, accessed 25 July 2021.

A. Sharma and S. Dey, 2013. “A boosted SVM based sentiment analysis approach for online opinionated text,” RACS ’13: Proceedings of the 2013 Research in Adaptive and Convergent Systems, pp. 28–34.
doi: https://doi.org/10.1145/2513228.2513311, accessed 25 July 2021.

A. Sharma and S. Dey, 2012. “A comparative study of feature selection and machine learning techniques for sentiment analysis,” RACS ’12: Proceedings of the 2012 ACM Research in Applied Computation Symposium, pp. 1–7.
doi: https://doi.org/10.1145/2401603.2401605, accessed 25 July 2021.

R. Sharma, N.L. Tan, and F. Sadat, 2018. “Multimodal sentiment analysis using deep learning,” 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1,475–1,478.
doi: https://doi.org/10.1109/ICMLA.2018.00240, accessed 25 July 2021.

A. Tumasjan, T. Sprenger, P. Sandner, and I. Welpe, 2010. “Predicting elections with Twitter: What 140 characters reveal about political sentiment,” Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media, pp. 178–185, and at https://ojs.aaai.org/index.php/ICWSM/article/view/14009, accessed 25 July 2021.

X. Wang, F. Wei, X. Liu, M. Zhou, and M. Zhang, 2011. “Topic sentiment analysis in Twitter: A graph-based hashtag sentiment classification approach,” CIKM ’11: Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pp. 1,031–1,040.
doi: https://doi.org/10.1145/2063576.2063726, accessed 25 July 2021.

Y. Yun, D. Hooshyar, J. Jo, and H. Lim, 2018. “Developing a hybrid collaborative filtering recommendation system with opinion mining on purchase review,” Journal of Information Science, volume 44, number 3, pp. 331–344.
doi: https://doi.org/10.1177/0165551517692955, accessed 25 July 2021.

L. Zhang and B. Liu, 2014. “Aspect and entity extraction for opinion mining,” In: W.W. Chu (editor). Data mining and knowledge discovery for big data: Methodologies, challenge and opportunities. Berlin: Springer, pp. 1–40.
doi: https://doi.org/10.1007/978-3-642-40837-3_1, accessed 25 July 2021.

L. Zhang, S. Wang, and B. Liu, 2018. “Deep learning for sentiment analysis: A survey,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, volume 8, number 4, e1253.
doi: https://doi.org/10.1002/widm.1253, accessed 25 July 2021.

Editorial history

Received 15 May 2020; revised 27 October 2020; accepted 26 July 2021.

A spatiotemporal approach for social media sentiment analysis
by André Luiz Firmino Alves, Cláudio de Souza Baptista, Davi Oliveira Serrano de Andrade, Maxwell Guimarães de Oliveira, and Aillkeen Bezerra de Oliveira.
First Monday, Volume 26, Number 8 - 2 August 2021