
Emergency-relief coordination on social media: Automatically matching resource requests and offers by Hemant Purohit, Carlos Castillo, Fernando Diaz, Amit Sheth, and Patrick Meier



Abstract
Disaster–affected communities are increasingly turning to social media for communication and coordination. This includes reports on needs (demands) and offers (supplies) of resources required during emergency situations. Identifying such requests and matching them with potential responders can substantially accelerate emergency relief efforts. The current practice of disaster management agencies is labor–intensive, and there is substantial interest in automated tools.

We present machine–learning methods to automatically identify and match needs and offers communicated via social media for items and services such as shelter, money, and clothing. For instance, a message such as “we are coordinating a clothing/food drive for families affected by Hurricane Sandy. If you would like to donate, DM us” can be matched with a message such as “I got a bunch of clothes I’d like to donate to hurricane sandy victims. Anyone know where/how I can do that?” Compared to traditional search, our results can significantly improve the matchmaking efforts of disaster response agencies.

Contents

1. Introduction
2. Related work
3. Problem definition
4. Construction of corpora of resource–related messages
5. Matching requests and offers
6. Discussion
7. Conclusion

 


 

1. Introduction

During emergencies, individuals and organizations use any communication medium available to them to share their experiences and the situation on the ground. Citizens as sensors (Sheth, 2009) provide timely and valuable situational–awareness information via social media (Vieweg, 2012) that is often not available through other channels, especially during the first few hours of a disaster. For instance, during the 2012 Hurricane Sandy (http://en.wikipedia.org/wiki/Hurricane_Sandy), the microblogging service Twitter reported that more than 20 million messages (also known as “tweets”) were posted on their platform (http://techcrunch.com/2012/11/02/twitter-releases-numbers-related-to-hurricane-sandy-more-than-20m-tweets-sent-between-october-27th-and-november-1st/). During recent disasters caused by natural hazards, people have broadcast messages about resources related to emergency relief via Twitter (Sarcevic, et al., 2012). Overall, the use of social media to support and improve coordination between affected persons, crisis response agencies, and other remotely situated helpers is increasing. Our research direction is to turn social media messages into actionable information that can aid decision–making. This direction includes challenges such as abstracting information and prioritizing needs by geography and time, distributing resources, assessing the severity of needs, assessing information credibility, mapping with the involvement of crowdsourcing, finding users to engage with in online communities, and building visualization platforms for coordination, as depicted in the general coverage of some of our prior initiatives [1]. The focus of this article is a narrowly–defined research study within this broader set of computer–assisted emergency response activities: matching the supply (offers) and demand (requests) of resources or volunteer services via Twitter during disasters to improve coordination. Figure 1 shows a schematic of the study covered in detail in this paper.

 

Figure 1: Example from application of our analysis to match needs and offers.

 

A typical request message involves an entity (person or organization) describing the scarcity of a certain resource or service (e.g., clothing, volunteering) and/or asking others to supply said resource. A typical offer involves an entity describing the availability of a resource and/or the willingness to supply it. We note that in both cases, the request or offer may be made in the name of a third party, as observed in our dataset, where many tweets repeat the Red Cross’ requests for money and blood donations. Regarding the types of resources, we focus on monetary donations, volunteer work, shelter, clothing, blood and medical supplies, following the primary resource types identified by the United Nations cluster system for coordination (http://www.unocha.org/what-we-do/coordination-tools/cluster-coordination). Table 1 shows some examples of resource–related requests and offers during Hurricane Sandy, which struck the east coast of the United States in 2012.

 

Table 1: Example tweets in our dataset, including the original unstructured text of the messages and their structured representation, generated automatically. The structured representation is flexible and can be extended with other tweet metadata such as time, location, and author.
Request/Offer | Resource type | Text (unstructured) | Structured representation
Request | Money | {Text redcross to 90999 to donate $10 to help those people that were effected by hurricane sandy please donate #SandyHelp} | {RESOURCE-TYPE={class=money, confidence=0.9}, IS-REQUEST={class=Yes, confidence=0.95}, IS-OFFER={class=No, confidence=0.9}, TEXT="Text ...#SandyHelp"}
Request | Medical | {Hurricane Sandy Cancels 300 Blood Drives: How You Can Help http://...} | {RESOURCE-TYPE={class=medical, confidence=0.98}, IS-REQUEST={class=Yes, confidence=0.96}, IS-OFFER={class=No, confidence=0.88}, TEXT="Hurricane Sandy ... http://..."}
Offer | Volunteer | {Anyone know of volunteer opportunities for hurricane Sandy? Would like to try and help in anyway possible} | {RESOURCE-TYPE={class=volunteer, confidence=0.99}, IS-REQUEST={class=No, confidence=0.98}, IS-OFFER={class=Yes, confidence=1}, TEXT="Anyone know ... possible"}
Offer | Clothing | {I want to send some clothes for hurricane relief} | {RESOURCE-TYPE={class=clothing, confidence=0.95}, IS-REQUEST={class=No, confidence=0.7}, IS-OFFER={class=Yes, confidence=0.8}, TEXT="I want ... relief"}

 

Across different emergencies, we observe that the prevalence of different classes of social media messages varies. In a recent study (Imran, et al., 2013) of the 2011 Joplin Tornado, the fraction of resource–related messages (“donation–type” in their study) was found to be about 16 percent. In another study (Kongthon, et al., 2012) of the 2011 Thailand Floods, about eight percent of all tweets were “Requests for Assistance,” while five percent were “Requests for Information Categories.” In the emergency we analyze in this paper (Hurricane Sandy), the figure for resource–related (donation) tweets is close to five percent.

Although messages related to requests and offers of resources are a small fraction of all messages exchanged during a crisis, they are vital for improving response, because even a single message, such as a request for blood donors of type O negative, can be a lifesaver. Individuals and groups in disaster–affected communities have always been the real first responders, as observed in various studies (Palen, et al., 2007).

Resource–related messages are a minority, which makes them hard to identify in a deluge of millions of messages of general news, warnings, sympathy, prayers, jokes, etc. Also, without robust coordination of actions between response organizations and the public, managing unsolicited donations can become a “second disaster” for response organizations (http://www.npr.org/2013/01/12/169198037/the-second-disaster-making-good-intentions-useful). Ideally, messages requesting resources should be matched quickly with messages offering them, and vice versa. In reality, waste occurs because resources are under– or over–supplied as well as misallocated. For instance, during recent emergency events, some organizations adamantly asked the public not to bring certain donations unless they knew what was really needed and what was already sufficiently available (http://www.npr.org/2013/01/09/168946170/thanks-but-no-thanks-when-post-disaster-donations-overwhelm).

Currently, matchmaking coordination of resources is done manually. Response organizations like FEMA and the American Red Cross have recently started to explore methods for identifying needs and offers using general–purpose social media analysis tools, but there is a need to match them automatically as well. In this paper, we propose methods to automatically identify resource requests and offers in social media and to automatically match them with each other. Effectively matching these messages, particularly at the local level during disasters, can generate greater self–help and mutual–aid actions, accelerate disaster recovery, and render local communities more resilient, enabling them to bounce back more quickly following a disaster.

This is, however, a challenging task. Like any other content generated online by users, Twitter messages (tweets) are extremely varied in terms of relevance, usefulness, topicality, language, etc. They tend to use informal language and often contain incomplete or ambiguous information, sarcasm, opinions, jokes, and/or rumors. The 140–character limit can be a blessing, as it encourages conciseness, as well as a curse, as it reduces the context that can be exploited by natural language processing (NLP) algorithms. Additionally, the volume of messages during a crisis is often overwhelming.

Our approach. We automatically convert tweets from plain text to semi–structured records (a semi–structured record is a data item that does not follow a formal data model (structure) such as the one used by a relational database, but includes markers to separate semantic elements, as exemplified in Table 1, column 4). The conversion is done by first parsing each tweet using standard NLP techniques, and then applying supervised machine–learning classifiers and information extractors. We create two corpora of semi–structured documents: one containing the demand side (resource requests), and one containing the supply side (resource offers). Illustrative examples of such requests and offers that we identified automatically are shown in Table 1. Next, to match requests and offers, we apply methods developed in the context of matchmaking on dating sites (Diaz, et al., 2010). We interpret each request (respectively, offer) as a semi–structured search query and run this query against the corpus of offers (respectively, requests). This part of the system works similarly to the faceted search process employed in online shopping catalogs, where users select some features of the product from pre–defined values and additionally enter some keywords; for example, when searching for a camera on Amazon.com, the site presents various features to choose from (type of lens, brand, model, price range, etc.) to help match the shopper’s intention. Figure 2 summarizes our approach and how it helps donation–coordination efforts in a real scenario.
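To make the semi–structured records concrete, the following is a minimal sketch of how one tweet from Table 1 could be represented in Python; the field names follow Table 1, but the exact values and record format are illustrative rather than the precise output of our system.

```python
# Minimal sketch of a semi-structured record for one tweet (field names as in
# Table 1; the confidence values shown here are illustrative only).
offer_record = {
    "TEXT": ("I got a bunch of clothes I'd like to donate to hurricane sandy "
             "victims. Anyone know where/how I can do that?"),
    "IS-REQUEST": {"class": "No", "confidence": 0.85},
    "IS-OFFER": {"class": "Yes", "confidence": 0.92},
    "RESOURCE-TYPE": {"class": "clothing", "confidence": 0.95},
    # Other tweet metadata (time, location, author, ...) can be appended.
}
```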

 

Figure 2: Summary of our system for helping coordination of donation–related messages during crisis response. The shaded area shows our contribution, with references to the respective sections marked inside brackets.

 

Our contributions:

  • We study and define the problem of automatically identifying and matching requests and offers during emergencies in social media (Section 3).
  • We show how to convert unstructured tweets into semi–structured records with annotated metadata, as illustrated in Table 1 (Section 4).
  • We apply information retrieval (IR) methods to find the best match for a request to an offer and vice versa (Section 5).
  • We achieve fair to excellent classification ability for identifying donation–related messages, requests, offers, and resource types. In addition, we achieve a 72 percent improvement over the baseline for matching request–offer pairs.

Even with state–of–the–art automatic methods, we do not expect perfect annotations or perfect matching for every request and offer, due to the complexity of processing short, informal text that may mix multiple intentions (e.g., opinions). We systematically evaluate our annotation methods based on automatic classification using 10–fold cross–validation and observe fair to excellent classification ability, with Area Under the Receiver Operating Characteristic curve (AUC) values for the classification falling in the 0.75–0.98 range (details in Section 4). We also compare our matching method against a traditional search method and observe a 72 percent relative improvement over the baseline in performance measured by average precision of matching (details in Section 5).

The next section introduces related work, followed by the main technical sections. The last section presents our conclusions.

 

++++++++++

2. Related work

Social media during crises. Social media mining for disaster response has been receiving an increasing level of attention from the research community (e.g., Imran, et al., 2013; Purohit, et al., 2013; Blanchard, et al., 2012; Cameron, et al., 2012; Sarcevic, et al., 2012; Boulos, et al., 2011; Vieweg, et al., 2010; Starbird and Stamberger, 2010). Some of the current approaches require a substantial amount of manual effort, and do not scale to the speed with which messages arrive during a crisis situation.

Some methods described in the literature are automatic, and focus on situational awareness with emphasis on event detection (e.g., Mathioudakis and Koudas, 2010), visualization and mapping of information (e.g., ‘Ushahidi’ described in Banks and Hersman, 2009), and/or understanding facets of information (e.g., Imran, et al., 2013; Terpstra, et al., 2012). In this work, we try to move into the higher–level processes of decision–making and coordination. We advance the state–of–the–art by creating a systematic approach for automatic identification and matching of demand and supply of resources to help coordination.

Matching emergency relief resources. Emergency–response organizations such as the American Red Cross use human volunteers to manually perform the task of matching resource requests with offers. Others provide specialized portals for registration of volunteers and donations, such as AIDMatrix (http://www.aidmatrixnetwork.org/fema/PublicPortal/ListOfNeeds.aspx?PortalID=114) and recovers.org. Instead, our approach leverages the information already shared on social media to automatically find and match requests and offers.

Previous work has noted that the extraction of this information from social media would be easier if users followed a protocol for formatting messages. Some attempts have been made to bring structure to the information that will be processed, including the Tweak–the–Tweet project (Starbird and Stamberger, 2010). However, the number of messages that include this type of hashtag is minimal. The lack of adoption may be due in part to the psychology of stressful times (Dietrich and Meltzer, 2003): during an emergency situation, people are likely to be distracted and less inclined to learn a new vocabulary and remember to use it. Hence, being able to exploit the natural communication patterns that occur in emergency–relief messages on social media is a complementary approach.

Recently, Varga, et al. (2013) studied the matching of problem–related messages (e.g., “I do not believe infant formula is sold on Sunday”) with solution/aid messages (e.g., “At Justo supermarket in Iwaki, you can still buy infant formula”). Their classification approach involves the creation of annotated lists of “trouble expressions,” which serve to identify problem messages. Their matching approach seeks to identify a “problem nucleus” and a “solution nucleus,” the key phrases in a potential pair of problem and solution messages that should be similar in order for the matching to be considered satisfactory. By mapping “problem messages” to demand and “solution/aid messages” to supply, we obtain a setting that resembles ours, but with key differences. First, we focus on specific categories of resources, which can be helpful, as different response organizations may want to deal with specific categories of information. Second, we use a different approach, which always preserves the entire message (both when classifying requests and offers and when matching) instead of focusing on a segment.

Matchmaking in online media. Matching is a well–studied problem in computer science. It has been studied in various scenarios, such as the stable roommate problem (Irving, 1985), automatic matchmaking in dating sites (Diaz, et al., 2010), assigning reviewers to papers (Karimzadehgan and Zhai, 2009), etc. Matching has also been studied in the context of social question–answering (QA) portals (Bian, et al., 2008) and social media in general, including the problem of finding question–like information needs in Twitter messages (Zhao and Mei, 2013). Additional coverage of work in this area appears in a recent tutorial presented by the authors (ICWSM-2013 Tutorial: http://www.slideshare.net/knoesis/icwsm-2013-tutorial-crisis-mapping-citizen-sensing-and-social-media-analytics). The challenge addressed in this paper goes beyond automatic QA systems, because not all requests or offers are questions (see Table 1). Actually, many requests and offers do not even describe information needs, but can benefit from being matched against other messages that provide a natural complement to them (i.e., requests with offers).

 

++++++++++

3. Problem definition

Resource–related messages. Based on manual data inspection, we observed that resource–related tweets are of various, non–mutually–exclusive types:

  1. Requests for resources needed, e.g., Me and @CeceVancePR are coordinating a clothing/food drive for families affected by Hurricane Sandy. If you would like to donate, DM us.

  2. Offers for resources to be supplied, e.g., Anyone know where the nearest #RedCross is? I wanna give blood today to help the victims of hurricane Sandy.

  3. Reports of resources that have been exchanged, e.g., RT @CNBCSportsBiz: New York Yankees to donate $500,000 to Hurricane #Sandy relief efforts in Tri–State area.

As a first step, we limit the scope of the current study to messages that exhibit exactly one of these types (exclusive behavior).

Limitations. We ignore the following dimensions of message–author types for resource–related tweets, since they are not directly related to our focus on automatic matchmaking:

  1. Whether the entity that demands or supplies the resource is an individual or an organization, and

  2. Whether the tweet is being posted by the entity that demands or supplies the resource, or on behalf of a third party.

We also limit our study to English–language tweets, which account for more than one third of worldwide tweets (Leetaru, et al., 2013). Nevertheless, we believe this approach is easily adaptable to other languages. For example, tokenizers and stemmers comparable to the ones we used for processing English content are widely available for a large range of languages.

Finally, we do not deal with issues of quantity or capacity, which would require additional information attributes such as how many food rations or how many shelter beds are supplied or demanded. This is a relevant aspect of the problem that is not supported by our dataset: among the thousands of messages we manually labeled, almost none quantified the number of items being requested or offered.

Problem statement. We specifically address the following two problems:

  • Problem 1: Building resource–related corpora. The output of our pre–processing steps described in Section 4 is two sets of records, Q and D, containing records extracted from tweets requesting and offering resources, respectively. Our notation suggests that requests can be seen as queries (Q) and offers as documents (D) in an information retrieval system (like a search engine), although these roles are interchangeable.

  • Problem 2: Matching resource requests with offers. Let s : Q × D → [0, 1] be a function that assigns to every (request, offer) pair a likelihood, between 0 and 1, that the offer d satisfies the request q. The problem consists of finding, for every request q ∈ Q, the offer d ∈ D that is most likely to satisfy it. The converse problem, i.e., finding the best request to match an offer, is also of interest, but its treatment is analogous to the one we present here.
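A minimal sketch of Problem 2, assuming a scoring function score(q, d) with values in [0, 1] is available (Section 5 describes how we learn such a function); the function and record types below are illustrative.

```python
from typing import Callable, Dict, List, Tuple

Record = Dict[str, object]  # a semi-structured request or offer record

def best_offers(requests: List[Record],
                offers: List[Record],
                score: Callable[[Record, Record], float]) -> List[Tuple[Record, Record, float]]:
    """For every request q, return the offer d that maximizes s(q, d).

    The converse problem (best request for each offer) is analogous.
    """
    matches = []
    for q in requests:
        d = max(offers, key=lambda o: score(q, o))
        matches.append((q, d, score(q, d)))
    return matches
```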

Given that the output of problem 1 is the input for problem 2, we expect that errors will propagate, which is also what we observe empirically in some of the cases. Hence, building a high–quality corpus of requests and offers is a critical step.

 

++++++++++

4. Construction of corpora of resource–related messages

In this section we describe the method by which we created the corpus of requests and the corpus of offers. This method starts with a dataset collected from Twitter, which is then filtered to find donation–related tweets, and then further processed until each request and offer message found has been converted into a semi–structured record, as exemplified in Table 1.

4.1. Data collection and preparation

We collect data using Twitter’s Streaming API (https://dev.twitter.com/docs/streaming-apis/streams/public), which allows us to obtain a stream of tweets filtered according to a given criterion. In our case, and as is customary in other crisis–related collections, we start with a set of keywords and hashtags (e.g., #sandy). This set was updated periodically by extracting the most frequent hashtags and keywords from the downloaded tweets, and then manually selecting unambiguous hashtags and keywords from this list to provide a control for contextual relevance to the event. We store all data for each tweet, including metadata: time of posting, location (when available), and author information such as the author’s location, profile description, number of followers and friends, etc.

Our collector ran for 11 days starting 27 October 2012, with the exception of five hours on the night of October 30th due to technical issues in the crawling machines. Because the Twitter API provides only a sample of the entire tweet stream, and because our list of keywords does not have perfect coverage, we obtained a set of 4.9 million tweets. Though this dataset does not include every tweet related to this crisis event, it is sufficient for our needs. For more details on the practical limitations of Twitter’s API for capturing crisis data, see Morstatter, et al. (2013). Details of the dataset are listed in Table 2.

In the following sections we describe the pipeline of processes we execute, which has a number of automatic steps: identifying donation–related messages (Section 4.2), classifying donation–related messages into requests and offers (Section 4.3), and determining the resource type of each of such messages (Section 4.4).

 

Table 2: Summary of characteristics of our dataset. The tweets classifiable (with high precision) as request or offer are a subset of those donation–related. Those classifiable into a resource type are a subset of those classifiable as request or offer.
Total items
  Initial number of tweets: 4,904,815 (100%)
  Classified as donation–related: 214,031 (4%)
Donation–related
  Classified as exclusively request/offer with high precision: 23,597 (11%), of which:
    Requests (exclusively): 21,380 (91%)
    Offers (exclusively): 2,217 (9%)
Exclusively requests/offers
  Classified into a resource type with high precision: 23,597 (100%), of which:
    Money: 22,787 (97%)
    Clothing: 34 (0.10%)
    Food: 72 (0.30%)
    Medical: 76 (0.30%)
    Shelter: 78 (0.30%)
    Volunteer: 550 (2%)

 

All steps are based on supervised automatic classification (Witten, et al., 2011), in which the supervision (labels for examples of each class, e.g., donation ‘related’ versus ‘not related’) is obtained by crowdsourcing via Crowdflower (http://www.crowdflower.com/). A machine learning classifier also needs features (properties of the items to be classified) for each of these labeled examples; the learning algorithm uses them to build a model that predicts the class label of any new example.

All automatic classification steps are evaluated with the standard quality metrics precision and recall [2] (Witten, et al., 2011), and we emphasize precision over recall for the positive class (e.g., ‘donation–related’, ‘is–exclusive–request’). Precision is the ratio of messages correctly predicted for a class to the total messages predicted for that class, and recall is the ratio of messages correctly predicted for a class to the total actual number of messages of that class in the training examples. Given true positives tp, false positives fp, and false negatives fn in a confusion matrix comparing the original classes (actual observations) with the classes predicted by a classifier:

 

Precision = tp / (tp + fp)        Recall = tp / (tp + fn)

 

We tuned our classifiers to aim for 90 percent or higher precision (reducing the number of false positives) at the expense of potentially having less recall (more false negatives).

Feature extraction. Each step is based on automatic supervised classification, with labels obtained through crowdsourcing. Tweets are represented as vectors of features, each feature being a word N–gram (sequence of N words), produced by performing the following standard text pre–processing operations (a sketch of the resulting pipeline follows the list):

  1. Removing non–ASCII characters.

  2. Separating text into tokens (words), removing stopwords and performing stemming (reducing to root words, such as ‘helping’ to ‘help’).

  3. Generalizing some tokens by replacing numbers by the token _NUM_, hyperlinks by the token _URL_, retweets (“RT @user_name”) by the token _RT_ and lastly, user mentions in the tweets (@user_name) by the token _MENTION_.

  4. Generating uni–, bi–, and tri–grams of tokens, which correspond to sequences of one, two, or three consecutive tokens after the pre–processing operations have been applied.
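The following is a minimal sketch of this pre–processing pipeline, assuming Python with NLTK’s Porter stemmer; the stopword list and tokenization details here are illustrative, not the exact configuration we used.

```python
import re
from nltk.stem import PorterStemmer  # the Porter stemmer needs no extra downloads

# Illustrative stopword list; a standard English stopword list would be used in practice.
STOPWORDS = {"a", "an", "the", "to", "for", "of", "in", "and", "is", "are", "i"}
stemmer = PorterStemmer()

def ngram_features(tweet: str, max_n: int = 3) -> list:
    """Turn one tweet into uni-, bi-, and tri-gram features (steps 1-4 above)."""
    text = tweet.encode("ascii", "ignore").decode("ascii").lower()   # 1. drop non-ASCII
    text = re.sub(r"\brt @\w+", " _rt_ ", text)                      # 3. retweets (_RT_)
    text = re.sub(r"https?://\S+", " _url_ ", text)                  # 3. hyperlinks (_URL_)
    text = re.sub(r"@\w+", " _mention_ ", text)                      # 3. mentions (_MENTION_)
    text = re.sub(r"\d+", " _num_ ", text)                           # 3. numbers (_NUM_)
    tokens = [stemmer.stem(t) for t in re.findall(r"[a-z_]+", text)  # 2. tokenize, stem,
              if t not in STOPWORDS]                                  #    drop stopwords
    return [" ".join(tokens[i:i + n])                                 # 4. word n-grams
            for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)]

# Example: ngram_features("RT @user: I want to donate 10 blankets http://example.org")
```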

4.2. Donation–related classification

In a recent study by Imran, et al. (2013), the authors manually coded a large sample of tweets from the 2011 Joplin Tornado; they found that about 16 percent were related to donations of goods and services. We expect that the fraction of donation–related tweets will vary across disasters, and we also expect that as we emphasize precision over recall, we will capture a high–quality fraction.

Labeling task preparation. A multiple–choice question was asked of crowdsourcing workers (assessors): “Choose one of the following options to determine the type of a tweet”:

  • Donation — a person/group/organization is asking or offering help with a resource such as money, blood/medical supplies, volunteer work, or other goods or services.
  • No donation — there is no offering or asking for any type of donations, goods or services.
  • Cannot judge — the tweet is not in English or cannot be judged.

The options were worded to encourage assessors to understand “donation” in a broad sense, otherwise (as we observed in an initial test) they tend to understand “donations” to mean exclusively donations of money.

Sampling and labeling. Given our limited budget for the crowdsourcing task and the relatively small prevalence of donation–related tweets in the data, we introduced some bias in the sample of tweets to be labeled. We selected 1,500 unique tweets by uniform random sampling, and 1,500 unique tweets from the output of a conditional random field (CRF) based donation–related information extractor borrowed from the work of Imran, et al. (2013). The two sets of tweets were merged and randomly shuffled before they were given to the assessors.

We asked for three labels per tweet and obtained 2,673 instances labeled with a confidence value of 0.6 or more (on a scale from 0 to 1). This confidence value is based on inter–assessor agreement and on agreement with a subset of 100 tweets for which we provided labels. Our labeled dataset contained 29 percent ‘donation–related’ tweets.

Learning the classification. We experimented with a number of standard machine learning schemes (techniques). For this task, we obtained good performance by selecting attributes (features) with a chi–squared test, keeping the top 600 features, and applying a naïve Bayes classifier (Witten, et al., 2011).

To reduce the number of false positives, we used asymmetric misclassification costs: we considered classifying a non–donation tweet as donation to be 15 times more costly than classifying a donation tweet as non–donation.
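Our experiments used the toolkit described by Witten, et al. (2011); as an illustration only, the sketch below re–creates the same idea with scikit–learn, where the cost asymmetry is approximated by moving the decision threshold on the posterior probability (one standard way to implement cost–sensitive decisions, not necessarily the exact mechanism we used). The variables X_train, y_train, and X are placeholders for the n–gram feature matrices and labels described above.

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train_donation_classifier(X_train, y_train, k=600):
    """Chi-squared feature selection (top k features) followed by naive Bayes.
    y_train is 1 for donation-related tweets and 0 otherwise."""
    model = make_pipeline(SelectKBest(chi2, k=k), MultinomialNB())
    return model.fit(X_train, y_train)

def predict_with_asymmetric_costs(model, X, cost_fp=15.0, cost_fn=1.0):
    """Cost-sensitive decision: predict 'donation-related' only when the
    posterior probability exceeds cost_fp / (cost_fp + cost_fn), i.e., 0.9375
    for a 15:1 cost ratio, penalizing false positives over false negatives."""
    threshold = cost_fp / (cost_fp + cost_fn)
    return (model.predict_proba(X)[:, 1] > threshold).astype(int)
```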

After 10–fold cross–validation, for the donation class we achieved a precision of 92.5 percent (meaning that 92.5 percent of the items classified as ‘donation–related’ by the system are actually donation–related) with a recall of 47.4 percent (meaning that 47.4 percent of all the items in the data that are actually donation–related are identified by the system). The area under the ROC curve (AUC) is 0.85, which implies good classification ability.

4.3. Request–offer classification

Among the donation–related messages, we observe three information types: requests, offers, and reports, as described in Section 3. Some messages belong to more than one type, as can be seen in the examples of Table 3.

We focus on messages that are either exclusively requests or exclusively offers because they can be (a) classified without ambiguity by crowdsourcing workers, (b) classified more accurately by automatic classifiers, and (c) better matched by the automatic matching system. Our follow–up work will address the challenge of messages that fall between these exclusive behaviors, i.e., messages that are both requests and offers.

Labeling task preparation. Crowdsourcing workers were asked a multiple–choice question to classify a tweet into one of the following categories:

  • Request to get — when a person/group/organization needs to get some resource or service such as money

  • Offer to give — when a person/group/organization offers/wants to give/donate some resource goods or provide a service

  • Both request and offer

  • Report of past donations of certain resources, not offering explicitly to give something that can be utilized by someone

  • None of the above

  • Cannot judge

 

Table 3: Examples of requests and offers in our dataset.
Both Request and Offer behavior
  • I made these during #sandy. I will donate $5 from each snowflake I sell to the #redcross for hurricane victims. http://...
  • Please donate what you can. I am helping Hurricane Sandy Relief for family of 3 http://...
Exclusively Request
  • Red Cross is urging blood donations to support those affected by Hurricane Sandy. http://...
  • Text REDCROSS to 90999 to donate 10$ to help the victims of hurricane sandy
Exclusively Offer
  • I would like to go to New York to help out cuz of the sandy hurricane
  • does anyone know if there a local drop–off center in frederick to donate clothes to victims of hurricane sandy?

 

Sampling and labeling. We extracted donation–related tweets from our data using the classifier described in the previous section, and randomly sampled 4,000 unique tweets classified as donation–related. As in the previous step, we asked for three labels per item and considered all items labeled with confidence at least 0.6. This resulted in 52 percent of tweets being classified as exclusively request, seven percent as exclusively offer, and the remaining 41 percent in the other categories. Requests outnumber offers by a ratio of almost 7 to 1 in our dataset, but the ratio varies across categories of donations.

Additional features. This classification task was more challenging than the donation–related classification. In addition to the heavy class imbalance as noted by the ratio of requests to offers, there are a number of constructs in the text that are common to requests and offers, e.g., “want to donate for help, check here ...” (request) vs. “want to donate some money to help my friends who lost everything in hurricane sandy disaster” (offer).

These subtle differences go beyond what we can capture with word n–gram tokens as features. In order to capture them, we added a list of 18 regular expressions, informed by messages selected by experts at the Red Cross. Each regular expression was translated into one binary feature (1 if the tweet matches the regular expression, 0 if it does not). The following are two examples (the full list is available in our dataset release and omitted here for brevity):

  1. \b(like|want)\b.*\b(to)\b.*\b(bring|give|help|raise|donate)\b
  2. \b(how)\b.*\b(can I|can we)\b.*\b(bring|give|help|raise|donate)\b

In these regular expressions, “\b” represents a word boundary, (word1|word2|...) indicates that any of the listed words can occur, and “.*” stands for an arbitrary, possibly empty, piece of text.
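A minimal sketch of how such patterns become binary features (the two patterns above, written in Python’s regular–expression syntax; the helper function is illustrative):

```python
import re

# Two of the 18 expert-informed patterns listed above; each pattern yields
# one binary feature per tweet.
REQUEST_OFFER_PATTERNS = [
    r"\b(like|want)\b.*\b(to)\b.*\b(bring|give|help|raise|donate)\b",
    r"\b(how)\b.*\b(can i|can we)\b.*\b(bring|give|help|raise|donate)\b",
]

def regex_features(tweet: str) -> list:
    """Return one 0/1 feature per pattern: 1 if the tweet matches it."""
    text = tweet.lower()
    return [1 if re.search(pattern, text) else 0
            for pattern in REQUEST_OFFER_PATTERNS]

# Example: regex_features("I want to donate some money to help ...")  ->  [1, 0]
```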

Learning the classification. After extensive experimentation with different learning schemes and configurations, in which we noticed that multi–class classification did not in general perform well, we decided to use two binary classifiers in a cascade configuration. The first classifier has two classes: exclusively requests and other. The second classifier receives the tweets in the other class of the first classifier and finds the exclusively offers among them. Each classifier was based on the Random Forest algorithm with asymmetric costs (Witten, et al., 2011), as detailed in Table 4. The overall performance was a precision of 97.9 percent and a recall of 29.7 percent with an AUC of 0.82 for exclusive requests, and a precision of 90.4 percent and a recall of 29.2 percent with an AUC of 0.75 for exclusive offers. The higher value of precision over recall is due to our design choice.
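Assuming two already trained binary classifiers (e.g., scikit–learn random forests), the cascade can be sketched as follows; the asymmetric costs can be handled as in the earlier thresholding sketch.

```python
def cascade_label(features, request_clf, offer_clf):
    """Two-stage cascade: the first classifier separates exclusive requests
    from everything else; the second classifier is applied only to the
    remaining tweets and separates exclusive offers from the rest."""
    if request_clf.predict([features])[0] == 1:   # stage 1: exclusive request?
        return "exclusively request"
    if offer_clf.predict([features])[0] == 1:     # stage 2: exclusive offer?
        return "exclusively offer"
    return "other"
```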

4.4. Resource type classification

We use a supervised classifier to classify the donation–related corpus according to resource type, where we limit ourselves for the current experiment to “hard” classes, in which each tweet is related to a single resource type. Our list of resource types is the subset of those in the U.N. cluster system (United Nations Office for Coordination of Humanitarian Affairs, 2013) that were present in our dataset. To determine it, we examined the top 200 most frequent terms in the donation–related tweets.

Labeling task preparation. As before, crowdsourcing workers were asked a multiple–choice question, “Choose one resource type”, with example tweets for each class:

  • Clothing
  • Food
  • Medical supplies including blood
  • Money
  • Shelter
  • Volunteer work
  • Not request or offer
  • Request or offer for something else
  • Cannot judge

Sampling and labeling. As in the donation–related classifier, we introduced some bias in the tweets to be labeled in order to make better use of our crowdsourcing budget; otherwise we would not have obtained a sufficient number of samples of the smaller classes. We selected 6,400 unique tweets in two parts. In the first part, 4,000 were selected by uniform random sampling among the ones containing exclusively requests or exclusively offers in the output of the previous automatic classifier. In the second part, the remaining 2,400 were selected by searching for keywords related to each of the six resource classes (e.g., “money”, “clothes”, “blood”, etc.). These 2,400 tweets were sampled from the entire input dataset (50 percent uniformly at random and 50 percent from the output of the donation–related classifier).

Again, we asked for three labels per item and considered items with confidence at least 0.6; we also discarded the tweets that were not considered to be in one of the categories (e.g., 10 percent were labeled as request or offer for something else). The results were: money 71 percent, volunteer work eight percent, clothing six percent, medical supplies/blood five percent, food five percent, and shelter five percent.

Additional features. We reinforced our word n–gram features with regular–expression–based features, adding 15 patterns developed with the help of experts from the Red Cross. Each pattern produced one binary feature based on whether the regular expression matched the tweet (1) or not (0). Example patterns are listed below (the full list is available in our dataset release):

  1. \b(shelter|tent city|warm place|warming center|need a place|cots)\b
  2. \b(food|meal|meals|lunch|dinner|breakfast|snack|snacks)\b

 

Table 4: Classification modeling results. Learning scheme abbreviations refer to NBM=Naïve Bayes Multinomial, RF=Random Forest, and CR indicates asymmetric false–alarm Cost Ratios. All classifiers used feature selection. For binary classifiers, precision and recall are for the positive class, while for the last classifier they are averages.
Task | Learning scheme | Number of features | Precision | Recall | AUC | Number of training examples
Donation related | NBM, CR true:false=15:1 | 600 | 92.5% | 47.4% | 0.85 | 2,673 (29% donations)
Exclusively request | RF, CR true:false=50:1 | 500 | 97.9% | 29.7% | 0.82 | 3,836 (56% requests)
Exclusively offer | RF, CR true:false=9:2 | 500 | 90.4% | 29.2% | 0.75 | 1,763 (13% offers)
Resource type | RF | 500 | 92.8% | 92.9% | 0.98 | 3,572 (71% money, 8% volunteer, 6% clothing, 5% shelter, 5% medical, 5% food)

 

Learning the classification. The best result was obtained by a random forest classifier scheme (Witten, et al., 2011). Table 4 shows statistics about this model, which across the classes achieves average precision of 92.8 percent and recall of 92.9 percent with AUC of 0.98, suggesting excellent classification ability.

The outcome of all the steps is summarized in Table 5, which contains the characteristics of the corpus that we use for the matching task described in the next section.

 

Table 5: Distribution of resource types in the requests and offer corpora. The low numbers are due to our high precision constraint on the classifiers.
Resource type Total classified Requests Offers
Money 22,787 20,671 (91%) 2,116 (9%)
Clothing 34 9 (26%) 25 (74%)
Food 72 63 (87%) 9 (13%)
Medical 76 71 (93%) 5 (7%)
Shelter 78 60 (77%) 18 (23%)
Volunteer work 550 506 (92%) 44 (8%)

 

 

++++++++++

5. Matching requests and offers

Previous sections describe the automatic text analysis method to find tweets requesting and offering donations of different resources. This section describes a method for automatically matching them.

5.1. Algorithm

We consider the task of matching requests with offers as one of classifying whether an arbitrary request and offer are relevant to each other. For example, consider the following request,

R1: “we are coordinating a clothing/food drive for families affected by Hurricane Sandy. If you would like to donate, DM us”

and the following offer,

O1: “I got a bunch of clothes I’d like to donate to hurricane sandy victims. Anyone know where/how I can do that?”

In this case, we would say that the offer–request pair <R1,O1> is relevant because R1 is relevant to O1 and O1 is relevant to R1. Now consider the following offer,

O2: “Where can I donate food for Sandy victims?”

In this case, we would say that the offer–request pair <R1,O2> is not relevant because the offer and request are not relevant to each other. The objective of our system is to correctly predict the relevance of an arbitrary offer–request pair. A similar methodology was previously adopted in the context of matchmaking in online dating sites (Diaz, et al., 2010).

Feature selection. In the context of machine learning, features are properties of the item to be classified; in this case, they are properties of the offer–request pair we are trying to classify as relevant or not relevant. An offer–request pair has many properties, and we would like to include only features likely to be correlated with match relevance, such as the text similarity between the offer and the request. We divide our features into two sets. The first set consists of the classifier predictions described in previous sections: for example, the confidence that a particular tweet is an offer might be correlated with match relevance, and using the confidence instead of a binary prediction (such as ‘is_request’ or ‘is_not_request’) allows the model to better exploit information gathered during the previous steps. The second set consists of the text similarity between the two messages in each candidate pair, which is likely to capture relatedness not captured by our coarser upstream classifications. We compute similarity as the cosine similarity of the tf–idf term vectors (Baeza–Yates and Ribeiro–Neto, 2011) of the pair of tweets after stopword removal and stemming, similar to a traditional information retrieval method in search systems.
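A minimal sketch of the text–similarity feature using scikit–learn; here the idf statistics are fit on just the two messages for brevity, whereas a real pipeline would compute them over the whole corpus, and stemming is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def text_similarity(request_text: str, offer_text: str) -> float:
    """Cosine similarity of the tf-idf vectors of the two messages,
    after English stopword removal (stemming omitted here)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([request_text, offer_text])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

# Example:
# text_similarity("we are coordinating a clothing/food drive ... DM us",
#                 "I got a bunch of clothes I'd like to donate ...")
```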

We evaluate the effectiveness of different feature sets by considering three experimental conditions, using (i) only request–offer prediction probabilities and resource type prediction probabilities for each offer and request; (ii) only the text similarity; and, (iii) all features.

Functional form. A machine–learning algorithm here refers to an algorithm that can learn the relationship between the features and the target classification, in this case, match relevance. In our work, we use Gradient–Boosted Decision Trees (GBDT) (Friedman, 2001). Our small feature space makes GBDT an appropriate choice, since the trees can use complex conjunctions of features that are absent from linear classifiers. For example, the algorithm might learn that “high text similarity and high request/offer confidence implies relevance” but “high text similarity and very low request/offer confidence implies non–relevance”.
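The following sketch illustrates this setup, using scikit–learn’s gradient–boosted trees as a stand–in for the GBDT of Friedman (2001) and reusing the record layout and text_similarity helper sketched earlier; the specific feature list and names are illustrative, not the exact production configuration.

```python
from sklearn.ensemble import GradientBoostingRegressor

def pair_features(request_rec, offer_rec):
    """Feature vector for one (request, offer) pair: classifier confidences
    from Section 4 plus the tf-idf text similarity."""
    return [
        request_rec["IS-REQUEST"]["confidence"],
        offer_rec["IS-OFFER"]["confidence"],
        request_rec["RESOURCE-TYPE"]["confidence"],
        offer_rec["RESOURCE-TYPE"]["confidence"],
        text_similarity(request_rec["TEXT"], offer_rec["TEXT"]),
    ]

def train_matcher(labeled_pairs):
    """labeled_pairs: list of (request_rec, offer_rec, label), with label 1 for
    a relevant match and 0 otherwise (the collapsed crowdsourced labels).
    GBDT with squared-error loss; its prediction is used as the score s(q, d)."""
    X = [pair_features(q, d) for q, d, _ in labeled_pairs]
    y = [label for _, _, label in labeled_pairs]
    return GradientBoostingRegressor().fit(X, y)

# Usage: matcher = train_matcher(labeled_pairs)
#        s = matcher.predict([pair_features(q, d)])[0]   # score for a new pair
```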

5.2. Experiments and results

Labeling task preparation. In order to train and evaluate our matching model, we collected labels for a random sample of 1,500 request–offer pairs from the resource–related corpora constructed in Section 4. We required that one element of the pair be exclusively an offer and the other exclusively a request.

Crowdsourcing workers were asked to label each pair as “very useful,” “useful,” “somewhat useful,” or “not relevant/not useful.” Receiving multiple grades allows the system to focus learning on the instances about which workers were most confident (i.e., “very useful” and “not useful”). Nonetheless, due to the high volume of the moderate grades, we collapsed “very useful” and “useful” into the positive class, and “somewhat useful” and “not relevant/not useful” into the negative class. We asked for five independent labels per item for this task, which resulted in 68 positive and 1,113 negative labels with confidence greater than 0.6.

 

Table 6: Evaluation of matching results, using root mean squared error (RMSE) and average precision (AP); lower RMSE is better and higher AP is better.
Method with feature set Root mean squared error (RMSE) Average precision (AP)
Text–similarity only (Baseline) 0.394 ± 0.08 0.162 ± 0.08
Features except text–similarity, but including request–offer prediction probabilities 0.388 ± 0.08 0.279 ± 0.10
All features 0.383 ± 0.08 0.207 ± 0.09

 

Evaluation. We evaluate the matching performance using the root mean squared error (RMSE) of the predicted label and the average precision (AP) when ranking all matches. RMSE measures how accurately we predict the match relevance label of an arbitrary offer–request pair. AP measures the ability to distinguish relevant from non–relevant pairs, regardless of the actual label value predicted. We conducted experiments using 10–fold cross–validation. The results are summarized in Table 6.
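Both metrics can be computed with scikit–learn as sketched below; the 10–fold cross–validation loop is omitted, and y_true and y_scores are placeholders for the collapsed crowdsourced labels and the model’s predicted scores.

```python
import numpy as np
from sklearn.metrics import average_precision_score, mean_squared_error

def evaluate_matching(y_true, y_scores):
    """RMSE of the predicted relevance labels, and average precision when
    ranking all candidate pairs by their predicted score."""
    rmse = np.sqrt(mean_squared_error(y_true, y_scores))
    ap = average_precision_score(y_true, y_scores)
    return rmse, ap
```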

We make several observations from these results. First, a naïve system might attempt to perform matching using text–similarity features alone, a form of traditional information retrieval; however, our results demonstrate that this baseline is the lowest performing across all metrics. Second, although combining all features results in the lowest RMSE, using only the features derived from the confidence scores of the classifier (request/offer) predictions achieves the strongest AP: a 72 percent relative improvement in AP over the baseline. This follows from the fact that GBDT minimizes squared error on the training set; we suspect that incorporating a rank–based loss would lead to more consistent AP across the experimental settings. Nevertheless, these results provide support for performing the classification pre–processing (request–offer prediction) prior to classifying pairs as relevant matches.

 

++++++++++

6. Discussion

6.1. Summary of findings

The proposed processing pipeline of (1) building corpora of rich information about requests and offers for donation coordination, and (2) matching resource requests with offers, has been shown to be effective for this task. It is a flexible framework to which more components can be added. Even though errors may propagate through the different steps, setting high thresholds to emphasize precision over recall helps to provide high–quality inputs to the matching task.

Evaluation using 10–fold cross–validation for our classifiers (donation–related, request, offer, and resource type) shows AUC values ranging from fair (0.75) to excellent (0.98). This implies good ability to separate meaningful information despite a noisy dataset. Also, our matching model achieves good performance when we combine the prediction probabilities of the identified request or offer behavior with the text similarity of the messages. In practice, users may interact with the system’s ranking of matches and thus help improve its performance.

6.2. Recommendations

As methods for managing social media messages improve, we should attempt to move from text mining and social media analytics towards higher–level activities (or actionable information) such as supporting the coordination of actions. Effectively matching donation messages is a good example of the kind of capability that more directly impacts human actions and decisions and, in the case addressed in this work, can help build resilience and enable communities to bounce back more quickly following a disaster.

State–of–the–art approaches involve manually intensive effort, as performed by response organizations such as the American Red Cross, which cannot scale. The use of commercially available Twitter classification applications such as Crimson Hexagon and others is a good first step, but it only partially solves the problem, as a matchmaking component for requests and offers is still required.

6.3. Limitations

The main limitation of the proposed method is the recall of the various classifiers, which is challenging due to the unstructured characteristics of tweets. We believe that stronger classifiers and better features can be used at every step of the proposed system, while ensuring that high precision is preferred for every component. We note the following observations from the steps of creating the resource–related corpora and matching requests and offers:

Imbalance in the request and offer distribution. As we note in Table 4, there are roughly 10 times more exclusive requests than exclusive offers in the training corpus as well as in the predicted labels (Table 2), making it challenging to classify requests and offers. Despite using expert–driven features (the regular–expression–based patterns), there were cases of ambiguous behavior (both request and offer) in the message content. In future work, we may want to capture all types of behavior: exclusive request, exclusive offer, as well as mixed.

Imbalance in the resource type distribution. The low percentage of tweets classified into non–money classes (Table 5), such as clothing, food, etc., is partially due to our high–precision constraint on the classifiers. The constraint of exclusive request or offer behavior on the input also reduces the potential set. But more fundamentally, it is likely due to the imbalanced distribution in the dataset we analyzed, as reflected in the training set (71 percent money–related messages).

The underlying cause of this distribution may be that donations of money were the most prevalent way to help in this case. In our thousands of labeled examples, we noted that people extensively propagated messages such as “Text redcross to 90999 to donate $10 to help those people that were affected by hurricane sandy please donate”. In any case, the imbalance does not help the labeling process or the automatic classifiers. Furthermore, as we note from Table 5, the ratio of requests to offers varies across resource type classes; for example, the “clothing” class has more offers than requests, while the remaining classes have more requests than offers, with varying ratios. Again, this non–uniformity may affect the performance of the matching algorithm in the subsequent task.

Batch operation vs. continuous querying. A limitation unrelated to classification performance is that we have described this as a batch operation in which all requests and offers are known at the same time. This may not always be the case in a real system. Systems in which queries are given beforehand and are answered as new elements arrive are known as continuous querying systems (Chen, et al., 2000; Terry, et al., 1992). In practice, our system should operate in this continuous manner, with the added complexity that both requests and offers may constitute queries against the complementary set already collected at that point in time.

6.4. Future work

The framework we have described can be extended in many ways, including:

Tweet classification improvements. In some cases further information about a donation–related message may be required. First, users may want to know whether the entity that demands or supplies a resource is an individual or an organization, and/or other aspects that allow them to evaluate the credibility of a message. Second, many messages are posted on behalf of a third party (e.g., individuals not associated with the Red Cross but that ask for donations to the Red Cross); we could try to detect messages that refer to a third party.

Additionally, we can envision a hybrid system in which manual and automatic classification coexist. We could, for instance, make use of crowdsourcing to improve the quality of the matching by annotating a small subset of requests/offers in order to improve the overall matching score of the messages that are matched, or the number of messages that are matchable.

Matching improvements. Geographical distance needs to be included in the objective function for geographically dispersed disasters. Additional metadata about the tweets, such as author profiles may help to prioritize messages.

Issues related to capacity (e.g., shelter for K people) or inventory sizes (e.g., K rations of food) also need to be taken into account, when they are available in the data. As users’ sophistication in using social media tools rises, we could expect more detailed information in their messages.

In addition to the matching framework we have presented here, based on information retrieval research studies, other methods could be tested. For instance, a machine translation approach could yield good results, although it may require a larger amount of training data.

Application to other crisis information and aspects beyond donations. Ideally, automated methods such as the one we have described should be tested across datasets from different types of crises, such as earthquakes, floods, hurricanes, wildfires, etc. Also, developments for non–English languages will be helpful, especially for languages in which text processing tools are less readily available.

In addition to the problem of matching emergency relief resources, there are other similar matching problems, as the one described by Varga, et al. (2013). There are also applications beyond the crisis domain. For instance, in the healthcare domain, people often share questions and concerns about diseases, as well as their personal experience as a patient. Information seekers could be matched with patients having experiences with the same disease or treatment.

 

++++++++++

7. Conclusion

We presented a systematic study to automatically identify requests and offers of donations for various resource types including shelter, money, volunteer work, clothing and medical supplies. We also introduced a method for automatically matching them, which aids in donation coordination during emergencies.

While noting a number of limitations and avenues for future research, the methodology we have described has proven useful in practice. For instance, during the Oklahoma Tornado, which took place in 2013 in the U.S., and during Typhoon Yolanda in November 2013, we applied our processing framework to quickly identify messages related to requests for help and offers of help, and shared them with response organizations.

Reproducibility. Our dataset will be available upon request, for research purposes. End of article

 

About the authors

Hemant Purohit is an interdisciplinary (Computer and Social Sciences) researcher at Kno.e.sis — the Ohio Center of Excellence in Knowledge-enabled Computing at Wright State University, where he coordinates crisis informatics research under NSF SoCS project. He is pursuing a unique approach of people–content network analysis for analyzing social signals with insights from psycholinguistic theories of coordination to answer: whom to coordinate, why to coordinate and how to coordinate. His work also involves problem spaces of community engagement and sustainability, expert detection and presentation.
Web: http://knoesis.org/researchers/hemant
Direct comments to: hemant [at] knoesis [dot] org

Carlos Castillo (Ph.D.) is a Senior Scientist in the Social Computing group of the Qatar Foundation’s Computing Research Institute (QCRI). Prior to QCRI, Carlos worked with Yahoo Research. He has influenced research fields on several topics including information retrieval, spam detection/demotion, usage analysis and social network analysis. His current research interest is the mining of content, links, and usage data from the Web to fuel applications in the news and crisis domains.
Web: http://www.chato.cl/research/
Direct comments to: chato [at] acm [dot] org

Fernando Diaz (Ph.D.) is a researcher at the Microsoft Research NYC lab. His primary research interest is formal information retrieval models. His research experience includes distributed information retrieval approaches to Web search, temporal aspects of information access, mouse tracking, cross–lingual information retrieval, graph–based retrieval methods, and synthesizing information from multiple corpora. Currently, he is studying them in the context of unexpected crisis events.
Web: http://ciir.cs.umass.edu/~fdiaz/
E–mail: fdiaz [at] microsoft [dot] com

Amit Sheth (Ph.D.) is an educator, researcher and entrepreneur. He is currently the LexisNexis Ohio Eminent Scholar at Wright State University in Dayton, Ohio, and the director of Kno.e.sis — the Ohio Center of Excellence in Knowledge–enabled Computing which works on topics in semantic, social, sensor, and services computing over the Web, with the goal of advancing from the information age to meaning age. He is also an IEEE Fellow.
Web: http://knoesis.org/amit
E–mail: amit [at] knoesis [dot] org

Patrick Meier (Ph.D.) is an internationally recognized thought leader on the application of new technologies for crisis early warning, humanitarian response and resilience. He presently serves as Director of Social Innovation at the Qatar Foundation’s Computing Research Institute (QCRI). He is an accomplished writer and speaker, with talks at several major venues including the White House, U.N., Skoll World Forum, Club de Madrid, Mobile World Congress, PopTech, Where 2.0, TTI/Vanguard, SXSW and several TEDxs.
Web: http://irevolution.net/bio/
E–mail: pmeier [at] qf [dot] org [dot] qa

 

Acknowledgements

We thank NSF for the SoCS grant IIS–1111182 ‘Social Media Enhanced Organizational Sensemaking in Emergency Response.’ We thank our collaborators at the American Red Cross and colleagues at Kno.e.sis and QCRI for giving helpful feedback, and assisting in qualitative studies, especially Noora Al Emadi and Sarah Vieweg.

 

Notes

1. Prior work coverage: http://techpresident.com/news/wegov/24082/twitris-taking-crisis-mapping-next-level; http://www.forbes.com/sites/skollworldforum/2013/05/02/crisis-maps-harnessing-the-power-of-big-data-to-deliver-humanitarian-assistance/; http://www.thehindu.com/sci-tech/technology/gadgets/using-crisis-mapping-to-aid-uttarakhand/article4854027.ece. More at http://irevolution.net/media/.

2. Precision and Recall metrics for classification: http://en.wikipedia.org/wiki/Precision_and_recall.

 

References

Ricardo Baeza–Yates and Berthier Ribeiro–Neto, 2011. Modern information retrieval: The concepts and technology behind search. Second edition. New York: Addison–Wesley.

Ken Banks and Erik Hersman, 2009. “FrontlineSMS and Ushahidi — A demo,” ICTD ’09: Proceedings of the International Conference on Information and Communication Technologies and Development, p. 484.
doi: http://dx.doi.org/10.1109/ICTD.2009.5426725, accessed 26 December 2013.

Jiang Bian, Yandong Liu, Eugene Agichtein, and Hongyuan Zha, 2008. “Finding the right facts in the crowd: factoid question answering over social media,” WWW ’08: Proceedings of the 17th International Conference on World Wide Web, pp. 467–476, and at http://wwwconference.org/www2008/papers/pdf/p467-bianA.pdf, accessed 26 December 2013.
doi: http://dx.doi.org/10.1145/1367497.1367561, accessed 26 December 2013.

Heather Blanchard, Andy Carvin, Melissa Elliott Whitaker, and Merni Fitzgerald, 2012. “The case for integrating crisis response with social media,” white paper, American Red Cross; version at http://crisisdata.wikispaces.com/, accessed 26 December 2013.

Maged N. Kamel Boulos, Bernd Resch, David N. Crowley, John G. Breslin, Gunho Sohn, Russ Burtner, William A. Pike, Eduardo Jezierski, and Kuo–Yu Slayer Chuang, 2011. “Crowdsourcing, citizen sensing and sensor Web technologies for public and environmental health surveillance and crisis management: Trends, OGC standards and application examples,” International Journal of Health Geographics, volume 10, number 1, pp. 67–96.
doi: http://dx.doi.org/10.1186/1476-072X-10-67, accessed 26 December 2013.

Mark A. Cameron, Robert Power, Bella Robinson, and Jie Yin, 2012. “Emergency situation awareness from twitter for crisis management,” WWW ’12 Companion: Proceedings of the 21st International Conference Companion on World Wide Web, pp. 695–698, and at http://www2012.wwwconference.org/proceedings/companion/p695.pdf, accessed 26 December 2013.
doi: http://dx.doi.org/10.1145/2187980.2188183, accessed 26 December 2013.

Jianjun Chen, David J. DeWitt, Feng Tian, and Yuan Wang, 2000. “NiagaraCQ: A scalable continuous query system for Internet databases,” ACM SIGMOD Record, volume 29, number 2, pp. 379–390.
doi: http://dx.doi.org/10.1145/335191.335432, accessed 26 December 2013.

Fernando Diaz, Donald Metzler, and Sihem Amer–Yahia, 2010. “Relevance and ranking in online dating systems,” SIGIR ’10: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 66–73.
doi: http://dx.doi.org/10.1145/1835449.1835463, accessed 26 December 2013.

Rainer Dietrich and Tilman von Meltzer, 2003. Communication in high risk environments. Linguistische Berichte. Sonderheft, 12. Hamburg: Buske.

Jerome H. Friedman, 2001. “Greedy function approximation: A gradient boosting machine,” Annals of Statistics, volume 29, number 5, pp. 1,189–1,232.
doi: http://dx.doi.org/10.1214/aos/1013203451, accessed 26 December 2013.

Muhammad Imran, Shady Elbassuoni, Carlos Castillo, Fernando Diaz, and Patrick Meier, 2013. “Extracting information nuggets from disaster–related messages in social media,” In: T. Comes, F. Fiedrich, S. Fortier, J. Geldermann and T. Müller (editors). ISCRAM ’13: Proceedings of the 10th International ISCRAM Conference, at http://chato.cl/papers/imran_elbassuoni_castillo_diaz_meier_2013_extracting_information_nuggets_disasters.pdf, accessed 26 December 2013.

Robert W. Irving, 1985. “An efficient algorithm for the ‘stable roommates’ problem,” Journal of Algorithms, volume 6, number 4, pp. 577–595.
doi: http://dx.doi.org/10.1016/0196-6774(85)90033-1, accessed 26 December 2013.

Maryam Karimzadehgan and Chengxiang Zhai, 2009. “Constrained multi–aspect expertise matching for committee review assignment,” CIKM ’09: Proceedings of the 18th ACM Conference on Information and Knowledge Management, pp. 1,697–1,700.
doi: http://dx.doi.org/10.1145/1645953.1646207, accessed 26 December 2013.

A. Kongthon, C. Haruechaiyasak, J. Pailai, and S. Kongyoung, 2012. “The role of Twitter during a natural disaster: Case study of 2011 Thai flood,” PICMET ’12: Proceedings of International Conference on Technology Management for Emerging Technologies, pp. 2,227–2,232.

Kalev H. Leetaru, Shaowen Wang, Guofeng Cao, Anand Padmanabhan, and Eric Shook, 2013. “Mapping the global Twitter heartbeat: The geography of Twitter,” First Monday, volume 18, number 5, at http://firstmonday.org/article/view/4366/3654, accessed 31 August 2013.
doi: http://dx.doi.org/10.5210/fm.v18i5.4366, accessed 26 December 2013.

Michael Mathioudakis and Nick Koudas, 2010. “Twittermonitor: Trend detection over the Twitter stream,” SIGMOD ’10: Proceedings of the 2010 International Conference on Management of Data, pp. 1,155–1,158.
doi: http://dx.doi.org/10.1145/1807167.1807306, accessed 26 December 2013.

Fred Morstatter, Jürgen Pfeffer, Huan Liu, and Kathleen M. Carley, 2013. “Is the sample good enough? Comparing data from Twitter’s streaming API with Twitter’s firehose,” ICWSM ’13: Proceedings of Seventh International AAAI Conference on Weblogs and Social Media, pp. 400–408; version at http://arxiv.org/abs/1306.5204, accessed 26 December 2013.

Leysia Palen, Starr Roxanne Hiltz, and Sophia B. Liu, 2007. “Online forums supporting grassroots participation in emergency preparedness and response,” Communications of the ACM — Emergency response information systems: emerging trends and technologies, volume 50, number 3, pp. 54–58.
doi: http://dx.doi.org/10.1145/1226736.1226766, accessed 26 December 2013.

Hemant Purohit, Andrew Hampton, Valerie L. Shalin, Amit Sheth, John Flach, and Shreyansh Bhatt, 2013. “What kind of #communication is Twitter? Mining #psycholinguistic cues for emergency coordination,” Computers in Human Behavior, volume 29, number 6, pp. 2,438–2,447.
doi: http://dx.doi.org/10.1016/j.chb.2013.05.007, accessed 26 December 2013.

Aleksandra Sarcevic, Leysia Palen, Joanne White, Kate Starbird, Mossaab Bagdouri, and Kenneth Anderson, 2012. “‘Beacons of hope’ in decentralized coordination: Learning from on–the–ground medical twitterers during the 2010 Haiti earthquake,” CSCW ’12: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp. 47–56.
doi: http://dx.doi.org/10.1145/2145204.2145217, accessed 26 December 2013.

Amit Sheth, 2009. “Citizen sensing, social signals, and enriching human experience,” IEEE Internet Computing, volume 13, number 4, pp. 87–92.
doi: http://dx.doi.org/10.1109/MIC.2009.77, accessed 26 December 2013.

Kate Starbird and Jeannie Stamberger, 2010. “Tweak the tweet: Leveraging microblogging proliferation with a prescriptive syntax to support citizen reporting,” ISCRAM ’10: Proceedings of the 7th International ISCRAM Conference; version at http://repository.cmu.edu/cgi/viewcontent.cgi?article=1034&context=silicon_valley, accessed 26 December 2013.

Teun Terpstra, A. de Vries, R. Stronkman, and G.L. Paradies, 2012. “Towards a realtime Twitter analysis during crises for operational crisis management,” ISCRAM ’12: Proceedings of the 9th International ISCRAM Conference; version at http://www.iscramlive.org/ISCRAM2012/proceedings/172.pdf, accessed 26 December 2013.

Douglas Terry, David Goldberg, David Nichols, and Brian Oki, 1992. “Continuous queries over append–only databases,” SIGMOD ’92: Proceedings of the 1992 ACM SIGMOD International Conference on Management of Data, pp. 321–330.
doi: http://dx.doi.org/10.1145/130283.130333, accessed 26 December 2013.

United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA), 2013. “Cluster coordination,” at http://www.unocha.org/what-we-do/coordination-tools/cluster-coordination, accessed 31 August 2013.

Istvàn Varga, Motoki Sano, Kentaro Torisawa, Chikara Hashimoto, Kiyonori Ohtake, Takao Kawai, Jong–Hoon Oh and Stijn De Saeger, 2013. “Aid is out there: Looking for help from tweets during a large scale disaster,” ACL ’13: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pp. 1,619–1,629; version at http://aclweb.org/anthology/P/P13/P13-1159.pdf, accessed 26 December 2013.

Sarah Vieweg, 2012. “Situational awareness in mass emergency: A behavioral and linguistic analysis of microblogged communications,” Ph.D. dissertation, University of Colorado at Boulder; version at http://works.bepress.com/vieweg/15/, accessed 26 December 2013.

Sarah Vieweg, Amanda L. Hughes, Kate Starbird, and Leysia Palen, 2010. “Microblogging during two natural hazards events: What Twitter may contribute to situational awareness,” CHI ’10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1,079–1,088.
doi: http://dx.doi.org/10.1145/1753326.1753486, accessed 26 December 2013.

Ian H. Witten, Eibe Frank, and Mark A. Hall, 2011. Data mining: Practical machine learning tools and techniques. Third edition. Burlington, Mass.: Morgan Kaufmann.

Zhe Zhao and Qiaozhu Mei, 2013. “Questions about questions: An empirical analysis of information needs on Twitter,” WWW ’13: Proceedings of the 22nd International Conference on World Wide Web, pp. 1,545–1,556, and at http://www2013.wwwconference.org/proceedings/p1545.pdf, accessed 26 December 2013.

 


Editorial history

Received 31 August 2013; accepted 29 November 2013.


Copyright © 2014, First Monday.
Copyright © 2014, Hemant Purohit, Carlos Castillo, Fernando Diaz, Amit Sheth, and Patrick Meier.

Emergency–relief coordination on social media: Automatically matching resource requests and offers
by Hemant Purohit, Carlos Castillo, Fernando Diaz, Amit Sheth, and Patrick Meier.
First Monday, Volume 19, Number 1 - 6 January 2014
http://firstmonday.org/ojs/index.php/fm/article/view/4848/3809
doi: 10.5210/fm.v19i1.4848.




