The US Congressional Research Service defines situational awareness as “the ability to identify, process, and comprehend critical elements of an incident or situation” which can “help officials determine where people are located, assess victim needs, and alert citizens and first responders to changing conditions and new threats.”

Situational awareness begins with crisis discovery itself, and here, too, our collective use of ICT offers new potential. The emerging field of ‘infoveillance’ monitors the large amounts of data we create in our daily, connected lives to improve awareness, identify new events, and measure trends. The most famous example of infoveillance in a public health context is Google.org’s Flu Trends, which measures the volume of illness-related search queries across Google’s global footprint to identify and predict patterns in health. The data are available to researchers and can be compared across regions.
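
To make the idea concrete, the sketch below tracks the weekly volume of illness-related search queries per region against a hypothetical query log. The term list, log format, and function name are invented for illustration; Google’s actual methodology is far more sophisticated.

```python
from collections import Counter, defaultdict
from datetime import date

# Hypothetical set of illness-related search terms (illustrative only)
FLU_TERMS = {"flu symptoms", "influenza", "fever", "cough medicine"}

def weekly_flu_query_volume(query_log):
    """Count flu-related queries per region and ISO week.

    `query_log` is an iterable of (date, query, region) tuples; the real
    system works on aggregate, anonymized search data at much larger scale.
    """
    volume = defaultdict(Counter)
    for day, query, region in query_log:
        if query.lower() in FLU_TERMS:
            iso_year, iso_week, _ = day.isocalendar()
            volume[region][(iso_year, iso_week)] += 1
    return volume

# Toy example
log = [
    (date(2013, 1, 7), "flu symptoms", "US-MA"),
    (date(2013, 1, 8), "cough medicine", "US-MA"),
    (date(2013, 1, 9), "weather", "US-MA"),
]
print(weekly_flu_query_volume(log))  # -> 2 flu-related queries for US-MA in ISO week 2 of 2013
```

A weekly series like this, compared region by region against official surveillance counts, is the basic kind of signal Flu Trends makes available.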

Another form of infoveillance is our ability to monitor and “listen” to social media platforms. For example, the relatively accessible nature of Twitter data has led researchers to track the spread of flu and bird flu using publicly available information collected from the platform, and other studies have detected earthquakes from Twitter’s social signals. The United States Geological Survey created a platform, Did You Feel It?, to collect distributed citizen reports of tremors, and has also distributed software that reads laptops’ built-in accelerometers en masse.
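
A minimal sketch of this kind of “listening” follows: it scans a hypothetical stream of timestamped messages for earthquake-related keywords and flags a burst when mentions in a sliding window exceed a threshold. The keyword list, window size, and threshold are assumptions for illustration; production systems consume the Twitter streaming APIs and use far more robust detection.

```python
from collections import deque
from datetime import timedelta

# Illustrative keywords; real detectors are trained on labeled data
EARTHQUAKE_TERMS = ("earthquake", "tremor", "shaking")

def detect_bursts(messages, window=timedelta(minutes=10), threshold=50):
    """Yield (timestamp, count) whenever keyword mentions inside the
    sliding window reach `threshold`.

    `messages` is an iterable of (timestamp, text) pairs in time order.
    """
    recent = deque()  # timestamps of matching messages still inside the window
    for ts, text in messages:
        lowered = text.lower()
        if any(term in lowered for term in EARTHQUAKE_TERMS):
            recent.append(ts)
        # Drop matches that have aged out of the window
        while recent and ts - recent[0] > window:
            recent.popleft()
        if len(recent) >= threshold:
            yield ts, len(recent)
```

The sliding-window count is the simplest possible burst detector; published earthquake-detection studies build more sophisticated statistical models over the same kind of signal, and the same pattern with different keyword sets underlies the flu-tracking work.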

A relatively lightweight way to realize the value of participatory media in managing a crisis is to help emergency managers “listen” to social media messages posted by the affected population, so they can more accurately determine needs and improve situational awareness. There is certainly still a great need to train formal aid agencies to use social media listening platforms like Radian6 to glean actionable information from a busy stream of data. This is an interesting development in tech-mediated crisis response, but a weak form of participatory aid, because all it asks of potential collaborators is that they post updates as they would anyway. As the Red Cross survey found, 72% of respondents planned to mention crises on social media.

Given the volume of data, the ability to semi-automatically extract a population’s needs from organic conversation is alluring, and it has attracted considerable research. One early system attempted to train people to adopt a crisis-specific syntax in their tweets to facilitate the aggregation of that information, although it will be a long road to convince the public to adopt such an encoding at meaningful scale. Further work has approached these large volumes of data by bucketing crisis tweets into more easily understood sub-topics, geolocating the information they contain, and analyzing the information curation performed by the crowd itself.
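
A toy sketch of the bucketing and geographic identification steps is shown below, using hand-written keyword lists and a tiny gazetteer. The categories and place names are invented, and real systems rely on trained classifiers and proper geocoding services rather than simple string matching.

```python
# Illustrative sub-topic buckets and a toy gazetteer (all names hypothetical)
TOPIC_KEYWORDS = {
    "shelter": ["shelter", "housing", "tent"],
    "medical": ["injured", "hospital", "medicine", "doctor"],
    "water_food": ["water", "food", "hungry", "thirsty"],
}
GAZETTEER = {"port-au-prince", "jacmel", "leogane"}

def bucket_tweet(text):
    """Return the sub-topics and place names mentioned in one tweet."""
    lowered = text.lower()
    topics = [topic for topic, words in TOPIC_KEYWORDS.items()
              if any(word in lowered for word in words)]
    places = [place for place in GAZETTEER if place in lowered]
    return topics, places

print(bucket_tweet("No water or food near Jacmel, several people injured"))
# (['medical', 'water_food'], ['jacmel'])
```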

More ambitious projects underway seek to apply machine learning and Natural Language Processing to large corpora of social media posts to extract key findings with little human intervention. One of the most advanced projects in this space is the Qatar Computing Research Institute’s Artificial Intelligence for Disaster Response (AIDR). Conceived as an open source Twitter dashboard for disaster response, the tool moves beyond the manual classification found in earlier research, combining automatic tagging of tweets with microwork that delegates what still needs to be completed by humans. According to Patrick Meier’s update post on the project, the system can already automatically identify tweets containing the following (a sketch of this classify-then-route pattern appears after the list):

• informative content (in contrast to personal messages or information unhelpful for disaster response)
• eye-witness reporting
• pictures, video footage, broadcast media mentions
• reports of casualties and infrastructure damage
• reports of people missing, seen and/or found
• messages of caution and advice
• calls for help and important needs
• offers of help and support

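A minimal sketch of that classify-then-route pattern: an automatic classifier labels each tweet, and items it is unsure about are queued for human volunteers. The confidence threshold, data structures, and toy classifier are assumptions for illustration, not AIDR’s actual implementation.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, not AIDR's real setting

@dataclass
class Triage:
    auto_labeled: list = field(default_factory=list)        # (text, label) pairs
    needs_human_review: list = field(default_factory=list)  # sent to volunteers

def route(tweets, classify):
    """Route tweets by classifier confidence.

    `classify(text)` returns a (label, confidence) pair; anything below the
    threshold is delegated to human microworkers instead of being auto-tagged.
    """
    triage = Triage()
    for text in tweets:
        label, confidence = classify(text)
        if confidence >= CONFIDENCE_THRESHOLD:
            triage.auto_labeled.append((text, label))
        else:
            triage.needs_human_review.append(text)
    return triage

# Toy classifier standing in for a model trained on volunteer-labeled tweets
def toy_classifier(text):
    if "help" in text.lower():
        return "request_for_help", 0.95
    return "other", 0.40

result = route(["We need help, roads blocked", "Nice sunset today"], toy_classifier)
print(len(result.auto_labeled), len(result.needs_human_review))  # 1 1
```
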
The team has classified groups of tweets for several types of disaster to train algorithms to find others like them. Additional modules will enable further analysis of otherwise unwieldy volumes of tweets. Similar projects include Social Media Tracking and Analysis Software (SMTAS) by Mississippi State University’s Social Science Research Center, which aggregates data from multiple social networks to provide responders with better contextual understanding. Another example is the open source CrisisTracker platform, under development at Madeira University, which mines Twitter in the wake of a disaster and automatically clusters messages to facilitate crowdsourced organization of the large volumes of data.
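
One simple way to cluster near-duplicate crisis messages is sketched below, grouping tweets greedily by token overlap. The similarity measure and threshold are illustrative assumptions; CrisisTracker’s actual pipeline is considerably more scalable.

```python
def tokens(text):
    """Lowercased word set for a message."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_messages(messages, threshold=0.5):
    """Greedy clustering: attach each message to the first cluster whose
    representative token set is similar enough, else start a new cluster."""
    clusters = []  # list of (representative_token_set, [messages])
    for msg in messages:
        toks = tokens(msg)
        for rep, members in clusters:
            if jaccard(toks, rep) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((toks, [msg]))
    return [members for _, members in clusters]

print(cluster_messages([
    "bridge collapsed on main road",
    "main road bridge collapsed",
    "shelter open at the school",
]))
# [['bridge collapsed on main road', 'main road bridge collapsed'],
#  ['shelter open at the school']]
```

Once near-duplicate messages are collapsed into story-level clusters, volunteers can organize and verify stories rather than wading through individual tweets, which is the crowdsourced organization CrisisTracker aims to facilitate.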