How hashtags went from helpful to harmful in fighting wildfires
Social media can be a useful tool during natural disasters, but disinformation is drowning out the truth — and making the fight harder for first responders.
SALT LAKE CITY — Firefighters are battling more than just smoke and flames in the great Northwest. Rumors and misinformation are spreading just as fast.
In Oregon, the Douglas County Sheriff’s Office posted to social media, “Rumors spread just like wildfire and now our 9-1-1 dispatchers and professional staff are being overrun with requests for information and inquiries on an UNTRUE rumor that 6 Antifa members have been arrested for setting fires in DOUGLAS COUNTY, OREGON.”
Washington and Oregon officials pleaded for people to stop spreading unsubstantiated claims that antifa and the Proud Boys — one an unstructured movement against fascism and racism, the other a far-right organization, both known for engaging in counterprotests and physical force — had started the wildfires that have burned more than 5 million acres on the West Coast.
“I also want to address an issue that keeps coming up, even from some of the public that we are talking to while working. It is hot, dry, and fire spreads quickly in those conditions. There is nothing to show it’s Antifa or Proud Boys setting fires. Wait for information,” wrote firefighter Matt Lowery on the East Pierce Fire & Rescue union Facebook page, the Associated Press reported.
According to scientists, rising temperatures are the primary reason that wildfires have gotten worse. People, freak lightning storms and downed power lines ignite the flames. A history of fire suppression has left a plethora of plants that serve as fuel.
Misinformation makes the fight to contain the fires even more difficult.
Social media can be a helpful tool
In the earlier days of social media, the platforms became useful tools during natural disasters. People with fire or police scanners used Twitter and other tools to pass along information.
“In 2007 and 2008 we saw these social media channels as a great opportunity to fill in information when there was an absence,” said Jeannette Sutton, associate professor in the College of Emergency Preparedness, Homeland Security, and Cybersecurity at the University at Albany. Researchers saw positive social activity on platforms, with information about who needed to evacuate, where to go if you had to evacuate, and what resources were available to share.
In fact, the hashtag — the ubiquitous pound sign now used as a device for labeling posts by category — first took off on Twitter after wildfires broke out in Southern California in 2007 and people needed a way to find relevant information.
“The hashtag created a signal and noise to help those citizen reporters, official reporters on the ground, and people who just had useful things to offer about what they were seeing or how they could help,” said Leysia Palen, a professor in the department of information science at the University of Colorado Boulder.
“Fast forward 13 years and now we have California wildfires, this time in the north, and the hashtags don’t provide a signal anymore.”
While there are still people tweeting out helpful information, their posts are getting drowned out by those spreading disinformation — deliberately disruptive voices that render a once-useful tool dangerous.
Hijacking disaster information
There are two categories of rumors, Sutton explained. Some arise organically. Others are intentionally misleading and malicious.
Many rumors spread as people earnestly try to make sense of what’s happening. That’s what Sutton considers the more natural kind of rumoring, “where in the absence of information from trusted public officials, people seek out other information.”
These rumors can start when someone poses a question, like asking where a particular fire is burning, and can then morph into false claims that a fire is actually burning in a specific place.
According to Palen and Sutton, such genuine mistakes that could be easily corrected no longer dominate social media.
“There’s people that are behaving very badly, and now there’s automation that enables that behavior to just go and go and go without any kind of governing,” Palen said. “I’m alarmed.”
The false rumors about the origins of the wildfires on the West Coast continued to spread even after officials in Oregon and Washington made statements refuting the claims, news outlets published fact checks, and the FBI released a statement saying its investigations had found reports of “extremists” setting wildfires in Oregon to be untrue.
The FBI added what could be taken as a reprimand: “Conspiracy theories and disinformation take valuable resources away from local fire and police agencies working around the clock to bring these fires under control. Please help our entire community by only sharing validated information from official sources.”
Nevertheless, stories with headlines like “Man Arrested for Starting Oregon Fire Gets Released Without Bail, Sets 6 More Fires” were shared more than 40,000 times on Facebook, the New York Times reported.
Palen compared these rumors to another kind of hijacking that takes place during smaller, more regional natural disasters.
She explained that during tornadoes, small news outlets from around the world, sites that are “pseudo news,” will find the location of the event and put a geotag on their tweet. That way, when people are looking for tornado information they come across unrelated tweets.
In this case, they’re exploiting the wildfires to amplify a message about antifa, Palen said.
“Wildfires” becomes just another buzzword as the global community’s desire to understand the devastation on the West Coast is subverted, and further inflamed, by the addition of another term that has become politically and emotionally fraught.
“They are happening to connect it because they need the word wildfire and then they use the hashtag,” Palen said.
Look at the tweets, and it’s almost as if somebody just mashed the two trending terms into one post, leveraging Twitter’s algorithm for maximum impact.
“In another time, those nonsensical things might have just passed on without much notice,” Palen wrote in an email. But now, the messages don’t need to be finessed, or even particularly convincing, to take hold. What she terms “the Twitter machine” will either amplify them or it won’t; either way, there’s an unlimited ability to publish.
She has researched the ways disinformation spreads during the pandemic and has become increasingly alarmed by “the Twitter machine.” There’s a network that kicks into action when certain kinds of messages are sent out. That network amplifies all kinds of information, both good and bad, but ultimately it seems to be drowning out authoritative voices.
Amplifying the truth
While it’s an uphill battle against an algorithm that doesn’t seem to favor accuracy, there are some things that could help.
Timing makes a difference, according to Sutton at the University at Albany. Making sure that emergency officials update and disseminate information quickly (or knock down rumors) could increase their reach.
Because emergency managers are “fighting against these algorithms and these individuals with really large audiences,” Sutton said, it would also help for them to keep a more active presence on social media to grow their follower numbers, and for more people to retweet accurate information.
“Emergency managers have been trying to get Twitter to prioritize emergency tweets for years,” Sutton said. “It just doesn’t happen.”
However, if any progress is to be made in the fight against disinformation, the presence of authoritative voices on social media is crucial.
“The officials who have something to contribute need to be out there and be heard,” Sutton explained.
“Which is simple on its face, but really hard to actually do.”