In Australia, Facebook is once again hamstrung by its business model
Last month, the Australian government made headlines with a new law forcing Big Tech platforms, namely Google and Facebook, to pay publishers for news content. The move was ostensibly meant to provide a new revenue stream supporting journalism, but the legislation is also the latest development in a succession of moves by influential News Corp CEO Rupert Murdoch to strike back at the online platforms sapping his publications’ advertising revenue.
While Google brokered a deal with News Corp and other major Australian publishers, Facebook decided instead to use a machine learning tool to delete all “news” links from Australian Facebook. Caught in the wave of takedowns were also non-news sites: public health pages providing key coronavirus information, trade unions, and a number of other entities that share, but do not produce, news content. Facebook has since backtracked, and Australian news content has been allowed back on the platform.
The fiasco illustrates a broader issue facing both the company and Big Tech in general: the spectre of regulatory action. This trend explains the influx of politically influential figures entering Facebook’s employ, like former UK Deputy Prime Minister Nick Clegg, now Facebook’s Vice-President of Global Affairs and Communications.
Facebook’s chronic problem isn’t the aggressive methods of its public affairs team, nor its CEO waxing poetic about free speech principles only to reverse course later. Facebook is hamstrung by its own business model, which incentivizes it to prioritize user engagement above all else.
The Australia case is reminiscent of another moment in the struggle between Big Tech and governments. In 2015, a pair of gunmen murdered a group of people at an office holiday party in San Bernardino, CA with what were later understood to be jihadist motives. After the attack, Apple CEO Tim Cook seized the moment to solidify Apple’s brand image around privacy, publicly refusing the federal government’s requests to create a backdoor in iOS.
This principled stand was backed up by Apple’s business model, which involves selling hardware and software as a luxury brand, not selling data or behavioral insights. Cook’s move was both ethically defensible and strategically sound: he protected both users’ privacy and his brand’s image.
In the Australian case, different actors are involved. Google, like Facebook, relies on mining data and behavioral insights to generate advertising revenue. However, in the case of news, Facebook and Google have different incentives around quality. On the podcast “Pivot”, Scott Galloway of NYU pointed out that Google has a quality incentive when it comes to news: users trust Google to feed them quality results, so Google would naturally be willing to pay to access professional journalists’ content.
More people use Google than any other search engine because they trust it to lead them not just to engaging information, but to correct information. Google therefore has a vested commercial interest in its algorithms delivering the highest quality response to users’ search queries. Like Apple in 2015, Google can both take an ethical stand — compensating journalists for their work — while also playing to the incentives of its business model.
On the other hand, Facebook’s business model is based on engagement. It doesn’t need you to trust the feed; it needs you to be addicted to the feed. The News Feed is most effective at attracting and holding attention when it gives users a dopamine hit, not when it sends them higher-quality results. To Facebook, fake news and real news are difficult to distinguish amongst the fodder used to keep people on its platform.
In short, from Facebook’s perspective, it doesn’t matter if the site sends you a detailed article from the Wall Street Journal or a complete fabrication from a Macedonian fake news site. What matters is that the user stays on Facebook.com, interacting with content as much as possible to feed the ad targeting algorithms.
The immediate situation in Australia has been resolved, with USD 1bn over three years having been pledged to support publishers. But the fundamental weakness of Facebook’s reputation is becoming obvious. Regulators are clearly eager to take shots at the company in the wake of the Cambridge Analytica scandal, debates over political advertising, and the prominent role the site played in spreading conspiracies and coronavirus disinformation.
In short, shutting down news was a bad look. Zuckerberg may have been in the right on substance — free hyperlinking is a crucial component of an open internet. But considering the company has already attracted the ire of regulators around the world, this was likely not the ideal time to take such a stand.
In any case, Australia’s efforts, whether laudable or influenced by News Corp’s entrenched power, are largely for naught. As many observers have pointed out, the long-term problem facing journalism is the advertising duopoly of Google and Facebook. And the only way out of that problem is robust anti-trust action. Big Tech services may be used around the world, but only two legislatures have any direct regulatory power over the largest of these companies: the California State Assembly in Sacramento, and the United States Congress. Though the impact of these technologies is global, the regulatory solutions to tech issues will likely have to be American, as long as US-based companies continue to dominate the industry.
A 5-minute introduction to Political Astroturfing.
At Wonk Bridge, among our broader ambitions is a fuller understanding of our “Network Society”. In today’s article, we’re aiming to connect several important nodes in that broader ambition. Our more seasoned readers will already see how Political Astroturfing simultaneously plays out across both the online and offline dimensions, ultimately damaging the individual’s ability to mindfully navigate between them.
Political Astroturfing is a form of manufactured and deceptive activity initiated by political actors who seek to mimic bottom-up (or grassroots) activity by autonomous individuals. (Slightly modified from Kovic et al.’s 2018 definition, which we found the most accurate and concise.)
While we will focus on astroturfing conducted exclusively by digital means, do keep in mind that this mischievous political practice is as old as human civilisation. People have always sought to “Manufacture Consent” through technologically-facilitated mimicry, and have good reason to continue resorting to the prevalent communications technologies of the Early Digital age to do so. And without belabouring the obvious, mimicry has always been a popular tactic in politics because people continue to distrust subjectivity from parties who are not friends, family, or “of the same tribe”.
Our America Correspondent and Policy columnist Jackson Oliver Webster wrote a piece about how astroturfing was used to stir and then organise the real-life anti-COVID lockdown protests across the United States last April. Several actors began the astroturfing campaign by opening a series of “Re-open” website URLs and then connecting said URLs to “Operation Gridlock” type Groups on Facebook. Some of these Groups then organised real-life events calling for civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa.
The #Re-Open protests are a great example of the unique place astroturfing has in our societal make-up. Such campaigns work best when taking advantage of already volatile or divisive real-world situations (such as the Covid-19 lockdowns, which were controversial amongst a slice of the American population), but are initiated and sped up by mischievous actors with intentions unaligned with those of the protesters themselves. In Re-open’s case, one family of conspirators — the Dorr Brothers — used the websites to harvest visitors’ data and push anti-lockdown and pro-gun apparel. The intentions of astroturfers can thus be manifold, from a desire to stir up action to fuelling political passions for financial gain.
The sharp-end of Fake news
Astroturfing is often mentioned in the same breath as Fake News. Both are seen as ways to artificially shape peoples’ appreciation of “reality” via primarily digital means.
21st-century citizenship, at least where medium- and large-scale political activity and discourse in North America and Europe are concerned, is supported by infrastructure on social networking sites. The beer halls and market squares have emptied in favour of Facebook Groups, Twitter feeds and interest-based fora where citizens can spread awareness of political issues and organise demonstrations. At the risk of igniting a philosophical debate in the comments, I would suggest that the current controversy surrounding Fake News is deeply connected with the underlying belief that citizens today are unprepared or unable to critically appraise the information circulated on this digital political infrastructure as well as they might have done offline. Indeed, the particularity of astroturfing lies in its manipulation of our in-built information filtration mechanism, or what Wait But Why refers to as a “Reason Bouncer”.
Our information filtration mechanism is how we decide which information, from both the virtual and real dimensions, is worth considering as “fact” or “truth” and which should be discarded or invalidated. As described in “The Story of Us”, information that appeals to an individual’s primal motivations, values or morals tends to be accepted more easily by the “Reason Bouncer”, as does information coming from “trustworthy sources” such as friends, family or other “in-group individuals”. Of course, just as teenagers use fake IDs to sneak into nightclubs, astroturfing seeks to get past your “Reason Bouncer” by mimicking the behaviour and appealing to the motivations of your “group”.
The effectiveness of this information filtration “exploit” can be seen in the 2016 Russian astroturfing attack in Houston, Texas. Russian actors, operating from thousands of kilometers away, created two conflicting communities on Facebook, one called “Heart of Texas” (right-wing, conservative, anti-Muslim) and the other called the “United Muslims of America” (Islamic). They then organised concurrent protests on the question of Islam in the same city: one called “Save Islamic Knowledge” and another called “Stop the Islamification of Texas”, right in front of the Islamic Da’wah Center of Houston. The key point here is that the astroturfing campaign was conducted in two stages: infiltration and activation. Infiltration was key to getting past the two Texan communities’ “Reason Bouncers”, by establishing credibility over several months through the creation, population and curation of the Facebook communities. All that was then required to “activate” both communities was the appropriate time, place and occasion.
The “Estonian Solution”
Several examinations of the astroturfing issue have pointed out that ordinary citizens, rather than the government or military, are often the targets of disinformation and disruption campaigns using the astroturfing technique. Steven L. Hall and Stephanie Hartell rightly point to the Estonian experience with Russian disinformation campaigns as a possible starting point for improving societal resilience to astroturfing.
As one of the first Western countries to experience a coordinated disinformation campaign, in 2007, the people of Estonia rallied around the need for a coordinated Clausewitzian response (Government, Army, and People) to Russian aggression: “Not only government or military, but also citizens must be prepared”. Hall and Hartell note the remarkable (by American standards) civilian response to Russian disinformation, including the creation of a popular volunteer-run fact-checking blog called PropaStop.org.
Since 2016, the anti-fake-news and fact-checking industry in the United States has been booming, with more than 200 fact-checking organisations active as of December 2019. The fight against disinformation, and against the methods that make astroturfing possible, is indeed alive and well in the United States.
Where I disagree with Hall and Hartell, who recommend that the USA adopt initiatives similar to Estonia’s, is that disinformation and astroturfing cannot be meaningfully reduced in the USA without addressing the internal political and social divisions that make the astroturfer’s job all too easy and effective. The United States is a divided country, along both governmental and popular lines. How can the united action of Estonia be replicated when two of the three axes (Government, Military and People) are compromised?
This — possibly familiar — Pew Research data visualisation (click here for the research) shows just how much this division has deepened over time. Astroturfing campaigns like the one in Houston in 2016 operate comfortably in tribal environments, where suspicion of the internal “Other” (along racial, religious or political lines) trumps that of the true “Other”, found at the opposite end of the globe. In divided environments, fact-checking enterprises also suffer from weakened credibility and the suspicion of the very people they seek to protect.
In such environments, short of addressing the issues that divide a country, the best technologists can perhaps do is create new tools transparently and openly, so as to avoid suspicion and invite inspection, while also seeking every opportunity to work in partnership with Government, the Military and citizens, with the objective of arming the latter with the ability to critically evaluate information online and understand what digital tools and platforms actually do.
Network Society: a society in which the individual interacts with a complex interplay of online and offline stimuli to formulate his or her more holistic experience of the world we live in. The term was coined by Spanish sociologist Manuel Castells.
Over the weekends of the 17th and 24th of April, thousands of Americans showed up at intersections and state houses across the country to protest against social distancing rules, the closure of businesses, and other measures taken by mayors and governors to combat the Covid-19 pandemic. Depending on the location, protestors ranged from the pedestrian to the extreme and bizarre. Some groups were calm, carrying signs calling on governors to reopen businesses. Other groups were toting semi-automatic rifles, combat gear, and QAnon paraphernalia.
Users on reddit, in particular /u/Dr_Midnight, noticed a strange pattern in certain sites purporting to support the anti-quarantine protests. Dozens of sites with the URL format reopen[state code/name].com had all been registered on 17 April within minutes of each other, many from a single round of GoDaddy domain purchases from the same IP address in Florida. The original Reddit posts were removed by moderators because they revealed private information about the individual who had registered the reopen.com domains. Here are screenshots without sensitive information, as examples:
Sites urging civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa all had the same “contact your legislator” widget installed, and the “blog” sections of these and other states’ websites cross-linked to each other.
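The clustering the redditors spotted — dozens of domains registered within minutes of one another — is simple to surface programmatically once you have registration timestamps. Below is a minimal sketch of the idea; the domains and timestamps are made up for illustration, and a real investigation would pull registration times from WHOIS records rather than a hard-coded list:

```python
from datetime import datetime, timedelta

# Hypothetical WHOIS-style records: (domain, registration timestamp).
# In practice these would come from WHOIS lookups, not a literal list.
records = [
    ("reopenpa.com", "2020-04-17 14:02:11"),
    ("reopenwi.com", "2020-04-17 14:03:45"),
    ("reopenoh.com", "2020-04-17 14:05:02"),
    ("reopenmn.com", "2020-04-17 14:06:30"),
    ("example.com", "2020-03-01 09:00:00"),
]

def burst_registrations(records, window_minutes=10):
    """Return groups of domains whose registrations fall within
    `window_minutes` of the previous registration in the group —
    the signature of a single bulk-buying session."""
    parsed = sorted(
        (datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), domain)
        for domain, ts in records
    )
    clusters, current = [], [parsed[0]]
    for entry in parsed[1:]:
        if entry[0] - current[-1][0] <= timedelta(minutes=window_minutes):
            current.append(entry)  # still inside the burst window
        else:
            clusters.append(current)
            current = [entry]  # gap too large: start a new cluster
    clusters.append(current)
    # Only clusters of more than one domain suggest coordination.
    return [[domain for _, domain in c] for c in clusters if len(c) > 1]

print(burst_registrations(records))
```

Run against the toy data, only the four “reopen” domains cluster together; the unrelated March registration is discarded. The same grouping logic, fed with real WHOIS data and combined with shared registrant IP addresses, is essentially what let /u/Dr_Midnight connect the sites.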
Many of the reopen.com sites purchased on 17 April are dormant and have no content at the time of publication. However, several of these domains forward users to a string of state gun rights advocacy websites, all named [state]gunrights.org. The Pennsylvania, Minnesota, Michigan and other “gun rights” sites and associated Facebook groups belong to the Dorr brothers, gun rights extremists and conservative advocates whom Republican lawmakers in the Midwest have repeatedly labeled “grifters”. Multiple reopen.com sites have “shop” sections selling the Dorrs’ anti-quarantine and pro-gun rights merchandise.
Several reopen.com URLs lead to Facebook groups calling themselves “Operation Gridlock [city name]”. Here are the identical descriptions for the LA and Tennessee Gridlock Facebook groups:
Security researcher Brian Krebs also identified reopen.com domains, including reopenmississippi.com, that had eventually been sold on to In Pursuit Of LLC, a for-profit political communications agency reported to belong to the conservative billionaire industrialist Charles Koch. Non-profit journalistic site ProPublica has identified several former In Pursuit Of employees who are now on the Trump White House communications staff. It is unclear who registered reopenmississippi.com and other sites purchased by for-profit political consultancies, as many were not purchased during the 17 April afternoon buying spree in Florida.
A further twist in the story came on 23 April, when a man named Michael Murphy, whose IP address was identified in /u/Dr_Midnight’s original removed reddit investigation, was interviewed by reporter Brianna Sacks. It turns out that Murphy, a struggling day trader from Sebastian, Florida, spent $4,000 on dozens of reopen.com domains in the hopes of selling them on to liberal activists looking to prevent conservatives from organizing protests. An attempt to out-grift the grifters.
It is unclear whether Murphy’s intentions were political, financial, or both. He describes his politics as “generally liberal”; however, his business has been suffering in recent years — he even tried to reorient to selling N-95 mask cleaning solution when the coronavirus outbreak worsened in March, but was unsuccessful. Murphy even claims to have attempted to contact late night TV host John Oliver, hoping the comedian would pay him for domains to use in one of his show’s signature trolling stunts. Murphy came forward to reporters after anti-right-wing reddit users began doxxing him, revealing his name, address, and businesses. Any reopen.com sites not registered to Murphy’s Florida IP address were likely bought by the Dorr brothers or Koch-backed organizations before Murphy could snatch them up.
What do we make of all this?
Relatively unsophisticated technical actors have shown themselves capable of mobilizing large numbers of citizens into the streets. A few well-named URLs and a decent Facebook following are all it takes for a series of protests to be organized across the country with little notice. Protesting citizens are entirely unaware that any central coordination of their activities exists beyond their local social media groups. However these groups were not genuine expressions of opinion by concerned private citizens. Most were created concurrently by individuals or organizations with the explicit intent of political or financial gain through advocating activities that contradict public health rules and guidelines in a time of national crisis.
It’s important to note at this point that these protests represent the views of a very small minority of voters, regardless of party. A poll conducted by the Democracy Fund and UCLA in late March and again in early April shows broad approval of, and compliance with, local and state social distancing guidelines and business closures. Around 87% of respondents approved of varying measures imposed by mayors and governors, and 81% said they hadn’t left their homes over the last two weeks except for buying necessities, up from 72% in late March. Majorities of Democrats, Republicans, and Independents all believed quarantine measures to be necessary. Despite this general consensus in support of emergency measures, astroturfing operations were able to mobilize a diverse set of online activists, spectators, social media buffs, conspiracy theorists, and gun rights absolutists, even reaching all the way to disgruntled mainstream conservatives and their families.
The Internet and social media catalyze kinetic action
Centrally coordinated puppeteering of otherwise spontaneous demonstrations is not new. What is novel is the ability to do so at a national scale with almost no investment of resources of any kind — financial or otherwise. All it took was an internet connection, a few web domains, and a cursory knowledge of the online right wing universe. Once the spaces for action were created, and the right actors assembled, the demonstrations themselves were almost an inevitability. With enough prodding from conservative media and political figures, right up to the top of the movement, people took to the streets.
Twitter, and to a lesser extent Facebook, have actively shied away from preventing this method of organizing on their platforms. Despite both companies ostensibly having changed terms-of-use enforcement to take down content encouraging violations of state quarantine orders, Facebook has not taken down Freedom Fund or Conservative Coalition groups or individual posts, and Twitter has officially decided that the President’s “LIBERATE” tweets do not violate its rules against inciting violence. On 22 April, Facebook did take down events pages for anti-quarantine protests in California, Nebraska, and New Jersey, but only after these states’ governors explicitly ordered the company to do so. Events pages in other states, most notably Michigan, Pennsylvania, and Ohio, remained active over the weekend of 24 April. Several groups of protesters in Lansing, Michigan entered the State Capitol Building carrying semi-automatic rifles and wearing kevlar vests and other combat gear. Michigan’s governor, Gretchen Whitmer, has been a target of particularly vitriolic rhetoric from protestors over the state’s emergency orders — some of the most stringent in the country — enacted after a major outbreak in the Detroit area in late March.
Inauthentic action catalyzed by social media is legitimized by traditional media
Traditional media, most notably TV, are often playing catch-up with more savvy information actors online. An Insider Exclusive special on coronavirus on 29 April — broadcast in primetime on multiple US cable networks — contained a segment on the protests. It first showed “hundreds” of people continuing to protest in front of various state houses, and immediately contrasted these images with footage of long lines at food banks in Houston, Texas. The narration insinuates that the “exasperation” felt by the protestors somehow derives from an inability to find basic necessities. This insinuation, however, is false. Footage of the protests has revealed the discontents to be predominantly white and older, while those requiring assistance from food banks in major cities are often younger, economically-precarious people of color, a demographic notably absent from images of anti-stay-at-home protesters.
The astroturfing operation has therefore worked its way through an entire information cycle. Political donor money is used to fund fringe actors’ online efforts, purchasing websites and organizing on social media. These sites are used to generate kinetic action in the form of protests. These protests are then covered by the traditional media, broadcasting and legitimizing the initial message of the organizers, inserting their narrative into the mainstream.
That is what came to mind when I listened to a webinar two weeks ago by the Director of the Reuters Institute, Professor Rasmus Kleis Nielsen, in which he started off by saying:
“We are not just fighting an epidemic but also an infodemic, where people are confused about the information they are getting from politicians, news organisations, social media, search and video sharing sites.”
In recent weeks, the topic of fake news has reemerged in the headlines in light of the current coronavirus pandemic. As a concept, false information is nothing new. As my colleague Max Gorynski illustrated in a previous article diving into its origins, civilisation has been grappling with the issue of misinformation for centuries. But the coronavirus has had an interesting side-effect, whereby the public have become tangibly aware of the dangerous consequences that misinformation and disinformation can have. One particularly poignant example of this was the report in early March “that 16 Iranians [had] died from methanol poisoning…after false rumours spread that drinking alcohol would help prevent people getting the Covid-19 virus.”
The very fact that individuals fear for their own safety during the pandemic has forced onto the agenda the skills necessary to differentiate fact from fiction, useful information from dangerous information. Those skills include critical thinking, media literacy, source checking, awareness of the problem and responsible sharing of information. In this sense, the pandemic has perversely offered an opportunity to come to terms with fake news — in effect, a metaphorical fork in the road.
The Age of Whoever Shouts the Loudest
The idea for this article started with the assertion by US President Trump in a public press conference on February 26th that “because of all we have done, the risk to the American people remains very low”, in reference to the current Covid-19 global pandemic.
These comments have not aged well, with coronavirus cases in the US having surpassed 670,000, and deaths 35,000, at the time of writing.
Unless you have been living under a rock for the last four years, you know that this is far from the first time Trump has spread misinformation. In fact, he is the uncontested champion of lying, with “The Washington Post calculat[ing] that he’d made 2,140 false or misleading claims during the first year in office — an average of nearly 5.9 a day.” 
But to be fair to Trump (and it really pains me to write that), he is simply the embodiment of something that has been festering in our society for decades — the erosion of truth.
But how did we get here? How did we get to a point where the truth has become unfashionable, telling lies acceptable and normalised; where truth is whatever a person chooses to believe?
Recently I read former New York Times critic Michiko Kakutani’s excellent book ‘The Death of Truth’ in which she sets out a timeline for this erosion and places it in the geo-political context of today. As she succinctly put it, we are currently in a time where “nationalism, tribalism, dislocation, fears of social change and the hatred of outsiders are on the rise again as people, locked in their partisan filter bubbles, are losing a sense of shared reality and the ability to communicate across social and sectarian lines”.
There has, in effect, been a re-definition of the value that facts, science, rationality, objectivity and expert opinion hold in our society, arguably leaving the door ajar for misinformation and disinformation to run amok.
A large contributing factor to this story has been the advent of the internet. As CEO of the New York Times Company Mark Thompson said in 2016, “our digital eco-systems have evolved into a near-perfect environment for distorted false news to thrive”. This is an environment where information can be created and shared freely at incredible velocity and mass, without the veracity of it being checked beforehand. This view is reaffirmed by Kakutani who notes that “when it comes to spreading fake news and undermining belief in objectivity, technology has proven to be a highly flammable accelerant”.
One industry that has been hit particularly hard by this changing societal relationship with truth is journalism. As award-winning digital media scholar Alfred Hermida notes: “verification is one of the cornerstones of the professional ideology of journalism in Western liberal democracy, together with related concepts such as objectivity, impartiality and autonomy.” With this context in mind, it is not surprising that, across the world, the level of trust in journalism, and indeed in journalists, has declined in the recent past.
Declining Levels of Trust, Showing Healthy Dose of Scepticism
These declining levels of trust in journalism are most stark when it comes to digital news. The industry-leading market research report compiled in 2019 by the Reuters Institute found that, across the 38-country sample, the 75,000 survey respondents had a trust level of only 33% for news found via search engines and 23% for news found via social media.
These findings illustrate that people were already starting to think about the veracity and origin of the information that they were reading, even before the coronavirus outbreak. In fact, when asked, 55% of respondents said that they remain concerned about their ability to separate what is real and what is fake on the internet. In the most extreme cases, people had felt the need to actively change their news consumption patterns to ensure a higher quality of news. This was reflected in the fact that 26% of respondents said they had started to rely on sources that they considered to be more reputable.
Spotting Different Types of Fake News
Yes, Covid-19 has led to a plethora of misinformation and disinformation being produced and spread online, and yes the challenge is certainly big.
At this point, it is appropriate to highlight that there are many types of what (in the contemporary vernacular) has been characterised under the umbrella term ‘fake news’. The graphic below from Reuters illustrates the different forms that it can take.
This was taken from a recently published piece of analysis, where Reuters looked at a sample of 225 pieces of misinformation between the beginning of January and end of March. On the left you have misleading content, which “contain[s] some true information, but [where] the details were reformulated, selected and re-contextualised in ways that made them false or misleading” and false context, where “genuine content is shared with false contextual information”.
Reconfigured content is arguably the more dangerous type of misinformation, because it is much more difficult for the reader to spot. This is why one technique often cited to tackle the challenge of misinformation is improving people’s media literacy levels and critical thinking skills. On the right of the graphic (38% of the findings) you have misinformation that was fabricated — more accurately described as disinformation. This is content that is 100% false, created with the intention of deliberately deceiving the reader.
Green Shoots of Recovery
The reaction to the spread of misinformation online surrounding the pandemic has been interesting to follow and given me reasons to be hopeful.
Firstly, the fact checkers are fighting back aggressively against the wave of fake news. The same Reuters article illustrated for example, how “the number of English language fact checks rose by 900% from January to March”.
Furthermore, early indications are that people are employing a healthy dose of scepticism and taking some further steps themselves to assess the credibility of the content that they are reading. This is reinforced by a survey conducted by Ofcom on the news consumption patterns of 2000 respondents in the UK from March 23 to March 29.
When encountering what they perceived to be misinformation, 45% of the survey respondents said that they had actively attempted to verify the content — 15% doing so by choosing themselves to employ fact-checking tips they had seen online, 13% asking friends and family, 10% going to fact-checking websites and 7% saying they had reported or blocked the content.
Similarly, there has been a large increase in the number of social media users who are themselves flagging and removing misinformation. In particular, as reported by the BBC, administrators of closed Facebook groups have been very active on this front in the last few weeks. Going forward, I would urge readers to check any information that they send on to others via closed networks (e.g. WhatsApp) because of the risks outlined in the video below. ‘Bottom-up’ spread amongst peers, family and friends is potentially more dangerous than ‘top-down’ misinformation spread by politicians, officials and celebrities.
Secondly, as has been widely reported in recent days, public service broadcasters (e.g. ARD in Germany) have seen a significant uptick in their viewership since the coronavirus crisis began. On March 16th, the BBC announced in a series of tweets that the previous week had been “the biggest week ever for BBC News Online…with 70m unique browsers coming to the website and apps” and that the TV audience for the BBC News at Six “was up by 27% on 2019”. This is reinforced by the aforementioned Ofcom report, in which 82% of those asked said that they were using the BBC as a source of information (see graph below).
This is encouraging when compared to online media consumption patterns before the crisis, when a majority of people accessed their news through search engines, social media sites, email, mobile alerts or aggregators. These sources are much more likely to feature stories that include misinformation and disinformation. They are particularly popular with people under the age of 35, who prefer them to public service broadcasters, whose audiences skew older.
It is well established that during times of crisis people tend to engage much more frequently with news than during ‘normal times’. A key driver of this change in behaviour is fear: people feel the need to inform themselves for their own protection. In their search for factually correct information, people often turn to established media brands, which is what we have seen happen in recent weeks.
In the Context of Life and Death, Turns Out Facts Do Matter
Historically, once a crisis has subsided, the population’s media consumption patterns have tended to revert to pre-crisis levels. Perhaps that will happen this time too. But there are indications that at least some of the current patterns of behaviour might remain in place post-coronavirus. This crisis has a key point of differentiation from past crises: the nature of the virus has meant that, in this environment, facts and data do matter, and the population is starting to realise this.
On the one hand, people are seeking out sources where they can rely on the information being true. Many have turned to the public service broadcasters, which typically command a high level of trust from users. Similarly, experts are back in high demand as millions of citizens look to them for reassurance, trusting their knowledge and analysis. Appropriately, scientists are currently the most trusted spokespeople globally, which is why they have been taking a leading role in government press conferences.
On the other hand, people have become hypersensitive to fake news. The general public is now aware of both the prevalence and the danger of misinformation, which is a good starting point for tackling this challenge. Furthermore, we have seen a multitude of fact-checking websites and organisations taking up arms against the deluge of fake news online, and these resources are being used. This has been complemented by people adopting a more responsible attitude towards sharing information with family and friends in closed networks.
We are at a critical juncture — a fork in the road ahead — and there is still a lot of work to do in the battle against fake news. Will the Facts Win Out? Only time will tell…