
Link Wars: Facebook vs. Australia

In Australia, Facebook is once again hamstrung by its business model

Last month, the Australian government made headlines with a new law forcing Big Tech platforms, namely Google and Facebook, to pay publishers for news content. The move was ostensibly meant to provide a new revenue stream supporting journalism, but the legislation is also the latest development in a succession of moves by influential News Corp CEO Rupert Murdoch to strike back at the online platforms sapping his publications’ advertising revenue.

While Google brokered a deal with News Corp and other major Australian publishers, Facebook decided instead to use a machine learning tool to delete all “news” links from Australian Facebook. Caught in the wave of takedowns were also non-news sites: public health pages providing key coronavirus information, trade unions, and a number of other entities that share, but do not produce, news content. Facebook has since backtracked, and Australian news content has been allowed back on the platform.
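Facebook has not disclosed how its takedown system decided what counted as “news”, but the overblocking is easy to reproduce in miniature. Below is a minimal, purely hypothetical sketch in Python; the keyword list, domain list and function names are all our own inventions, meant only to show how a blunt link classifier sweeps up public-health and union pages that merely share news-like content.

```python
# Toy sketch of an overbroad "news link" filter (hypothetical, not Facebook's code).
# A blunt definition of "news" flags any page whose blurb sounds news-like,
# even if the page produces no news at all.

NEWS_KEYWORDS = {"news", "breaking", "headline", "press", "report"}
NEWS_DOMAINS = {"news.com.au", "smh.com.au", "abc.net.au"}

def looks_like_news(url: str, page_description: str) -> bool:
    """Crude classifier: flag by publisher domain or by keywords in the blurb."""
    domain = url.split("/")[2] if "://" in url else url.split("/")[0]
    if domain in NEWS_DOMAINS:
        return True
    return any(word in page_description.lower() for word in NEWS_KEYWORDS)

pages = [
    ("https://news.com.au/politics/story", "National political coverage"),
    ("https://health.gov.au/covid-updates", "Breaking updates on the vaccine rollout"),
    ("https://union.org.au/members", "Press releases and reports for members"),
]

for url, blurb in pages:
    print(url, "-> BLOCKED" if looks_like_news(url, blurb) else "-> kept")
```

Only the first entry is an actual publisher; the other two are blocked purely for news-flavoured vocabulary, which is roughly the fate that befell health and union pages in Australia.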

The fiasco illustrates a broader issue facing both the company and Big Tech in general: the spectre of regulatory action. This threat explains the influx of politically influential figures into Facebook’s employ, like former UK Deputy Prime Minister Nick Clegg, now Facebook’s Vice-President of Global Affairs and Communications.

Facebook’s chronic problem isn’t the aggressive methods of its public affairs team, nor its CEO waxing poetic about free speech principles only to reverse course later. Facebook is hamstrung by its own business model, which incentivizes it to prioritize user engagement above all else.

The Australia case is reminiscent of another moment in the struggle between Big Tech and governments. In 2015, a pair of gunmen murdered a group of people at an office holiday party in San Bernardino, CA, with what were later understood to be jihadist motives. After the attack, Apple CEO Tim Cook seized the moment to solidify Apple’s brand image around privacy, publicly refusing the federal government’s requests to create a backdoor in iOS.

This principled stand was backed up by Apple’s business model, which involves selling hardware and software as a luxury brand, not selling data or behavioral insights. Cook’s move was both ethically defensible and strategically sound: he protected both users’ privacy and his brand’s image.

In the Australian case, different actors are involved. Google, like Facebook, relies on mining data and behavioral insights to generate advertising revenue. When it comes to news, however, the two companies face different incentives around quality. On the podcast “Pivot”, NYU’s Scott Galloway pointed out that Google has a quality incentive: users trust Google to feed them quality results, so Google would naturally be willing to pay to access professional journalists’ content.

More people use Google than any other search engine because they trust it to lead them not just to engaging information, but to correct information. Google therefore has a vested commercial interest in its algorithms delivering the highest quality response to users’ search queries. Like Apple in 2015, Google can both take an ethical stand — compensating journalists for their work — while also playing to the incentives of its business model.

On the other hand, Facebook’s business model is based on engagement. It doesn’t need you to trust the feed, it needs you to be addicted to the feed. The News Feed is most effective at attracting and holding attention when it gives users a dopamine hit, not when it serves them higher-quality results. To Facebook, fake news and real news are largely indistinguishable fodder for keeping people on the platform.

In short, from Facebook’s perspective, it doesn’t matter if the site sends you a detailed article from the Wall Street Journal or a complete fabrication from a Macedonian fake news site. What matters is that the user stays on Facebook.com, interacting with content as much as possible to feed the ad targeting algorithms.
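To make the incentive gap concrete, here is a minimal sketch contrasting two ranking objectives. It is our own illustration, not either company’s actual algorithm, and the scores are invented: an engagement-only objective surfaces the fabrication first, while a trust-weighted objective surfaces the reported article first.

```python
# Hypothetical sketch contrasting two feed-ranking objectives
# (invented scores, not real platform code).
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    predicted_engagement: float  # clicks, comments and shares the model expects
    source_trust: float          # 0.0 (fabricated) .. 1.0 (established outlet)

stories = [
    Story("Detailed Wall Street Journal analysis", 0.3, 0.9),
    Story("Fabricated outrage story", 0.9, 0.1),
]

# Engagement-only objective: rank purely by expected interaction.
engagement_feed = sorted(stories, key=lambda s: s.predicted_engagement, reverse=True)

# Trust-weighted objective: the same scores, discounted by source credibility.
quality_feed = sorted(stories, key=lambda s: s.predicted_engagement * s.source_trust, reverse=True)

print("Engagement-ranked:", [s.title for s in engagement_feed])
print("Trust-weighted:   ", [s.title for s in quality_feed])
```

Under the first objective the fabrication wins outright; under the second it drops to the bottom. Facebook’s revenue rewards the first objective.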

The immediate situation in Australia has been resolved, with USD 1bn pledged over three years to support publishers. But the fundamental weakness of Facebook’s reputation is becoming obvious. Regulators are clearly lining up to take shots at the company in the wake of the Cambridge Analytica scandal, debates over political advertising, and the prominent role the site played in spreading conspiracies and coronavirus disinformation.

In short, shutting down news was a bad look. Zuckerberg may have been in the right on substance — free hyperlinking is a crucial component of an open internet. But considering the company has already attracted the ire of regulators around the world, this was likely not the ideal time to take such a stand.

In any case, Australia’s efforts, whether laudable or influenced by News Corp’s entrenched power, are largely for naught. As many observers have pointed out, the long-term problem facing journalism is the advertising duopoly of Google and Facebook. And the only way out of that problem is robust antitrust action. Big Tech services may be used around the world, but only two legislatures have any direct regulatory power over the largest of these companies: the California State Assembly in Sacramento, and the United States Congress. Though the impact of these technologies is global, the regulatory solutions to tech issues will likely have to be American, as long as US-based companies continue to dominate the industry.


Astroturfing — the sharp-end of Fake News and how it cuts through a House-Divided


A 5-minute introduction to Political Astroturfing.

Dear Reader,

At Wonk Bridge, one of our broader ambitions is a fuller understanding of our “Network Society”[1]. In today’s article, we aim to connect several important nodes in that broader project. Our more seasoned readers will already see how Political Astroturfing plays on both the online and offline worlds simultaneously, ultimately damaging the individual’s ability to mindfully navigate between the two dimensions.

Definition

Political Astroturfing is a form of manufactured and deceptive activity initiated by political actors who seek to mimic bottom-up (or grassroots) activity by autonomous individuals. (Slightly modified from Kovic et al. 2018’s definition, which we found the most accurate and concise.)

While we will focus on astroturfing conducted exclusively by digital means, do keep in mind that this mischievous political practice is as old as human civilisation. People have always sought to “Manufacture Consent” through technologically facilitated mimicry, and have good reason to keep resorting to the prevalent communications technologies of the Early Digital age to do so. And without belabouring the obvious, mimicry has always been a popular tactic in politics because people continue to distrust the views of parties who are not friends, family, or “of the same tribe”.

Our America Correspondent and Policy Columnist Jackson Oliver Webster wrote a piece about how astroturfing was used to stir up and then organise the real-life anti-COVID-lockdown protests across the United States last April. Several actors began the astroturfing campaign by registering a series of “Re-open” website URLs and then connecting those URLs to “Operation Gridlock”-type Groups on Facebook. Some of these Groups then organised real-life events calling for civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa.

The #Re-Open protests are a great example of the unique place astroturfing holds in our societal make-up. Such campaigns work best when they take advantage of already volatile or divisive real-world situations (such as the Covid-19 lockdowns, which were controversial amongst a slice of the American population), but they are initiated and accelerated by mischievous actors whose intentions are unaligned with those of the protesters themselves. In Re-open’s case, one family of conspirators, the Dorr Brothers, used the websites to harvest visitors’ data and push anti-lockdown and pro-gun apparel. The intentions of astroturfers can thus be manifold, from a desire to stir up action to fuelling political passions for financial gain.

The sharp-end of Fake news

Astroturfing often finds itself in the same conversational lexicon as Fake News. Both are seen as ways to artificially shape people’s appreciation of “reality”, primarily via digital means.

21st-century citizenship, at least as concerns medium- and large-scale political activity and discourse in North America and Europe, is supported by infrastructure on social networking sites. The beer halls and market squares have emptied in favour of Facebook Groups, Twitter feeds and interest-based fora, where citizens spread awareness of political issues and organise demonstrations. At the risk of igniting a philosophical debate in the comments, I would suggest that the current controversy surrounding Fake news is deeply connected to an underlying belief: that citizens today are unprepared or unable to critically appraise the information circulated on this digital political infrastructure as well as they might have done offline. Indeed, the particularity of astroturfing lies in its manipulation of our in-built information filtration mechanism, or what Wait But Why refers to as a “Reason Bouncer”.

For a more complete essay on how we developed said mechanism, please refer to their “The Story of Us” series.

Our information filtration mechanism is our way of deciding which information, from both the virtual and real dimensions, is worth accepting as “fact” or “truth” and which should be discarded or invalidated. As described in “The Story of Us”, information that appeals to an individual’s primal motivations, values or morals tends to be accepted more easily by the “Reason Bouncer”, as does information coming from “trustworthy sources” such as friends, family or other “in-group individuals”. And just as teenagers use fake IDs to sneak into nightclubs, astroturfing seeks to get past your “Reason Bouncer” by mimicking the behaviour, and appealing to the motivations, of your “group”.

The effectiveness of this information filtration “exploit” can be seen in the 2016 Russian astroturfing operation in Houston, Texas. Russian actors, operating from thousands of kilometres away, created two conflicting communities on Facebook: one called “Heart of Texas” (right-wing, conservative, anti-Muslim) and the other called the “United Muslims of America” (Islamic). They then organised two concurrent protests on the question of Islam in the same city: one called “Save Islamic Knowledge” and another called “Stop the Islamification of Texas”, right in front of the Islamic Da’wah Center of Houston. The key point is that the campaign was conducted in two stages: infiltration and activation. Infiltration got past the two Texan communities’ “Reason Bouncers” by establishing credibility over several months through the creation, population and curation of the Facebook communities. All that was then required to “activate” both communities was the appropriate time, place and occasion.
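The two-stage structure, infiltration then activation, can be modelled in a few lines. The sketch below is entirely illustrative; the trust threshold, the monthly trust increments and the scoring are invented for the example, not drawn from any study of the Houston operation.

```python
# Illustrative model of a "Reason Bouncer": a claim gets in if its source is
# trusted, or if it strongly flatters in-group values. All numbers invented.

TRUST_THRESHOLD = 0.6  # hypothetical level at which the "bouncer" waves a source through

def bouncer_admits(source_trust: float, in_group_appeal: float) -> bool:
    """Accept a claim if the source is trusted enough, or if the message
    closely matches the group's existing values and motivations."""
    return source_trust >= TRUST_THRESHOLD or in_group_appeal >= 0.8

# Stage 1, infiltration: an astroturf account spends months posting ordinary,
# agreeable content, nudging its perceived trust upward each month.
trust = 0.1
for month in range(6):
    trust = min(1.0, trust + 0.1)

# Stage 2, activation: the same divisive call-to-arms is rejected from a
# stranger but accepted from the now-"trusted" infiltrator.
print(bouncer_admits(source_trust=0.1, in_group_appeal=0.5))    # False: stranger
print(bouncer_admits(source_trust=trust, in_group_appeal=0.5))  # True: infiltrator
```

The point of the toy model is that the activation message never changes; only the accumulated, manufactured trust of its sender does.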

The “Estonian Solution”

Several examinations of the astroturfing issue have pointed out that ordinary citizens, rather than the government or military, are often the targets of disinformation and disruption campaigns using the astroturfing technique. Steven L. Hall and Stephanie Hartell rightly point to the Estonian experience with Russian disinformation campaigns as a possible starting point for improving societal resilience to astroturfing.

As one of the first Western countries to experience a coordinated disinformation campaign, in 2007, the people of Estonia rallied around the need for a coordinated Clausewitzian response (Government, Army, and People) to Russian aggression: “Not only government or military, but also citizens must be prepared”. Hall and Hartell note the remarkable (by American standards) civilian response to Russian disinformation, including the creation of a popular volunteer-run fact-checking blog called PropaStop.org.

Since 2016, the anti-fake-news and fact-checking industry in the United States has been booming, with more than 200 fact-checking organisations active as of December 2019. The fight against disinformation, and against the methods that make astroturfing possible, is indeed alive and well in the United States.

Where I disagree with Hall and Hartell, who recommend Estonian-style initiatives for the USA, is on feasibility: disinformation and astroturfing cannot meaningfully be reduced in the USA without addressing the internal political and social divisions that make the job all too easy and effective. The United States is a divided country, along both governmental and popular lines. How can the united action of Estonia be replicated when two of the three axes (Government, Military and People) are compromised?

This possibly familiar Pew Research data visualisation shows just how much this division has deepened over time. Astroturfing campaigns like the ones in Houston in 2016 operate comfortably in tribal environments, where suspicion of the internal “Other” (along racial, religious, or political lines) trumps suspicion of the true “Other” found at the opposite end of the globe. In divided environments, fact-checking enterprises also suffer from weakened credibility and from the suspicion of the very people they seek to protect.

In such environments, short of addressing the issues that divide a country, the best technologists can perhaps do is to build new tools transparently and openly, so as to avoid suspicion and invite inspection, and to seek every opportunity to work in partnership with Government, the Military and citizens, with the aim of arming the latter with the ability to critically evaluate information online and to understand what digital tools and platforms actually do.

[1] A society where an individual interacts with a complex interplay of online and offline stimuli, to formulate his/her more holistic experience of the world we live in. The term was coined by Spanish sociologist Manuel Castells.