Trialectic – Can Technology be Moral?


A Wonk Bridge debate format between three champions designed to test their ability to develop their knowledge through exposure to each other and the audience, as well as maximise the audience’s learning opportunities on a given motion.

For more information, please read our introduction to the format here, The Trialectic.

Motioning for a Trialectic Caucus

Behind every Trialectic is a motion and a first swing at the motion, which is designed to kick-start the conversation. Please find my motion for a Trialectic on the question “Can Technology be Moral?” below.

I would like to premise this with a formative belief: humans have, among many motivations, sought “a better life” or “the good life” through the invention and use of technology and tools.

From this perspective, technology and human agency have served as variables in a so-called “equation of happiness” (hedonism) or in the pursuit of another goal: power, glory, respect, access to the kingdom of Heaven.

At the risk of begging the question, I would like to premise this motion with a preambulatory statement of context. I would like to focus our contextual awareness around three societal problems.

First, the incredible transformation of the human experience through technological mediation has changed the way we see and experience the world, making most of our existing epistemological frameworks inadequate and rendering our political and cultural systems unstable, if not obsolete.

Another interpretation of this change is that parts of our world are becoming “hyper-historic”, where information-communication technologies are becoming the focal point, not a background feature of human civilisations (Floridi, 2012).

Next, the driving force behind “the game” and the rules of “the game”, which can be generally referred to as Late Capitalism, is being called into question, with Postmodern thought exposing its weaknesses and unfairness, and a growing body of Climate Change thinkers documenting its unsustainability and nefarious effect on long-term human survival. More practically, since the 2008 financial crash, Capitalism has taken a turn towards excluding human agents from the creation of wealth and commodifying distraction and attention. In short, the exclusion of the Human from human activity.

Third, the gradual irrelevance of a growing share of humans in economic and political activity, as well as the lack of tools for both experts and regular citizens to understand the new world(s) being crafted (this “Networked Society”, a hybrid of Digital Civilisation and of the technologically mediated analog world) (Castells, 2009), has created an identity crisis at both the collective and individual levels. We know what is out there, but have lost sight of the How and can’t even contemplate the Why anymore.

  • A better understanding of the forces shaping our world
  • An intentional debate on defining what this collective “Why” must be

Can help us find a new “True North” and begin acting morally, by designing intentional technologies built around helping us act more morally.

Introductory Thesis

I base my initial stance on this topic atop the shoulders of a modern giant in Digital Ethics, Peter-Paul Verbeek, and his 2011 work Moralizing Technology.

Verbeek wants us to believe that “things”, which include “technologies”, inherently hold moral value; that we need to examine ethics not through an exclusively human-centric lens but also from a materialist angle; and that we can no longer ignore the deep interlink between humans and their tools.

There is first the question of technological mediation. Humans depend on their senses to develop an appreciation of the world around them. Their senses are, however, limited. Our sense of sight can be limited by myopia or other debilitating conditions. We can use eyeglasses to “correct” our vision and develop an appreciation of our surroundings in higher definition.

This is a case of using technology to reach a similar level of sensing as our peers, perhaps because living in a society comes with its own “system requirements”. We correct our vision with eyeglasses because we want to participate in society, be in the world, and place ourselves in the best position to abide by its ethics and laws. Technology is necessary to see the world like others do, because when we see a common image of the world, we are able to draw conclusions as to how to behave within it.

When a new technology helps us develop our sense-perception even further, we can intuitively affirm that technological mediation occurs in the “definition” of ethics and values. Technologies help us see more of the world. Before the invention of the electric street-lamp system, part of a wider effort of urban reorganisation in the 19th century, western cultures looked down on the practice of activities at night. An honest man (or woman) would not lurk in the streets of Paris or London after dark.

The darkness of dimly lit streets made it easy for criminals and malefactors to hide from the police and to harass the vulnerable. Though still seen as relatively more dangerous than moving in the light of day, it is now socially accepted (even romanticised) to ambulate under the city street-lamps and pursue a full night’s entertainment.

A technology, the street-lamp system, helped people see more of the world (literally), and our ethics grew out of the previous equilibrium and into a new one. By affecting the way we perceive reality, technology also helps shape our constructed reality, and therefore directly intervenes in both individual and collective moral thought-processes.

At the pre-operative level, my thesis doesn’t diverge too far from Verbeek’s or Latour’s initial propositions. Where it diverges is in the greater emphasis it places on operative, practical applications.

It seems clear that Technology has a role to play in defining what can be a moral practice. The question examined in this thesis therefore seeks to go a step further in exploring whether the creation (technology) can be considered independently from its creator (inventor/designer).

Are human agents responsible for the direct and indirect effects of the tools they build?

Of course, adopting a perspective on the morality of technology that is solely anchored in the concept of technological mediation is problematic. As Verbeek mentions in his book, the isolation of human subjects from material objects is deeply entrenched in our Modernist metaphysical schemes (cf. Latour 1993); it frames ethics as a solely human affair and keeps us from approaching ethics as a hybrid.

This outdated metaphysical scheme sees human beings as active and intentional, and material objects as passive and instrumental (Verbeek, 2011). Human behaviour can be assessed in moral terms (good or bad), but a technological artifact can be assessed only in terms of its functionality, functioning well or poorly (Verbeek, 2011). Indeed, technologies have a tendency to reveal their true utility after having been used or applied, not before, as they are being created or designed.

It is also key to my argument that technologies which resemble intentional agents are not in themselves intentional. Science fiction about artificial general intelligence aside, the context within which technology is being discussed today (2021) is one where technologies operate with a semblance of autonomy, situated in a complex web of interrelated human and machine agents.

Just because the behaviour of some technologies today (e.g. Google’s search algorithms) is not decipherable does not mean that they are autonomous or intentional. What is intentional is the decision to create a system that contains no checks and balances; to build a car without brakes, or a network without an off-switch.

Technology does have the power to change our ethics.

An example Verbeek uses frequently is the pre-natal ultrasound scan that parents use to see and check whether their unborn child or fetus has any birth defects. This technology gives parents the chance, and transfers to them the responsibility, of making a potentially life-threatening or life-defining decision. It also gives them their first glimpse, through the monitor, of what their unborn baby looks like.

While the birth of a child before the scan was seen ethically as the work of a higher power, outside of human responsibility and agency, the scanner has given parents the tools and the responsibility to make a decision. As Verbeek documents on several occasions in the book, it dramatically changes the way parents (especially fathers) label what they see through the monitor: from a fetus to an unborn child.

The whole ceremony around the scan visit, with the doctor’s briefing and the evaluation of results, creates a new moral dilemma for parents and a new moral responsibility: whether or not to give life to a child with birth defects, rather than accepting whatever outcome is given at birth.

But let’s take this a step further and ask the age-old question: Who benefits?

The pre-natal ultrasound scan, and the many other tests offered by hospitals today, serves the patients. It gives them the chance to see the fetus and make choices about its future. But the clients of these machines are in fact hospitals and doctors; they are also, indirectly, policy-makers and healthcare institutions. These clients seek to shift responsibility away from hospitals and doctors, and onto parents who will have gained a newfound commitment to the unborn babies they have seen for the first time. The reasons driving this are manifold, but hospitals and governments are financially and economically interested in births, and in having parents commit to seeing a pregnancy through.

When considering the morality of technologies, of systems and of the objects that are part of those systems, it is worth paying close attention to what Bruno Latour calls systems of morality indicators: moral messages exist everywhere in society and inform each other, from the speed bump dissuading the driver from driving fast (“the area is unsafe, and driving fast would damage the car”) to the suburban house fence informing bystanders (“this is my private property”).

It is also worth paying attention to who benefits from the widespread usage of these technological products. Discussions around the morality of technology tend to focus on the effects of using or applying a technology, rather than on the financial or other benefits that derive from its adoption at large scale.

Social Media as an example

The bundle of technologies that we call social media is a clear example of why this way of thinking matters. The nefarious consequences of mass-scale social media usage, for a society and for an individual, are clear and well-documented. We have documented its effects on warping and changing our conception of reality (technological mediation), on the political sphere in our astroturfing piece, and on our social relationships in our syndication-of-the-friend piece.

In our discussions responding to the acclaimed Netflix documentary The Social Dilemma, we spotted an interesting pattern in the accounts: any one man or woman was powerless to stop a system so lodged in the interweaving interests of Big Tech’s shareholders. The economic logic of social media makes acting on nefarious consequences like fake news or information echo-chambers all but impossible, because doing so would require altering social media’s ad-based business model.

The technology of social media works, and keeps being used, because it is concerned not with side-effects but with the desired effect: providing companies and interested parties (usually those with large digital marketing budgets) with panopticon-esque insights into its users (who happen to be over 80% of people living in the US, according to Pew Research Center, 2019).

Technologies are tools; that much is obvious. But they are not always tools like hammers or pencils that prove useful to most human beings. They are sometimes network-spanning systems of surveillance, used by billions, that provide actual benefit only to a chosen few.

The intention of the designer is thus paramount when considering technology and morality, because the application of a technology will inevitably have an effect on the agents that encounter it, but it will also have an effect on the designers themselves. There will be a financial benefit and, more than this, “the financial benefit will inform future action” (as Oliver Cox reflected upon editing this piece).

The reverse situation is also true: some technologies may be designed with a particular social mission in mind, and then used for a whole suite of unforeseen nefarious applications.

In this case, should the designer be blamed or made responsible for the new applications of their technology, should the technology itself be the subject of moral inquisition and the designer be absolved from their ignorance, or should each application of such technology be considered “a derivative” and thus conceptually separate from the original creation?

Another titan in digital ethics, Luciano Floridi of the Oxford Internet Institute, thinks that intentions are tied to the concept of responsibility; “If you turn on a light in your house and the neighbour’s house goes BANG! It isn’t your responsibility, you did not intend for it to happen.” Yes, the BANG may have had something to do with the turning on of the light, but as he goes on to mention, “accountability is different, it is the process of cause and effect relationships that connects the action to the reaction.”

Accountability as the missing link

With this in mind, we can assume that the missing link between designing a technology and placing responsibility on its designers is accountability. To hold someone accountable for their actions, one must have access to knowledge or data that provides some sort of paper trail, allowing the observer to trace the effects of a design on the environment and the interactions of the environment with that design.

While it is indeed possible to measure the effects of a technology like social media from an external perspective, it is far easier and more informative to do so from the source. What would hold designers of technologies most accountable is for them to hold themselves accountable.

There is therefore a problem of competing priorities when it comes to accountability, derived from the problem of the access to knowledge (or data).

In the three examples given (the pre-natal ultrasound scanner, social media, and the light-switch that turns out to be a bomb), the intentions of the designer vary across a spectrum: from zero intention to blow up your neighbour’s house, to the deliberate intention, with the pre-natal ultrasound scan, to provide parents with a choice regarding the future of their child.

In all three cases, an element beyond intentionality plays a role: the designer is either unaware of (with the light-switch) or unwilling to investigate (with social media) the consequences of applying the technology. Behind the veil of claims of technological sophistication, designers renege on their moral duty to “control their creations”.

If the attribution of responsibility in technologies lies in both intentionality and accountability, then, deontologically, shouldn’t the designers of such technologies provide the necessary information and build the structures to allow for accountability?

The designers should be held accountable for their creations, however autonomous those creations may initially seem. If so, how can they feasibly be held accountable?

Many of these questions have been approached and tackled to some extent in the legal world, with intellectual property and copyright laws on the question of ownership of an original work. They have also been examined at some length by the insurance industry, which uses risk-management frameworks to determine burden-sharing between principal and agent in new initiatives.

But in the realm of ethics and the impact of technologies on the social good, the frame that may best suit the issue at hand is the Tragedy of the Commons: the case where technologies that are widely available (as accessible as water or breathable air) have become commodities and are being used as building blocks for other purposes by a number of different actors.

The argument that technologies have inherent moral value is beside the point. The argument is that moral value should be ascribed to the ways in which technologies are used (whether those be called derivatives or original new technologies); the designers need to be inherently tied to their designs.

  1. The GDPR example: the processing of personal data represents a genus of technologies where moral value is ascribed to the processors and controllers of the personal data. The natural resource behind the technology, personal data, remains under the control of the owner of that resource.
  2. Ethics by design: the process by which technologies are designed needs to be more inclusive and considerate. Their impact on stakeholders (suppliers, consumers, investors, employees, broader society, and the environment) needs to be assessed and factored in during development. That impact cannot be wholly predicted, but it can be understood and managed if taken with particular due care. Example: regulated industries such as Life Sciences and Aerospace have lengthy trialling processes involving many stakeholders, which makes the introduction of new products more rigorous.

Accountability as the other side of the equation

The emergence of new technologies such as blockchain governance systems (e.g. Ethereum smart contracts) provides clear examples of how new technologies create new ways of holding agents accountable for actions that, without such enabling technologies, would have been considered outside of their control.
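To make that accountability mechanism concrete, here is a minimal Python sketch, not Ethereum code, of the idea underlying such systems: an append-only, hash-chained log of agents’ actions, in which any later tampering with the record is detectable by re-verifying the chain. The class and method names are illustrative, not drawn from any real library.

```python
# Illustrative sketch of a blockchain-style accountability "paper trail":
# each entry's hash depends on the previous entry, so editing any past
# record breaks every hash that follows it.
import hashlib
import json


def _digest(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AccountabilityLedger:
    def __init__(self):
        self.entries = []  # each entry: (record, prev_hash, own_hash)

    def log(self, agent: str, action: str) -> None:
        """Append an agent's action to the chain."""
        prev_hash = self.entries[-1][2] if self.entries else "genesis"
        record = {"agent": agent, "action": action}
        self.entries.append((record, prev_hash, _digest(record, prev_hash)))

    def verify(self) -> bool:
        """Re-compute every hash; any edited record breaks the chain."""
        prev_hash = "genesis"
        for record, stored_prev, own_hash in self.entries:
            if stored_prev != prev_hash or _digest(record, prev_hash) != own_hash:
                return False
            prev_hash = own_hash
        return True
```

A designer logging “deployed recommender v2” into such a ledger can later be held to that record: quietly rewriting it would fail `verify()`. Real smart-contract platforms add distribution and consensus on top of this basic tamper-evidence.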

It seems that technology can work on both sides of a theoretical ethics-accountability equation. If some technologies make it easier to act outside of pre-existing ethical parameters, unseen by the panoply of accountability tools in use, then others can provide stakeholders with more tools to hold each other to account.

Can Technology Be Moral? Yes, it can, given its ability to provide more tools to tighten the gap between agents’ actions and the responsibility they bear for those actions. But some technology can be immoral, and stay immoral, without an effective counterweight in place. Technology is therefore an amoral subject, but very much moral in its role as both a medium and an object for moral actors.


It will be my honour and pleasure to debate with our two other Trialectic champions, Alice Thwaite and Jamie Woodcock. I am looking forward to what promises to be a learning experience and to update this piece accordingly after their expert takes.

Please send us a message or comment on this article if you would like to join the audience (our audience is also expected to jump-in!).

Five Minuter

Link Wars: Facebook vs. Australia

In Australia, Facebook is once again hamstrung by its business model

Last month, the Australian government made headlines with a new law forcing Big Tech platforms, namely Google and Facebook, to pay publishers for news content. The move was ostensibly meant to provide a new revenue stream supporting journalism, but the legislation is also the latest development in a succession of moves by influential News Corp CEO Rupert Murdoch to strike back at the online platforms sapping his publications’ advertising revenue.

While Google brokered a deal with News Corp and other major Australian publishers, Facebook decided instead to use a machine learning tool to delete all “news” links from Australian Facebook. Caught in the wave of takedowns were also non-news sites: public health pages providing key coronavirus information, trade unions, and a number of other entities that share, but do not produce, news content. Facebook has since backtracked, and Australian news content has been allowed back on the platform.


The fiasco illustrates a broader issue facing both the company and Big Tech in general: the spectre of regulatory action. This trend explains the influx of politically influential figures entering Facebook’s employ, like former UK Deputy Prime Minister Nick Clegg, now Facebook’s Vice-President of Global Affairs and Communications.

Facebook’s chronic problem isn’t the aggressive methods of its public affairs team, nor its CEO waxing poetic about free speech principles only to reverse course later. Facebook is hamstrung by its own business model, which incentivizes it to prioritize user engagement above all else.

The Australia case is reminiscent of another moment in the struggle between Big Tech and governments. In 2015, a pair of gunmen murdered a group of people at a barbecue in San Bernardino, CA with what were later understood to be jihadist motives. After the attack, Apple CEO Tim Cook seized the moment to solidify Apple’s brand image around privacy, publicly refusing the federal government’s requests to create a backdoor in iOS.

This principled stand was backed up by Apple’s business model, which involves selling hardware and software as a luxury brand, not selling data or behavioral insights. Cook’s move was both ethically defensible and strategically sound: he protected both users’ privacy and his brand’s image.


In the Australian case, different actors are involved. Google, like Facebook, relies on mining data and behavioral insights to generate advertising revenue. However, in the case of news, Facebook and Google have different incentives around quality. On the podcast Pivot, Scott Galloway of NYU pointed out that Google has a quality incentive when it comes to news: users trust Google to feed them quality results, so Google would naturally be willing to pay to access professional journalists’ content.

More people use Google than any other search engine because they trust it to lead them not just to engaging information, but to correct information. Google therefore has a vested commercial interest in its algorithms delivering the highest quality response to users’ search queries. Like Apple in 2015, Google can both take an ethical stand — compensating journalists for their work — while also playing to the incentives of its business model.

On the other hand, Facebook’s business model is based on engagement. It doesn’t need you to trust the feed, it needs you to be addicted to the feed. The News Feed is most effective at attracting and holding attention when it gives users a dopamine hit, not when it sends them higher quality results. To Facebook, fake news and real news are difficult to distinguish amongst the fodder used to keep people on their platform.

In short, from Facebook’s perspective, it doesn’t matter if the site sends you a detailed article from the Wall Street Journal or a complete fabrication from a Macedonian fake news site. What matters is that the user stays on, interacting with content as much as possible to feed the ad targeting algorithms.

The immediate situation in Australia has been resolved, with USD 1bn over three years pledged to support publishers. But the fundamental weakness of Facebook’s reputation is becoming obvious. Regulators are clearly eager to take shots at the company in the wake of the Cambridge Analytica scandal, debates over political advertising, and the prominent role the site played in spreading conspiracies and coronavirus disinformation.

In short, shutting down news was a bad look. Zuckerberg may have been in the right on substance — free hyperlinking is a crucial component of an open internet. But considering the company has already attracted the ire of regulators around the world, this was likely not the ideal time to take such a stand.

In any case, Australia’s efforts, whether laudable or influenced by News Corp’s entrenched power, are largely for naught. As many observers have pointed out, the long-term problem facing journalism is the advertising duopoly of Google and Facebook. And the only way out of that problem is robust anti-trust action. Big Tech services may be used around the world, but only two legislatures have any direct regulatory power over the largest of these companies: the California State Assembly in Sacramento, and the United States Congress. Though the impact of these technologies is global, the regulatory solutions to tech issues will likely have to be American, as long as US-based companies continue to dominate the industry.


It is time to talk about Technology differently

While tool use is manifestly found in many animal species, humanity’s ability to devise and wield intricate tools is unique in its breadth and impact. Be it part of our genetic code, a proportionally massive cranium or an elegant pair of opposable thumbs, some set of perfect conditions has allowed for the presence of a magnificent talent: our obsession with finding easier ways to achieve our diverse ends. We would do well to remember this. Technology is not an end in itself, nor is it a single ubiquitously recognised set of means. It is a talent found in all of us, an urge to create and innovate and move past obstacles set before us.

Statistically, most of you will be readers from Europe or North America. Recently, we have been exposed to a certain idea of what “Technology” is supposed to mean. If we go by the published output of mainstream technology and business press outlets, we could easily be led into thinking that Technology is a euphemism for the “Information Technology industry”. Some of us might associate the word with a mosaic of gadgets that together form part of this vaguely coined “Fourth Industrial Revolution”, a global economy driven by automation. Why is our definition of Technology so limited?

As initially put by Robert Smith, co-founder of Seedlink and anthropology researcher, this is a “Euro-American-centric consensus”. A handful of financiers and technologists from London and San Francisco are setting the tone for how start-ups should be born and companies should be run. It is built around an obsession with the economic domination of four or five Big Tech corporations and the opinions of investors in Silicon Valley or Silicon Circle. This obsession is blinding us to the exciting developments in technology, like the midday sun outshining the moon and stars.

It is in fact a double bind. First, you are misled into thinking that IT is the most important technology, simply by merit of investment volumes and value (see CB Insights’ 2019 list of unicorns by industry). Second, that Big Tech is an appropriate poster-child for contemporary technological development.

Let us take a step back, or rather, remove our headsets and examine the question of technology as the fruit of an anthropologically encoded set of creative and innovative behaviours aimed at improving the human condition.

Now a gospel repeated at San Franciscan dinner-tables, Moore and Grove’s balanced corporate-innovative environment at Intel in the 1970s created the foundation on which breakthrough products like the DRAM memory chip and the microprocessor were developed. This foundation, and the success that came with it, enabled Intel and several other early digital companies to create a financially supportive environment for start-ups pursuing ambitious high-risk projects.

It is in fact quite revealing how much directional influence Moore and Grove have had on the ideological tapestry of Silicon Valley. Moore’s law dictates the technical keystone: “The number of transistors on a microchip doubles every two years.” Elsewhere, one of Grove’s laws (the exact wording is subject to a great many disputes) dictates the cultural keystone: “Success breeds complacency. Complacency breeds failure. Only the paranoid survive.” (attributed to Andy Grove in Ciarán Parker (2006), The Thinkers 50: The World’s Most Influential Business, p. 70). Another Grovian law is that “a fundamental rule in technology says that whatever can be done will be done” (attributed to Andrew S. Grove in William J. Baumol et al. (2007), Good Capitalism, Bad Capitalism, and the Economics of Growth, p. 228). On these two keystones, Silicon Valley evolved into a highly self-confident arena for microchip-based solutions to an apparently infinite plethora of identifiable problems. This explains the emergence and dominance of “disruptive innovation” and “unique value proposition” as pillar concepts. It is also a prelude to the impact left by Peter Thiel’s Zero to One, which we have already covered here.
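The technical keystone is easy to state as arithmetic. As a back-of-the-envelope illustration (using the roughly 2,300 transistors of Intel’s 1971 4004 microprocessor as a convenient baseline; the function name is mine, not a standard one), Moore’s law projects a doubling every two years:

```python
# Back-of-the-envelope Moore's law: transistor counts double roughly
# every two years from a chosen baseline chip.

def moores_law_estimate(base_count: int, base_year: int, year: int) -> int:
    """Project a transistor count assuming one doubling per two years."""
    doublings = (year - base_year) / 2
    return round(base_count * 2 ** doublings)

# From ~2,300 transistors in 1971, the law projects ~73,600 by 1981
# (five doublings), which is the exponential curve that underwrote
# Silicon Valley's confidence.
print(moores_law_estimate(2300, 1971, 1981))  # -> 73600
```

Real chip counts have tracked this curve only approximately, but the point stands: the law is less a physical constant than a self-fulfilling industry roadmap.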

The recounting of the early days seems to be missing key ingredients. In addition to the leaders of the Intel Corporation were Gordon French and another Moore, Fred Moore. French and Moore were co-founders of the Homebrew Computer Club, a club for DIY personal-computer-building enthusiasts founded in Menlo Park. This informal group of computer geeks was for all intents and purposes a digital-humanist enterprise, openly inviting anyone who sought to know more about electronics and computers to join the conversation and build with like-minded peers. Its great influence on Steve Wozniak and the many Stanford University engineers that built the Valley cannot be overstated.

Technologists from across the globe have drawn inspiration from this origin story, and innovative ecosystems have cropped up in mimicry. New uses of IT, democratised and cheaper to access, have led to fascinating developments in parts of the developing world that do not enjoy California’s access to investment funds. And Silicon Valley was not the only tech story of the last 50 years (think vaccines, cancer research and environmental technologies). As more colours come to light, the grey-bland world of Euro-American financialised IT will fade back into a world of people finding new ways of solving problems, finding new problems to solve, finding new problems from ways of solving, finding new solutions to problems yet unseen.

We dove into the mission of Supriya Rai — who seeks to bring beauty and colour into hundreds of identical-looking London office buildings with Switcheroo. She is now also Wonk Bridge’s CTO!

Portraits of Young Founders: Supriya Rai

We followed Muhammad and Robbie, who broke away from the London incubator scene after an initially successful agri-tech IoT prototype, radically changing their business plan to launch a logistics service company in East Africa against the wishes of their Euro-American investment mentors. Rather than launch Seedlink to “improve the lives of Malawians and East Africans at large”, which would have entirely satisfied the white-saviour narrative and followed a set of Euro-American-prescribed ROIs, they sought to build a proposition that would fit this unique business climate. How can a company that connects rural farmers to urban centres ignore common practices like tipping, branded as bribery in the Euro-American world? What explained the gap between the London investors’ expectations and the emerging strategy needed to succeed in East Africa?

Thanks to a double-feature from our China-correspondent Edward Zhang, we analysed how different countries used the power of their societal and political technology as well as how they leveraged their national cultures to combat Covid-19. Sometimes, technologies are a set of cultural values and political innovations developed over the course of generations.

The Chinese Tech behind the War on Coronavirus

The Technologies that will help China recover from COVID-19

We also saw how a different application of a mature information technology such as the MMO video-game has helped fight autism where many other methods have failed.

Building a Haven for the Autistic on Minecraft

The real world


Photo by Namnso Ukpanah on unsplash / Edited by Yuji Develle

I am writing this article in the foothills of Mount Kilimanjaro, in the shade of a hotel not far from a bustling Tanzanian town. Here, I can observe a much healthier use of technology, less dictated by the tyranny of notifications and more driven by connection between individuals in the analog world. People here use social media and telephones regularly, but they spend the majority of their time outside and depend on cooperation between townsfolk to survive (in the absence of public utilities or a private sector).


My own photo of a Tanzanian suburb town near Arusha (Yuji Develle / December, 2020)

The Internet is available but limited to particular areas of towns and villages: Wi-Fi hotspots at restaurants, bars or the ubiquitous mobile-phone stands (Tigo, M-PESA, Vodacom).


Left: A closed Tigo kiosk, Right: A Tigo pesa customer service shop (Yuji Develle / December, 2020)

The portals to the Digital Civilization have been kept open but also restricted by the lack of internet access in people’s homes (missing infrastructure and the relatively high cost of IT being the primary reasons). This has kept IT from frenetically expanding into what it has become in the North Atlantic and East Asia.

Like an ever-expanding cloud, the Technology-Finance Nexus has taken over our global economy and replaced many institutions that served as pillars of the analog world:

  • Social Networks have come to replace the pub, the newspaper kiosk, the café
  • Remote-working applications, the office
  • Amazon, the brick-and-mortar store, the pharmacy, the supermarket
  • Netflix, the cinema
  • Steam or Epic Games, the playground

These analog mainstays have been taken apart, ported and reassembled into the digital world. While our Digital civilization continues to grow in volume and richness, the analog is shrinking and emptying with visible haste. The degradation these disappearances provoke, and the harms that exclusive use of these Digital alternatives generates, are unfortunately well documented at Wonk Bridge.

Astroturfing — the sharp-end of Fake News and how it cuts through a House-Divided

Social Media and the Syndication of the ‘Friend’

A new way of covering Tech

With our most recent initiative, Wonk World, we seek to avoid the trap of overusing the same Tech stories, told through and about the same territories, as representations of Tech as a whole. We aim to shed light on the creative and exciting rest of the world.

We will be reaching out to technologists and digital humanists located far beyond Tech journalism’s traditional hunting grounds: Israel, China, Costa Rica. We will be following young founders’ progress through the gruelling process of entrepreneurship in our Portraits of Young Founders series. Finally, we are looking for ways to break out of our collective echo chambers and bring new perspectives into the Wonk Bridge community, so diversity of region as well as of vision will constitute one of Wonk Bridge’s credos.

So join us, wherever you are and whoever you are, beyond the four walls of your devices and into the unexplored regions of the world and dimensions of the mind, to see technology as Wonk Bridge sees it: the greatest story of humankind.


The Forsythia-Industrial Complex

In Steven Soderbergh’s newly rediscovered 2011 film Contagion, a hypothetical novel virus called MEV-1 causes a global pandemic, which has to be stopped by the film’s protagonists, epidemiologists working for the Centers for Disease Control.

While the film contains all sorts of exposition sequences that are legitimately educational about the nature and spread of airborne viruses, the real nugget of gold in the film is in its main subplot. Jude Law plays Alan Krumwiede, an Alex Jones-type conspiracy entrepreneur. He spends his days chasing down sensational stories and posting ranting videos on his website, “Truth Serum”.

Despite the contemporary irrelevance of the “blogosphere”, the Truth Serum subplot is as pertinent today as it was in 2011. In many ways, its implications are more frightening now than ever. In 2011, algorithm-driven social media sites did not have the same preponderance over the information environment that they enjoy today.

Studies indicate that media consumption patterns have changed rapidly over the last decade. While internet-based news consumption was widespread by 2011, there were two key differences from the information environment of today. Firstly, online news consumption skewed young; today, American adults of all ages consume much if not most of their news online. Secondly, news items spread via a number of media, including email chains and blogs. The dynamics of email and blogs as media are fundamentally different from algorithm-driven platforms like Facebook, Twitter, and YouTube. Blog followings and email chains are linear: a person recommends a blog or forwards an email to certain people at their own discretion. Information from these media therefore spreads less virally than content on algorithmic sites. In fact, the entire concept of “virality” essentially cannot exist without the engagement-based recommendations and news-feed algorithms behind our now-dominant social media machines.
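The difference between these two modes of spread can be made concrete with a toy model. The sketch below is purely illustrative: the forwarding and amplification numbers, and the two functions, are assumptions invented for this example, not measurements of any real platform.

```python
# Toy comparison of linear (blog/email) vs. algorithmic (viral) spread.
# All numbers here are illustrative assumptions, not platform data.

def linear_reach(steps, forwards_per_step=3):
    """Email/blog model: at each step, one sender forwards the item to a
    few chosen recipients, so reach grows by a constant amount per step."""
    reach = 1
    for _ in range(steps):
        reach += forwards_per_step
    return reach

def viral_reach(steps, amplification=3):
    """Engagement-algorithm model: every reader's engagement surfaces the
    item to several more feeds, so reach multiplies at each step."""
    reach = 1
    for _ in range(steps):
        reach *= amplification
    return reach

for steps in (5, 10):
    print(steps, linear_reach(steps), viral_reach(steps))
```

Even in this crude sketch, ten steps of discretionary forwarding reach a few dozen people, while ten steps of algorithmic amplification reach tens of thousands; that gap is the mechanism the paragraph above describes.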

In the second and third acts of the film (*spoilers*, sorry), Krumwiede begins pushing a homeopathic cure, Forsythia, on his website and eventually on TV. He posts a series of videos on Truth Serum in which he has apparently caught the MEV-1 virus, and nurses himself back to health by using the substance. His endorsement of the drug to his loyal followers causes desperate Americans to loot pharmacies in search of Forsythia. Krumwiede is eventually arrested, and after an antibodies test reveals he never actually had the virus, he is charged with fraud. By this time, he’s moved from Forsythia to anti-vaccination, claiming the CDC and WHO are “in bed” with big pharma, and that’s why they’re trying to vaccinate the entire population. He ends the film urging his millions of loyal online followers not to take the crucial MEV-1 vaccine being produced by American and French pharmaceutical companies.


Forsythia today

While the Forsythia subplot serves as a slight diversion from our main protagonists’ investigations and research, it has turned out to be the part of the movie that holds up best in light of our current novel coronavirus pandemic. This is obvious in its resemblance to the hydroxychloroquine craze in the US and France. But more worryingly, it has become clear that our contemporary information environment is more vulnerable to the Krumwiedes of the world than it would have been in 2011.

American author and critic Kurt Andersen’s 2017 book Fantasyland argues that the viral, platform-based contemporary Internet is the crucial focal point of our conspiracy-laden politics. Common criticisms of modern social and online media focus on its tendency to create self-reinforcing echo chambers where individuals are only exposed to information that bolsters their existing views. Andersen takes this idea further and turns it on its head, arguing that social media causes the cross-pollination of information silos that would otherwise have remained separate.

This phenomenon is evident in the fact that believing in one conspiracy theory exponentially increases the likelihood you’ll believe in others. In the days before mass access to the internet, an individual with an easily debunked belief would have been relatively isolated; now, such individuals can connect with other like-minded people across the globe. Anti-vaxxers can virtually intermingle with chemtrails theorists, 9/11 truthers, and anti-semites. Without this cross-pollination, expansive crowd-sourced conspiracy narratives like Q-Anon would simply not be possible.

In Contagion’s 2011-based universe, Krumwiede is a lone crusader, harassing reporters and officials, pushing his homeopathic scams, and broadcasting to millions from a webcam as a one-man information army. In 2020, there is a whole parallel information ecosystem across several internet platforms where conspiracy theorists, activists, influence bots, grifters, and extremists can exchange and reinforce each other’s beliefs. Today, Krumwiede would be one of thousands of viral content creators “flooding the zone” with conspiracies, untruths, partial truths, and unverified and misleading claims.

Imagining better media bubbles

Could the Internet become a less toxic place? Maybe, but it’s a difficult problem. In a November 2019 interview with Vox, tech entrepreneur Anil Dash reminisces about “the Internet we lost” with the rise of the social media giants. He points out the weaknesses in the “free speech” arguments made by Zuckerberg and other tech moguls, arguing that free discourse can exist without virality and engagement metrics. He says that a “trust network” model, one that looks more like the blogosphere of the 90s and 00s, is perhaps more conducive to civil discourse than our platform-centric information environment today. Bloggers, like TV anchors or op-ed columnists, have to slowly gain the trust of their audience over time. Without virality’s constant interruptions, a stronger bond forms between content producer and content consumer, leaving less room for loud interlopers with wild claims to wedge their way into the discourse.

The issue with this concept is that, in a trust-network model, the aforementioned trusted information sources are difficult to dislodge once their network has been established. A new video “owning” the old champion relies on virality to dethrone the incumbent, and requires a news feed or recommendations algorithm to find its way into the incumbent audience’s media diet. While the kind of decentralized trust network Dash and other ex-bloggers are nostalgic for would perhaps address the problems of influence bots, viral information blitzes, and other issues caused by engagement-based algorithmic media, it could also exacerbate the silo-ization of our information environment by entrenching a certain set of existing sources.


Why We Cannot Trust Big Tech to be Apolitical

What Google’s whistleblowers and walk-outs have revealed about working in a politicised information duopoly, and why we cannot expect neutrality even in tackling COVID-19.

It is 7:30am in the lobby of an independent news agency, and a man of scruffy bearing and sweaty palms keeps looking anxiously at his wristwatch. He wasn’t typically so prone to stress, but the last few weeks had given him good reason to look over his shoulder.

A few weeks ago, he was a nondescript employee at one of the world’s most prominent and influential tech organisations. He had begun to find himself bothered by a series of decisions and practices that sharply clashed with his moral compass. So he quietly collected evidence in preparation for his big day. When that day came, however, he became a condemned man; condemned to forever roam under the all-pervasive threat of vengeance from the most powerful technology company in the world. He received an anonymous letter making demands, including cease-and-desist action by a specific date. Then the police showed up at his door on the specious grounds of concern for his “mental health”. Knowing his life might be on the line, he created a kill-switch: “kill me and everything I have on you will be released to the public”.

Did you enjoy my screenplay for this summer’s next action-thriller hit? Well, I have a confession to make: it’s based on a true story. This is actually the beginning of the tale of Zachary Vorhies — the latest in a long line of Google Whistleblowers.

If you want to skip to the conclusion and its link to COVID-19, please click here. If you want to hear the whole sorry saga then, please, read on.

Google as puppet-master?

For those of you well acquainted with Big Tech’s ethical mishaps, you may have already heard of the Google Data Dump. It was, in short, a substantial leak of internal Google documents, policies and guidance designed to demonstrate Google’s willingness to deliberately shape the information landscape in favour of a certain conception of reality. Said conception of reality seems to exclude right-wing media outlets, and seeks to promote a socially liberal agenda. As the old Stalinist adage goes: “It’s not the people who vote that count, but the people who count the vote” — put another way, it’s not the facts that matter so much as how you interpret and arrange those facts (and does Google ever interpret and arrange facts, with its 90% search-engine market share!).

However shocking the suggestions here may be, we need to read the coverage of this data dump with a critical eye. As a journalist, whenever I approach such a leak, I like to work through the thought process of the actors involved. Why did Zachary leak the documents? What drove the decision to leak those documents on a specific date? How did he hear about Project Veritas, why did he provide them with a scoop, and what does the recipient of such data gain?

The leaked documents were shared to Project Veritas, an independent whistleblowing outlet which pledges exuberantly on its front page to “Help Expose Corruption!”. Most of its brand content seems to derive from shocking whistleblower revelations that come with the site’s own flavour of sensationalist titling and conspiratorial imagery.

On the site, Zachary’s story plays comfortably into Project Veritas’ audience expectations. The audience in question is ambiguous and unknown to the writer, and these reflections are based largely on the platform’s content reel. The implications are first allowed to fester, then spread as part of a bigger conspiracy of liberal Google executives forcing coders to prevent the spread of right-wing populism (as encapsulated by Donald Trump). The data dump itself isn’t intrinsically shocking so much as it becomes shocking when used to support a particular vision of reality. I discussed this topic with several Google insiders working at the company’s Colorado and Ireland offices. They tell me that most of the information in the data dump is easily accessible and circulated frequently amongst Google employees. While it is frowned upon to discuss such matters openly, the data dump only gained significant traction once it landed on the doorstep of Project Veritas, who knew how to use the data to reinforce a fiery conspiratorial narrative.

Google as game designer?

Google’s convenient counter-narrative to these revelations runs along the lines that, as Genmai says, “Google got screwed over in 2016”. It is clear that, during the 2016 presidential race, a series of right-wing media outlets managed to navigate the ludicrously arcane Google and Facebook traffic algorithms, and successfully gamed them to the point at which both publishers had to change the rules of their game. There is a strong sense of enmity at Google about how a handful of Albanian or Macedonian fake-news artists managed to “out-hack” Google. Designers at heart, Googlers are uncomfortable with the idea that certain content pieces are able to “outperform” without directly benefitting Google or Facebook financially. Indeed, the only content that is meant to over-perform is paid or sponsored content.

What these whistleblower scandals and recent walk-outs have proven is that we cannot see Big Tech as monolithic, as a set of corporations acting solely to further the interest of shareholders or of ad revenues. Google is a group of individuals, made up of a plethora of political ideologies and socio-ethnic representations. It is a company full of engineers and designers who are aware of their impact on politics and society through their quasi-duopoly on the information space. With this awareness is a confidence and agency inherited from the “Googler” mindset; a perpetual journey to solve problems, even when alone against all odds. Add these ingredients together and what you have is an unstable cocktail of ideas that sometimes leads to breakthrough innovation, sometimes to conflicts over how best to wield technology to change the world (for the better).

Google as a microcosm of society

Perhaps the tech commentariat should have seen it all coming. Big Tech has accumulated a staggering amount of political power, through information-market dominance and financial success. Cries to regulate Big Tech “monopolies” have reached fever pitch. In the meantime, governments the world-over have urged Big Tech to build solutions to deal with some of the societal and political ramifications of their tools.

Who builds these solutions? Googlers do. The very same Googlers endowed with ideological responsibility, voicing their political and social views so that they may have a say in the way society is ultimately run.

So the more interesting question(s) lies a level below the accusations of the Whistleblowers or of the walkouts:

  • Can we trust people to build/design apolitical/non-ideological tools?
  • Should we, as subjective and emotive people, be always neutral/objective?
  • When we choose to become subjective and become actors in the socio-political world, what accountability and responsibilities come with that decision?

The charge for Big Tech is two-fold, therefore:

  1. We cannot trust Google to be apolitical or non-ideological. Despite claims to impartiality, there is little evidence suggesting a system in place to restrict partiality and individual agency from the tools designed. In fact, there doesn’t seem to be the desire to remain impartial, as demonstrated by the contents of the data dumps and history of industrial actions (walkouts and whistleblowers).
  2. We cannot know Google’s editorial line. We all know that MSNBC leans to the Left. We all know Fox News tends to the Right. Publishers in print, radio, television and magazines have remained informative and respected news outlets while also recognising their own biases. This is called adopting an editorial line. Masquerading behind the label of “technology company”, Google and other Big Tech firms have relinquished their responsibility to identify inherent bias and to communicate this bias (or take deliberate steps to correct it) to their readers and users.

Our next piece on the topic will look at how useful the comparison between editorial lines and product design-bias at Google and other Big Tech companies can be.

Feel free to read through the detailed insights of Wonk Bridge’s read-through of the Project Veritas data leak below. À bientôt!

Project Veritas Case Study

Core claims:

  • Senior executives made claims that they wanted to “Scope the information landscape” to redefine what was “objectively true” ahead of the elections
  • Google found out what he did and sent him an unsolicited letter making a threat and several demands including a “request” to cease & desist, comply by a certain date, and scrape the data (but by then Vorhies already sent the data to a legal entity)
  • “Dead Man’s Switch” launched in case “I was killed”, which would release all the documents. The next day, the Police were called on grounds of “mental health” (something Google does apparently frequently with its whistleblowers)

From the data dump, an oft-cited passage:

“If a representation is factually accurate, can it still be algorithmic unfairness?”. This screenshot from a “Humane tech” (a Google initiative) document was used by Vorhies to say facts were being twisted to manipulate reality into “promoting the far-left wing’s social justice agenda”.


Whether or not it does so deliberately, the leaked blacklists point to a preference for slamming the ban-hammer on right-wing conservative or populist content, if our scope is limited to US content.

Screenshot; the rest of the leaked list can be found here

As a publisher, Google has no clear obligation to be objective here, just as MSNBC and Fox News are quite open about their ideological stances. The issue is that Google presents itself as a neutral tool.

What is the solution to an overly ideological publishing monopoly? Generally, this translates into the creation of competitors, which has not occurred. Perhaps it is early days, but there are enough conservative coders and programmers out there, and enough right-wing capital in circulation, to create a rival to Google. This is just speculation, of course.

Google’s response to the Project Veritas leak is much more damning, however. If freedom of speech and social activism should be permitted in both cases (Google’s and Project Veritas’), do (a) the removal of the video from YouTube and (b) the threatening of the whistleblower qualify as an abuse of power?

There is also the crackdown on organisers of industrial action. Organisers of the Google Walkout decried discrimination similar, in reverse, to that alleged by right-wing conservative Googlers. “I identify as a LatinX female and I experienced blatant racist and sexist things from my coworker. I reported it up to where my manager knew, my director knew, the coworker’s manager knew and our HR representative knew. Nothing happened. I was warned that things will get very serious if continued,” one Googler wrote. “I definitely felt the theme of ‘protect the man’ as we so often hear about. No one protected me, the victim. I thought Google was different.” The claim there was that Google wasn’t doing enough to protect social justice at work (and in the products it makes), and that Google doesn’t respond convincingly to these allegations.

In a message posted to many internal Google mailing lists Monday, Meredith Whittaker, who leads Google’s Open Research, said that after the company disbanded its external AI ethics council on April 4, she was told that her role would be “changed dramatically.” Whittaker said she was told that, in order to stay at the company, she would have to “abandon” her work on AI ethics and her role at AI Now Institute, a research center she cofounded at New York University.

Now, it is easy to fit these events into a broader narrative of the whistleblower crackdown, but it is clear that perspective plays a huge role in how you view these events. The disbanding of the External AI Ethics Council (which Wonk Bridge has discussed in a previous podcast) was also largely influenced by the Council’s misalignment with the values of a majority of Googlers. Meredith Whittaker may have tried to be balanced in her running of the Council, but that didn’t sit too well with the rest of the company body.

Claire Stapleton, another walkout organizer and a 12-year veteran of the company, said in the email that two months after the protest she was told she would be demoted from her role as marketing manager at YouTube and lose half her reports. After escalating the issue to human resources, she said she faced further retaliation. “My manager started ignoring me, my work was given to other people, and I was told to go on medical leave, even though I’m not sick,” Stapleton wrote. After she hired a lawyer, the company conducted an investigation and seemed to reverse her demotion. “While my work has been restored, the environment remains hostile and I consider quitting nearly every day,” she wrote.

Google as a Public Service Provider

The fates of Claire Stapleton, Meredith Whittaker and Zachary Vorhies all demonstrate a common moral dilemma posed by large and influential corporations: the balance between the corporate interest, the sum-total interest of employees, and the “public interest”. These interests often conflict, as is the case around the question: “How should we manage access to controversial and/or potentially fake content?”

The reason corporations like Google are not well placed to answer such questions is that they are unable to align their interests with the public interest in any accountable way. Well-functioning democracies are better placed to provide answers, as their interests align directly with the public interest (in theory). Elected officials are mandated by “the People” to represent “the People” and fulfil the “Public Interest” in a representative capacity. As long as trust in elected officials and their capacity to fulfil the public interest remains strong, the social contract continues to align the institutional and public interests.

As we look to our most influential actors (governments, large corporations, influential people) to show us the way through the COVID-19 crisis, the Public’s reaction will largely depend on whether they see their interests as aligned with the institutions in question. When Google and similar corporations involve themselves in seemingly gratuitous philanthropy, they should not be surprised by negative push-back. It is an objectively good thing for Google to use its vast wealth of data to help curb the growth of Coronavirus. But the Public will still doubt whether it is a good thing for Google to be the one actually doing it.


Useful sources
Google Document (Data) Dump Whistleblower story

Project Maven

Retaliation against employees who whistleblow

An academic paper explaining the impact of search-engine manipulation on election outcomes.



Turf Wars: The Birth of the COVID-19 Protests

Over the weekends of the 17th and 24th of April, thousands of Americans showed up at intersections and state houses across the country to protest against social distancing rules, the closure of businesses, and other measures taken by mayors and governors to combat the Covid-19 pandemic. Depending on the location, protestors ranged from the pedestrian to the extreme and bizarre. Some groups were calm, carrying signs calling on governors to reopen businesses. Other groups were toting semi-automatic rifles, combat gear, and QAnon paraphernalia.

Users on reddit, in particular /u/Dr_Midnight, noticed a strange pattern in certain sites purporting to support the anti-quarantine protests. Dozens of sites with the URL format reopen[state code/name].com had all been registered on 17 April within minutes of each other, many from a single round of GoDaddy domain purchases from the same IP address in Florida. The original Reddit posts were removed by moderators because they revealed private information about the individual who had registered the domains. Here are screenshots without sensitive information, as examples:

The Pennsylvania and Minnesota sites are on the same server, registered from the same IP address


Date and time for domain purchases // creds to Krebs On Security
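The pattern /u/Dr_Midnight spotted can be described mechanically: collect each suspicious domain’s WHOIS creation timestamp and flag batches registered within minutes of each other. Below is a minimal sketch of that clustering step. The domain names and timestamps are invented placeholders (a real investigation would pull them from a WHOIS service), and `registration_clusters` is a hypothetical helper written for this illustration.

```python
# Sketch: flag domains whose (hypothetical) WHOIS creation timestamps
# fall within minutes of each other. Placeholder data, not real WHOIS.
from datetime import datetime, timedelta

def registration_clusters(records, window=timedelta(minutes=5)):
    """Group (domain, created_at) pairs into clusters in which each
    consecutive registration is no more than `window` apart."""
    ordered = sorted(records, key=lambda r: r[1])
    clusters, current = [], [ordered[0]]
    for rec in ordered[1:]:
        if rec[1] - current[-1][1] <= window:
            current.append(rec)
        else:
            clusters.append(current)
            current = [rec]
    clusters.append(current)
    return clusters

# Illustrative placeholder records (not the real reopen* domains).
records = [
    ("reopenpa.example",  datetime(2020, 4, 17, 14, 1)),
    ("reopenmn.example",  datetime(2020, 4, 17, 14, 3)),
    ("reopenwi.example",  datetime(2020, 4, 17, 14, 4)),
    ("unrelated.example", datetime(2020, 4, 19, 9, 30)),
]
for cluster in registration_clusters(records):
    if len(cluster) > 1:
        print("suspicious batch:", [d for d, _ in cluster])
```

On this toy data, the three domains registered within a few minutes of each other on 17 April surface as a single suspicious batch, which is exactly the signal the reddit investigation relied on.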

Sites urging civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa all had the same “contact your legislator” widget installed, and the “blog” sections of these and other states’ websites cross-linked to each other.

Many of the sites purchased on 17 April are dormant and had no content at the time of publication. However, several of these domains forward users to a string of state gun-rights advocacy websites, all named [state]. The Pennsylvania, Minnesota, Michigan and other “gun rights” sites and associated Facebook groups belong to the Dorr brothers, gun-rights extremists and conservative advocates whom Republican lawmakers in the Midwest have repeatedly labelled “grifters”. Multiple sites have “shop” sections selling the Dorrs’ anti-quarantine and pro-gun-rights merchandise.

Several URLs lead to Facebook groups calling themselves “Operation Gridlock [city name]”. Here are the identical descriptions for the LA and Tennessee Gridlock Facebook groups:


Security researcher Brian Krebs also identified domains, including, that had eventually been sold on to In Pursuit Of LLC, a for-profit political communications agency reported to belong to the conservative billionaire industrialist Charles Koch. The non-profit journalistic site ProPublica has identified several former In Pursuit Of employees who are now on the Trump White House communications staff. It is unclear who registered and other sites purchased by for-profit political consultancies, as many were not purchased during the 17 April afternoon buying spree in Florida.

A further twist in the story came on 23 April, when a man named Michael Murphy, whose IP address was identified in /u/Dr_Midnight’s original removed reddit investigation, was interviewed by reporter Brianna Sacks. It turns out that Murphy, a struggling day trader from Sebastian, Florida, spent $4,000 on dozens of domains in the hopes of selling them on to liberal activists looking to prevent conservatives from organizing protests. An attempt to out-grift the grifters.

It is unclear whether Murphy’s intentions were political, financial, or both. He describes his politics as “generally liberal”, but his business has been suffering in recent years; he even tried to pivot to selling N-95 mask cleaning solution when the coronavirus outbreak worsened in March, without success. Murphy also claims to have attempted to contact late-night TV host John Oliver, hoping the comedian would pay him for domains to use in one of his show’s signature trolling stunts. Murphy came forward to reporters after anti-right-wing reddit users began doxxing him, revealing his name, address, and businesses. Any sites not registered from Murphy’s Florida IP address were likely bought by the Dorr brothers or Koch-backed organizations before Murphy could snatch them up.

What do we make of all this?

Relatively unsophisticated technical actors have shown themselves capable of mobilizing large numbers of citizens into the streets. A few well-named URLs and a decent Facebook following are all it takes for a series of protests to be organized across the country with little notice. Protesting citizens are entirely unaware that any central coordination of their activities exists beyond their local social media groups. However these groups were not genuine expressions of opinion by concerned private citizens. Most were created concurrently by individuals or organizations with the explicit intent of political or financial gain through advocating activities that contradict public health rules and guidelines in a time of national crisis.

It’s important to note at this point that these protests represent the views of a very small minority of voters, regardless of party. A poll conducted by the Democracy Fund and UCLA in late March, and again in early April, shows broad approval of, and compliance with, local and state social-distancing guidelines and business closures. Around 87% of respondents approved of the varying measures imposed by mayors and governors, and 81% said they hadn’t left their homes over the previous two weeks except to buy necessities, up from 72% in late March. Majorities of Democrats, Republicans, and Independents all believed quarantine measures to be necessary. Despite this general consensus in support of emergency measures, astroturfing operations were able to mobilize a diverse set of online activists, spectators, social media buffs, conspiracy theorists, and gun-rights absolutists, even reaching all the way to disgruntled mainstream conservatives and their families.

The Internet and social media catalyze kinetic action

Centrally coordinated puppeteering of otherwise spontaneous demonstrations is not new. What is novel is the ability to do so at a national scale with almost no investment of resources of any kind — financial or otherwise. All it took was an internet connection, a few web domains, and a cursory knowledge of the online right wing universe. Once the spaces for action were created, and the right actors assembled, the demonstrations themselves were almost an inevitability. With enough prodding from conservative media and political figures, right up to the top of the movement, people took to the streets.

Twitter, and to a lesser extent Facebook, have actively shied away from preventing this method of organizing on their platforms. Despite both companies ostensibly having changed terms-of-use enforcement to allow the removal of content encouraging violations of state quarantine orders, Facebook has not taken down Freedom Fund or Conservative Coalition groups or individual posts, and Twitter has officially decided that the President’s “LIBERATE” tweets do not violate its rules against inciting violence. On 22 April, Facebook did take down events pages for anti-quarantine protests in California, Nebraska, and New Jersey, but only after these states’ governors explicitly ordered the company to do so. Events pages in other states, most notably Michigan, Pennsylvania, and Ohio, remained active over the weekend of 24 April. Several groups of protesters in Lansing, Michigan entered the State Capitol Building carrying semi-automatic rifles and wearing kevlar vests and other combat gear. Michigan’s governor, Gretchen Whitmer, has been a target of particularly vitriolic rhetoric from protesters over the state’s emergency orders — some of the most stringent in the country — enacted after a major outbreak in the Detroit area in late March.

Inauthentic action catalyzed by social media is legitimized by traditional media

Traditional media, most notably TV, are often playing catch-up with more savvy information actors online. An Insider Exclusive special on coronavirus on 29 April — broadcast in primetime on multiple US cable networks — contained a segment on the protests. It first showed “hundreds” of people continuing to protest in front of various state houses, then immediately contrasted these images with footage of long lines at food banks in Houston, Texas. The narration insinuated that the “exasperation” felt by the protesters somehow derived from an inability to find basic necessities. This insinuation, however, is false. Footage of the protests has revealed the discontents to be predominantly white and older, while those requiring assistance from food banks in major cities are often younger, economically precarious people of color, a demographic notably absent from images of anti-stay-at-home protesters.

The astroturfing operation has therefore worked its way through an entire information cycle. Political donor money funds fringe actors’ online efforts, purchasing websites and organizing on social media. These sites are used to generate kinetic action in the form of protests. The protests are then covered by the traditional media, which broadcast and legitimize the organizers’ initial message, inserting their narrative into the mainstream.