Trialectic – Can Technology be Moral?


A Wonk Bridge debate format between three champions designed to test their ability to develop their knowledge through exposure to each other and the audience, as well as maximise the audience’s learning opportunities on a given motion.

For more information, please read our introduction to the format here, The Trialectic.

Motioning for a Trialectic Caucus

Behind every Trialectic is a motion and a first swing at the motion, which is designed to kick-start the conversation. Please find my motion for a Trialectic on the question “Can Technology be Moral?” below.

I would like to premise this with a formative belief: humans have, among many motivations, sought “a better life” or “the good life” through the invention and use of technology and tools.

From this perspective, technology and human agency have served as variables in a so-called “equation of happiness” (hedonism) or in the pursuit of another goal: power, glory, respect, access to the kingdom of Heaven.

At the risk of begging the question, I would like to premise this motion with a preambulatory statement of context. I would like to focus our contextual awareness around three societal problems.

First, the incredible transformation of the human experience through technological mediation has changed the way we see and experience the world, making most of our existing epistemological frameworks inadequate and rendering our political and cultural systems unstable, if not obsolete.

Another interpretation of this change is that parts of our world are becoming “hyperhistorical”, with information and communication technologies becoming the focal point, not a background feature, of human civilisations (Floridi, 2012).

Second, the driving force behind “the game” and the rules of “the game”, which can generally be referred to as Late Capitalism, is being called into question, with Postmodern thought exposing its weaknesses and unfairness, and a growing body of Climate Change thinkers documenting its unsustainability and nefarious effect on long-term human survival. More practically, since the 2008 financial crash, Capitalism has taken a turn towards excluding human agents from the creation of wealth and commodifying distraction and attention. In short, the exclusion of the Human from human activity.

Third, the gradual irrelevance of a growing share of humans in economic and political activity, as well as the lack of tools for both experts and regular citizens to understand the new world(s) being crafted (this “Networked Society”, a hybrid of Digital Civilisation and of a Technologically Mediated Analog world) (Castells, 2009), has created an identity crisis at both the collective and individual levels. We know what is out there, have lost sight of the How, and can’t even contemplate the Why anymore.

Two things can help us here:

  • A better understanding of the forces shaping our world
  • An intentional debate on defining what this collective “Why” must be

Together, they can help us find a new “True North” and begin acting morally, by designing intentional technologies built around helping us act more morally.

Introductory Thesis

I base my initial stance on this topic on the shoulders of a modern giant of Digital Ethics, Peter-Paul Verbeek, drawing on his 2011 work Moralizing Technology.

Verbeek argues that “things”, including “technologies”, inherently hold moral value; that we need to examine ethics not through an exclusively human-centric lens but also from a materialist angle; and that we can no longer ignore the deep interlink between humans and their tools.

There is first the question of technological mediation. Humans depend on their senses to develop an appreciation of the world around them. Their senses are, however, limited. Our sense of sight can be limited by myopia or other debilitating conditions. We can use eyeglasses to “correct” our vision and develop an appreciation of our surroundings in higher definition.

This is a case of using technology to reach a similar level of sensing as our peers, perhaps because living in a society comes with its own “system requirements”. We correct our vision with eyeglasses because we want to participate in society, be in the world, and place ourselves in the best position to abide by its ethics and laws. Technology is necessary to see the world like others do, because when we see a common image of the world, we are able to draw conclusions as to how to behave within it.

When a new technology helps us develop our sense-perception even further, we can intuitively affirm that technological mediation occurs in the “definition” of ethics and values. Technologies help us see more of the world. Before the invention of the electric street-lamp system, part of a wider urban reorganisation in the 19th century, western cultures looked down on the practice of activities at night. An honest man (or woman) would not lurk in the streets of Paris or London at night.

The darkness of dimly-lit streets made it easy for criminals and malefactors to hide from the police and to harass the vulnerable. Though still seen as relatively more dangerous than moving in the light of day, it is now socially accepted (even romanticised) to ambulate under the city street-lamps and pursue a full night’s entertainment.

A technology, the street-lamp system, helped people see more of the world (literally), and our ethics grew out of the previous equilibrium and into a new one. By affecting the way we perceive reality, technology also helps shape our constructed reality, and therefore directly intervenes in both individual and collective moral thought-processes.

At the foundational level, my thesis doesn’t diverge too far from Verbeek’s or Latour’s initial propositions. Where it diverges is in the operative or practical applications, on which it seeks to place a greater emphasis.

It seems clear that Technology has a role to play in defining what can be a moral practice. The question examined in this thesis therefore seeks to go a step further in exploring whether the creation (technology) can be considered independently from its creator (inventor/designer).

Are human agents responsible for the direct and indirect effects of the tools they build?

Of course, it is clear that adopting a perspective on the morality of technology that is anchored solely in the concept of technological mediation is problematic. As Verbeek mentions in his book, the isolation of human subjects from material objects is deeply entrenched in our Modernist metaphysical schemes (cf. Latour 1993); it contextualises ethics as a solely human affair and keeps us from approaching ethics as a hybrid.

This outdated metaphysical scheme sees human beings as active and intentional, and material objects as passive and instrumental (Verbeek, 2011). Human behaviour can be assessed in moral terms (good or bad), but a technological artifact can be assessed only in terms of its functionality (functioning well or poorly) (Verbeek, 2011). Indeed, technologies have a tendency to reveal their true utility after having been used or applied, not before, as they are being created or designed.

It is also key to my argument that technologies which resemble intentional agents are not in themselves intentional. Science fiction about artificial general intelligence aside, the context within which technology is being discussed today (2021) is one in which technologies operate with a semblance of autonomy, situated in a complex web of interrelated human and machine agents.

Just because the behaviour of some technologies today (e.g. Google’s search algorithms) is not decipherable does not mean that they are autonomous or intentional. What is intentional is the decision to create a system that contains no checks or balances: to build a car without brakes, or a network without an off-switch.

Technology does have the power to change our ethics.

An example Verbeek uses frequently is the pre-natal ultrasound scan that parents use to see and check whether their unborn child or fetus has any birth defects. This technology gives parents the chance, and transfers to them the responsibility, of making a potentially life-threatening or life-defining decision. It also gives them the first glimpse of what their unborn baby looks like through the monitor.

While the birth of a child before the scan was seen ethically as the work of a higher power, outside of human responsibility and agency, the scanner has given parents the tools and the responsibility to make a decision. As Verbeek documents on several occasions in the book, it dramatically changes the way parents (especially fathers) label what they see through the monitor: from a fetus to an unborn child.

The whole ceremony around the scan visit, with the doctor’s briefing and the evaluation of results, creates a new moral dilemma for parents and a new moral responsibility to give life or not to a child with birth defects, rather than accepting whatever outcome is given to you at birth.

But let’s take this a step further and ask the age-old question: Who benefits?

The pre-natal ultrasound scan, and the many other tests offered by hospitals today, serves the patients. It gives them the chance to see the fetus and make choices about its future. But the clients of these machines are in fact hospitals and doctors, and also, indirectly, policy-makers and healthcare institutions. These clients seek to shift responsibility away from hospitals and doctors, and onto the parents, who will have gained a newfound commitment to the unborn babies they have had the chance to see for the first time. The reasons driving this are manifold, but hospitals and governments are financially and economically interested in births, and in having parents committed to seeing a pregnancy through.

When considering the morality of technologies, of systems and of the objects that are part of those systems, it’s worth paying close attention to what Bruno Latour calls systems of morality indicators: moral messages exist everywhere in society and inform each other, from the speed bump dissuading the driver from driving fast because “the area is unsafe, and driving fast would damage the car” to the suburban house fence informing bystanders that “this is my private property”.

It is also worth paying attention to who benefits from the widespread usage of said technological products. Discussions around the morality of technology tend to focus on the effects deriving from the usage or application of technologies, rather than on the financial or other benefits deriving from their adoption at large scale.

Social Media as an example

The bundle of technologies that we call social media is a clear example of why this way of thinking matters. The nefarious consequences of mass-scale social media usage, for a society and for an individual, are clear and well-documented. We have documented its effects on warping and changing our conception of reality (technological mediation), on the political sphere in our astroturfing piece, and on our social relationships in our “Syndication of the Friend” piece.

In our discussions responding to the acclaimed Netflix documentary The Social Dilemma, we spotted an interesting pattern in the accounts: one man or woman was powerless to stop a system so lodged in the interweaving interests of Big Tech’s shareholders. The economic logic of social media makes acting on nefarious consequences like fake news or information echo-chambers all but impossible, because doing so would require altering social media’s ad-based business model.

The technology of social media works, and keeps being used, because it is concerned not with the side-effects but with the desired effect: to provide companies or interested parties (usually with large digital marketing budgets) with panopticon-esque insights into its users (who happen to be over 80% of people living in the US, according to the Pew Research Center, 2019).

Technologies are tools, but they are not always tools like hammers or pencils that prove useful to most human beings. They are sometimes network-spanning systems of surveillance, used by billions, that provide actual benefit only to a chosen few.

The intention of the designer is thus paramount when considering technology and morality, because the application of a technology will inevitably have an effect on the agents that encounter it, but it will also have an effect on the designer themselves. There will be a financial benefit and, more than this, “the financial benefit will inform future action” (as Oliver Cox reflected upon editing this piece).

The reverse situation is also true: some technologies may be designed with a particular social mission in mind, and then used for a whole suite of unforeseen nefarious applications.

In this case, should the designer be blamed or made responsible for the new applications of their technology, should the technology itself be the subject of moral inquisition and the designer be absolved from their ignorance, or should each application of such technology be considered “a derivative” and thus conceptually separate from the original creation?

Another titan of digital ethics, Luciano Floridi of the Oxford Internet Institute, thinks that intentions are tied to the concept of responsibility: “If you turn on a light in your house and the neighbour’s house goes BANG!, it isn’t your responsibility; you did not intend for it to happen.” Yes, the BANG may have had something to do with the turning on of the light but, as he goes on to mention, “accountability is different, it is the process of cause and effect relationships that connects the action to the reaction.”

Accountability as the missing link

With this in mind, we can assume that the missing link between designing a technology and placing the responsibility over to designers is accountability. To hold someone accountable for their actions, one must have access to knowledge or to data that would provide some sort of a paper trail for the observer to trace the effects of said design on the environment and the interactions of the environment with said design.

While it is indeed possible to measure the effects of a technology like social media from an external perspective, it is far easier and more informative to do so from the source. What would hold designers of technologies most accountable is for them to hold themselves accountable.

There is therefore a problem of competing priorities when it comes to accountability, derived from the problem of the access to knowledge (or data).

In the three examples given (the pre-natal ultrasound scanner, social media, and the light-switch-that-turns-out-to-be-a-bomb), the intentions of the designer varied across a spectrum: from zero intention to blow up your neighbour’s house, to the deliberate intention, with the pre-natal ultrasound scan, of providing parents with a choice regarding the future of their child.

In all three cases, an element beyond intentionality plays a role: the designer is either unaware of (with the light-switch) or unwilling to investigate (with social media) the consequences of applying the technology. Behind the veil of claims of technological sophistication, designers renege on their moral duty to “control their creations”.

If the attribution of responsibility in technologies lies in both intentionality and accountability, then, deontologically, shouldn’t the designers of such technologies provide the necessary information and build the structures to allow for accountability?

The designers should be held accountable for their creations, however autonomous they may initially seem. If so, how, feasibly, can they be held accountable?

Many of these questions have been approached and tackled to some extent in the legal world, with intellectual property and copyright laws on the question of ownership of an original work. And this has also been examined to some length by the insurance industry which uses risk management frameworks to determine burden sharing of new initiatives between a principal and agent.

But in the realm of ethics and the impact of technologies on the social good, the frame that may best suit the issue we have here is the Tragedy of the Commons: the case where technologies that are widely available (as accessible as water or breathable air) have become commodities and are being used as building blocks for other purposes by a number of different actors.

The argument that technologies have inherent moral value is beside the point. The argument is that moral value should be ascribed to the ways in which technologies are used (whether those be called derivatives or original new technologies); the designers need to be inherently tied to their designs.

  1. The GDPR example: the processing of personal data represents a genus of technologies where moral value is ascribed to the processors and controllers of the personal data. The natural resource behind the technology, personal data, remains under the control of the owner of that resource.
  2. Ethics by design: the process by which technologies are designed needs to be more inclusive and considerate. Their impact on stakeholders (suppliers, consumers, investors, employees, broader society, and the environment) needs to be assessed and factored in during development. That impact cannot be wholly predicted, but it can be understood and managed if taken with particular due care. Example: regulated industries such as Life Sciences and Aerospace have lengthy trialling processes involving many stakeholders, which makes the introduction of new products more rigorous.

Accountability as the other side of the equation

The emergence of new technologies such as blockchain governance systems (e.g. Ethereum smart contracts) provides clear examples of how new technologies create new ways of holding agents accountable for their actions, actions that, without such enabling technologies, would have been considered outside of their control.
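The core primitive behind this kind of accountability technology can be sketched as an append-only, hash-chained log: each entry commits to the one before it, so any later tampering with a recorded action breaks the chain and becomes detectable. This is a minimal illustrative sketch, not a real smart-contract API; the class names and recorded actions are invented.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic hash of an entry's contents."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditLog:
    """Append-only log in which each entry commits to its predecessor."""

    def __init__(self):
        self.entries = []

    def record(self, agent: str, action: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"agent": agent, "action": action, "prev": prev}
        self.entries.append({**body, "hash": entry_hash(body)})

    def verify(self) -> bool:
        """Recompute every hash; any rewritten past entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {"agent": e["agent"], "action": e["action"], "prev": e["prev"]}
            if e["prev"] != prev or e["hash"] != entry_hash(body):
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("designer", "deployed recommendation model v2")
log.record("operator", "raised engagement weight by 10%")
assert log.verify()

# Quietly rewriting a past action is now detectable:
log.entries[0]["action"] = "deployed recommendation model v1"
assert not log.verify()
```

The point of the sketch is that the technology itself supplies the paper trail: the designer can no longer plausibly claim the record was always thus.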

It seems that technology can work on both sides of a theoretical ethics-accountability equation. If some technologies make it easier to act outside of pre-existing ethical parameters, unseen by the panoply of accountability tools in use, then others can provide stakeholders with more tools to hold each other to account.

Can Technology Be Moral? Yes, it can, given its ability to provide more tools to tighten the gap between agents’ actions and the responsibility they bear for those actions. But some technology can be immoral, and stay immoral, without an effective counterweight in place. Technology is therefore an amoral subject, but very much moral in its role as both a medium and an object for moral actors.


It will be my honour and pleasure to debate with our two other Trialectic champions, Alice Thwaite and Jamie Woodcock. I am looking forward to what promises to be a learning experience and to update this piece accordingly after their expert takes.

Please send us a message or comment on this article if you would like to join the audience (our audience is also expected to jump-in!).

Five Minuter

Astroturfing — the sharp-end of Fake News and how it cuts through a House-Divided


A 5-minute introduction to Political Astroturfing.

Dear Reader,

At Wonk Bridge, among our broader ambitions is a fuller understanding of our “Network Society”[1]. In today’s article, we’re aiming to connect several important nodes in that broader ambition. Our more seasoned readers will already see how Political Astroturfing simultaneously plays on both the online and offline to ultimately damage the individual’s ability to mindfully navigate in-between dimensions.


Political Astroturfing is a form of manufactured and deceptive activity initiated by political actors who seek to mimic bottom-up (or grassroots) activity by autonomous individuals. (Slightly modified from Kovic et al.’s 2018 definition, which we found most accurate and concise.)

While we will focus on astroturfing conducted exclusively by digital means, do keep in mind that this mischievous political practice remains as old as human civilisation. People have always sought to “Manufacture Consent” through technologically-facilitated mimicry, and have good reason to continue resorting to the prevalent communications technologies of the Early Digital age to do so. And without belabouring the obvious, mimicry has always been a popular tactic in politics because people continue to distrust subjectivity from parties who are not friends, family, or “of the same tribe”.

Our America Correspondent and Policy-columnist Jackson Oliver Webster wrote a piece about how astroturfing was used to stir and then organise the real-life anti-COVID lockdown protests across the United States last April. Several actors began the astroturfing campaign by opening a series of “Re-open” website URLs and then connecting said URLs to “Operation Gridlock” type Groups on Facebook. Some of these Groups then organised real-life events calling for civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa.

The #Re-Open protests are a great example of the unique place astroturfing has in our societal make-up. They work best when taking advantage of already volatile or divisive real-world situations (such as the Covid-19 lockdowns, which were controversial amongst a slice of the American population), but are initiated and sped-up by mischievous actors with intentions unaligned with those of the protesters themselves. In Re-open’s case, one family of conspirators — the Dorr Brothers — had used the websites to harvest data from and push anti-lockdown and pro-gun apparel to website visitors. The intentions of the astroturfers can thus be manifold, from a desire to stir-up action to fuelling political passions for financial gain.

The sharp-end of Fake news

Astroturfing will often find itself in the same conversational lexicon as Fake News. Both astroturfing and fake news are seen as ways to artificially shape peoples’ appreciation of “reality” via primarily digital means.

21st-century citizenship, as concerns medium- and large-scale political activity and discourse in North America and Europe, is supported by infrastructure on social networking sites. The beer halls and market squares have emptied in favour of Facebook Groups, Twitter feeds and interest-based fora where citizens can spread awareness of political issues and organise demonstrations. At the risk of igniting a philosophical debate in the comments, I would suggest that the controversy surrounding fake news at the moment is deeply connected with the underlying belief that citizens today are unprepared or unable to critically appraise or reason with the information circulated on digital political infrastructure, as well as they might have been able to offline. Indeed, the particularity of astroturfing lies in its manipulation of our in-built information filtration mechanism, or what Wait But Why refers to as a “Reason Bouncer”.

For a more complete essay on how we developed said mechanism, please refer to their “The Story of Us” series.

Our information filtration mechanism is a way of deciding which information, from both virtual and real dimensions, is worth considering as “fact” or “truth”, and which should be discarded or invalidated. As described in “The Story of Us”, information that appeals to an individual’s primal motivations, values or morals tends to be accepted more easily by the “Reason Bouncer”, as does information coming from “trustworthy sources” such as friends, family or other “in-group individuals”. Of course, just as teenagers use fake IDs to sneak into nightclubs, astroturfing seeks to get past your “Reason Bouncer” by mimicking the behaviour, and appealing to the motivations, of your “group”.

The effectiveness of this information filtration “exploit” can be seen in the 2016 Russian astroturfing attack in Houston, Texas. Russian actors, operating from thousands of kilometers away, created two conflicting communities on Facebook, one called “Heart of Texas” (right-wing, conservative, anti-Muslim) and the other called the “United Muslims of America” (Islamic). They then organised concurrent protests on the question of Islam in the same city: one called “Save Islamic Knowledge” and another called “Stop the Islamification of Texas”, right in front of the Islamic Da’wah Center of Houston. The key point here is that the astroturfing campaign was conducted in two stages: infiltration and activation. Infiltration was key to getting past the two Texan communities’ “Reason Bouncers”, by establishing credibility over several months through the creation, population and curation of the Facebook communities; all that was then required to “activate” both communities was the appropriate time, place and occasion.

The “Estonian Solution”

Several examinations of the astroturfing issue have pointed out that ordinary citizens, rather than the government or military, are often the targets of disinformation and disruption campaigns using the astroturfing technique. Steven L. Hall and Stephanie Hartell rightfully point to the Estonian experience with Russian disinformation campaigns as a possible starting point for improving societal resilience to astroturfing.

As one of the first Western countries to have experienced a coordinated disinformation campaign, in 2007, the people of Estonia rallied around the need for a coordinated Clausewitzian response (Government, Army, and People) to Russian aggression: “Not only government or military, but also citizens must be prepared”. Hall and Hartell note the amazing (by American standards) civilian response to Russian disinformation, including the creation of a popular volunteer-run fact-checking blog and website.

Since 2016, the anti-fake-news and fact-checking industry in the United States has been booming, with more than 200 fact-checking organisations active as of December 2019. The fight against disinformation, and against the methods that make astroturfing possible, is indeed alive and well in the United States.

Where I disagree with Hall and Hartell, who recommend initiatives similar to those by Estonia in the USA, is that disinformation and astroturfing cannot meaningfully be reduced in the USA without addressing the internal political and social divisions which make the job all too easy and effective. The United States is a divided country, along both Governmental and popular lines. How can the united action of Estonia be replicated when two out of the three axes (Government, Military and People) are compromised?

This (possibly familiar) Pew Research data visualisation (click here for the research) shows just how much this division has deepened over time. Astroturfing campaigns like the ones in Houston in 2016 operate comfortably in tribal environments, where suspicion of the internal “Other” (along racial, religious, or political lines) trumps that of the true “Other”, found at the opposite end of the globe. In divided environments, fact-checking enterprises also suffer from weakened credibility and the suspicion of the very people they seek to protect.

In such environments, short of addressing the issues that divide a country, the best technologists can perhaps do is to create new tools transparently and openly, so as to avoid suspicion and invite inspection, and to seek as many opportunities as possible to work in partnership with Government, the Military and all citizens, with the objective of arming the latter with the ability to critically evaluate information online and to understand what digital tools and platforms actually do.

[1] A society where an individual interacts with a complex interplay of online and offline stimuli, to formulate his/her more holistic experience of the world we live in. The term was coined by Spanish sociologist Manuel Castells.


Turf Wars: The Birth of the COVID-19 Protests

Over the weekends of the 17th and 24th of April, thousands of Americans showed up at intersections and state houses across the country to protest against social distancing rules, the closure of businesses, and other measures taken by mayors and governors to combat the Covid-19 pandemic. Depending on the location, protestors ranged from the pedestrian to the extreme and bizarre. Some groups were calm, carrying signs calling on governors to reopen businesses. Other groups were toting semi-automatic rifles, combat gear, and QAnon paraphernalia.

Users on reddit, in particular /u/Dr_Midnight, noticed a strange pattern in certain sites purporting to support the anti-quarantine protests. Dozens of sites with the URL format reopen[state code/name].com had all been registered on 17 April within minutes of each other, many from a single round of GoDaddy domain purchases from the same IP address in Florida. The original reddit posts were removed by moderators because they revealed private information about the individual who had registered the domains. Here are example screenshots, with sensitive information removed:

The Pennsylvania and Minnesota sites are on the same server, registered from the same IP address

Date and time for domain purchases (credit: Krebs on Security)
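The pattern that gave the operation away (many domains bought from one IP within minutes of each other) is the kind of signal a simple script can surface from WHOIS-style registration records. The records, field names, and thresholds below are invented for illustration; real data would come from a WHOIS lookup service.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy registration records mimicking the reopen-domain buying spree.
# These values are fabricated for illustration, not real WHOIS data.
records = [
    {"domain": "reopenpa.example", "ip": "203.0.113.7", "ts": "2020-04-17 14:01"},
    {"domain": "reopenmn.example", "ip": "203.0.113.7", "ts": "2020-04-17 14:03"},
    {"domain": "reopenwi.example", "ip": "203.0.113.7", "ts": "2020-04-17 14:05"},
    {"domain": "unrelated.example", "ip": "198.51.100.2", "ts": "2020-04-12 09:30"},
]

def coordinated_clusters(records, window_minutes=30, min_size=3):
    """Group registrations by registrant IP, then flag IPs whose
    purchases all fall inside a short time window."""
    by_ip = defaultdict(list)
    for r in records:
        by_ip[r["ip"]].append(datetime.strptime(r["ts"], "%Y-%m-%d %H:%M"))
    flagged = []
    for ip, times in by_ip.items():
        if len(times) >= min_size and max(times) - min(times) <= timedelta(minutes=window_minutes):
            flagged.append(ip)
    return flagged

print(coordinated_clusters(records))  # the single Florida-style IP is flagged
```

The real investigation involved more than timestamps, but the underlying heuristic (bulk purchases clustered in time and origin) is exactly this kind of grouping.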

Sites urging civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa all had the same “contact your legislator” widget installed, and these and other states’ websites’ “blog” sections cross-linked to each other.
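Spotting the shared widget across otherwise unrelated sites is, at heart, a fingerprinting exercise: extract the third-party scripts each page loads and group sites that share one. A minimal sketch, with invented HTML snippets and an invented widget URL:

```python
import re
from collections import defaultdict

# Fabricated page sources standing in for the scraped "reopen" sites.
pages = {
    "reopenpa.example": '<script src="https://widgets.example/contact-legislator.js"></script>',
    "reopenmn.example": '<script src="https://widgets.example/contact-legislator.js"></script>',
    "unrelated.example": '<script src="https://cdn.example/analytics.js"></script>',
}

def shared_widgets(pages):
    """Map each embedded script URL to the sites loading it, keeping
    only scripts shared by more than one site."""
    groups = defaultdict(list)
    for site, html in pages.items():
        for src in re.findall(r'<script src="([^"]+)"', html):
            groups[src].append(site)
    return {src: sites for src, sites in groups.items() if len(sites) > 1}

for src, sites in shared_widgets(pages).items():
    print(src, "->", sorted(sites))
```

A shared widget alone proves nothing, but combined with the registration clustering it strengthens the case for central coordination.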

Many of the sites purchased on 17 April are dormant and have no content at the time of publication. However, several of these domains forward users to a string of state gun rights advocacy websites, all named with the same [state] pattern. The Pennsylvania, Minnesota, Michigan and other “gun rights” sites and associated Facebook groups belong to the Dorr brothers, gun rights extremists and conservative advocates whom Republican lawmakers in the Midwest have repeatedly labeled as “grifters”. Multiple sites have “shop” sections selling the Dorrs’ anti-quarantine and pro-gun-rights merchandise.

Several URLs lead to Facebook groups calling themselves “Operation Gridlock [city name]”. The LA and Tennessee Gridlock Facebook groups carried identical descriptions.

Security researcher Brian Krebs also identified domains that had eventually been sold on to In Pursuit Of LLC, a for-profit political communications agency reported to belong to the conservative billionaire industrialist Charles Koch. The non-profit journalistic site ProPublica has identified several former In Pursuit Of employees who are now on the Trump White House communications staff. It is unclear who registered these and other sites purchased by for-profit political consultancies, as many were not purchased during 17 April’s afternoon buying spree in Florida.

A further twist in the story came on 23 April, when a man named Michael Murphy, whose IP address was identified in /u/Dr_Midnight’s original removed reddit investigation, was interviewed by reporter Brianna Sacks. It turns out that Murphy, a struggling day trader from Sebastian, Florida, spent $4,000 on dozens of domains in the hopes of selling them on to liberal activists looking to prevent conservatives from organizing protests. An attempt to out-grift the grifters.

It is unclear whether Murphy’s intentions were political, financial, or both. He describes his politics as “generally liberal”; however, his business has been suffering in recent years, and he even tried to reorient to selling N-95 mask cleaning solution when the coronavirus outbreak worsened in March, without success. Murphy even claims to have attempted to contact late night TV host John Oliver, hoping the comedian would pay him for domains to use in one of his show’s signature trolling stunts. Murphy came forward to reporters after anti-right-wing reddit users began doxxing him, revealing his name, address, and businesses. Any sites not registered to Murphy’s Florida IP address were likely bought by the Dorr brothers or Koch-backed organizations before Murphy could snatch them up.

What do we make of all this?

Relatively unsophisticated technical actors have shown themselves capable of mobilizing large numbers of citizens into the streets. A few well-named URLs and a decent Facebook following are all it takes for a series of protests to be organized across the country on short notice. Protesting citizens are entirely unaware that any central coordination of their activities exists beyond their local social media groups. However, these groups were not genuine expressions of opinion by concerned private citizens. Most were created concurrently by individuals or organizations with the explicit intent of political or financial gain, through advocating activities that contradict public health rules and guidelines in a time of national crisis.

It's important to note at this point that these protests represent the views of a very small minority of voters, regardless of party. A poll conducted by the Democracy Fund and UCLA in late March, and again in early April, shows broad approval of, and compliance with, local and state social distancing guidelines and business closures. Around 87% of respondents approved of the various measures imposed by mayors and governors, and 81% said they hadn't left their homes over the previous two weeks except to buy necessities, up from 72% in late March. Majorities of Democrats, Republicans, and Independents all believed quarantine measures to be necessary. Despite this general consensus in support of emergency measures, astroturfing operations were able to mobilize a diverse set of online activists, spectators, social media buffs, conspiracy theorists, and gun-rights absolutists, reaching all the way to disgruntled mainstream conservatives and their families.

The Internet and social media catalyze kinetic action

Centrally coordinated puppeteering of otherwise spontaneous demonstrations is not new. What is novel is the ability to do so at a national scale with almost no investment of resources of any kind — financial or otherwise. All it took was an internet connection, a few web domains, and a cursory knowledge of the online right wing universe. Once the spaces for action were created, and the right actors assembled, the demonstrations themselves were almost an inevitability. With enough prodding from conservative media and political figures, right up to the top of the movement, people took to the streets.

Twitter, and to a lesser extent Facebook, have actively shied away from preventing this method of organizing on their platforms. Despite both companies ostensibly having updated terms-of-use enforcement to remove content that encourages violating state quarantine orders, Facebook has not taken down Freedom Fund or Conservative Coalition groups or individual posts, and Twitter has officially decided that the President's "LIBERATE" tweets do not violate its rules against inciting violence. On 22 April, Facebook did take down event pages for anti-quarantine protests in California, Nebraska, and New Jersey, but only after those states' governors explicitly ordered the company to do so. Event pages in other states, most notably Michigan, Pennsylvania, and Ohio, remained active over the weekend of 24 April. Several groups of protesters in Lansing, Michigan entered the State Capitol Building carrying semi-automatic rifles and wearing kevlar vests and other combat gear. Michigan's governor, Gretchen Whitmer, has been a target of particularly vitriolic rhetoric from protesters over the state's emergency orders — some of the most stringent in the country — enacted after a major outbreak in the Detroit area in late March.

Inauthentic action catalyzed by social media is legitimized by traditional media

Traditional media, most notably TV, are often playing catch-up with more savvy information actors online. An Insider Exclusive special on coronavirus on 29 April — broadcast in primetime on multiple US cable networks — contained a segment on the protests. It first showed "hundreds" of people continuing to protest in front of various state houses, then immediately contrasted these images with footage of long lines at food banks in Houston, Texas. The narration insinuates that the "exasperation" felt by the protesters somehow derives from an inability to find basic necessities. This insinuation, however, is false. Footage of the protests has revealed the discontents to be predominantly white and older, while those requiring assistance from food banks in major cities are often younger, economically precarious people of color, a demographic notably absent from images of anti-stay-at-home protesters.

The astroturfing operation has therefore worked its way through an entire information cycle. Political donor money funds fringe actors' online efforts, purchasing websites and organizing on social media. These sites are used to generate kinetic action in the form of protests. These protests are then covered by the traditional media, which broadcast and legitimize the organizers' initial message, inserting their narrative into the mainstream.