Trialectic – Can Technology be Moral?

Kicking off another Trialectic, this article makes the case that technology plays a key role in our ethics, but that ultimately humans can only hold themselves responsible for shaping morality.

Trialectic

A Wonk Bridge debate format in which three champions test and develop their knowledge through exposure to one another and to the audience, while maximising the audience's learning opportunities on a given motion.

For more information, please read our introduction to the format here, The Trialectic.

Motioning for a Trialectic Caucus

Behind every Trialectic is a motion and a first swing at the motion, which is designed to kick-start the conversation. Please find my motion for a Trialectic on the question “Can Technology be Moral?” below.

I would like to premise this with a formative belief: humans have, among many motivations, sought "a better life" or "the good life" through the invention and use of technology and tools.

From this perspective, technology and human agency have served as variables in a so-called “equation of happiness” (hedonism) or in the pursuit of another goal: power, glory, respect, access to the kingdom of Heaven.

At the risk of begging the question, I would like to preface this motion with a preambulatory statement of context, focusing our awareness on three societal problems.

First, the incredible transformation of the human experience through technological mediation has changed the way we see and experience the world, making most of our existing epistemological frameworks inadequate and rendering our political and cultural systems unstable, if not obsolete.

Another interpretation of this change is that parts of our world are becoming "hyper-historic": information and communication technologies are becoming the focal point, not a background feature, of human civilisations (Floridi, 2012).

Second, the driving force behind "the game" and the rules of "the game", which can generally be referred to as Late Capitalism, is being put into question, with Postmodern thought exposing its weaknesses and unfairness, and a growing body of Climate Change thinkers documenting its unsustainability and nefarious effect on long-term human survival. More practically, since the 2008 financial crash, Capitalism has taken a turn towards excluding human agents from the creation of wealth and commodifying distraction and attention. In short: the exclusion of the Human from human activity.

Third, the gradual irrelevance of a growing share of humans in economic and political activity, together with the lack of tools for experts and regular citizens alike to understand the new world(s) being crafted (this "Networked Society", a hybrid of digital civilisation and a technologically mediated analog world) (Castells, 2009), has created an identity crisis at both the collective and individual levels. We know what is out there, have lost sight of the How, and can no longer even contemplate the Why.

Two things can help us find a new "True North" and begin acting morally, by grounding the design of intentional technologies built around helping us act more morally:

  • A better understanding of the forces shaping our world
  • An intentional debate on defining what this collective "Why" must be

Introductory Thesis

I base my initial stance on this topic atop the shoulders of a modern giant in Digital Ethics, Peter-Paul Verbeek, and his 2011 work Moralizing Technology.

Verbeek wants us to believe that "things", including "technologies", inherently hold moral value; that we need to examine ethics not through an exclusively human-centric lens but also from a materialist angle; and that we can no longer ignore the deep interlink between humans and their tools.

There is first the question of technological mediation. Humans depend on their senses to develop an appreciation of the world around them. Their senses are, however, limited. Our sense of sight can be limited by myopia or other debilitating conditions. We can use eyeglasses to "correct" our vision, and develop an appreciation of our surroundings in higher definition.

This is a case of using technology to reach a level of sensing similar to our peers', perhaps because living in a society comes with its own "system requirements". We correct our vision with eyeglasses because we want to participate in society, be in the world, and place ourselves in the best position to abide by its ethics and laws. Technology is necessary to see the world like others do, because when we see a common image of the world, we are able to draw conclusions as to how to behave within it.

When a new technology helps us develop our sense-perception even further, we can intuitively affirm that technological mediation occurs in the "definition" of ethics and values. Technologies help us see more of the world. Before the introduction of the electric street-lamp system, as part of a wider programme of urban reorganisation in the 19th century, western cultures looked down on the practice of night-time activities. An honest man (or woman) would not linger in the streets of Paris or London at night.

The darkness of dimly lit streets made it easy for criminals and malefactors to hide from the police and to harass the vulnerable. Though still seen as relatively more dangerous than moving about in the light of day, it is now socially accepted (even romanticised) to ambulate under the city street-lamps and pursue a full night's entertainment.

A technology, the street-lamp system, helped people see more of the world (literally), and our ethics grew out of the previous equilibrium and into a new one. By affecting the way we perceive reality, technology also helps shape our constructed reality, and therefore directly intervenes in both individual and collective moral thought-processes.

At the pre-operative level, my thesis does not diverge far from Verbeek's or Latour's initial propositions. Where it diverges is in its operative, practical applications, where it places a greater emphasis on the responsibility of the creator.

It seems clear that technology has a role to play in defining what can be a moral practice. This thesis therefore goes a step further, exploring whether the creation (technology) can be considered independently from its creator (inventor/designer).

Are human agents responsible for the direct and indirect effects of the tools they build?

Of course, adopting a perspective on the morality of technology that is anchored solely in the concept of technological mediation is problematic. As Verbeek mentions in his book, the isolation of human subjects from material objects is deeply entrenched in our Modernist metaphysical schemes (cf. Latour, 1993); it frames ethics as a solely human affair and keeps us from approaching ethics as a hybrid.

This outdated metaphysical scheme sees human beings as active and intentional, and material objects as passive and instrumental (Verbeek, 2011). Human behaviour can be assessed in moral terms (good or bad), but a technological artifact can be assessed only in terms of its functionality, functioning well or poorly (Verbeek, 2011). Indeed, technologies have a tendency to reveal their true utility after having been used or applied, not while they are being created or designed.

It is also key to my argument that technologies resembling intentionality are not in themselves intentional. Science fiction about artificial general intelligence aside, the context within which technology is discussed today (2021) is one where technologies operate with a semblance of autonomy, situated in a complex web of interrelated human and machine agents.

Just because the behaviour of some technologies today (e.g. Google search algorithms) is not decipherable does not mean that they are autonomous or intentional. What is intentional is the decision to create a system that contains no checks or balances; to build a car without brakes, or a network without an off-switch.

Technology does have the power to change our ethics.

An example Verbeek uses frequently is the pre-natal ultrasound scan that parents use to see and check whether their unborn child or fetus has any birth defects. This technology gives parents the chance, and transfers to them the responsibility, of making a potentially life-threatening or life-defining decision. It also gives them their first glimpse of what their unborn baby looks like through the monitor.

While the birth of a child before the scan was seen, ethically, as the work of a higher power, outside human responsibility and agency, the scanner has given parents the tools and the responsibility to make a decision. As Verbeek documents on several occasions in the book, it dramatically changes the way parents (especially fathers) label what they see through the monitor: from a fetus to an unborn child.

The whole ceremony around the scan visit, with the doctor's briefing and the evaluation of results, creates a new moral dilemma for parents and a new moral responsibility: whether or not to give life to a child with birth defects, rather than accepting whatever outcome is given at birth.

But let’s take this a step further and ask the age-old question: Who benefits?

The pre-natal ultrasound scan, and the many other tests offered by hospitals today, serves the patients. It gives them the chance to see the fetus and make choices about its future. But the clients of these machines are in fact hospitals and doctors, and also, indirectly, policy-makers and healthcare institutions. The clients seek to shift responsibility away from hospitals and doctors, and onto the parents, who will have gained a newfound commitment to the unborn babies they have had the chance to see for the first time. The reasons driving this are manifold, but hospitals and governments have a financial and economic interest in births, and in having parents commit to seeing a pregnancy through.

When considering the morality of technologies, and of the systems and objects that compose them, it is worth paying close attention to what Bruno Latour calls systems of morality indicators: moral messages exist everywhere in society and inform each other, from the speed bump dissuading the driver from driving fast because "the area is unsafe, and driving fast would damage the car", to the suburban house fence informing bystanders that "this is my private property".

It is also worth paying attention to who benefits from the widespread usage of these technological products. Discussions around the morality of technology tend to focus on the effects deriving from the usage or application of a technology, rather than on the financial or other benefits deriving from its adoption at large scale.

Social Media as an example

The bundle of technologies that we call social media is a clear example of why this way of thinking matters. The nefarious consequences of mass-scale social media usage, for a society and for an individual, are clear and well-documented. We have documented its effects on warping and changing our conception of reality (technological mediation), on the political sphere in our astroturfing piece, and on our social relationships in our Syndication of the Friend piece.

In our discussions responding to the acclaimed Netflix documentary The Social Dilemma, we spotted an interesting pattern in the accounts: one man or woman was powerless to stop a system so lodged in the interwoven interests of Big Tech's shareholders. The economic logic of social media means that acting on nefarious consequences like fake news or information echo-chambers is all but impossible, because it would require altering social media's ad-based business model.

The technology of social media works, and keeps being used, because it is not concerned with the side-effects but with the desired effect: providing companies or interested parties (usually those with large digital marketing budgets) with panopticon-esque insights into its users (who happen to be over 80% of people living in the US, according to Pew Research Center, 2019).

Technologies are tools. This is obvious enough, but they are not always tools like hammers or pencils, useful to most human beings. They are sometimes network-spanning systems of surveillance, used by billions, that provide actual benefit only to a chosen few.

The intention of the designer is thus paramount when considering technology and morality, because the application of a technology will inevitably have an effect on the agents that encounter it, but also on the designer themselves. There will be a financial benefit and, more than this, "the financial benefit will inform future action" (as Oliver Cox reflected upon editing this piece).

So yes, the reverse situation is also true: some technologies may be designed with a particular social mission in mind, and then used for a whole suite of unforeseen, nefarious applications.

In this case, should the designer be blamed or held responsible for the new applications of their technology; should the technology itself be the subject of moral inquisition, with the designer absolved on account of their ignorance; or should each application of such a technology be considered "a derivative", and thus conceptually separate from the original creation?

Another titan of digital ethics, Luciano Floridi of the Oxford Internet Institute, thinks that intentions are tied to the concept of responsibility: "If you turn on a light in your house and the neighbour's house goes BANG! It isn't your responsibility, you did not intend for it to happen." The BANG may have had something to do with the turning on of the light but, as he goes on to mention, "accountability is different, it is the process of cause and effect relationships that connects the action to the reaction."

Accountability as the missing link

With this in mind, we can assume that the missing link between designing a technology and placing responsibility on its designers is accountability. To hold someone accountable for their actions, one must have access to knowledge or data providing some sort of paper trail, allowing the observer to trace the effects of a design on the environment, and the interactions of the environment with that design.

While it is indeed possible to measure the effects of a technology like social media from an external perspective, it is far easier and more informative to do so from the source. What would hold designers of technologies most accountable is for them to hold themselves accountable.
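To make the idea of a paper trail concrete, here is a minimal, hypothetical sketch in Python (the class and field names are my own, not any real auditing standard) of a designer-side log that ties each design decision to its stated intention and to the effects later observed in the field:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AuditEntry:
    """One record in a designer's append-only paper trail."""
    actor: str                  # who made or shipped the design decision
    decision: str               # what was changed (e.g. a ranking tweak)
    intention: str              # the stated, intended effect
    observed_effects: list = field(default_factory=list)
    timestamp: float = field(default_factory=time.time)

class PaperTrail:
    """Append-only log: entries are added and annotated, never removed."""

    def __init__(self):
        self._entries = []

    def record_decision(self, actor, decision, intention):
        self._entries.append(AuditEntry(actor, decision, intention))
        return len(self._entries) - 1   # the index serves as an entry id

    def record_effect(self, entry_id, effect):
        # Effects observed later are attached to the decision that caused
        # them, linking outcome back to intention for an external observer.
        self._entries[entry_id].observed_effects.append(effect)

    def export(self):
        # The trail an auditor (or the designer themselves) would read.
        return json.dumps([asdict(e) for e in self._entries], indent=2)

trail = PaperTrail()
eid = trail.record_decision("feed-team",
                            "boost engagement-weighted ranking",
                            "increase time spent per session")
trail.record_effect(eid, "sessions up; echo-chamber complaints also up")
print(trail.export())
```

The point of the sketch is the linkage: once intention and observed effect sit in the same record, an observer can judge whether the designer actually investigated the consequences of their design.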

There is therefore a problem of competing priorities when it comes to accountability, derived from the problem of access to knowledge (or data).

In the three examples given (the pre-natal ultrasound scanner, social media, and the light-switch-that-turns-out-to-be-a-bomb), the intentions of the designer vary across a spectrum: from zero intention to blow up your neighbour's house, to the deliberate intention, with the pre-natal ultrasound scan, of providing parents with a choice regarding the future of their child.

In all three cases, an element beyond intentionality plays a role: the designer is either unaware of (with the light-switch) or unwilling to investigate (with social media) the consequences of applying the technology. Behind the veil of claims of technological sophistication, designers renege on their moral duty to "control their creations".

If the attribution of responsibility in technologies lies in both intentionality and accountability, then, deontologically, shouldn’t the designers of such technologies provide the necessary information and build the structures to allow for accountability?

The designers should be held accountable for their creations, however autonomous they may initially seem. If so, how, feasibly, can they be held accountable?

Many of these questions have been approached, and tackled to some extent, in the legal world, where intellectual property and copyright law address the ownership of an original work. They have also been examined at some length by the insurance industry, which uses risk-management frameworks to determine how the burden of new initiatives is shared between a principal and an agent.

But in the realm of ethics and the impact of technologies on the social good, the frame that may best suit the issue at hand is the Tragedy of the Commons: the case where technologies that are widely available (as accessible as water or breathable air) have become commodities, used as building blocks for other purposes by a number of different actors.

The argument that technologies have inherent moral value is beside the point. The argument is that moral value should be ascribed to the ways in which technologies are used (whether those be called derivatives or original new technologies); the designers need to be inherently tied to their designs. Two existing approaches illustrate this:

  1. The GDPR example: the processing of personal data represents a genus of technologies in which moral value is ascribed to the processors and controllers of the personal data. The natural resource behind the technology, personal data, remains under the control of the owner of that resource.
  2. Ethics by design: the process by which technologies are designed needs to be more inclusive and considerate. Their impact on stakeholders (suppliers, consumers, investors, employees, broader society, and the environment) needs to be assessed and factored in during development. That impact cannot be wholly predicted, but it can be understood and managed with due care. Example: regulated industries such as Life Sciences and Aerospace have lengthy trialling processes involving many stakeholders, which makes the introduction of new products more rigorous.

Accountability as the other side of the equation

The emergence of new technologies such as blockchain governance systems (e.g. Ethereum smart contracts) provides clear examples of how new technologies create new ways of holding agents accountable for their actions; actions that, without such enabling technologies, would have been considered outside their control.
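As a minimal illustration of the mechanism such systems rely on, here is a toy hash-chained ledger sketched in Python rather than as an actual Ethereum smart contract (the names and structure are assumptions for illustration only). Each block commits to the previous one, so the record of who promised what cannot be quietly rewritten:

```python
import hashlib
import json

def digest(payload: dict, prev_hash: str) -> str:
    # Each block commits to its contents and to the previous block's hash,
    # so any retroactive edit breaks every later link in the chain.
    data = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(data.encode()).hexdigest()

class Ledger:
    """A toy tamper-evident ledger of commitments between agents."""

    def __init__(self):
        genesis = {"genesis": True}
        self.blocks = [{"payload": genesis, "prev": "",
                        "hash": digest(genesis, "")}]

    def commit(self, agent: str, obligation: str):
        prev = self.blocks[-1]["hash"]
        payload = {"agent": agent, "obligation": obligation}
        self.blocks.append({"payload": payload, "prev": prev,
                            "hash": digest(payload, prev)})

    def verify(self) -> bool:
        # Anyone can re-derive the chain; tampering is detectable by all.
        for i in range(1, len(self.blocks)):
            block, prev = self.blocks[i], self.blocks[i - 1]
            if block["prev"] != prev["hash"]:
                return False
            if block["hash"] != digest(block["payload"], block["prev"]):
                return False
        return True

ledger = Ledger()
ledger.commit("designer", "publish quarterly impact data for the system")
assert ledger.verify()
ledger.blocks[1]["payload"]["obligation"] = "nothing"  # rewrite history
assert not ledger.verify()  # the broken chain exposes the edit
```

The design choice doing the accountability work here is the chaining: trust comes not from any single party, but from the fact that every stakeholder can re-derive and verify the record for themselves.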

It seems that technology can work on both sides of a theoretical ethical-accountability equation. If some technologies make it easier to act outside of pre-existing ethical parameters, unseen by the panoply of accountability tools in use, then others can provide stakeholders with more tools to hold each other to account.

Can Technology Be Moral? Yes, it can, given its ability to provide more tools to narrow the gap between agents' actions and their responsibility for those actions. But some technology can be immoral, and stay immoral, without an effective counterweight in place. Technology is therefore amoral as a subject, but very much moral in its role as both a medium and an object for moral actors.

Closing

It will be my honour and pleasure to debate with our two other Trialectic champions, Alice Thwaite and Jamie Woodcock. I am looking forward to what promises to be a learning experience, and to updating this piece accordingly after their expert takes.

Please send us a message or comment on this article if you would like to join the audience (our audience is also expected to jump-in!).
