
Internet Walden: Introduction—What Is the Internet?

Internet users speak with an Internet accent, but the only manner of speaking that doesn’t sound like an accent is one’s own.

What is the Internet? Why ask a question like this? As I mentioned in this piece’s equal companion Why We Should Be Free Online, we are in the Internet like fish in water and forget both its life-giving presence and its nature. We must answer this question because, given a condition of ignorance in this domain, it is not a matter of whether but when one’s rights and pocketbook will be infringed.

What sort of answer does one expect? Put it this way: ask an American what the USA is. There are at least three styles of answer:

  1. Someone who might never have considered the question before might say that this is their country, it’s where they live, etc.
  2. Another might say that America is a federal constitutional republic, bordering Canada to the north and Mexico to the south, etc.
  3. Another might talk about the country’s philosophy and its style of law and life, how, for example, the founding fathers wrote the Constitution so as to express rights in terms of what the government may not do rather than naming any entitlements, or how the USA does not have an official language.

The third is the style of answer that I’m seeking, so as to elucidate the real human implications of what we have: the system’s style and the philosophy behind it. This will tell us, in the case of the Internet as in the case of the USA, why we have what we have, why we are astonishingly fortunate to have it in this configuration, what is wrong, and how best to amend the system or build new systems to better uphold our rights and promote human flourishing.

In pursuit of this goal, I will address what I think are the three main mistaken identities of the Internet:

  1. The Web. The Web is the set of HTML documents, websites and the URLs connecting them; it is one of many applications which run on the Internet.
  2. Computer Network of Arbitrary Scale (CNAS). CNAS is a term of my own creation which will be explained in full later. In short: Ford is a car, the Internet is a CNAS.
  3. Something Natural, Deistic or Technical. As with many other technologies, it is tempting to believe that the way the Internet is derives from natural laws or even technical considerations; these things are relevant, but the nature of the Internet is incredibly personal to its creators and users, and derives significantly from philosophy and other fields.

Finally, I will ask a supporting and related question, Who Owns the Internet? which will bring this essay to a close.

With our attention redirected away from these erroneous ideas and back to the actual Internet, we might then better celebrate what we have, and better understand what to build next. More broadly, I think that we are building a CNAS society and haven’t quite caught up to the fact; we need to understand the civics of CNAS in order to act and represent ourselves ethically. Otherwise, we are idiots: idiots in the ancient sense of the word, meaning those who do not participate.

Pulling on that strand, I claim, and will elaborate later, that we should be students of the Civics of CNASs, in that we are citizens of a connected society; and I don’t mean merely that our pre-existing societies are becoming connected, rather that the new connections afforded by tools like the Internet are facilitating brand new societies.

The Internet has already demonstrated the ability to facilitate communication between people, nationalities and other groups that would, in most other periods of time, have found it impossible not just to get along but to form the basis for communication through which to get along. With an active CNAS citizenry to steward our systems of communication, I expect that our achievements in creativity, innovation and compassion over the next few decades will be unimaginable.

The Internet is Not the Web

You may, dear reader, already be aware of this distinction; please do stick with me, though, as clarifying this misapprehension will clarify much else. The big difference between the Web and the Internet is this: the Internet is the global system of interconnected networks, running on the TCP/IP suite of protocols; the Web is one of the things you can do on the Internet, and other things include email, file-sharing, etc.

It’s not surprising that we confuse the two concepts, or say “go on the Internet” when we mean “go on the Web”, in that the Web is perhaps the Internet application that most closely resembles the Internet itself: people and machines, connected and communicating. This is not unlike how, as a child, I thought that the monitor was the computer, disregarding the grey box. Please don’t take this as an insult; the monitor may not be where the processing happens, but it’s where the things that actually matter to us find a way to manifest in human consciousness.

As you can see in the above diagram, the Web is one of many types of application that one can use on the Internet. It’s not even the only hypermedia or hypertext system (the HT in HTTPS stands for hypertext).

The application layer sits on top of a number of other functions that, for the most part, one barely or never notices, and rightly so. However, we ought to concern ourselves with these things because of how unique and interesting they are, so let’s go through them one by one.

Broadly, the Internet suite is based on a system of layers. As I will explore later on, Internet literature actually warns against strict layering. Caveats aside, the Internet protocol stack looks like this:

Physical

Not always included in summaries of the Internet protocol suite, the physical layer refers to the actual physical connection between machines. This can be WiFi signals, CAT-5 cables, DSL broadband lines, cellular transmissions, etc.

Link

The Link layer handles data transmission between devices. For example, the Link layer handles the transmission of data from your computer to your router, such as via WiFi or Ethernet, and then onward, say over a DSL line, toward the target network (such as the network of a webserver from which you’re accessing a site). The Link layer was specifically designed so that it does not matter what the physical layer actually is, so long as it provides the basic necessities.

Internet

The Link layer handles the transmission between devices, and the Internet layer organizes the jumps between networks: in particular, between Internet routers. The Link layer on its own can get the data from your computer to your router, but to get to the router for the target network, it needs the Internet layer’s help: this is why (confusingly) it’s called the Internet layer; it provides a means of interconnecting networks.

Your devices, your router, and all Internet-accessible machines are assigned the famous IP addresses which, in the common IPv4 form, are 32-bit numbers written as four numbers (one per byte) separated by dots; the newer IPv6 uses longer, 128-bit addresses. My server’s IP address is 209.95.52.144.
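
To make this concrete, here is a minimal Python sketch (standard library only, and merely illustrative) showing that the dotted form is just a convenient way of writing a single 32-bit number, using the address mentioned above:

    import socket
    import struct

    # Pack the dotted-quad string into its raw 4-byte form...
    packed = socket.inet_aton("209.95.52.144")

    # ...and read those 4 bytes as one unsigned 32-bit integer (network byte order).
    as_integer = struct.unpack("!I", packed)[0]

    print(as_integer)                 # 3512677520: the same address as one number
    print(socket.inet_ntoa(packed))   # back to "209.95.52.144"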

This layer thinks in terms of getting data from one IP address to another.

Transport

Now, with the Link and Internet layers buzzing away, transmitting data, the Transport layer works above them both, establishing communication between hosts. This is akin to how I have something of a direct connection with someone to whom I send a letter, even though that letter passes through letterboxes and sorting offices; the Transport layer sets up this direct communication between machines, so that they can act independently with respect to the underlying conditions of the connection itself. There are a number of Transport layer protocols, but the most famous is TCP.

One of the most recognizable facets of the Transport layer is the port number. The TCP protocol assigns numbered “ports” to identify different processes; for the Web, for example, HTTP uses port 80 and HTTPS, port 443. To see this in action, try this tool, which will show you which ports are open on a given host: https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap (try it first with my server, omc.fyi).

This layer thinks in terms of passing data over a direct connection to another host.
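
To make the idea of ports a little more concrete, here is a minimal Python sketch (standard library only) that opens a TCP connection to a web server on port 443, the conventional HTTPS port; example.com is just a placeholder host for the illustration:

    import socket

    # Open a TCP connection (the Transport layer) to example.com on port 443,
    # the port conventionally used for HTTPS.
    with socket.create_connection(("example.com", 443), timeout=5) as connection:
        # The operating system and the network handle the Internet, Link and
        # Physical layers for us; what we hold here is a host-to-host connection.
        print("connected from", connection.getsockname(),
              "to", connection.getpeername())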

Application

The Application layer is responsible for the practicalities associated with doing the things you want to do: HTTPS for the Web, SMTP for email, SSH for a remote command-line connection, are all Application layer protocols. If it wasn’t clear by now, this is where the Web lives: it is one of many applications running on the Internet.

How it Works in Practice

Here’s an example of how all this works (a short code sketch follows the list):

  • Assume a user has clicked on a Web link in their browser, and that the webserver has already received this signal, which manifests in the Application layer. In response, the server sends the desired webpage, using HTTPS, which lives on the Application layer.
  • The Transport Layer is then responsible for establishing a connection (identified by a port) between the server and the user’s machine, through which to communicate.
  • The Internet Layer is responsible for transmitting the appropriate data between routers, such as the user’s home router and the router at the location of the Web server.
  • The Link Layer is responsible for transmitting data between the user’s machine and their router, between their router and the router for the webserver’s network, and between the webserver and its router.
  • The Physical Layer is the physical medium that connects all of this: fiberoptic cable, phone lines, electromagnetic radiation in the air.
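
To tie this to something runnable, here is a minimal Python sketch of the application layer’s view of that exchange, using only the standard library; everything beneath the Application layer is handled invisibly by the operating system and the network, and example.com is just a placeholder host:

    import urllib.request

    # Application layer: request a web page over HTTPS.
    # The Transport, Internet, Link and Physical layers all do their work
    # beneath this single call.
    with urllib.request.urlopen("https://example.com/") as response:
        page = response.read()
        print(response.status, "-", len(page), "bytes received")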

Why is this interesting? Well, firstly, I think it’s interesting for its importance; as I claim in this piece’s equal counterpart on Internet freedom, the Internet is used for so much that questions of communication are practically the same as questions of the Internet in many cases. Secondly, the Internet is interesting for its peculiarity, which I will address next.

“Internet” Should Not Be Synonymous with “Computer Network of Arbitrary Scale”

When addressing the Internet as a system, there appear to be two ways in which people use the word:

  • One refers to the Internet as in the system we have now and, in particular, that runs on the TCP/IP protocol suite.
  • The other refers to the Internet as a system of interconnected machines and networks.

Put it this way: the first definition is akin to a proper noun, like Mac or Ford; the second is a common noun, like computer or car.

This is not uncommon: for years I really thought that “hoover” was a generic term, and learned only a year or so ago that TASER is a brand name (the generic term is “electroshock weapon”). Then of course we have non-generic names that are, sometimes deliberately so, generic-sounding: “personal computer” causes much confusion, in that it could refer to IBM’s line of computers by that name, something compatible with the former, or merely a computer for an individual to use; there is of course the Web, which is one of many hypertext systems that allow users to navigate interconnected media at their liberty, and whose name sounds merely descriptive but, in fact, refers to a specific system of protocols and styles. The same is true for the Internet.

For the purpose of clarifying things, I’ve coined a new term: computer network of arbitrary scale (CNAS or seenas). A CNAS is:

  1. A computer network
  2. Using any protocol, technology or sets thereof
  3. That can operate at any scale

Point three is important: we form computer networks all the time, but one of the things about the Internet is that its protocols are robust enough for it to be global. If you activate the WiFi hotspot on your phone and have someone connect, that is a network, but it’s not a CNAS because, configured thus, it would have no chance of scaling. So, not all networks are CNASs; today, the only CNAS is the thing we call the Internet, but I think this will change in a matter of years.

There’s a little wiggle room in this definition: for example, the normal Internet protocol stack cannot work in deep space (hours of delay due to the absurd distances, and connections that drop in and out because the sun gets in the way, make it hard), so one could argue that today’s Internet is not a CNAS because it can’t scale arbitrarily.

I’d rather keep this instability in the definition:

  • Firstly, because (depending on upcoming discoveries in physics) it may be possible that no network can scale arbitrarily: there are parts of the universe that light from us will never reach, because of cosmic expansion.
  • Secondly, because the overall system in which all this talk is relevant is dynamic (we update our protocols, the machines on the network change and the networks themselves change in size and configuration); a computer network that hits growing pains at a certain size, and then surmounts them with minor protocol updates, didn’t cease to be a CNAS and then become one again.

Quite interestingly, in this RFC on “bundle protocol” (BP) for interplanetary communication (RFCs being a series of publications by the Internet Society, setting out various standards and advice) the author says the following:

BP uses the “native” internet protocols for communications within a given internet. Note that “internet” in the preceding is used in a general sense and does not necessarily refer to TCP/IP.

This is to say that people are creating new things that have the properties of networking computers, and can scale, but are not necessarily based on TCP/IP. I say that we should not use the term Internet for this sort of thing; we ought to differentiate so as to show how undifferentiated things are (on Earth).

Similarly, much of what we call the internet of things isn’t really the Internet. For example, Bluetooth devices can form networks, sometimes very large ones, but it’s only really the Internet if they connect to the actual Internet using TCP/IP, which doesn’t always happen.

I hope, dear reader, that you share with me the sense that it is absolutely absurd that our species has just one CNAS (the Internet) and one hypertext system with anything like global usage (the Web). We should make it our business to change this:

  • One, to give people some choice
  • Two, to establish some robustness (the Internet itself is robust, but relying on a single system to perform this function is extremely fragile)
  • Three, to see if we can actually make something better

At this point I’m reminded of the scene in the punchy action movie Demolition Man, in which the muscular protagonist (frozen for years and awoken in a strange future civilization) is taken to dinner by the leading lady, who explains that in the future, all restaurants are Taco Bell.

This is and should be recognized to be absurd. To be clear, I’m not saying that the Internet is anything like Taco Bell, only that we ought to have options.

The Internet is not Natural, Deistic or even that Technical

I want to rid you of a dangerous misapprehension. It is a common one, but, all the same, I can’t be sure that you suffer from it; all I can say is that, if you’ve already been through this, try to enjoy rehearsing it with me one more time.

Here goes:

Many, if not most, decisions in technology have little to do with technical considerations, or some objective standard for how things should be; for the most part they relate, at best, to personal philosophy and taste, and, at worst, ignorance and laziness.

Ted Nelson provides a lovely introduction, here:

Here’s a ubiquitous example: files and folders on your computer. Let’s say I want to save a movie, 12 Angry Men, on my machine: do I put it in my Movies that Take Place Mainly in One Room folder with The Man from Earth, or my director Sidney Arthur Lumet folder with Dog Day Afternoon? Ideally I’d put it in both, but most modern operating systems will force me to put it in just one folder. In Windows (very bad) I can make it sort of show up in more than one place with “shortcuts” that break if I move the original; with MacOS (better) I have “aliases”, which are more robust.

But why am I prevented from putting it in more than one place, ab initio? Technically, especially in Unix-influenced systems (like MacOS, Linux, BSD, etc.) there is no reason why not to: it’s just that the people who created the first versions of these systems decades ago didn’t think you needed to, or thought you shouldn’t—and it’s been this way for so long that few ask why.

A single, physical file certainly can’t be in more than one place at a time, but the whole point of electronic media is the ability to structure things arbitrarily, liberating us from the physical.
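
For the curious, here is a small sketch of that point on a Unix-like system, in Python: a hard link makes one and the same file appear in two folders at once (the file and folder names here are made up for the example):

    import os

    # Two folders that might both deserve the film.
    os.makedirs("one-room-films", exist_ok=True)
    os.makedirs("lumet-films", exist_ok=True)

    # Create the file in the first folder...
    with open("one-room-films/12-angry-men.txt", "w") as f:
        f.write("stand-in for the movie file")

    # ...and hard-link it into the second: the same file, in two places at once.
    os.link("one-room-films/12-angry-men.txt", "lumet-films/12-angry-men.txt")

    # Both paths now refer to the very same data on disk.
    print(os.path.samefile("one-room-films/12-angry-men.txt",
                           "lumet-films/12-angry-men.txt"))  # True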

Technology is a function of constraints (the things that hold us back, like processor speed, how much data can pass through a communications line, money) and of values: values shape the ideas, premises and conceptual structures that we use to design and build things, and those things often reflect the nature of their creators; they can be open, closed, free, forced, messy, neat, abstract, narrow.

As you might have guessed, the creators and administrators of technology often express choices (such as how a file can’t be in two places at once) as technicalities; sometimes this is a tactic to get one’s way, sometimes just ignorance.

Why does this matter? It matters because we won’t get technology that inculcates ethical action in us and that opens the scope of human imagination by accident; we need the right people with the right ideas to build it. In the case of the Internet, we were particularly fortunate. To illustrate this, I’m going to go through two archetypical values that shaped what the Internet became, and explore how things could have been otherwise.

Robustness

See below a passage from RFC 1122. It’s on the long side, but I reproduce it in full for you to enjoy the style and vision:

At every layer of the protocols, there is a general rule whose application can lead to enormous benefits in robustness and interoperability [IP:1]:

“Be liberal in what you accept, and conservative in what you send”

Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; mere human malice would never have taken so devious a course!

Adaptability to change must be designed into all levels of Internet host software. As a simple example, consider a protocol specification that contains an enumeration of values for a particular header field — e.g., a type field, a port number, or an error code; this enumeration must be assumed to be incomplete. Thus, if a protocol specification defines four possible error codes, the software must not break when a fifth code shows up. An undefined code might be logged (see below), but it must not cause a failure.

The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features. It is unwise to stray far from the obvious and simple, lest untoward effects result elsewhere. A corollary of this is “watch out for misbehaving hosts”; host software should be prepared, not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility.

This is not just good technical writing; this is some of the best writing. In just a few lines, Postel sets out the not-a-case-of-whether-but-when orientation that can be applied almost universally, and which almost predicts Taleb’s Ludic Fallacy: the things that really hurt you are those for which you weren’t being vigilant, precisely because they don’t belong to familiar, mathematical-feeling or game-like scenarios. Taleb identifies another error type, not planning enough for the scale of the damage; Postel understood that in a massively interconnected environment, small errors can compound into something disastrous.

Then Postel explains one of the subtler parts of his imperative: on first reading, I had thought that “Be liberal in what you accept” meant “Permit communications that are not fully compliant with the standard, but which are nonetheless parseable”. It goes beyond this, meaning that one should do so while being liberal in an almost metaphorical sense: being tolerant of, and therefore not breaking down in response to, aberrant behavior.

This is stunningly imaginative: Postel set out how Internet hosts might communicate without imposing uniform versions of the software on all Internet users. Remember, as I mention in this essay’s counterpart on freedom, that the Internet is stunningly interoperable: today, in 2021, you still can’t reliably switch storage media formatted for Mac and Windows, but it’s so easy to hook new devices up to the Internet that people seem to say why not, giving us Internet toothbrushes and fridges.

Finally, the latter part, calling hosts to be “conservative in what you send”, is likewise more subtle than one might gather on first reading. It doesn’t mean merely that one should adhere to the standards (which is hard enough); it means doing so while avoiding anything that, while permitted, risks causing issues in other devices that are out of date or not set up properly. Don’t just adhere to the standard: imagine whether some part of the standard might be obscure or new enough that using it might cause errors.

This supererogation reaches out of the bounds of mere specification and into philosophy.
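
As a toy illustration of both halves of the principle (a sketch against an imaginary protocol, not any real one), here is how a receiver might tolerate an unknown error code, and how a sender might stick to the codes every peer is likely to understand:

    import logging

    # Error codes defined by an imaginary protocol specification.
    KNOWN_ERROR_CODES = {0: "ok", 1: "retry", 2: "refused", 3: "timeout"}

    def receive(code: int) -> str:
        """Be liberal in what you accept: log an unknown code, don't crash."""
        if code not in KNOWN_ERROR_CODES:
            logging.warning("unknown error code %d received; continuing anyway", code)
            return "unknown"
        return KNOWN_ERROR_CODES[code]

    def choose_code_to_send(preferred: int) -> int:
        """Be conservative in what you send: avoid codes an older peer may not know."""
        # For this sketch, assume codes 0 and 1 were in the original version of
        # the specification, so every implementation should understand them.
        return preferred if preferred in (0, 1) else 0

    print(receive(5))              # "unknown", with a warning logged rather than a crash
    print(choose_code_to_send(3))  # falls back to 0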

Postel’s Law is, of course, not dogma, and people in the Internet community have put forward proposals to move beyond it. I’m already beyond my skill and training, so can’t comment on the specifics here, but wish to show only that the Law is philosophical and beautiful, not necessarily perfect and immortal.

Simplicity

See RFC 3439:

“While adding any new feature may be considered a gain (and in fact frequently differentiates vendors of various types of equipment), but there is a danger. The danger is in increased system complexity.”

And RFC 1925:

“It is always possible to aglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.”

You might not need more proof than the spelling error to understand that the Internet was not created by gods. But if you needed more, I wish for you to take note of how these directives relate to a particular style of creation, the implication being that the Internet could have gone many other ways, ways that would have made our lives very different.

Meanwhile, these ideas are popular but actually run quite against the grain. With respect to the first point, it’s quite hard to find explicit arguments to the contrary; this seems to be because features are the only way to get machines to do things, and doing things is what machines are for. This is much the same reason why there’s no popular saying meaning “more is more” while we do have “less is more”: more really is more, but things get weird with scale.

The best proponents for features and lots of them are certainly software vendors themselves, like Microsoft here:

Again, I’m not saying that features are bad—everything your computer does is a feature. This is, however, why it’s so tempting to increase them without limit.

Deliberately limiting features, or at least spreading features among multiple self-contained programs, appears to have originated within the Unix community, and is best encapsulated by what is normally called the Unix philosophy. Here are my two favorite points (out of four, from one of the main iterations of the philosophy):

  1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
  2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information.

The first point, there, neatly encompasses the two ideas referenced before in RFCs: don’t add too many features, don’t try to solve all your problems with one thing.

This philosophy is best expressed by the Internet Protocol layer of the stack (see the first section of this essay for our foolishly heathen recap of the layers). It is of course tempting to have IP handle more stuff; right now all it does is route traffic between the end users, and those users are responsible for anything more clever than that. This confers two main advantages:

  1. Simple systems mean less stuff to break; connectivity between networks is vital to the proper function of the Internet, better to lighten the load on the machinery of connection and have the devices on the edge of the network be responsible for what remains.
  2. Adding complex features to the IP layer, for example, would add new facilities that we could use; but any new feature imposes a cost on all users, whether it’s widely used or not. Again, better to keep things simple when it comes to making connections and transmitting data, and get complex on your own system and your own time.

At risk of oversimplifying: the way the Internet is derives from a combination of technical considerations, ingenuity and many philosophies of technology. There are, one can imagine, better ways in which we could have done this; but for now I want to focus on what could have been: imagine if the Internet had been built by IBM (it would have been released in the year 2005 and would require proprietary hardware and software) or Microsoft (it would have come out at around the same time, but would run via a centralized system that crashes all the time).

Technology is personal first, philosophical second, and technical last; corollary: understand the philosophy of tech, and see to it that you and the people that make your systems have robust and upright ideas.

Who Owns and Runs the Internet?

As seems to be the theme: there’s a good deal of confusion about who owns and runs the Internet, and our intuitions can be a little unhelpful because the Internet is an odd creature.

We have a long history of understanding who owns physical objects like our computers and phones, and if we don’t own them fully, have contractual evidence as to who does. Digital files can be more confusing, especially if stored in the cloud or on third party services like social media. See this piece’s counterpart on freedom for my call to action to own and store your stuff.

That said, a great deal of the Internet, in terms of software and conceptually, is hidden from us, or at least shows up in a manner that is confusing.

The overall picture looks something like this (from the Internet Engineering Task Force):

“The Internet, a loosely-organized international collaboration of autonomous, interconnected networks, supports host-to-host communication through voluntary adherence to open protocols and procedures defined by Internet Standards.”

Hardware

First, the hardware. Per its name, the Internet interconnects smaller networks. Those networks—like the one in your home, an office network, one at a university, or something ad hoc that you set up among friends—are controlled by the uncountable range of individuals and groups that own networks and/or the devices on them.

Don’t forget, of course, that the ownership of this physical hardware can be confusing, too: it’s my home network, but Comcast owns the router.

Then you have the physical infrastructure that connects these smaller networks: fiberoptic cables, ADSL lines, wireless (both cellular and WISP setups), which is owned by Internet service providers (ISPs). Quite importantly, the term ISP says nothing about nature or organizational structure: we often know ISPs as huge companies like AT&T, but ISPs can be municipal governments, non-profits, small groups of people or even individuals.

Don’t assume that you have to settle for internet service provided by a supercorporation. There may be alternatives in your area, but their marketing budgets are likely small, so you need to look for them. Here are some information sources:

ISPs have many different roles, and transport data varying distances and in different ways. Put it this way: to get between two hosts (e.g. your computer and a webserver) the data must transit over a physical connection. But there is no one organization that owns all these connections: it’s a patchwork of different networks, of different sizes and shapes, owned by a variety of organizations.

To the user, the Internet feels like just one thing: we can’t detect when an Internet data packet has to transition between, say, AT&T’s cabling to Cogent Communications’—it acts as one thing because (usually) the ISPs coordinate to ensure that the traffic gets where it is supposed to go. The implication of this (which I only realized upon starting research for this article) is that the ISPs have to connect their hardware together, which is done at physical locations known as Internet exchange points, like the Network Access Point of the Americas, where more than 125 networks are interconnected.

Intangibles

The proper function of the Internet relies heavily on several modes of identifying machines and resources online: IP addresses and domain names. There are other things, but these are the most important and recognizable.

At the highest level, ICANN manages these intangibles. ICANN is a massively confusing and complicated organization to address, not least because it has changed a great deal, and because it delegates many of its important functions to other organizations.

I’m going to make this very quick and very simple, and for those who would like to learn more, see the Wikipedia article on Internet governance. ICANN is responsible for three of the things we care about: IP addresses, domain names, and Internet technology standards; there’s more, but we don’t want to be here all day. There must be some governance of IP addresses and domain names, if nothing else to ensure that no single IP address is assigned to more than one device, or one domain name to more than one owner.

The first function ICANN delegates to one of several regional organizations (the regional Internet registries) that hand out unique IP addresses. IP addresses themselves aren’t really ownable in a normal sense; they are assigned.

The second function was once handled by ICANN itself, now by an affiliate organization, Public Technical Identifiers (PTI). Have you heard of this organization before? It is very important, but doesn’t even have a Wikipedia page.

PTI, among other things, is responsible for managing the domain name system (DNS) and for delegating to the companies and organizations that manage and sell these domains, such as VeriSign, GoDaddy and Tucows. I might register my domain with GoDaddy, for example, but, quite importantly, I don’t own it; I just have the exclusive right to use it.

These organizations allow users to register the domains, but PTI itself manages the very core of the DNS, the root zone. The way DNS works is actually rather simple (a short code sketch follows the list). If your computer wishes, say, to pull up a website at site.example.com:

  1. It will first ask the DNS root where to find the server responsible for the com zone.
  2. The DNS root server will tell your computer the IP address of the server responsible for com.
  3. Your machine will then go to this IP address and ask the server where to find example.com.
  4. The com server will tell you where to find example.com.
  5. And, finally, the example.com server will tell you where to find site.example.com.
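
For the technically curious, here is a minimal Python sketch of that lookup (standard library only). Your machine normally hands the whole walk to a recursive resolver, so a single call stands in for steps 1 to 5; site.example.com is just a placeholder name:

    import socket

    # Ask the system's resolver for the address of a host. Behind this one call,
    # a recursive resolver performs the walk described above: it asks a root
    # server (which points it at the com servers), then a com server (which
    # points it at the example.com servers), then an example.com server, which
    # returns the final answer.
    hostname = "site.example.com"  # placeholder; substitute any real hostname
    try:
        print(hostname, "resolves to", socket.gethostbyname(hostname))
    except socket.gaierror as error:
        print("lookup failed:", error)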

You might have noticed that this is rather centralized; it’s not fully centralized in that everything after the first lookup (where we found how to get to com) is run by different people, but it’s centralized to the extent that PTI controls the very core of the system.

Fundamentally, however, the PTI can’t prevent anyone else from providing a DNS service: computers know to go to the official DNS root zone, but can be instructed to get information from anywhere. As such, here are some alternatives and new ideas:

  • GNS, via the GNUNET project, which provides a totally decentralized name system run on radically different principles.
  • Handshake, which provides a decentralized DNS, based on a cryptographic ledger.
  • OpenNIC, which is not as radical as GNS or Handshake, but which, not being controlled by ICANN, provides a range of top-level domains not available via the official DNS (e.g. “.libre” which can be accessed by OpenNIC users only).

The Internet Engineering Task Force (IETF) handles the third function, which I will explore in the next section.

Before ICANN, Jon Postel, mentioned above, handled many of these functions personally: on a voluntary basis, if you please. ICANN, created in 1998, is a non-profit: it was originally contracted to perform these functions by the US Department of Commerce. In 2016, the Department of Commerce made it independent, performing its duties in collaboration with a “multistakeholder” community, made up of members of the Internet technical community, businesses, users, governments, etc.

I simply don’t have the column inches to go into detail on the relative merits of this (which is better: DOC control, the multistakeholder model, or something else?). Of course, there are plenty of individuals and governments that would have the whole Internet, or at least the ICANN functions, be government controlled: I think we ought to fight this with much energy, because we can guarantee that any government with this level of control would use it to victimize its enemies.

I think I’m right in saying that in 1998 there was no way to coordinate the unique assignment of IP addresses and domain names without some central organization. Not any more: Handshake, GNUNET (see above) and others are already pioneering ways to handle these functions in a decentralized way. See subsequent articles in this series for more detail.

Dear reader, you may be experiencing a feeling somewhat similar to what I felt, such as when first discovering that there are alternative name systems. That is, coming upon the intuition that the way technology is generally set up today is not normal or natural; rather, it is done by convention and, at that, is one among many alternatives.

If you are starting to feel something like this, or already do, I encourage you to cultivate this feeling: it will make you much harder to deceive.

Standards

The Internet is very open, meaning that all you need, really, to create something for the Internet is the skill to do so; this doesn’t mean you can do anything or that anything is possible (there are legal and technical limitations). One of the many results of this openness is that no single organization is responsible for all the concepts and systems used on the Internet.

This is not unlike how, in open societies, there is no single organization responsible for all the writing that is published: you only get this sort of thing in dictatorships. Contrast this, for example, to the iPhone and its accompanying app store, for which developers must secure permission in order to list their apps. I, unlike some others, say that this is not inherently unethical: however, we are all playing a game of choose your own adventure, and the best I can do is commend the freer adventure to you.

There are, however, a few very important organizations responsible for specifying Internet systems. Before we address them, it’s worth looking at the concept of a standards organization. If you’re already familiar, please skip this.

  1. What is a standard, in this context? A standard is a description of the way things work within a particular system such that, if someone follows that standard, they will be able to create things that work with others that follow the standard. ASCII, USB and, of course, Internet Protocol are all standards.
  2. Why does this matter? I address this question at length in this piece’s counterpart on freedom; put simply, standards are like languages, they facilitate communication. USB works so reliably, for example, because manufacturers and software makers agree to the standard, and without these agreements, we the users would have no guarantee that these tools would operate together.
  3. Who creates the standards? Anyone can create a standard, but standards matter to the extent that they are adopted by the creators of technology and used. Quite commonly, people group together for the specific purpose of creating a standard or group of standards; sometimes this is a consortium of relevant companies in the field (such as the USB Implementers Forum), sometimes an organization specifically set up for this purpose, such as the ISO or ITU. Other times, a company might create a protocol for its own purposes, which becomes the de facto standard; this is often, but not necessarily, undesirable, because that firm will likely have created something to suit its own needs rather than those of the whole ecosystem. Standards like ASCII and TCP/IP, for example, are big exceptions to the popular opprobrium for things designed by committees.

In the case of the Internet, the main standards organization is the Internet Engineering Task Force (IETF); you can see their working groups page for a breakdown of who does what. Quite importantly, the IETF is responsible for specifying Internet Protocol and TCP, which, you will remember from above, represent the core of the Internet system.

The IETF publishes the famous RFC series that I have referenced frequently. The IETF itself is part of the Internet Society, a non-profit devoted to stewarding the Internet more broadly. Do you care about the direction of the Internet? Join the Internet Society: it’s free.

There are other relevant standards, far too many to count; it’s incumbent upon me to mention that the World Wide Web Consortium handles the Web, one of the Internet’s many mistaken identities.

Nobody is forcing anyone to use these standards; nor is the IETF directly financially incentivized to have you use them. Where Apple makes machines that adhere to its standards and would have you buy them (and will sue anyone that violates its intellectual property), all the Internet Society can do is set the best standard it can and commend it to you, and perhaps wag its finger at things non-compliant.

If I wanted to, I could make my own, altered version of TCP/IP; the only disincentive to use it would be the risk that it wouldn’t work or, if it only played with versions of itself, that I would have no one to talk to. What I’m trying to say is that the Internet is very open, relative to most systems in use today: the adoption of its protocols is voluntary, manufacturers and software makers adhere to these standards because it makes their stuff work.

There is, of course, Internet coercion, and all the usual suspects are clamoring for control, every day: for my ideas on this subject, please refer to this piece’s counterpart on freedom.

Conclusion: We Need a Civics of Computer Networks of Arbitrary Scale, or We Are Idiots

I propose a new field, or at least a sub-field: the civics of CNASs; which we might consider part of the larger field of civics and/or the digital humanities. Quite importantly, this field is distinct from some (quite interesting) discussions around “Internet civics” that are really about regular civics, just with the Internet as a medium for organization.

I’m talking about CNASs as facilitating societies in themselves, which confer rights, and demand understanding, duties, and reform. And please, let’s not call this Internet Civics, which would be like founding a field of Americs or Britanics and calling our work done.

So, to recapitulate this piece in the CNAS civics mode:

  1. The subject of our study, the Internet, is often confused with the Web, not unlike the UK and England, or Holland and the Netherlands. This case of mistaken identity is consequential because it deceives people as to what they have and how they might influence it.
  2. The Internet is also confused with the class of things to which it belongs: computer networks of arbitrary scale (CNAS). This is deceptive because it robs us of the sense (as citizens of one country get by looking at another country) that things can be done differently, while having us flirt with great fragility.
  3. The Internet’s founding fathers are much celebrated and quite well known in technical circles, but their position in the public imagination is dwarfed by that of figures from the corporate consumer world, despite the fact that the Internet is arguably the most successful technology in history. Because of this obscurity, there’s the sense that the Internet’s design is just so, normal or objective, or worse, magical, when quite the opposite is true: the Internet’s founding fathers brought their own philosophies to its creation, and the proper understanding of any thing can’t omit its founding and enduring philosophy.
  4. The Internet’s structure of governance, ownership and organization is so complex that it is a study unto itself. The Internet combines immense openness with a curious organizational structure that includes a range of people and interest groups, while centralizing important functions among obscure, barely-known bodies. The Internet Society, which is the main force behind Internet technology, is free to join, but has only 70,000 members worldwide; Internet users are both totally immersed in it and mostly disengaged from the idea of influencing it.

As I say in this piece’s counterpart on freedom, the Internet is a big, strange, unique monster: one that all the usual suspects would have us carve up and lobotomize for all the usual reasons; we must prevent them from doing so. This means trading in the ignorance and disengagement for knowledge and instrumentality. Concurrently, we must find new ways of connecting and structuring those connections. If we do both of these things, we might have a chance of building and nurturing the network our species deserves.


Internet Walden: Introduction—Why We Should Be Free Online

Image credit: Marta de la Figuera

Goodness, truth, beauty. These are not common terms to encounter during a discussion of the Internet or computers; for the most part, the normal model seems to be that people can do good or bad things online, but the Internet is just technology.

This approach, I think, is one of the gravest mistakes of our age: thinking or acting as though technology is separate from fields like philosophy or literature, and/or that criticisms from other fields are either irrelevant or at best secondary to technicalities. This publication, serving the Digital Humanities, is part of a much-needed correction.

I say that technology can be just or unjust in the same sense that a law can: an unjust law doesn’t possess the sort of ethical failing available only to a sentient being, rather, it has an ethical character (such as fairness or unfairness) as does the action that it encourages in us. We should accept the burden of seeing these qualities as ways: towards or away from goodness, truth and beauty.

Such a way is akin to a method or a path, like for example meditation or the practice of empathy: it’s not necessarily virtuous in itself, but the idea is that with consistent application one develops one’s virtue, or undermines it. My claim is that this is especially true for the ways in which we use technology, both as individuals and collectively.

In Computer Lib, Ted Nelson describes “the mask of technology,” which serves to hide the real intentions of computer people (technicians, programmers, managers, etc.) behind pretend technical considerations. (“We can’t do it that way, the computer won’t allow it.”) There’s another mask that works in the opposite way: the mask of technological ignorance. We wear it either to avoid facing difficult ethical questions about our systems (hiding behind the fact that we don’t understand the systems) or as an excuse when we offload responsibilities onto others.

This essay concerns itself primarily with three ways: the secondary characteristics that lend themselves to our pursuit of goodness, truth and beauty, specifically in the technology of communication. They are freedom, interoperability, and ideisomorphism; the last is a concept which I haven’t heard defined before, but which can be summarized thus: the quality of systems which are both flexible enough to express the complexity and nuance of human thought and which have features that lend themselves to the shape of our cognition. (Ide, as in idea, iso as in equal to, morph as in shape.)

We should care about freedom, because we require it to build and experiment with systems in pursuit of the good; interoperability, because it forces us to formulate the truth in its purest form and allows us to communicate it; ideisomorphism, because it allows us to combine maximal taste and creativity with minimal technological impositions and restrictions in our pursuit of beauty. For details on these ways, please read on.

I won’t claim that this is a complete treatment of the ethical character of machines; my subject is machines for communication, and the best I can hope for is to start well.

In short, bad communications technology causes and covers up ethical failures. Take off the mask. We have nothing to lose but convenient excuses, and stand to gain, firstly, tools that act as force-multipliers for our best qualities; secondly, some of the ethical clarity that comes from freedom and diverse conversation; and, if nothing else, a better understanding of ourselves.

Oliver Meredith Cox, January 28th, 2021

An Introduction to Walden: Life on the Internet

I argue that anyone who cares about human flourishing should concern themselves with Internet freedom, interoperability and ideisomorphism; I make this claim because the ethical character of the Internet appears to be the issue which contains or casts its shadow over the greatest number of other meaningful issues, and because important facts about the nature of the Internet are ways to inculcate our highest values in ourselves.

The Internet offers us the opportunity to shed the mask of technological ignorance: by understanding its proper function we should know how to spot the lies, and how to use the technology freely. We might then transform it into the retina and brain of an ideisomorphic system that molds to and enhances, rather than constricting, our cognition.

As such, with respect to the Internet, I say that we should:

  1. Learn/understand the tools and their nature.
  2. Use, build or demand tools that lend themselves to being learned.
  3. Use, build or demand tools that promote the scale, nuance and synchrony of human imagination, alongside those that nurture people’s capacity to communicate.

This piece is one part of a two-part introduction to the series; both parts are equal in importance and in order.

The other (What Is the Internet?) is an introduction to the technology of the Internet itself, so whenever questions come up about such things or if anything is not clear, consider either referring to that piece or reading it first; specifically, one of the reasons why I think we should turn our attention very definitely to the Internet is the fact that most people know so little about it, categorize it incorrectly and mistake it for other things (many confuse the Internet and the Web, for example).

Prospectus

  • Part 1: Introduction
    • Why We Should be Free Online: (this article) in which I explain why you should care about Internet freedom.
    • What Is the Internet: An explanation of what the Internet is (it probably isn’t what you think it is).
  • Part 2: Diary
    • Hypertext (one of an indefinite number of articles on the most popular and important Internet technologies)
    • Email
    • Cryptocurrency
    • Your article here: want to write an article for this series? Reach out: oliver dot cox at wonk bridge dot com.
  • Part 3: Conclusion
    • What Should We Create? A manifesto for what new technology we should create for the purpose of communicating with each other.

Call to Action

Do you care about Internet freedom and ethics? Do you want to take an Internet technology, master it and use it on your own terms? Do you want to write about it? Reach out: oliver dot cox at wonk bridge dot com.

A Note on Naming

For a full discussion on naming conventions, please see this piece’s companion, What Is the Internet?. However, I must clarify something up front. Hereafter, I will use a new term: computer network of arbitrary scale (CNAS [seenas]), which refers to a network of computers and/or the technology used to facilitate it, which can achieve arbitrarily large scale.

I use it to distinguish between 1. the Internet in the sense of a singular brand, and 2. the class of networks of which the Internet is one example. The Internet is our name for the network running on a particular set of protocols (TCP/IP); it is a CNAS, and today it is the only CNAS. Imagine if a single brand so dominated an industry, like if Ford sold 99.9 percent of cars, so that there was no word for “car” (you would just say “Ford”), and you could hardly imagine the idea of there being another company. But I predict that soon there will be more, and that they will be organized in different ways and run on different protocols.

Why the Internet Matters

First: the question of importance. Why do I think that the Internet matters relative to any other question of freedom that one might have? I know very well that many people with strong opinions think their subject the most important of all: politics, art, culture, literature, sport, cuisine, technology, engineering; if you care about something, it is likely that you think others should care, too. I know that there isn’t much more that I can do than to take a number, stand in line, and make my case that I should get an audience with you.

Here’s why I think you should care:

1. The Internet is engulfing everything we care about.

I won’t bore you with froth on the number of connected devices around or how everyone has a smartphone now; rather, numerous functions and technologies that were separate and many of which preceded the Internet are being replaced by it, taking place on it, or merging with it: telephony, mail, publishing, science, the coordination of people, commerce.

2. The Internet is the main home of both speech and information retrieval.

This is arguably part of the first point, but I think it deserves its own column inches: most speech and information exchange now happens online, most legacy channels (such as radio) are partly transmitted over the Internet, and even those media that are farthest from the digital (perhaps print) are organized using the Internet. At risk of over-reaching, I say that the question of free speech in general is swiftly becoming chiefly a question of free speech online. Or, conversely, that offline free speech is relevant to the extent that online speech isn’t free.

3. The Internet is high in virtuality.

When I claim above that this is the issue of all issues, someone might respond, “What, is it more important than food?” That is a strong point, and I am extremely radical when it comes to food: I think that people should understand what they eat, know what’s in it, hold food corporations to account, and that to the extent that we don’t know how to cook or work with food, we will always be victim to people who want to control or fleece us. However, the Internet and cuisine are almost as far apart on the scale of virtuality as it is possible to be.

Virtuality, as defined by Ted Nelson, describes how something can seem or feel a certain way, as opposed to how it actually is, physically. For example, a ladder has no virtuality: (usually) the way it looks and how we engage with it corresponds 100% to the arrangement of its parts. A building, on the other hand, has much more virtuality: the lines and shape of a building give it a mood and feel, beyond the mere structure of the bricks, cement and glass.

Food has almost no virtuality (apart from cuisine with immense artifice); the Internet, however, has almost total virtuality: the things that we do with it, the Web, email, cryptocurrency, have realities on the screen and in our imagination that are almost limitless, and the only physical things that we typically notice are the “router” box in our home, the WiFi symbol on our device, the engineer in their truck and, of course, the bill. This immense virtuality is both what makes the Internet so profound and what makes it so dangerous: there are things going on beneath the virtual that threaten our rights. You are free to the extent that you understand and control these things.

Ted Nelson explains virtuality during his TED conference speech (start at 31:16):

4. The Internet has lots of technicalities, and the technicalities have bad branding.

All this stuff (TCP/IP, DNS, DHCP, the barrage of initialisms) is hard to master and confusing, especially for non-engineers and the non-technical. I’m sorry, but I think we should all have to learn it, or at least some of it. Not understanding something gives organizations with a growth obligation perhaps the best opportunity to extract profit or freedom from you.

5. The Internet is the best example that humanity has created of an open, interoperable system to connect people.

It is our first CNAS. As with fish and water, it is easy to forget what we have achieved in the form of the Internet: it connects people of all cultures, religions and nationalities (those that are excluded are usually so because of who governs them, not who they are), it works on practically all modern operating systems, it brings truths about the universe to those in authoritarian countries or oppressive cultures, and it connects the breadth of human thinkers together.

To see the profundity of this achievement, remember that, today, many Mac- and Windows-formatted disks are incompatible with the other system, and that computer firms still attempt to lock their customers into using their systems by trapping them in formats and ways of thinking that don’t work with other systems; or even that, culturally, some people refuse to use other systems or won’t permit them in their corporate, university or other departments.

Mac, Windows, GNU/Linux, Unix, BSD, Plan 9, you name it: it will be able to connect to the Internet, which is the best example of a system that can bridge types of technology and people. Imagine separate and incompatible websites, only for users of particular systems: this was an entirely possible outcome, and we’re lucky it didn’t happen a lot more than the little it did (see Flash). The Internet, despite its failures and limitations, massively outclasses other technology on a regular basis, and is therefore something of a magnetic north, pulling worse, incompatible and closed systems along with it.

6. The Internet is part of a technological feedback loop.

As I mentioned in point 2. above, the Internet is now the main way in which we store, access and present information; the way in which we structure and present information today influences what we want to pursue in the future, the ideas we have and what, ultimately, we build. The Internet hosts and influences an innovation cycle:

  1. Available storage and presentation systems influence how we think
  2. The way we think influences our ideas
  3. Our ideas influence the technology we build, which takes us back to the start

This means that bad, inflexible, closed systems will have a detrimental effect on future systems, just as open, flexible systems will engender better future systems. There is innovation, of course, but many design paradigms and ways of doing things get baked in, and sometimes are compounded. As such, I say that we ought to exert immense effort in creating virtuous Internet systems, such that these systems will compound into systems of even more virtue: much like how those who save a lot, wisely and early, are (allowing for market randomness, disaster and war) typically rewarded with comfort decades later.

Put briefly: the Internet is the most integrating and connecting force in history; it is difficult, it is high in virtuality, and it works in a feedback loop; it is the best we have, it is under constant threat, and we need to take action now.

The rest of this introduction will speak to the following topics:

  • Six imperatives for communications freedom
  • What we risk losing if we don’t shape the Internet to our values
  • Why, ultimately, I’m optimistic about technology and particularly the technology of connection
  • Why this moment of crisis tells us that we are overdue for taking action to improve the Internet and make it freer
  • What we have to gain

Six Imperatives for Communications Freedom

The technology of communication should:

  1. Be free and open source.
  2. Be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information.
  3. Have open and logical interfaces, and be interoperable where possible.
  4. Help users to understand and master it.
  5. Let users communicate in any style or format.
  6. Help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

1. The technology of communication should be free and open source.

First: what is free software? A program is free if it allows users the following:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute and make copies so you can help your neighbour.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

You will, dear reader, detect that this use of the word free relates to freedom, not merely to something being provided free of charge. Open source, although almost synonymous, is a separate concept promoted by a different organization: the Open Source Initiative promotes open source and the Free Software Foundation, free software.

A note: I think that we should obey a robustness principle when it comes to software and licenses. Be conservative with respect to the software you access (i.e. obey the law; respect trademarks, patents and copyright; pay what you owe for software; donate to and promote projects that give you things without charging a fee); be liberal with respect to the software you create (i.e. make it free and open source wherever, and to the extent, possible).

Fundamentally, the purpose of free software is to maximize freedom, not to impoverish software creators or get free stuff; any moral system based on free software must build effective mechanisms to give developers the handsome rewards they deserve.

To dig further into the concept and its sister concept, why do we say open source? The word “source” here refers to a program’s source code, instructions usually in “high-level” languages that allow programmers to write programs in terms that are more abstract and closer to the ways in which humans think, making programming more intuitive and faster. These programs are either compiled or interpreted into instructions in binary (the eponymous zeroes and ones) that a computer’s processor can understand directly.

Having just these binary instructions is much less useful than having the source, because the binary is very hard (perhaps impossible in some cases) for humans to understand. As such, what we call open source might be termed software for which the highest level of abstraction of its workings is publicly available; or, software that shows you how it does what it does.
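To make this concrete, here is a minimal sketch in Python (an interpreted language, so it compiles to bytecode for a virtual machine rather than to a native binary, but the point stands): the source is something a human can read and change; the compiled instructions are not meant for us.

    # A minimal illustration of "source code" versus compiled instructions.
    # The function below is source: a human can read it, study it and change it.
    import dis

    def greet(name):
        """Return a greeting for the given name."""
        return "Hello, " + name

    print(greet("world"))            # -> Hello, world

    # What the machine-level interpreter actually executes is compiled bytecode:
    print(greet.__code__.co_code)    # a raw blob of bytes, opaque to humans
    dis.dis(greet)                   # a disassembly: closer to what the machine sees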

Point 0. matters because the technology of communication is useful to the extent that we can use it: we shouldn’t use or create technology, for example, that makes it impossible to criticise the government or religion.

Of course, one might challenge this point, asking, for example, whether or why software shouldn’t include features that prevent us from breaking the law. I have ideas and opinions on this, but will save them for another time. Suffice to say that free software has an accompanying literature as diverse and exacting as the commentary on the free speech provision of the First Amendment: there is much debate about how exactly to interpret and apply these ideas, but that doesn’t stop them from being immensely useful.

Point 1. is extremely important for any software that concerns privacy, security or, for that matter, anything important. If you can't inspect your software's core nature, how can you see whether it contains functions that spy on you or provide illicit access to your computer, or bugs that its creators missed that will later provide unintentional access to hackers? See the WannaCry debacle for a recent example of a costly and disastrous vulnerability in proprietary software.

Point 2. matters for communications in that when software or parts of software can be copied and distributed freely, this maximises the number of people who have access and can, thus, communicate. It matters also in that if you can see how a system works, it's much easier to create systems that can talk to it.

However, the "free" in free software is a cause of confusion, as it makes it sound like people who create free or open source software will or can never make money. This is a mistake worth correcting:

  1. Companies can and do charge for free software; for example, Red Hat charges for its GNU/Linux operating system distro, Red Hat Enterprise Linux. The fee gets you the operating system under the exclusive Red Hat trademark, plus support and training: the operating system itself is free software (you can read up on this firm to see that they really have made money).
  2. A good many programmers are sponsored to create free software by their employers; at one point, Microsoft developers were among the biggest contributors to the Linux kernel (open source software like Linux is just too good to ignore).

Point 3. might be clearer with the help of a metaphor. Imagine if you bought a car, but, upon trying to fit a catalytic converter, or a more efficient engine, were informed that you were not permitted to do so, or even found devices that prevented you from modifying it. This is the state that one finds when trying to improve most proprietary software.

In essence, most things that make their way to us could be better, and in the realm of communication, surmounting limitations inherent in the means of communication opens new ways of expressing ourselves and connecting with others. Our minds and imaginations are constrained by the means of communication just as they are by language; the more freedom we have, the better. Look, for example, to the WordPress ecosystem and range of plugins to see what people will do given the ability to make things better.

There are names in tech that are well known among the public: most notably Bill Gates and Microsoft, Steve Jobs and Apple; we teach school children about them, and rightly so, as they and those like them have done a great deal for a great many. However, I argue that there are countless other names of a very different type whose stories you should know. Here are two: Jon Postel, a pioneer of the Internet who, through immense wisdom and foresight, made the sort of lives we live now possible (his brand: TCP/IP); and Linus Torvalds, who created the Linux kernel, which (usually installed as the core of the GNU operating system) powers all of the top supercomputers, most servers, most smartphones and a non-trivial share of personal computers.

Richard Dawkins has an equation to evaluate the value of a theory:

value = what it explains ÷ what it assumes

Here is my formulation, but for technology:

value = what it does ÷ the restrictions accompanying it

Such restrictions include proprietary data structures, non-interoperable interfaces, and anything else that might limit the imagination.

Gates and Jobs’ innovations are considerable, but almost all of them came with a set of restrictions that separate users from users and communities from communities. Postel and Torvalds, their collaborators, and others like them in other domains not mentioned, built and build systems that are open and interoperable, and that generate wealth for the whole world by sharing new instrumentality with everyone. All I’m saying is that we should celebrate this sort of innovator a lot more.

2. The technology of communication should be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information

I will try to be brief with what risks being a sprawling point. In encounter after encounter, and interaction after interaction, we users sign our ideas, identities, privacy and control over how we communicate away to unaccountable corporations. This is a hazard because (confining ourselves only to social media and the Web) we might pour years of work into writing and building an audience, say, on Twitter, only to have everything taken away because we stored our speech on a medium that we didn't own; a network like Twitter also represents a single choke-point for authoritarian regimes like the government of Turkey.

On a slightly subtler note, expressing our ideas via larger sites also makes us dependent on them for the conversation around those ideas: conversations should accompany the original material, not live on social profiles far from it, where they are swept into the dustbin by the endless stream of other content.

We the users should pay for our web-hosting and set up our own sites: we already have the technology necessary to do this. If you care about it, own it.

3. The technology of communication should have open and logical interfaces, and be interoperable where possible.

What is interoperability, the supposed North Star here? I think the best way to explain interoperability is to think of it as a step above compatibility. Compatibility means that one thing can work properly in connection with another thing, e.g. a given USB microphone is compatible with, say, a given machine running a particular version of Windows. Interoperability takes us a step further, requiring there to be some standard (usually agreed by invested organizations and companies) which 1. is publicly available and 2. as many relevant parties as possible agree to obey. USB is a great example: all devices carrying the USB logo will be able to interface with USB equipment; these devices are interoperable with respect to this standard.

There are two main types of interoperability: syntactic and semantic. The former refers to the ability of machines to transmit data effectively: this means that there has to be a standard for how information (like images, text, etc.) is encoded into a stream of data that you can transmit, say, down a telephone line. These days, much of this is handled without us noticing or caring. If you’d like to see this in action, right-click or ⌘-click on this page and select “View Page Source” — you ought to see a little piece of code that says ‘ charSet=”utf-8″ ‘ — this is the Webpage announcing what system it is using. This page is interoperable with devices and software that can use the utf-8 standard.
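If you'd like a more hands-on sketch of the same idea, here is a small Python example (illustrative only): both ends of a connection must agree on the encoding standard, or the bytes arrive as gibberish.

    # Syntactic interoperability in miniature: sender and receiver must agree
    # on how text is turned into bytes and back.
    message = "Café £5"                 # text containing non-ASCII characters

    data = message.encode("utf-8")      # text -> bytes, per the UTF-8 standard
    print(data)                         # the raw bytes that travel down the wire

    print(data.decode("utf-8"))         # same standard at the other end: the text survives
    print(data.decode("latin-1"))       # wrong standard: the bytes arrive, but garbled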

Semantic interoperability is much more interesting: it builds on the syntactic interoperability and adds the ability to actually do work with the information in question. Your browser has this characteristic in that (hopefully) it can take the data that came down your Internet connection and use it to present a Webpage that looks the way it should.

Sounds great, right? Well, people break interoperability all the time, for a variety of reasons:

  1. Sometimes there’s no need: One-off, test or private software projects usually don’t need to be interoperable.
  2. Interoperability is hard: The industry collaboration and consortia necessary to create interoperable standards require a great deal of effort and expense. These conversations can be dry and often acrimonious: we owe a great deal to those who have them on our behalf.
  3. Some organizations create non-interoperable systems for business reasons: For example, a company might create a piece of software that saves user files in a proprietary format and, thus, users must keep using/paying for the company's software to access their information.
  4. Innovation: New approaches eventually get too far from older technology to work together; sometimes this is a legitimate reason, sometimes it’s an excuse for reason 3.

Reason three is never an excuse for breaking interoperability, reason two is contingent, and reasons one and four are fine. In cases where it is just too hard or expensive to work up a common, open standard, creators can help by making interfaces that work logically and predictably and, if possible, documenting them: this way collaborators can at least learn how to build compatible systems.

4. The technology of communication should help users to understand and master it.

Mastery of something is a necessary condition for freedom from whatever force would control it. To the extent that you don't know how to build a website or operating system or a mail server, you are a captive audience for those who will offer to do it for you. There is nothing wrong with this, per se, but I argue that the norm should be that any system that makes these things easy should be pedagogical: it should act as a tutorial to get you at least to the stage of knowing what you don't know, rather than keeping your custom through ignorance. We should profit through assisting users in the pursuit of excellence and mastery.

Meanwhile, remember virtuality: the faulty used car might have visible rust that scares you off, or might rattle on the way home, letting you know that it’s time to have a word with the salesperson. Software that abuses your privacy or exposes your data might do so for years without you realizing, all this stuff can happen in the background; software, therefore, should permit and encourage users to “pop the hood” and have a look around.

Users: understand your tools. Software creators: educate your users.

5. The technology of communication should let users communicate in any style or format.

Modern Internet communication systems, particularly the Web and to an extent email, beguile us with redundant and costly styling, user interfaces, images, etc. The most popular publishing platforms, website builders like WordPress and social media, force users either to adopt particular styling or to make premature or unnecessary choices in this regard. The medium is the message: forced styling changes the message; forced styling choices dilute the message.

6. The technology of communication should help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

This is a cry for action for the future: expect more at the end of this article series. Picture this: all humanity’s knowledge, artistic and other cultural creations, visibly sorted, thematically and conceptually, via sets, links and other connections, down to the smallest functional unit. This would allow any user, from a researcher to a student to someone who is curious to someone looking for entertainment, to see how any thing created by humanity relates to all other things. This system would get us a lot closer to ideisomedia. Let’s call it the Knowledge Explorer.

This is not the Web. The Web gave us the ability to publish easily electronically, but because links on the Web point only one way, there can exist no full view of the way in which things are connected. Why? For example, if you look at website x.com, you can quite easily see all the other websites to which it links: all you need to do is look at all the pages on that site and make a record.

Now, what if you asked what other websites link to x.com? The way the Web functions now, with links stored in the page and going one way, the only way to see what other websites link to a given site is to inspect every other site on the rest of the Web. This is why the closest things we have to an index of all links are expensive proprietary tools like Google and SEMRush. If links pointed both ways, seeing how things are connected would be trivial.
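To see why one-way links lose information, here is a toy sketch in Python (the sites and links are invented purely for illustration): outgoing links can be read straight off a page, but incoming links only exist once someone has crawled everything and inverted the whole collection.

    # Toy model of the Web's one-way links. The forward map is what each page "knows";
    # a backlink map exists only if someone crawls every page and inverts it.
    forward_links = {                       # hypothetical sites, for illustration only
        "x.com":    ["y.com", "z.com"],
        "y.com":    ["z.com"],
        "blog.net": ["x.com", "z.com"],
    }

    # Outgoing links: trivial, the page itself stores them.
    print(forward_links["x.com"])           # -> ['y.com', 'z.com']

    # Incoming links: we must visit every page and invert the map.
    backlinks = {}
    for source, targets in forward_links.items():
        for target in targets:
            backlinks.setdefault(target, []).append(source)

    print(backlinks.get("x.com", []))       # -> ['blog.net'], known only after a full crawl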

Jaron Lanier explains this beautifully in the video below (his explanation starts at 15:48):

Google and SEMRush are useful, but deep down it’s all a travesty: we the users, companies, research groups, Universities and other organizations set down information in digital form, but practically throw away useful information on how it is structured. We have already done the work to realize the vision of the Knowledge Explorer, but because we have bad tools, the work is mostly lost. Links, connections, analogies are the fuel and fire of thinking, and ought to be the common inheritance of humanity, and we should build tools that let us form and preserve them properly.

As you might have already realized, building two-way links for a hypertext system is non-trivial. All I can say is that this problem has been solved. More on this much later in this series.

This concludes the discussion of my six imperatives. Now, what happens if these ideas fail?

What Do We Have to Lose?

1. Freedom

People with ideas more profound than mine have explored the concept of freedom of expression more extensively than I can here and have been doing so for some time; there seems little point in rehearsing well-worn arguments. But, as this is my favourite topic, I will give you just one point, on error-correction. David Deutsch put it like this, in his definition of “rational:”

Attempting to solve problems by seeking good explanations; actively pursuing error correction by creating criticisms of both existing ideas and new proposals.

The generation of knowledge is principally about the culling of falsehoods rather than the accrual of facts. The extent to which we prevent discourse on certain topics, or hold certain facts or ideas to be unalterably true or free from criticism, is the extent to which we prevent error correction in those areas. This is something of a recapitulation of Popper's idea of falsification in formal science: in essence, you can never prove that something is correct, only that it is incorrect; therefore, what we hold to be correct is so only until we find a way to disprove it.

As mentioned above with respect to the First Amendment, I’m aware of how contentious this issue is; as such, I will set out below a framework, which, I hope, simplifies the issue and provides both space for agreement and firm ground for debate. Please note that this framework is designed to be simple and generalizable, which requires generalizations: my actual opinions and the realities are more complex, but I won’t waste valuable column inches on them.

My framework for free expression online:

  • In most countries (especially the USA and those in its orbit), most spaces are either public or private; the street is public, the home is private, for example. (When I say “legal” in this section, I mean practically: incitement to violence muttered under one’s breath at home is irrelevant.)
    • In public, one can say anything legal.
    • In private, one can say anything legal and permitted by the owner.
  • Online, there are only private spaces: 1. The personal devices, servers and other storage that host people’s information (email, websites, blockchains, chat logs, etc.) are owned just like one owns one’s home; 2. Similarly, the physical infrastructure through which this information passes (fiberoptic cables, satellite links, cellular networks) is owned also, usually by private companies like ISPs; some governments or quasi-public institutions own infrastructure, but we can think of this as public only in the sense that a government building is, therefore carrying no free speech precedent.
    • Put simply, all Internet spaces are private spaces.
    • As in the case of private spaces in the physical world, one can say anything legal and permitted by the owner.

From this framework we can derive four conclusions:

  1. There is nothing analogous to a public square on the Internet: think of it instead as a variety of private homes, halls, salons, etc. You are free to the extent that you own the technology of communication or work with people who properly uphold values of freedom, hence #2 of my six imperatives. This will mean doing things that aren't that uncommon (like getting your own hosting for your website) through to things that are very unusual (like creating our own ISPs) and more. I'm not kidding.
  2. Until we achieve imperative #2, and if you care about free expression, you should a. encrypt your communications, b. own as many pieces of the chain of communication through which your speech passes as you can, c. collaborate and work with individuals, organizations and companies that share your values.
  3. We made a big mistake in giving so much of our lives and ideas to social networks like Twitter and Facebook, and their pretended public squares. We should build truly social and free networks, on a foundation that we actually own. Venture capitalists Balaji Srinivasan and Naval Ravikant are both exploring ideas of this sort.
  4. Prediction: in 2030, 10% of people will access the Internet, host their content, and build their networks via distributed ISPs, server solutions and social networks.

Remember, I’m not necessarily happy about any of this, but I think this is a clear view of the facts. I apologize if I sound cynical, but it’s better to put yourself in a defensible position than to rely on your not being attacked. As Hunter S. Thompson said, “Put your faith in God, but row away from the rocks.”

I am aware that this isn’t a total picture, and there are competing visions of what the CNASs can and should be; I am more than delighted to hear from and discuss with people who disagree with me on the above. I can’t do them justice, but here are some honourable mentions and thorns:

  1. The Internet (or other CNAS) as a public service. Pro: This could feasibly create a true public square. Con: It seems like it would be too tempting for any administration to use their control to victimize people.
  2. Public parts within the overall Internet or CNAS; think of the patchwork of public and private areas that exist in countries, reflected online—this might feasibly include free speech zones in public areas. See the beautifully American “free speech booth” in St. Louis Airport for a physical example.
  3. Truly distributed systems like the Bitcoin and other blockchains, which are stored on the machines of all members, raise the question of whether these are the truly communal or public spaces; more on this in future writings.

I think that the case I made here for freedom of expression is broadly the same when applied to privacy: one might even say that privacy is the freedom not to be observed. In essence, you are private to the extent that you control the means of communication or trust those that do. Your computer, your ISP, and any service that you use all represent snooping opportunities.

We should be prepared to do difficult and unusual things to preserve our freedom and privacy: start our own ISPs, start our own distributed Internet access system, or, better, our own CNAS. I note a sense of learned helplessness with respect to this aspect of connectivity (speaking especially for myself); there are communities out there to support you.

Newish technology will be very helpful, too:

  • WISPs: wireless Internet service providers, which operate without the need to establish physical connections to people's homes.
  • Wireless mesh networks: wireless networks, including among peers, wherein data is transmitted throughout a richly connected “mesh” rather than relying on a central hub.

Finally, and fascinating as it is, I simply don’t have the space to go into the discussion of how to combine our rights with the proper application of justice. For example, if everyone used encryption, it would be harder for police to monitor communications as part of their investigations. All I can say is that I support the enforcement of just laws, including through the use of communications technology, and think that the relevant parties should collaborate to support both criminal justice and our rights: this approach has served the countries that use it rather well, thus far.

2. Interoperability

To illustrate how much the Internet has done for us and how good we have it now in terms of interoperability, let’s look back to pre-Internet days. In the 70s, say, many people would usually access a single computer via terminal, often within the same building or campus, or far away via a phone line. For readers who aren’t familiar, the “Terminal” or “Command Line” program on your Mac, PC, Linux machine, etc. emulates how these terminals behaved.

These terminals varied in design between models, manufacturers and through time: most had keyboards with which to type inputs into the computer, some printed their output, some had screens, and some had more besides. However, not all terminals could communicate with all computers: for example, most companies used the ASCII character encoding standard (for translating between binary and letters, numbers and punctuation), but IBM used its own proprietary EBCDIC system; as a result, it was challenging to use IBM terminals with other computers and vice-versa.
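To make the incompatibility concrete, here is a small Python sketch (Python happens to ship a codec, cp037, for one common EBCDIC variant): the same letters become different bytes under the two standards, so equipment built for one reads the other as nonsense.

    # The same text under two incompatible character encodings.
    # cp037 is Python's codec for a common EBCDIC variant (EBCDIC US/Canada).
    text = "HELLO"

    ascii_bytes  = text.encode("ascii")     # what most vendors' terminals expected
    ebcdic_bytes = text.encode("cp037")     # what IBM equipment expected

    print(ascii_bytes.hex())     # 48454c4c4f
    print(ebcdic_bytes.hex())    # c8c5d3d3d6 -- different bytes for the same letters

    # A terminal built for one standard reads the other's bytes as gibberish:
    print(ebcdic_bytes.decode("ascii", errors="replace"))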

This is more than just inconvenient: it locked users and institutions into particular hardware and data structures, and trapped them in a universe constituted by that technology; as usual, the only people truly free were those wealthy or well-connected enough to access several sets of equipment. Actions like this break us up into groups, and prevent such groups from accessing each other's ideas, systems, innovations, etc. Incompatibility, though sometimes an expedient in business, is a pure social ill.

To be clear, I am not saying that you have to talk to or be friends with everyone or be promiscuous with the tech you use. If you want to be apart from someone, fine, but being apart from them because of tech is an absurd thing to permit. We need to be able to understand each other's data structures, codes and approaches: the world is divided enough along party, religious and cultural lines without permitting new, artificial divisions.

The most magnanimous thing about the Internet is that it is totally interoperable, based on open standards. I almost feel silly saying it: this beautiful fact is so under-appreciated that I would have to go looking to find another person making the same point. Put it this way, no other technology is as interoperable as the Internet.

It's tempting to think of the Internet as something normal or even natural; the truth is far from it: it's sui generis. Around 53% of the world's population uses it, making it bigger than the world's greatest nations, religions and corporations: anything else of a similar scale has waged war, sought profits or sent missionaries; the Internet has no need for any of these things.

The Internet is one of the few things actually deserving of that much overused word: unique. But it does what it does because of something much more boring: standards, as discussed above. These standards aren’t universal truths derived from the fabric of the universe, they’re created by fallible, biased people, with their own motivations and philosophical influences. Getting to the point of making a standard is not the whole story: making it good and useful depends on the character of these people.

We should care more about these people and this process. Remember, all the normal forces that pull us into cliques and break connections haven't declared neutrality with respect to the Internet: they can't help themselves, and would be delighted to see it broken into incompatible fiefdoms. So we should focus immense intellectual energy and interest:

  1. on maintaining the philosophical muscle necessary to insist that the Internet stay interoperable
  2. on proposing virtuous standards
  3. on selecting and supporting excellent people to represent us in this endeavour

The Internet feels normal and natural, even effortless in an odd way; the truth is the exact opposite, it is one of a kind, it is not just artificial, it was made by just a few people, and it requires constant energy and attention. Let us give this big, strange monster the attention it deserves, lest the walls go up.

Beyond this, the fact that the Internet is our only CNAS puts us in a perilous position. We should create new CNASs with a variety of philosophies and approaches; this will afford us:

  1. Choice
  2. Some measure of antifragility, in that a variety of approaches and technologies increases the chances of survival if one or more breaks
  3. Perhaps, even, something better than what we have now

3. Ideisomorphism

Bad technology generally puts constraints on the imagination, and on the way in which we think and communicate, but articulating the effect is much harder, and the conclusions less clear-cut, than with my previous point on free expression. Put it this way: most, practically all, of us take what we are given when it comes to tools and technology; some might have ideas about things that could be better; fewer still actually insist that things really ought to be better; and the small few who have the self-belief, tenacity, good fortune and savvy to bring their ideas to market, we call innovators and entrepreneurs.

More importantly, these things influence the way we think. For example, VisiCalc, the first spreadsheet program for personal computers (and the Apple II's killer app), made possible a whole range of mathematical and organizational functions that were impossible or painfully slow before: it opened and deepened a range of analytical and experimental thinking. Some readers will recognize what I might call "spreadsheet muscle-memory": when a certain workflow or calculation comes to mind in a form ready to realize in a spreadsheet.

With repeated use, the brain changes shape to thicken well-worn neural pathways: and if you use computers, the available tools, interfaces and data structures train your brain. Digital tools can, therefore, be mind-expanding, but also stultifying. To borrow from an example often used by Ted Nelson, before Xerox PARC, the phrase “cut and paste” referred to the act of cutting a text on paper (printed or written) into many pieces, then re-organizing those pieces to improve the structure.

The team at PARC cast aside this act of total thinking and multiple concurrent actions, and instead gave the name "cut and paste" to a set of functions allowing the user to select just one thing and place it somewhere else. Still today, our imaginations are stunted relative to those who were familiar with the original cut and paste: if you know anything about movies, music or programming, you'll recognize that many of the best things happen when more than one thing is happening at a time.

This is why I argue so vehemently that we shouldn’t accept what we are given online so passively: everything you do online, especially what you do often, is training your mind to work in a certain way. What way? That depends on what you do online.

For the sake of space, I’ll confine myself to the Web. My thesis is this:

  1. The Web as it stands today is primarily focused on beguiling and distracting us.
  2. It presents us with two-dimensional worlds (yes, there is motion and simulated depth of field, but most of the time these devices gussy up a two-dimensional frame rather than expressing a multi-dimensional idea).
  3. It is weighed down with unnecessary animation and styling, leaving practically no attention (or, for that matter, bandwidth) for information.

I’m here to tell you that you need not suffer through endless tracking, bloated styling, interfaces designed to entrap or provoke tribal feelings while expressing barely any meaning. If you agree, say something. Take to heart what Nelson said: “If the button is not shaped like the thought, the thought will end up shaped like the button.” This is why we have become what we’ve become: divided, enraged, barely able to empathize with someone of a different political origin or opinion.

Then there are the more profound issues: as mentioned above, links only go one way, the Web typically makes little use of the magic of juxtaposition and parallel text, there are few robust ways of witnessing, visually, how things are connected, and for the most part, Web documents are usually one-dimensional (they have an order, start to finish) or two-dimensional (they have an order, and they have headings).

People, this is hypertext we're dealing with: you can have as many dimensions as you like, document structures that branch, merge, move in parallel, loop, even documents that lack hierarchy altogether: imagine a document with, instead of numbered and nested headings, the overlapping circles of a Venn diagram.

Our thinking is so confined that being 2-D is a compliment.

Digital media offered us the sophistication and multidimensionality necessary, finally, to reflect human thought, and an end to the hierarchical and either-or structures that are necessary with physical filing (you can put a file in only one folder in your filing cabinet, but with digital media you can put it in as many as you like, or have multiple headings that contain the same sentence (not copies!)); yet we got back into all our worst habits. This, to quote Christopher Hitchens, "is to throw out the ripening vintage and to reach greedily for the Kool-Aid."

Some, or even you, dear reader, might object that all this multidimensionality and complexity will be too confusing for users. This is fair. But first, I want to establish a key distinction, between confusion arising from unnecessary complexity introduced by the creators of the system and confusion arising from the fact that something is new and different. The former is unnecessary and we should make all efforts to eliminate it; the latter is necessary only to the extent that new things sometimes confuse us.

It might sometimes seem that two-dimensionality is something of a ceiling for the complexity of systems or media. There is no such ceiling; for example, most musicians will perform in the following dimensions simultaneously: facets of individual notes like pitch, dynamic (akin to volume), timbre, and facets of larger sections of music that develop concurrently with the notes but at independent scales, like harmony and phrasing.

In my view, we should build massively multidimensional systems, which start as simply as possible and, pedagogically, work from simple and familiar concepts up to complex ideas well beyond the beginner. Ideisomedia will, 1. free us from the clowning condescension of the Web and 2. warrant our engaging with it by speaking to us at our level, and reward us for taking the time to learn how to use it.

Before I talk at length and through the medium of graphs about what we stand to gain by doing this thing correctly, I’d like to make two supporting points. One frames why the mood of this introduction is so imperative, the other frames technological growth and development in a way that, I think, offers us cause for optimism.

Why Now, and Why I Think We’re Up to the Task

The Connectional Imperative

Firstly, I think that we are experiencing an emergency of communication in many of our societies, particularly in the USA and its satellites. My hypothesis (which is quite similar to many others in the news at the moment) is that the technology of communication, as it is configured currently, is encouraging the development of a set of viewpoints that are non-interoperable and exclusive: this is to say that people believe things, and broach the things that they believe, in ways that are impossible to combine or that preclude their interacting productively.

Viewpoint diversity is necessary for a well-functioning society, but this diversity matters only to the extent that we can communicate. This means, firstly, actually parsing each other's communications (which is analogous to syntactic interoperability: regardless of whether we understand the communication, do we regard it as a genuine communication and not mere noise, do we accept it or ignore it?); secondly, it means actually understanding what we're saying (which is analogous to semantic interoperability: can we reliably convey meaning to each other?).

I think that both of these facets are under threat; often people call this “polarisation,” which I think is close but not the right characterisation; I am less concerned with how far apart the poles are than whether they can interact and coordinate.

Why is this happening? I think that it is because we don't control the means of communication and, therefore, we are subject to choices and ideas about how we talk that are not in our interest. Often these choices are profit-driven (like limbic hijacks on Facebook that keep you on the page). Sometimes it's accidental, sometimes expedient design (like the one-way links and two-dimensional documents that characterize the Web, as mentioned earlier). Why is it a surprise that so many of us see issues as "us versus them," or assume that if someone thinks one thing they necessarily accept all other aspects associated with that idea, when we typically compress a fantastically multidimensional discussion (politics) onto a single dimension (left-right)?

We need multidimensional conversations, and we already have the tools to express them: we should start using them.

This really is an emergency. We don’t grow only by agreement, we grow by disagreement, error correction, via the changing of minds, and the collision of our ideas with others’: this simply won’t happen if we stay trapped in non-interoperable spaces of ideas, or worse, technology.

On the topic of technology, I am quite optimistic: everyone that uses the Internet can connect, practically seamlessly, with any other person, regardless of sex, gender, race, creed, nationality, ideology, etc. The exceptions here (please correct me if I’m wrong) are always to do with whether you’re prevented (say by your government) from accessing the Internet or because your device just isn’t supposed to connect (it was made before the Internet became relevant, or it is not a communications device (e.g. a lamp)).

TCP/IP is the true technological universal language, it can bring anyone to the table: when at the table you might find enemies and confusion, but you’re here and have at least the opportunity to communicate.

Therefore, I think that we should regard that which is not interoperable, not meaningfully interoperable, or at least not intentionally open and logical, with immense scepticism, and conserve what remains, especially TCP/IP, standards like this and their successors in new CNASs.

Benign Technology

I think that technology is good. Others say that technology is neutral, that people can apply it to purposes that help us or that hurt us. Of course, still others say that it is overall a corrupting influence. My argument is simple, technology forces you to do two things: 1. to the extent that whatever you create works, it will have forced you to be rational; 2. to the extent that you want your creation to function properly in concert with other devices, it will have forced you to think in terms of communication, compatibility and, at best, interoperability. I’m not saying that all tech is good, but rather that to the extent that tech works, it forces you to exhibit two virtues: rationality and openness.

In the first case, building things that work forces you to adopt a posture that accepts evidence and some decent model of reality: obviously this is not a dead cert; people noticeably "partition" their minds into areas, one for science, another for superstitions, and so on. My claim is that going through the motions of facing reality head-on is sufficient to improve things just a little; tech that works means understanding actions and consequences. This is akin to how our knowledge of DNA trashed pseudo-scientific theories of "race" by showing us our kinship with all other humans, or how the germ theory of disease has helped to free us of our terror of plagues sent by witches or deities.

I’m not being naive here: I know that virtue isn’t written in the stars; rather, I claim that rationality is available to us as one might use a sifter: anyone can pick it up and use their philosophical values to distinguish gold (in various gradations) from rock. Technology requires us to pick up the sifter, or create it, or refine it, even if you would otherwise be disinclined.

In the case of the second faculty, openness, once you have created a technology, you can make it arbitrarily more functional by giving it the ability to talk to others like it or, better, others unlike it. Think of a computer that stands alone, versus one that can communicate with any number of other computers over a telecommunication line. But, in order to create machines that can connect, you have to think in terms of communication; you have to at least open yourself to, and model, the needs and function of other devices and other people. Allowing for some generalizing, the more capable the system, the more considerate the design.

Ironically, the totality of the production of technology is engaged in a tug-of-war: on one side, the need and desire to make good systems pulls us towards interoperability; on the other, short-sighted profit-seeking and the fact that it's hard to make systems that can talk pull us towards non-interoperability. Incidentally, the Internet is a wonderful forcing function here: even the usual suspects like Apple, IBM and Microsoft are amazingly Internet-interoperable.

Put simply, if you want to make your tech work, you have to face reality; if you want your tech to access the arbitrarily large benefits of communicating with other tech, you have to imagine the needs of other people and systems. Wherever you’re going, the road will likely take you past something virtuous.

A Rhapsody In Graphs: Up and to the Right, or What Do We Have to Gain?

To begin, remember this diagram:

Technological innovation exists in a feedback loop: so if you want virtuous systems in the future, create virtuous systems today.

You’re familiar, I hope, with Moore’s Law, which states that around every two years, the number of transistors in an integrated circuit doubles, meaning roughly that it doubles in computing power. This means that if you plot computing power against time, it looks something like this:

Moore's law describes the immense increase in processing capacity that has facilitated a lot of the good stuff we have. Today, the general public can get computers more powerful than the one that guided the Saturn V rocket. The ARPANET (the predecessor to the Internet) used minicomputers to fulfil a role somewhat similar to that of a router today; the PDP-11 minicomputers popular for the ARPANET started at $7,700 (more than $54,000 in 2020 dollars), while most routers today are relatively cheap pieces of consumer equipment, coming in at less than $100 apiece. This graph represents more humans getting access to the mind-expanding and mind-connecting capabilities of computers, the computers themselves getting better, and this trend quickening.
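As a back-of-the-envelope sketch of what "doubling every two years" compounds to (the starting figures are illustrative, not a precise history), here is a small Python loop:

    # Rough compounding implied by Moore's law: one doubling every ~2 years.
    # Starting count and dates are illustrative (roughly the Intel 4004 of 1971).
    transistors = 2_300
    year = 1971

    while year <= 2021:
        print(f"{year}: ~{transistors:,} transistors")
        transistors *= 2        # one doubling...
        year += 2               # ...every two years

    # Fifty years of doubling every two years is 2**25: a factor of about 33 million.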

But, one might ask, what about some of the other indexes discussed in this introduction: freedom, interoperability, ideisomorphism?

Interoperability

For the purposes of this question, we first need to find a meaningful way to think about overall interoperability. For instance, it doesn’t really matter to us that coders create totally incompatible software for their own purposes, all the time; meanwhile, as time passes, the volume of old technology that can no longer work with new technology increases due to changing standards and innovation: this matters only to the extent that we have reason to talk to that old gear (there are lots of possible reasons, if you were wondering). So, let’s put it like this:

overall meaningful interoperability (OMI) = the proportion of all devices and programs that are interoperable, excluding private and obsolete technology

This gives us an answer as a fraction:

  • 100% means that everything that we could meaningfully expect to talk to other stuff can do so.
  • 0% would mean that nothing that we could reasonably expect to talk to other stuff can do so.
  • As time passes we would expect this number to fluctuate, as corporate policy, public interest, innovation, etc. affect what sort of technology we create.
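A toy calculation of OMI under this definition might look like the following Python sketch (the counts are invented purely for illustration):

    # Overall meaningful interoperability (OMI), as defined above: the share of
    # devices and programs that interoperate, once we exclude private one-offs
    # and obsolete gear we have no reason to talk to.

    def omi(interoperable, total, private=0, obsolete=0):
        """Return OMI as a fraction between 0 and 1."""
        meaningful = total - private - obsolete   # what we could reasonably expect to interoperate
        if meaningful <= 0:
            raise ValueError("nothing left to measure")
        return interoperable / meaningful

    # Invented numbers, for illustration only:
    print(omi(interoperable=7_000, total=10_000, private=1_000, obsolete=1_000))   # 0.875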

As mentioned above, a variety of different pressures influence overall meaningful interoperability; some companies, for example, might release non-interoperable products to lock in their customers, other firms might form consortia to create shared, open standards.

I think that, long-term, the best we can expect for interoperability would look like the below:

What are you looking at?

  • Back Then (I’m being deliberately vague here) represents a time in the past when computers were so rare, and often very bespoke, that interoperability was extremely difficult to achieve.
  • Now represents the relative present: we have mass computer adoption, and consortia and other groups give us standards like Unicode, USB, TCP/IP and more. At the same time, some groups are still doing their best to thwart interoperability to lock in their customers.
  • The Future is ours to define; I hope that through collaboration and by putting pressure on the creators of technology, we can continuously increase OMI. You’ll notice that the shape is the opposite of Moore’s Law’s exponential growth: this is, firstly, because there’s an upper limit of 100% and, secondly, because it seems fair to assume that we will reach a point where we hit diminishing returns.
  • It is theoretically possible that we might reach a future of total OMI, but perhaps it’s more realistic to assume that through accidents, difficulty and innovation, some islands of non-interoperability will remain.

Freedom

How are things looking for free software? It's very hard to tell, because the computer world is so diverse and because the subject matter itself is so complex. For example, the growth of Android is excellent news on one level, because it is based on the open source Linux kernel; it is less good news in that much of the rest of the Android stack that ships on phones is proprietary, which makes it a confusing case. See the graphs below for a recent assessment of things (data from statcounter):

Desktop:

Mobile:

I think it is imperative that we work to create and use more free tools, if for no other reason than that we as people deserve to know what the stuff in our homes, on our devices, or that is manipulating our information, is doing. With the right effort, we might be able to recreate the growth of Linux among supercomputer operating systems. I am enthusiastic about this, and see the growth of free software as something as unstoppable, say, as the growth of democracy.

 

(Data: Wikipedia)

Ideisomorphism

First, dimensions. As mentioned above, we frequently try to express complex ideas in too few dimensions, and this hampers our ability to think and communicate. Computers are, potentially, a way for us to increase the dimensionality of our communication, but only if we use them to their full potential.

The diagram below sets out some ideas, along with their dimensions:

To be clear, I’m not making a value-judgement against lower-dimensional things. Rather, I am saying:

  • Firstly, that we should study any given thing in a manner that allows us to engage with it in the proper number of dimensions.
  • Secondly, that poor tools for studying, engaging with and communicating that which is in more than two dimensions act as a barrier, keeping many of us from learning about some very fun topics.
  • Thirdly, high dimensionality can scare us off when it shouldn’t, e.g. if you can talk you already know how to modulate your voice in more than five dimensions simultaneously: pitch, volume, timbre, lip position, tongue position, etc.

I think that, pedagogically and technologically, we should strive to master the higher dimensions and structures of thinking that allow us to communicate thus. However, we seem to be hitting two walls:

  1. The paper/screen wall: it’s hard to present things in more than two dimensions on paper or screens, and we get stuck with things like document structure, spreadsheets, etc., when more nuanced tools are available.
  2. The reality wall: it's weird and sometimes scary to think in more than three dimensions, because it's tempting to try to visualize this sort of thing as a space and, as our reality has just three spatial dimensions, this gets very confusing. This is tragic because a. we already process in multiple dimensions quite easily and b. multidimensionality doesn't have to manifest spatially, nor, when it does manifest spatially, must all dimensions manifest at once; what matters is the ability to inspect information from an arbitrary number of dimensions seamlessly.

Let us break the multidimensionality barrier! The nuance of our conversations and our thinking requires it. We should:

  1. Where possible, use tools like mind-maps and Venn diagrams (which allow for arbitrary relationships and dimensions) over strictly hierarchical or relational structures (like regular documents, spreadsheets or relational databases, which are almost always two-dimensional).
  2. Use and build systems that allow for the easy communication and sharing of these structures: it's easy to show someone a mind-map, but quite hard to share one between systems because there's no standard data structure (see the sketch after this list).
  3. Remember the technological feedback loop: 2-D systems engender 2-D thinking, meaning more 2-D systems in the future; we need concerted efforts to make things better now, such that things can be better in the future.
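As a sketch of what such a shareable structure could look like, here is a hypothetical, minimal format expressed in Python; nothing about it is a standard, the point is only that a mind-map is just nodes plus labelled links, which is easy to represent and pass between systems.

    import json

    # A hypothetical, minimal interchange structure for a mind-map: nodes plus
    # labelled links. Because it is a graph rather than a hierarchy, one node can
    # sit under as many "parents" as you like.
    mind_map = {
        "nodes": ["Internet", "Web", "CNAS", "Interoperability"],
        "links": [
            {"from": "Web",              "to": "Internet", "label": "runs on"},
            {"from": "Internet",         "to": "CNAS",     "label": "is an instance of"},
            {"from": "Interoperability", "to": "Internet", "label": "property of"},
            {"from": "Interoperability", "to": "CNAS",     "label": "property of"},
        ],
    }

    print(json.dumps(mind_map, indent=2))   # trivially serializable, hence shareable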

For our last graph, I'd like to introduce a new value that combines the three concerns of this introduction (freedom, interoperability, ideisomorphism) into one; we can call it FII. Where before I expressed interoperability and freedom as proportions (e.g. the percentage of software that is interoperable), this time let's think of these values as relative quantities with no limit on their size (e.g. if 2020 and 2030 had the same percentage of free software, but 2030 had more software doing more things, then 2030 sits higher on the freedom scale).

So:

FII = freedom x interoperability x ideisomorphism
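As a toy illustration of treating these as unbounded, relative quantities (all numbers invented), the 2020-versus-2030 example above might be sketched like this:

    # FII with each ingredient as a relative, unbounded quantity rather than a
    # percentage. The numbers are invented; the point is that a bigger ecosystem
    # with the same *proportions* of freedom and interoperability still scores higher.

    def fii(freedom, interoperability, ideisomorphism):
        return freedom * interoperability * ideisomorphism

    year_2020 = fii(freedom=50, interoperability=60, ideisomorphism=10)
    year_2030 = fii(freedom=80, interoperability=90, ideisomorphism=25)

    print(year_2020, year_2030)   # 30000 180000: up and to the right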

We should, therefore, strive to achieve something like the graph above with respect to the technology of communication; think of it as Moore's law, but for particular aspects of technology that represent our species' ability to endure and flourish. It's worth noting, of course, that Moore's law isn't a law in the physical sense: the companies whose products follow it achieve these results through continuous, intense effort. It seems only fair that we might expend such efforts to make the technology of communication not just more powerful, but better able to serve our pursuit of virtue; to the extent that I'm right about the moral arc of technology naturally curving upward, we may be rewarded more quickly than we think.

What If We Are Successful?

What might happen if we're successful? Here's just one of many possibilities, and to explain I will need the help of a metaphor: the split-brain condition. This condition afflicts people who have had their corpus callosum severed; this part of the brain connects the right and left hemispheres and, without it, the hemispheres have been known to act and perceive the world independently. For example, for someone with this condition, if something is presented to only one visual field, the hemisphere that does not receive that field will not be aware of it.

I liken this to the current condition of humanity, except that instead of two hemispheres, we have numerous overlapping groupings of different sizes: nations, religions, ideologies, technologies, and more. Like split brain patients, often these parts don’t understand what the other is doing, might have trouble coordinating, or even come into conflict.

We have the opportunity to build our species' corpus callosum, not that we might unify the parts, but that the parts might coordinate; and, in that the density of connections is a power function of the number of nodes in the system, this global brain might dwarf the achievements of history's greatest nations with feats on a planetary scale and in its pursuit of goodness, truth and beauty.

Categories
Trialectic

Trialectic – Can Technology be Moral?

Trialectic

A Wonk Bridge debate format between three champions designed to test their ability to develop their knowledge through exposure to each other and the audience, as well as maximise the audience’s learning opportunities on a given motion.

For more information, please read our introduction to the format here, The Trialectic.

Motioning for a Trialectic Caucus

Behind every Trialectic is a motion and a first swing at the motion, which is designed to kick-start the conversation. Please find my motion for a Trialectic on the question “Can Technology be Moral?” below.

I would like to premise this with a formative belief: humans have, among many motivations, sought "a better life" or "the good life" through the invention and use of technology and tools.

From this perspective, technology and human agency have served as variables in a so-called “equation of happiness” (hedonism) or in the pursuit of another goal: power, glory, respect, access to the kingdom of Heaven.

At the risk of begging the question, I would like to premise this motion with a preambulatory statement of context. I would like to focus our contextual awareness around three societal problems.

First, the incredible transformation of the human experience through technological mediation has changed the way we see and experience the world, making most of our existing epistemological frameworks inadequate and turning our political and cultural systems unstable, if not obsolete.

Another interpretation of this change is that parts of our world are becoming “hyper-historic”, where information-communication technologies are becoming the focal point, not a background feature of human civilisations (Floridi, 2012).

Next, the driving force behind "the game" and the rules of "the game", which can be generally referred to as Late Capitalism, is being put under question, with Postmodern thought exposing its weaknesses and unfairness, and a growing body of Climate Change thinkers documenting its unsustainability and nefarious effect on long-term human survival. More practically, since the 2008 financial crash, Capitalism has taken a turn towards excluding human agents from the creation of wealth and commodifying distraction/attention. In short, the exclusion of the Human from human activity.

Third, the gradual irrelevance of a growing share of humans in economic and political activity, as well as the lack of tools for both experts and regular citizens to understand the new world(s) being crafted (this "Networked Society", a hybrid of digital civilisation and the technologically mediated analogue world) (Castells, 2009), has created an identity crisis at both the collective and individual levels. We know what is out there, have lost sight of the How, and can't even contemplate the Why anymore. I believe that two things are needed:

  • A better understanding of the forces shaping our world
  • An intentional debate on defining what this collective “Why” must be

These can help us find a new "True North" and allow us to begin acting morally, by designing intentional technologies built around helping us act more morally.

Introductory Thesis

I base my initial stance on this topic on the shoulders of a modern giant in Digital Ethics, Peter-Paul Verbeek, and his 2011 work Moralizing Technology.

Verbeek wants us to believe that the role of "things", which includes "technologies", inherently holds moral value; that we need to examine ethics not through an exclusively human-centric lens but also from a materialistic angle; and that we cannot any longer ignore the deep interlink between humans and their tools.

There is first the question of technological mediation. Humans depend on their senses to develop an appreciation of the world around them. Their senses are, however, limited. Our sense of sight can be limited by myopia or other debilitating conditions. We can use eyeglasses to "correct" our vision, and develop an appreciation of our surroundings in higher definition.

This is a case of using technology to reach a level of sensing similar to our peers’, perhaps because living in a society comes with its own “system requirements”. We correct our vision with eyeglasses because we want to participate in society, be in the world, and place ourselves in the best position to abide by its ethics and laws. Technology is necessary to see the world as others do, because when we share a common image of the world, we are able to draw conclusions as to how to behave within it.

When a new technology helps us develop our sense-perception even further, we can intuitively affirm that technological mediation occurs in the “definition” of ethics and values. Technologies help us see more of the world. Before the invention of the electric street-lamp system, as part of the wider urban reorganisation of the 19th century, Western cultures looked down on activity at night. An honest man (or woman) would not linger in the streets of Paris or London after dark.

The darkness of dimly lit streets made it easy for criminals and malefactors to hide from the police and harass the vulnerable. Though still seen as more dangerous than moving about in the light of day, it is now socially accepted (even romanticized) to stroll under the city street-lamps and pursue a full night’s entertainment.

A technology, the street-lamp system, helped people see more of the world (literally), and our ethics grew out of the previous equilibrium and into a new one. By affecting the way we perceive reality, technology also helps shape our constructed reality, and therefore directly intervenes in moral thought at both the individual and collective levels.

At the pre-operative level, my thesis does not diverge far from Verbeek’s or Latour’s initial propositions. Where it diverges is at the operative, practical level, where it places greater emphasis on the designer.

It seems clear that Technology has a role to play in defining what counts as moral practice. The question examined in this thesis therefore goes a step further, exploring whether the creation (the technology) can be considered independently from its creator (the inventor or designer).

Are human agents responsible for the direct and indirect effects of the tools they build?

Of course, it is clear that a perspective on the morality of technology anchored solely in the concept of technological mediation is problematic. As Verbeek notes in his book, the isolation of human subjects from material objects is deeply entrenched in our Modernist metaphysical schemes (cf. Latour, 1993); it frames ethics as a solely human affair and keeps us from approaching ethics as a hybrid.

This outdated metaphysical scheme sees human beings as active and intentional, and material objects as passive and instrumental (Verbeek, 2011). Human behaviour can be assessed in moral terms, good or bad, but a technological artifact can be assessed only in terms of its functionality (functioning well or poorly) (Verbeek, 2011). Indeed, technologies tend to reveal their true utility only after they have been used or applied, not while they are being created or designed.

It is also key to my argument that technologies resembling intentionality are not in themselves intentional. Science fiction about artificial general intelligence aside, the context within which technology is discussed today (2021) is one in which technologies operate with a semblance of autonomy, situated in a complex web of interrelated human and machine agents.

Just because the behaviour of some technologies today (e.g. Google’s search algorithms) is not decipherable does not mean that they are autonomous or intentional. What is intentional is the decision to create a system that contains no checks or balances: to build a car without brakes, or a network without an off-switch.

Technology does have the power to change our ethics.

An example Verbeek uses frequently is the pre-natal ultrasound scan that parents use to check whether their unborn child or fetus has any birth defects. This technology gives parents the chance, and transfers to them the responsibility, of making a potentially life-threatening or life-defining decision. It also gives them their first glimpse, through the monitor, of what their unborn baby looks like.

While, before the scan, the birth of a child was seen ethically as the work of a higher power, outside human responsibility and agency, the scanner has given parents the tools and the responsibility to make a decision. As Verbeek documents on several occasions in the book, it dramatically changes the way parents (especially fathers) label what they see through the monitor: from a fetus to an unborn child.

The whole ceremony around the scan visit, with the doctor’s briefing and the evaluation of the results, creates a new moral dilemma for parents and a new moral responsibility: to give life, or not, to a child with birth defects, rather than accepting whatever outcome arrives at birth.

But let’s take this a step further and ask the age-old question: Who benefits?

The pre-natal ultrasound scan, and the many other tests offered by hospitals today, serve the patients: they give them the chance to see the fetus and make choices about its future. But the clients of these machines are in fact hospitals and doctors; they are also, indirectly, policy-makers and healthcare institutions. These clients seek to shift responsibility away from hospitals and doctors and onto the parents, who will have gained a newfound commitment to the unborn babies they have now seen for the first time. The reasons driving this are manifold, but hospitals and governments are financially and economically interested in births, and in having parents committed to seeing a pregnancy through.

When considering the morality of technologies, of systems and of the objects that are part of those systems, it is worth paying close attention to what Bruno Latour calls systems of morality indicators: moral messages exist everywhere in society and inform each other, from the speed bump dissuading the driver from driving fast because “the area is unsafe, and driving fast would damage the car”, to the suburban house fence informing bystanders that “this is my private property”.

It is also worth paying attention to who benefits from the widespread usage of these technological products. Discussions around the morality of technology tend to focus on the effects deriving from the usage or application of a technology, rather than on the financial or other benefits deriving from its adoption at large scale.

Social Media as an example

The bundle of technologies that we call social media is a clear example of why this way of thinking matters. The nefarious consequences of mass-scale social media usage, for a society and for an individual, are clear and well documented. We have documented its effects in warping and changing our conception of reality (technological mediation), in the political sphere with our astroturfing piece, and on our social relationships in our syndication of the friend piece.

In our discussions responding to the acclaimed Netflix documentary The Social Dilemma, we spotted an interesting pattern in the accounts: that one man or woman was powerless to stop a system so lodged in the interweaving interests of Big Tech’s shareholders. The economic logic of social media means that acting on nefarious consequences like fake news or information echo-chambers is nigh impossible, because it would require altering social media’s ad-based business model.

The technology of social media works, and keeps being used, because it is concerned not with the side-effects but with the desired effect: providing companies or interested parties (usually those with large digital marketing budgets) with panopticon-esque insights into its users (who happen to be over 80% of people living in the US, according to the Pew Research Center, 2019).

Technologies are tools. I mean, this is pretty obvious and doesn’t really need further explanation in writing, but they are not always tools like hammers or pencils that would prove useful to most human beings. They are sometimes network-spanning systems of surveillance that are used by billions, only to provide actual benefit to a chosen few.

The intention of the designer is thus paramount when considering technology and morality, because the application of a technology will inevitably have an effect on the agents that encounter it, but it will also have an effect on the designer themself. There will be a financial benefit and, more than this, ‘the financial benefit will inform future action’ (as Oliver Cox reflected upon editing this piece).

So yes, the reverse situation is also true: some technologies may be designed with a particular social mission in mind, and then used for a whole suite of unforeseen nefarious applications.

In this case, should the designer be blamed or held responsible for the new applications of their technology? Should the technology itself be the subject of moral inquisition and the designer be absolved on account of their ignorance? Or should each application of such a technology be considered “a derivative”, and thus conceptually separate from the original creation?

Another titan of digital ethics, Luciano Floridi of the Oxford Internet Institute, argues that intentions are tied to the concept of responsibility: “If you turn on a light in your house and the neighbour’s house goes BANG! It isn’t your responsibility, you did not intend for it to happen.” Yes, the BANG may have had something to do with the turning on of the light, but, as he goes on to say, “accountability is different, it is the process of cause and effect relationships that connects the action to the reaction.”

Accountability as the missing link

With this in mind, we can assume that the missing link between designing a technology and placing responsibility on its designers is accountability. To hold someone accountable for their actions, one must have access to knowledge or data that provides some sort of paper trail, allowing an observer to trace the effects of a design on its environment and the interactions of that environment with the design.

While it is indeed possible to measure the effects of a technology like social media from an external perspective, it is far easier and more informative to do so from the source. Indeed, what would hold designers of technologies most accountable is for them to hold themselves accountable.
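
To make the idea of a paper trail concrete, here is a minimal sketch in Python. It is illustrative only: the class and field names are hypothetical and are not drawn from Verbeek, Floridi, or any existing tool. Each design decision or observed consequence is appended as an entry whose hash chains back to the previous entry, so that neither the designer nor anyone else can silently rewrite the record.

```python
import hashlib
import json
import time

class AuditTrail:
    """A minimal, tamper-evident 'paper trail': each entry's hash depends on
    the previous entry, so silent edits or deletions break the chain."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, details):
        """Append an entry describing who did what, chained to the last entry."""
        previous_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "timestamp": time.time(),
            "actor": actor,      # e.g. "design team", "trust & safety"
            "action": action,    # e.g. "changed ranking algorithm"
            "details": details,
            "previous_hash": previous_hash,
        }
        # Hash the entry body (everything except the hash itself).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Return True only if no entry has been altered or removed."""
        for i, entry in enumerate(self.entries):
            expected_prev = self.entries[i - 1]["hash"] if i else "genesis"
            if entry["previous_hash"] != expected_prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != recomputed:
                return False
        return True

# Example: a designer logs a consequential change, then anyone can check the record.
trail = AuditTrail()
trail.record("design team", "enabled engagement-based ranking",
             {"expected_effect": "longer session times"})
trail.record("trust & safety", "observed rise in echo-chamber formation",
             {"severity": "high"})
print(trail.verify())  # True while the record is intact
```

A regulator or external researcher granted read access to such a log could then check a designer’s accountability claims without having to take the designer’s own reporting on trust.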

There is therefore a problem of competing priorities when it comes to accountability, derived from the problem of access to knowledge (or data).

In the three examples given, the pre-natal ultrasound scanner, social media, and the light-switch-that-turns-out-to-be-a-bomb, the intentions of the designer varied across a spectrum: from zero intention to blow up your neighbour’s house, to the deliberate intention, with the pre-natal ultrasound scan, to provide parents with a choice regarding the future of their child.

In all three cases, an element beyond intentionality plays a role: the designer is either unaware of (with the light-switch) or unwilling to investigate (with social media) the consequences of applying the technology. Behind the veil of claims of technological sophistication, designers renege on their moral duty to “control their creations”.

If the attribution of responsibility in technologies lies in both intentionality and accountability, then, deontologically, shouldn’t the designers of such technologies provide the necessary information and build the structures to allow for accountability?

Designers should be held accountable for their creations, however autonomous those creations may initially seem. If so, how can they feasibly be held accountable?

Many of these questions have been approached, and tackled to some extent, in the legal world, with intellectual property and copyright law addressing the ownership of an original work. They have also been examined at some length by the insurance industry, which uses risk management frameworks to determine how the burden of new initiatives is shared between principal and agent.

But in the realm of ethics and of the impact of technologies on the social good, the framing that may best suit the issue at hand is the Tragedy of the Commons: the case where technologies that are widely available (as accessible as water or breathable air) have become commodities and are being used as building blocks for other purposes by a number of different actors.

The argument that technologies have inherent moral value is beside the point. The argument is that moral value should be ascribed to the ways in which technologies are used (whether those uses be called derivatives or original new technologies), and that designers need to be inherently tied to their designs.

  1. The GDPR example: the processing of personal data represents a genus of technologies in which moral value is ascribed to the processors and controllers of the personal data. The natural resource behind the technology, the personal data itself, remains under the control of the owner of that resource.
  2. Ethics by design: the process by which technologies are designed needs to be more inclusive and considerate. A technology’s impact on stakeholders (suppliers, consumers, investors, employees, broader society, and the environment) needs to be assessed and factored in during development. That impact cannot be wholly predicted, but it can be understood and managed if approached with due care. Example: regulated industries such as Life Sciences and Aerospace have lengthy trialling processes involving many stakeholders, which makes the introduction of new products more rigorous. A minimal sketch of such a development gate follows this list.
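
As a purely illustrative sketch (the stakeholder categories and function names below are hypothetical, not taken from any regulation or from the authors cited), an “ethics by design” process could be expressed in code as a release gate that refuses to ship a feature until every stakeholder category has a recorded impact assessment with a mitigation:

```python
from dataclasses import dataclass

# Hypothetical stakeholder categories; a real process would define its own.
STAKEHOLDERS = ["suppliers", "consumers", "investors",
                "employees", "broader society", "environment"]

@dataclass
class ImpactAssessment:
    stakeholder: str
    expected_impact: str
    mitigation: str

def ready_for_release(assessments):
    """Return (ok, missing): ok is True only if every stakeholder category
    has at least one assessment with a non-empty mitigation."""
    covered = {a.stakeholder for a in assessments if a.mitigation.strip()}
    missing = [s for s in STAKEHOLDERS if s not in covered]
    return (not missing, missing)

# Example: a feature assessed for only two stakeholder groups fails the gate.
assessments = [
    ImpactAssessment("consumers", "longer session times", "usage dashboards"),
    ImpactAssessment("environment", "higher data-centre load", "efficiency targets"),
]
ok, missing = ready_for_release(assessments)
print(ok, missing)  # False ['suppliers', 'investors', 'employees', 'broader society']
```

The point is not the code but the design choice it encodes: the assessment happens during development, and an incomplete assessment blocks the release rather than being patched up afterwards.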

Accountability as the other side of the equation

As mentioned, the emergence of new technologies such as blockchain governance systems (e.g. Ethereum smart contracts) provides clear examples of how new technologies can create new ways of holding agents accountable for their actions, actions that, without such enabling technologies, would have been considered outside their control.

It seems that technology can work on both sides of a theoretical ethics-accountability equation. If some technologies make it easier to act outside pre-existing ethical parameters, unseen by the panoply of accountability tools in use, then others can provide stakeholders with more tools to hold each other to account.

Can Technology Be Moral? Yes, it can, given its ability to provide more tools to close the gap between agents’ actions and the responsibility they bear for those actions. But some technology can be immoral, and stay immoral, without an effective counterweight in place. Technology is therefore an amoral subject, yet very much moral in its role as both a medium and an object for moral actors.

Closing

It will be my honour and pleasure to debate with our two other Trialectic champions, Alice Thwaite and Jamie Woodcock. I am looking forward to what promises to be a learning experience, and to updating this piece accordingly after their expert takes.

Please send us a message or comment on this article if you would like to join the audience (our audience is also expected to jump in!).