
Internet Walden: Introduction—What Is the Internet?

Internet users speak with an Internet accent, but the only manner of speaking that doesn’t sound like an accent is one’s own.

What is the Internet? Why ask a question like this? As I mentioned in this piece’s equal companion, Why We Should Be Free Online, we are in the Internet like fish in water and forget both its life-giving presence and its nature. We must answer this question because, given a condition of ignorance in this domain, it is not a matter of whether but when one’s rights and pocketbook will be infringed.

What sort of answer does one expect? Put it this way: ask an American what the USA is. There are at least three styles of answer:

  1. Someone who might never have considered the question before might say that this is their country, it’s where they live, etc.
  2. Another might say that America is a federal constitutional republic, bordering Canada to the north and Mexico to the south, etc.
  3. Another might talk about the country’s philosophy and its style of law and life, how, for example, the founding fathers wrote the Constitution so as to express rights in terms of what the government may not do rather than naming any entitlements, or how the USA does not have an official language.

The third is the style of answer that I’m seeking: one that elucidates the real human implications of what we have, the system’s style and the philosophy behind it. This will tell us, in the case of the Internet as in the case of the USA, why we have what we have, why we are astonishingly fortunate to have it in this configuration, what is wrong with it, and how best to amend the system or build new systems that better uphold our rights and promote human flourishing.

In pursuit of this goal, I will address what I think are the three main mistaken identities of the Internet:

  1. The Web. The Web is the set of HTML documents, websites and the URLs connecting them; it is one of many applications which run on the Internet.
  2. Computer Network of Arbitrary Scale (CNAS). CNAS is a term of my own creation which will be explained in full later. In short: Ford is a car, the Internet is a CNAS.
  3. Something Natural, Deistic or Technical. As with many other technologies, it is tempting to believe that the way the Internet is derives from natural laws or even technical considerations; these things are relevant, but the nature of the Internet is incredibly personal to its creators and users, and derives significantly from philosophy and other fields.

Finally, I will ask a supporting and related question, Who Owns the Internet?, which will bring this essay to a close.

With our attention redirected away from these erroneous ideas and back to the actual Internet, we might then better celebrate what we have, and better understand what to build next. More broadly, I think that we are building a CNAS society and haven’t quite caught up to the fact; we need to understand the civics of CNAS in order to act and represent ourselves ethically. Otherwise, we are idiots: idiots in the ancient sense of the word, meaning those who do not participate.

Pulling on that strand, I claim, and will elaborate later, that we should be students of the civics of CNASs, in that we are citizens of a connected society; and I don’t mean merely that our pre-existing societies are becoming connected, rather that the new connections afforded by tools like the Internet are facilitating brand new societies.

The Internet has already demonstrated the ability to facilitate communication between people, nationalities and other groups that would, in most other periods of history, have found it impossible not just to get along but even to form the basis for communication through which to get along. With an active CNAS citizenry to steward our systems of communication, I expect that our achievements in creativity, innovation and compassion over the next few decades will be unimaginable.

The Internet is Not the Web

You may, dear reader, already be aware of this distinction; please do stick with me, though, as clarifying this misapprehension will clarify much else. The big difference between the Web and the Internet is this: the Internet is the global system of interconnected networks, running on the TCP/IP suite of protocols; the Web is one of the things you can do on the Internet. Other things include email, file-sharing, and so on.

It’s not surprising that we confuse the two concepts, or say “go on the Internet” when we mean “go on the Web,” in that the Web is perhaps the Internet application that most closely resembles the Internet itself: people and machines, connected and communicating. This is not unlike how, as a child, I thought that the monitor was the computer, disregarding the grey box. Please don’t take this as an insult; the monitor may not be where the processing happens, but it’s where the things that actually matter to us find a way to manifest in human consciousness.

The Web is one of many types of application that one can use on the Internet. It’s not even the only hypermedia or hypertext system (the HT in HTTPS stands for hypertext).

The application layer sits on top of a number of other functions that, for the most part, one barely or never notices, and rightly so. However, we ought to concern ourselves with these things because of how unique and interesting they are, so let’s go through them one by one.

Broadly, the Internet protocol suite is based on a system of layers. As I will explore later on, Internet literature actually warns against strict layering. Caveats aside, the Internet protocol stack looks like this:

Physical

Not always included in summaries of the Internet protocol suite, the physical layer refers to the actual physical connection between machines. This can be WiFi signals, CAT-5 cables, DSL broadband lines, cellular transmissions, etc.

Link

The Link layer handles data transmission between devices. For example, it handles the transmission of data from your computer to your router, via WiFi or Ethernet, and then onward, say over a DSL line, to the target network (say, to a webserver from which you’re accessing a site). The Link layer was specifically designed so that it does not matter what the physical layer actually is, so long as it provides the basic necessities.

Internet

The Link layer handles the transmission between devices, and the Internet layer organizes the jumps between networks: in particular, between Internet routers. The Link layer on its own can get the data from your computer to your router, but to get to the router for the target network, it needs the Internet layer’s help. This is why (confusingly) it’s called the Internet layer: it provides a means of interconnecting networks.

Your devices, your router, and all Internet-accessible machines are assigned the famous IP addresses, which come in the form of a 32-bit number: four byte values separated by dots. My server’s IP address is 209.95.52.144.

This layer thinks in terms of getting data from one IP address to another.
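
To make this concrete, here is a minimal Python sketch (using my server’s address from above) showing that the dotted form is just a human-friendly spelling of a single 32-bit number:

```python
import socket
import struct

# An IPv4 address is one 32-bit number; the dotted form is for humans.
addr = "209.95.52.144"

packed = socket.inet_aton(addr)          # the same address as four raw bytes
as_int = struct.unpack("!I", packed)[0]  # ...and as a single 32-bit integer

print(as_int)                    # 3512677520
print(socket.inet_ntoa(packed))  # back to "209.95.52.144"
```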

Transport

Now, with the Link and Internet layers buzzing away, transmitting data, the Transport layer works above them both, establishing communication between hosts. This is akin to how I have something of a direct connection with someone to whom I send a letter, even though that letter passes through letterboxes and sorting offices; the Transport layer sets up this direct communication between machines, so that they can act independently with respect to the underlying conditions of the connection itself. There are a number of Transport layer protocols, but the most famous is TCP.

One of the most recognizable facets of the Transport layer is the port number. The TCP protocol assigns numbered “ports” to identify different processes; for the Web, for example, HTTP uses port 80 and HTTPS, port 443. To see this in action, try this tool, which will show you which ports are open on a given host: https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap — try it first with my server, omc.fyi.

This layer thinks in terms of passing data over a direct connection to another host.
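
In the same spirit as the scanner tool above, here is a minimal Python sketch (it needs a live network connection) that checks whether a given TCP port on a host is open:

```python
import socket

# A TCP endpoint is addressed by (IP address, port); well-known services
# sit on well-known ports: 80 for HTTP, 443 for HTTPS.
def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connection succeeded

for port in (80, 443):
    print(port, port_is_open("omc.fyi", port))
```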

Application

The Application layer is responsible for the practicalities associated with doing the things you actually want to do: HTTPS for the Web, SMTP for email, and SSH for a remote command-line connection are all Application layer protocols. If it wasn’t clear by now, this is where the Web lives: it is one of many applications running on the Internet.

How it Works in Practice

Here’s an example of how all this works (a code sketch follows the list):

  • Assume a user has clicked on a Web link in their browser, and that the request has already reached the webserver; this exchange manifests in the Application layer. In response, the server sends the desired webpage using HTTPS, which lives on the Application layer.
  • The Transport Layer is then responsible for establishing a connection (identified by a port) between the server and the user’s machine, through which to communicate.
  • The Internet Layer is responsible for transmitting the appropriate data between routers, such as the user’s home router and the router at the location of the Web server.
  • The Link Layer is responsible for transmitting data between the user’s machine and their router, between their router and the router for the webserver’s network, and between the webserver and its router.
  • The Physical Layer is the physical medium that connects all of this: fiberoptic cable, phone lines, electromagnetic radiation in the air.
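
To see the layers from a program’s point of view, here is a minimal Python sketch that fetches a page over plain HTTP (port 80 rather than HTTPS, to keep the bytes readable); it needs a live network connection, and example.com is a standard demonstration domain:

```python
import socket

# The Application layer is just text (an HTTP request) handed to the
# Transport layer (a TCP socket). Everything below -- Internet, Link,
# Physical -- is handled for us by the operating system and the network.
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

with socket.create_connection(("example.com", 80)) as sock:  # Transport layer
    sock.sendall(request)                                    # Application layer payload
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"
```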

Why is this interesting? Well, firstly, I think it’s interesting for its importance; as I claim in this piece’s equal counterpart on Internet freedom, the Internet is used for so much that questions of communication are practically the same as questions of the Internet in many cases. Secondly, the Internet is interesting for its peculiarity, which I will address next.

“Internet” Should Not Be Synonymous with “Computer Network of Arbitrary Scale”

When addressing the Internet as a system, there appear to be two ways in which people use the word:

  • One refers to the Internet as the system we have now and, in particular, the one that runs on the TCP/IP protocol suite.
  • The other refers to the Internet as a system of interconnected machines and networks.

Put it this way: the first definition is akin to a proper noun, like Mac or Ford; the second is a common noun, like computer or car.

This is not uncommon: for years I really thought that “hoover” was a generic term, and learned only a year or so ago that TASER is a brand name (the generic term is “electroshock weapon”). Then of course we have non-generic names that are, sometimes deliberately so, generic-sounding: “personal computer” causes much confusion, in that it could refer to IBM’s line of computers by that name, to something compatible with the former, or merely to a computer for an individual to use; there is of course the Web, one of many hypertext systems that allow users to navigate interconnected media at their liberty, whose name sounds merely descriptive but, in fact, refers to a specific system of protocols and styles. The same is true of the Internet.

For the purpose of clarifying things, I’ve coined a new term: computer network of arbitrary scale (CNAS or seenas). A CNAS is:

  1. A computer network
  2. Using any protocol, technology or sets thereof
  3. That can operate at any scale

Point three is important: we form computer networks all the time, but one of the things about the Internet is that its protocols are robust enough for it to be global. If you activate the WiFi hotspot on your phone and have someone connect, that is a network, but it’s not a CNAS because, configured thus, it would have no chance of scaling. So, not all networks are CNASs; today, the only CNAS is the thing we call the Internet, but I think this will change in a matter of years.

There’s a little wiggle room in this definition: for example, the normal Internet protocol stack cannot work in deep space (hours of delay due to absurd distances, and connections that come and go because the sun gets in the way, make it hard), so one could argue that today’s Internet is not a CNAS because it can’t scale arbitrarily.

I’d rather keep this instability in the definition:

  • Firstly, because (depending on upcoming discoveries in physics) it may be possible that no network can scale arbitrarily: there are parts of the universe that light from us will never reach, because of cosmic expansion.
  • Secondly, because the overall system in which all this talk is relevant is dynamic (we update our protocols, the machines on the network change and the networks themselves change in size and configuration); a computer network that hits growing pains at a certain size, and then surmounts them with minor protocol updates, shouldn’t be said to have ceased to be a CNAS and then become one again.

Quite interestingly, in this RFC on “bundle protocol” (BP) for interplanetary communication (RFCs being a series of publications by the Internet Society, setting out various standards and advice) the author says the following:

BP uses the “native” internet protocols for communications within a given internet. Note that “internet” in the preceding is used in a general sense and does not necessarily refer to TCP/IP.

This is to say that people are creating new things that have the properties of networking computers, and can scale, but are not necessarily based on TCP/IP. I say that we should not use the term Internet for this sort of thing; we ought to differentiate so as to show how undifferentiated things are (on Earth).

Similarly, much of what we call the Internet of Things isn’t really the Internet. For example, Bluetooth devices can form networks, sometimes very large ones, but it’s only really the Internet if they connect to the actual Internet using TCP/IP, which doesn’t always happen.

I hope, dear reader, that you share with me the sense that it is absolutely absurd that our species has just one CNAS (the Internet) and one hypertext system with anything like global usage (the Web). We should make it our business to change this:

  • One, to give people some choice
  • Two, to establish some robustness (the Internet itself is robust, but relying on a single system to perform this function is extremely fragile)
  • Three, to see if we can actually make something better

At this point I’m reminded of the scene in the punchy action movie Demolition Man, in which the muscular protagonist (frozen for years and awoken in a strange future civilization) is taken to dinner by the leading lady, who explains that in the future, all restaurants are Taco Bell.

This is and should be recognized to be absurd. To be clear, I’m not saying that the Internet is anything like Taco Bell, only that we ought to have options.

The Internet is not Natural, Deistic or even that Technical

I want to rid you of a dangerous misapprehension. It is a common one, but, all the same, I can’t be sure that you suffer from it; all I can say is that, if you’ve already been through this, try to enjoy rehearsing it with me one more time.

Here goes:

Many, if not most, decisions in technology have little to do with technical considerations, or some objective standard for how things should be; for the most part they relate, at best, to personal philosophy and taste, and, at worst, ignorance and laziness.

Ted Nelson provides a lovely introduction to this idea.

Here’s a ubiquitous example: files and folders on your computer. Let’s say I want to save a movie, 12 Angry Men, on my machine: do I put it in my Movies that Take Place Mainly in One Room folder with The Man from Earth, or my director Sidney Arthur Lumet folder with Dog Day Afternoon? Ideally I’d put it in both, but most modern operating systems will force me to put it in just one folder. In Windows (very bad) I can make it sort of show up in more than one place with “shortcuts” that break if I move the original; with MacOS (better) I have “aliases,” which are more robust.

But why am I prevented from putting it in more than one place, ab initio? Technically, especially in Unix-influenced systems (like MacOS, Linux, BSD, etc.), there is no reason why not: it’s just that the people who created the first versions of these systems decades ago didn’t think you needed to, or thought you shouldn’t—and it’s been this way for so long that few ask why.

A single, physical file certainly can’t be in more than one place at a time, but the whole point of electronic media is the ability to structure things arbitrarily, liberating us from the physical.
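
In fact, on a Unix-like system, a file genuinely can live under two directory entries at once via a “hard link”. Here is a minimal Python sketch, with placeholder folder and file names:

```python
import os

# Two folders, one file: a hard link is a second name for the same
# underlying file, so neither name is a copy and neither is the "original".
os.makedirs("one-room-movies", exist_ok=True)
os.makedirs("sidney-lumet", exist_ok=True)

with open("one-room-movies/12-angry-men.txt", "w") as f:
    f.write("placeholder for the movie file")

os.link("one-room-movies/12-angry-men.txt", "sidney-lumet/12-angry-men.txt")

print(os.path.samefile("one-room-movies/12-angry-men.txt",
                       "sidney-lumet/12-angry-men.txt"))  # True
```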

Technology is a function of constraints—those things that hold us back, like the speed of processors, how much data can pass through a communications line, money—and of values. Values influence the ideas, premises and conceptual structures that we use to design and build things, and these things often reflect the nature of their creators: they can be open, closed, free, forced, messy, neat, abstract, narrow.

As you might have guessed, the creators and administrators of technology often express choices (such as how a file can’t be in two places at once) as technicalities; sometimes this is a tactic to get one’s way, sometimes just ignorance.

Why does this matter? It matters because we won’t get technology that inculcates ethical action in us and that opens the scope of human imagination by accident; we need the right people with the right ideas to build it. In the case of the Internet, we were particularly fortunate. To illustrate this, I’m going to go through two archetypal values that shaped what the Internet became, and explore how things could have gone otherwise.

Robustness

See below a passage from RFC 1122. It’s on the long side, but I reproduce it in full for you to enjoy the style and vision:

At every layer of the protocols, there is a general rule whose application can lead to enormous benefits in robustness and interoperability [IP:1]:

“Be liberal in what you accept, and conservative in what you send”

Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; mere human malice would never have taken so devious a course!

Adaptability to change must be designed into all levels of Internet host software. As a simple example, consider a protocol specification that contains an enumeration of values for a particular header field — e.g., a type field, a port number, or an error code; this enumeration must be assumed to be incomplete. Thus, if a protocol specification defines four possible error codes, the software must not break when a fifth code shows up. An undefined code might be logged (see below), but it must not cause a failure.

The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features. It is unwise to stray far from the obvious and simple, lest untoward effects result elsewhere. A corollary of this is “watch out for misbehaving hosts”; host software should be prepared, not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility.

This is not just good technical writing; this is some of the best writing. In just a few lines, Postel assures us of the “not a matter of whether but when” orientation that can be applied almost universally, and which almost predicts Taleb’s Ludic Fallacy: the things that really hurt you are those for which you weren’t being vigilant, especially those that don’t belong to familiar, mathematical-feeling or game-like scenarios. Taleb identifies another error type, not planning enough for the scale of the damage; Postel understood that in a massively interconnected environment, small errors can compound into something disastrous.

Then Postel explains one of the subtler parts of his imperative: on first reading, I had thought that “Be liberal in what you accept” meant “Permit communications that are not fully compliant with the standard, but which are nonetheless parseable.” It goes beyond this, meaning that one should do so while being liberal in an almost metaphorical sense: being tolerant of, and therefore not breaking down in response to, aberrant behavior.

This is stunningly imaginative: Postel set out how Internet hosts might communicate without imposing uniform versions of the software on all Internet users. Remember, as I mention in this essay’s counterpart on freedom, the Internet is stunningly interoperable: today, in 2021, you still can’t reliably swap storage media between Mac and Windows, but it’s so easy to hook new devices up to the Internet that people seem to say why not, giving us Internet toothbrushes and fridges.

Finally, the latter part, calling on hosts to be “conservative in what you send,” is likewise more subtle than one might gather on first reading. It doesn’t mean that one should merely adhere to the standards (which is hard enough); it means do so while avoiding anything that, while permitted, risks causing issues in other devices that are out of date or not set up properly. Don’t just adhere to the standard: imagine whether some part of the standard might be obscure or new enough that using it might cause errors.

This supererogation reaches out of the bounds of mere specification and into philosophy.
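
To make the principle concrete, here is a minimal Python sketch of a hypothetical protocol endpoint; the error codes and names are invented for illustration, echoing the enumeration example from RFC 1122 above:

```python
import logging

# A hypothetical spec that, today, defines four error codes.
KNOWN_ERROR_CODES = {0: "ok", 1: "timeout", 2: "refused", 3: "corrupt"}

def receive_error_code(code: int) -> str:
    """Liberal in what we accept: an undefined code is logged, not fatal."""
    if code not in KNOWN_ERROR_CODES:
        logging.warning("unknown error code %d; treating as generic error", code)
        return "unknown"
    return KNOWN_ERROR_CODES[code]

def send_error_code(code: int) -> int:
    """Conservative in what we send: only emit codes every peer understands."""
    if code not in KNOWN_ERROR_CODES:
        raise ValueError(f"refusing to send undefined code {code}")
    return code

print(receive_error_code(4))  # a "fifth code" arrives: logged, no crash
```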

Postel’s Law is, of course, not dogma, and people in the Internet community have put forward proposals to move beyond it. I’m already beyond my skill and training, so can’t comment on the specifics here, but wish to show only that the Law is philosophical and beautiful, not necessarily perfect and immortal.

Simplicity

See RFC 3439:

“While adding any new feature may be considered a gain (and in fact frequently differentiates vendors of various types of equipment), but there is a danger. The danger is in increased system complexity.”

And RFC 1925:

“It is always possible to aglutenate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.”

You might not need more proof than the spelling error to understand that the Internet was not created by Gods. But if you needed more, I wish for you to take note of how these directives relate to a particular style of creation, the implication being that the Internet could have gone many other ways and would have made our lives very different.

Meanwhile, these ideas are popular but actually quite against the grain, overall. With respect to the first point, it’s quite hard to find arguments to the contrary; this seems to be because features are the only way to get machines to do things, and doing things is what machines are for. This seems to be the same reason why there’s no popular saying meaning “more is more” but we do have the saying “less is more”: more is actually more, but things get weird with scale.

The best proponents for features, and lots of them, are certainly software vendors themselves, like Microsoft.

Again, I’m not saying that features are bad—everything your computer does is a feature. This is, however, why it’s so tempting to increase them without limit.

Deliberately limiting features, or at least spreading features among multiple self-contained programs, appears to have originated within the Unix community, and is best encapsulated by what is normally called the Unix philosophy. Here are my two favorite points (out of four, from one of the main iterations of the philosophy):

  1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
  2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information.

The first point, there, neatly encompasses the two ideas referenced above in the RFCs: don’t add too many features, and don’t try to solve all your problems with one thing.
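
To make the first point concrete, here is a minimal sketch of a program in the Unix style, written in Python; the file and pipeline names are hypothetical:

```python
#!/usr/bin/env python3
"""A filter in the Unix style: read lines on stdin, write results to stdout.

It does one thing (drops blank lines) and adds nothing extraneous, so its
output can feed any other program, as yet unknown, via a pipe:

    cat notes.txt | python3 drop_blank.py | sort | uniq
"""
import sys

for line in sys.stdin:
    line = line.rstrip()
    if line:         # do one thing: keep only non-blank lines
        print(line)  # don't clutter output with headers or decoration
```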

This philosophy is best expressed by the Internet Protocol layer of the stack (see the first section of this essay for our foolishly heathen recap of the layers). It is of course tempting to have IP handle more stuff; right now all it does is route traffic between the end users, and those users are responsible for anything more clever than that. This confers two main advantages:

  1. Simple systems mean less stuff to break; connectivity between networks is vital to the proper function of the Internet, better to lighten the load on the machinery of connection and have the devices on the edge of the network be responsible for what remains.
  2. Adding complex features to the IP layer, for example, would add new facilities that we could use; but any new feature imposes a cost on all users, whether it’s widely used or not. Again, better to keep things simple when it comes to making connections and transmitting data, and get complex on your own system and your own time.

At risk of oversimplifying: the way the Internet is derives from a combination of technical considerations, ingenuity, and many philosophies of technology. There are, one can imagine, better ways in which we could have done this; but for now I want to focus on what could have been: imagine if the Internet had been built by IBM (it would have been released in the year 2005 and would require proprietary hardware and software) or Microsoft (it would have come out at around the same time, but would run via a centralized system that crashes all the time).

Technology is personal first, philosophical second, and technical last; corollary: understand the philosophy of tech, and see to it that you and the people that make your systems have robust and upright ideas.

Who Owns and Runs the Internet?

As seems to be the theme: there’s a good deal of confusion about who owns and runs the Internet, and our intuitions can be a little unhelpful because the Internet is an odd creature.

We have a long history of understanding who owns physical objects like our computers and phones, and if we don’t own them fully, have contractual evidence as to who does. Digital files can be more confusing, especially if stored in the cloud or on third party services like social media. See this piece’s counterpart on freedom for my call to action to own and store your stuff.

That said, a great deal of the Internet, in terms of software and conceptually, is hidden from us, or at least shows up in a manner that is confusing.

The overall picture looks something like this (from the Internet Engineering Task Force):

“The Internet, a loosely-organized international collaboration of autonomous, interconnected networks, supports host-to-host communication through voluntary adherence to open protocols and procedures defined by Internet Standards.”

Hardware

First, the hardware. Per its name, the Internet interconnects smaller networks. Those networks—like the one in your home, an office network, one at a university, or something ad hoc that you set up among friends—are controlled by the uncountable range of individuals and groups that own networks and/or the devices on them.

Don’t forget, of course, that the ownership of this physical hardware can be confusing, too: it’s my home network, but Comcast owns the router.

Then you have the physical infrastructure that connects these smaller networks: fiberoptic cables, ADSL lines, wireless (both cellular and WISP setups), all owned by internet service providers (ISPs). Quite importantly, the term ISP says nothing about nature or organizational structure: we often know ISPs as huge companies like AT&T, but ISPs can be municipal governments, non-profits, small groups of people or even individuals.

Don’t assume that you have to settle for internet service provided by a supercorporation. There may be alternatives in your area, but their marketing budgets are likely small, so you need to look for them.

ISPs have many different roles, and transport data varying distances and in different ways. Put it this way: to get between two hosts (e.g. your computer and a webserver) the data must transit over a physical connection. But there is no one organization that owns all these connections: it’s a patchwork of different networks, of different sizes and shapes, owned by a variety of organizations.

To the user, the Internet feels like just one thing: we can’t detect when an Internet data packet has to transition between, say, AT&T’s cabling and Cogent Communications’—it acts as one thing because (usually) the ISPs coordinate to ensure that the traffic gets where it is supposed to go. The implication of this (which I only realized upon starting research for this article) is that the ISPs have to connect their hardware together, which is done at physical locations known as Internet exchange points, like the Network Access Point of the Americas, where more than 125 networks are interconnected.

Intangibles

The proper function of the Internet relies heavily on two modes of identifying machines and resources online: IP addresses and domain names. There are other things, but these are the most important and recognizable.

At the highest level, ICANN (the Internet Corporation for Assigned Names and Numbers) manages these intangibles. ICANN is a massively confusing and complicated organization to address, not least because it has changed a great deal, and because it delegates many of its important functions to other organizations.

I’m going to make this very quick and very simple, and for those who would like to learn more, see the Wikipedia article on Internet governance. ICANN is responsible for three of the things we care about: IP addresses, domain names, and Internet technology standards; there’s more, but we don’t want to be here all day. There must be some governance of IP addresses and domain names, if nothing else so that we ensure that no single IP address is assigned to more than one device, or one domain name assigned to more than one owner.

The first function ICANN delegates to one of several regional organizations (the regional Internet registries) that hand out unique IP addresses. IP addresses themselves aren’t really ownable in a normal sense; they are assigned.

The second function was once handled by ICANN itself, now by an affiliate organization, Public Technical Identifiers (PTI). Have you heard of this organization before? It is very important, but doesn’t even have a Wikipedia page.

PTI, among other things, is responsible for managing the domain name system (DNS) and for delegating to the companies and organizations that manage these domains, such as GoDaddy, VeriSign and Tucows. I might register my domain with GoDaddy, for example, but, quite importantly, I don’t own it; I just have the exclusive right to use it.

These organizations allow users to register the domains, but PTI itself manages the very core of the DNS, the root zone. The way DNS works is actually rather simple (a code sketch follows the steps below). If your computer wishes, say, to pull up a website at site.example.com:

  1. It will first ask the DNS root where to find the server responsible for the com zone.
  2. The DNS root file will tell your computer the IP address of the server responsible for com.
  3. Your machine will then go to this IP address and ask the server where to find example.com.
  4. The com server will tell you where to find example.com.
  5. And, finally, the example.com server will tell you where to find site.example.com.
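
Here is a toy model of that delegation chain in Python; the zone data is entirely invented (192.0.2.x addresses are reserved for documentation), and a real resolver queries actual servers over the network at each step:

```python
# Each "server" knows only where to send you next; only the last knows the answer.
ZONES = {
    "root":        {"com": "192.0.2.1"},               # root knows the com server
    "com":         {"example.com": "192.0.2.2"},       # com knows example.com's server
    "example.com": {"site.example.com": "192.0.2.3"},  # the final answer
}

def resolve(name: str) -> str:
    server = "root"
    labels = name.split(".")
    # Walk the name from the right: com, then example.com, then site.example.com.
    for i in range(len(labels) - 1, -1, -1):
        zone = ".".join(labels[i:])
        answer = ZONES[server][zone]
        print(f"asked {server!r} for {zone!r} -> {answer}")
        server = zone if zone in ZONES else answer
    return answer

print(resolve("site.example.com"))  # 192.0.2.3
```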

You might have noticed that this is rather centralized; it’s not fully centralized in that everything after the first lookup (where we found how to get to com) is run by different people, but it’s centralized to the extent that PTI controls the very core of the system.

Fundamentally, however, the PTI can’t prevent anyone else from providing a DNS service: computers know to go to the official DNS root zone, but can be instructed to get information from anywhere. As such, here are some alternatives and new ideas:

  • GNS, via the GNUNET project, which provides a totally decentralized name system run on radically different principles.
  • Handshake, which provides a decentralized DNS, based on a cryptographic ledger.
  • OpenNIC, which is not as radical as GNS or Handshake, but which, not being controlled by ICANN, provides a range of top-level domains not available via the official DNS (e.g. “.libre” which can be accessed by OpenNIC users only).

The Internet Engineering Task Force (IETF) handles the third function, which I will explore in the next section.

Before ICANN, Jon Postel, mentioned above, handled many of these functions personally: on a voluntary basis, if you please. ICANN, created in 1998, is a non-profit: it was originally contracted to perform these functions by the US Department of Commerce. In 2016, the Department of Commerce made it independent, performing its duties in collaboration with a “multistakeholder” community, made up of members of the Internet technical community, businesses, users, governments, etc.

I simply don’t have the column inches to go into detail on the relative merits of this, e.g. which is better, DOC control or multistakeholder, or something else? Of course, there are plenty of individuals and governments that would have the whole Internet, or at least the ICANN functions, be government controlled: I think we ought to fight this with much energy, because we can guarantee that any government with this level of control would use it to victimize its enemies.

I think I’m right in saying that in 1998 there was no way to coordinate the unique assignment of IP addresses and domain names without some central organization. Not any more: Handshake, GNUNET (see above) and others are already pioneering ways to handle these functions in a decentralized way. See subsequent articles in this series for more detail.

Dear reader, you may be experiencing a feeling somewhat similar to what I felt, such as when first discovering that there are alternative name systems. That is, coming upon the intuition that the way technology is generally set up today is not normal or natural, but rather that it is done by convention and, at that, is one convention among many alternatives.

If you are starting to feel something like this, or already do, I encourage you to cultivate this feeling: it will make you much harder to deceive.

Standards

The Internet is very open, meaning that all you need, really, to create something for the Internet is the skill to do so; this doesn’t mean you can do anything or that anything is possible (there are legal and technical limitations). One of the many results of this openness is that no single organization is responsible for all the concepts and systems used on the Internet.

This is not unlike how, in open societies, there is no single organization responsible for all the writing that is published: you only get this sort of thing in dictatorships. Contrast this, for example, to the iPhone and its accompanying app store, for which developers must secure permission in order to list their apps. I, unlike some others, say that this is not inherently unethical; however, we are all playing a game of choose your own adventure, and the best I can do is commend the freer adventure to you.

There are, however, a few very important organizations responsible for specifying Internet systems. Before we address them, it’s worth looking at the concept of a standards organization. If you’re already familiar, please skip this.

  1. What is a standard, in this context? A standard is a description of the way things work within a particular system such that, if someone follows that standard, they will be able to create things that work with others that follow the standard. ASCII, USB, and, of course, Internet Protocol are all standards.
  2. Why does this matter? I address this question at length in this piece’s counterpart on freedom; put simply, standards are like languages, they facilitate communication. USB works so reliably, for example, because manufacturers and software makers agree to the standard, and without these agreements, we the users would have no guarantee that these tools would operate together.
  3. Who creates the standards? Anyone can create a standard, but standards matter to the extent that they are adopted by the creators of technology and used. Quite commonly, people group together for the specific purpose of creating a standard or group of standards, sometimes this might be a consortium of relevant companies in the field (such as the USB Implementers Forum) or an organization specifically set up for this purpose, such as the ISO or ITU. Other times, a company might create a protocol for its own purposes, which becomes the de facto standard; this is often but not necessarily undesirable, because that firm will likely have created something to suit their own needs rather than those of the whole ecosystem. Standards like ASCII and TCP/IP, for example, are big exceptions to the popular opprobrium for things designed by committees.

In the case of the Internet, the main standards organization is the Internet Engineering Task Force (IETF); you can see their working groups page for a breakdown of who does what. Quite importantly, the IETF is responsible for specifying Internet Protocol and TCP, which, you will remember from above, represent the core of the Internet system.

The IETF publishes the famous RFC series that I have referenced frequently. The IETF itself is part of the Internet Society, a non-profit devoted to stewarding the Internet more broadly. Do you care about the direction of the Internet? Join the Internet Society: it’s free.

There are other relevant standards bodies, far too many to count; it’s incumbent upon me to mention that the World Wide Web Consortium handles the Web, one of the Internet’s many mistaken identities.

Nobody is forcing anyone to use these standards; nor is the IETF directly financially incentivized to have you use them. Where Apple makes machines that adhere to its standards and would have you buy them (and will sue anyone that violates its intellectual property), all the Internet Society can do is set the best standard it can and commend it to you, and perhaps wag its finger at things non-compliant.

If I wanted to, I could make my own, altered version of TCP/IP; the only disincentive to use it would be the risk that it wouldn’t work or, if it only played with versions of itself, that I would have no one to talk to. What I’m trying to say is that the Internet is very open, relative to most systems in use today: the adoption of its protocols is voluntary, manufacturers and software makers adhere to these standards because it makes their stuff work.

There is, of course, Internet coercion, and all the usual suspects are clamoring for control, every day: for my ideas on this subject, please refer to this piece’s counterpart on freedom.

Conclusion: We Need a Civics of Computer Networks of Arbitrary Scale, or We Are Idiots

I propose a new field, or at least a sub-field: the civics of CNASs; which we might consider part of the larger field of civics and/or the digital humanities. Quite importantly, this field is distinct from some (quite interesting) discussions around “Internet civics” that are really about regular civics, just with the Internet as a medium for organization.

I’m talking about CNASs as facilitating societies in themselves, which confer rights, and demand understanding, duties, and reform. And please, let’s not call this Internet Civics, which would be like founding a field of Americs or Britanics and calling our work done.

So, to recapitulate this piece in the CNAS civics mode:

  1. The subject of our study, the Internet, is often confused with the Web, not unlike the UK and England, Holland and the Netherlands. This case of mistaken identity matters because it deceives people as to what they have and how they might influence it.
  2. The Internet is also confused with the class of things to which it belongs: computer networks of arbitrary scale (CNAS). This is deceptive because it robs us of the sense (which citizens of one country get by looking at another country) that things can be done differently, while having us flirt with great fragility.
  3. The Internet’s founding fathers are much celebrated and quite well known in technical circles, but their position in the public imagination is dwarfed by that of figures from the corporate consumer world, despite the fact that the Internet is arguably the most successful technology in history. Because of this obscurity, there’s the sense that the Internet’s design is just so, normal or objective, or worse, magical, when quite the opposite is true: the Internet’s founding fathers brought their own philosophies to its creation, and the proper understanding of any thing can’t omit its founding and enduring philosophy.
  4. The Internet’s structure of governance, ownership and organization is so complex that it is a study unto itself. The Internet combines immense openness with a curious organizational structure that includes a range of people and interest groups, while centralizing important functions among obscure, barely-known bodies. The Internet Society, which is the main force behind Internet technology, is free to join, but has only 70,000 members worldwide; Internet users are both totally immersed in it and mostly disengaged from the idea of influencing it.

As I say in this piece’s counterpart on freedom, the Internet is a big, strange, unique monster: one that all the usual suspects would have us carve up and lobotomize for all the usual reasons; we must prevent them from doing so. This means trading in ignorance and disengagement for knowledge and instrumentality. Concurrently, we must find new ways of connecting and structuring those connections. If we do both of these things, we might have a chance of building and nurturing the network our species deserves.


Internet Walden: Introduction—Why We Should Be Free Online

Image credit: Marta de la Figuera

Goodness, truth, beauty. These are not common terms to encounter during a discussion of the Internet or computers; for the most part, the normal model seems to be that people can do good or bad things online, but the Internet is just technology.

This approach, I think, is one of the gravest mistakes of our age: thinking or acting as though technology is separate from fields like philosophy or literature, and/or that criticisms from other fields are either irrelevant or at best secondary to technicalities. This publication, serving the Digital Humanities, is part of a much-needed correction.

I say that technology can be just or unjust in the same sense that a law can: an unjust law doesn’t possess the sort of ethical failing available only to a sentient being, rather, it has an ethical character (such as fairness or unfairness) as does the action that it encourages in us. We should accept the burden of seeing these qualities as ways: towards or away from goodness, truth and beauty.

Such a way is akin to a method or a path, like for example meditation or the practice of empathy: it’s not necessarily virtuous in itself, but the idea is that with consistent application one develops one’s virtue, or undermines it. My claim is that this is especially true for the ways in which we use technology, both as individuals and collectively.

In Computer Lib, Ted Nelson describes “the mask of technology,” which serves to hide the real intentions of computer people (technicians, programmers, managers, etc.) behind pretend technical considerations. (“We can’t do it that way, the computer won’t allow it.”) There’s another mask that works in the opposite way: the mask of technological ignorance. We wear it either to avoid facing difficult ethical questions about our systems (hiding behind the fact that we don’t understand the systems) or as an excuse when we offload responsibilities onto others.

This essay concerns itself primarily with three ways: the secondary characteristics that lend themselves to our pursuit of goodness, truth and beauty, specifically in the technology of communication. They are freedom, interoperability and ideisomorphism; the last is a concept which I haven’t heard defined before, but which can be summarized thus: the quality of systems which are both flexible enough to express the complexity and nuance of human thought and which have features that lend themselves to the shape of our cognition. (Ide, as in idea; iso, as in equal to; morph, as in shape.)

We should care about freedom, because we require it to build and experiment with systems in pursuit of the good; interoperability, because it forces us to formulate the truth in its purest form and allows us to communicate it; ideisomorphism, because it allows us to combine maximal taste and creativity with minimal technological impositions and restrictions in our pursuit of beauty. For details on these ways, please read on.

I won’t claim that this is a complete treatment of the ethical character of machines; my subject is machines for communication, and the best I can hope for is to start well.

In short, bad communications technology causes and covers up ethical failures. Take off the mask. We have nothing to lose but convenient excuses, and stand to gain, firstly, tools that act as force-multipliers for our best qualities; secondly, some of the ethical clarity that comes from freedom and diverse conversation; and, if nothing else, a better understanding of ourselves.

Oliver Meredith Cox, January 28th, 2021

An Introduction to Walden: Life on the Internet

I argue that anyone who cares about human flourishing should concern themselves with Internet freedom, interoperability and ideisomorphism; I make this claim because the ethical character of the Internet appears to be the issue which contains or casts its shadow over the greatest number of other meaningful issues, and because important facts about the nature of the Internet are ways to inculcate our highest values in ourselves.

The Internet offers us the opportunity to shed the mask of technological ignorance: by understanding its proper function we should know how to spot the lies, and how to use the technology freely. We might then transform it into the retina and brain of an ideisomorphic system that molds to and enhances, rather than constricts, our cognition.

As such, with respect to the Internet, I say that we should:

  1. Learn/understand the tools and their nature.
  2. Use, build or demand tools that lend themselves to being learned.
  3. Use, build or demand tools that promote the scale, nuance and synchrony of human imagination, alongside those that nurture people’s capacity to communicate.

This piece is one part of a two-part introduction to the series; the parts are equal in importance, and neither comes first in order.

The other (What Is the Internet?) is an introduction to the technology of the Internet itself, so whenever questions come up about such things, or if anything is not clear, consider either referring to that piece or reading it first. Indeed, one of the reasons why I think we should turn our attention very definitely to the Internet is the fact that most people know so little about it, categorize it incorrectly and mistake it for other things (many confuse the Internet and the Web, for example).

Prospectus

  • Part 1: Introduction
    • Why We Should be Free Online: (this article) in which I explain why you should care about Internet freedom.
    • What Is the Internet: An explanation of what the Internet is (it probably isn’t what you think it is).
  • Part 2: Diary
    • Hypertext (one of an indefinite number of articles on the most popular and important Internet technologies)
    • Email
    • Cryptocurrency
    • Your article here: want to write an article for this series? Reach out: oliver dot cox at wonk bridge dot com.
  • Part 3: Conclusion
    • What Should We Create? A manifesto for what new technology we should create for the purpose of communicating with each other.

Call to Action

Do you care about Internet freedom and ethics? Do you want to take an Internet technology, master it and use it on your own terms? Do you want to write about it? Reach out: oliver dot cox at wonk bridge dot com.

A Note on Naming

For a full discussion on naming conventions, please see this piece’s companion, What Is the Internet?. However, I must clarify something up front. Hereafter, I will use a new term: computer network of arbitrary scale (CNAS [seenas]), which refers to a network of computers and/or the technology used to facilitate it, which can achieve arbitrarily large scale.

I use it to distinguish between 1. the Internet in the sense of a singular brand, and 2. the class of networks of which the Internet is one example. The Internet is our name for the network running on a particular set of protocols (TCP/IP); it is a CNAS, and today it is the only CNAS. Imagine if a single brand so dominated an industry, like if Ford sold 99.9 percent of cars, that there was no word for “car” (you would just say “Ford”), and you could hardly imagine the idea of there being another company. But I predict that soon there will be more, and that they will be organized in different ways and run on different protocols.

Why the Internet Matters

First: the question of importance. Why do I think that the Internet matters relative to any other question of freedom that one might have? I know very well that many people with strong opinions think their subject the most important of all: politics, art, culture, literature, sport, cuisine, technology, engineering; if you care about something, it is likely that you think others should care, too. I know that there isn’t much more that I can do than to take a number, stand in line, and make my case that I should get an audience with you.

Here’s why I think you should care:

1. The Internet is engulfing everything we care about.

I won’t bore you with froth on the number of connected devices around or how everyone has a smartphone now; rather, numerous functions and technologies that were once separate, many of which preceded the Internet, are being replaced by it, taking place on it, or merging with it: telephony, mail, publishing, science, the coordination of people, commerce.

2. The Internet is the main home of both speech and information retrieval.

This is arguably part of the first point, but I think it deserves its own column inches: most speech and information exchange now happens online, most legacy channels (such as radio) are partly transmitted over the Internet, and even those media that are farthest from the digital (perhaps print) are organized using the Internet. At risk of over-reaching, I say that the question of free speech in general is swiftly becoming chiefly a question of free speech online. Or, conversely, that offline free speech is relevant to the extent that online speech isn’t free.

3. The Internet is high in virtuality.

When I claim above that this is the issue of all issues, someone might respond, “What, is it more important than food?” That is a strong point, and I am extremely radical when it comes to food: I think that people should understand what they eat, know what’s in it, and hold food corporations to account, and that to the extent that we don’t know how to cook or work with food, we will always be victim to people who want to control or fleece us. However, the Internet and cuisine are almost as far apart on the scale of virtuality as it is possible to be.

Virtuality, as defined by Ted Nelson, describes how something can seem or feel a certain way, as opposed to how it actually is, physically. For example, a ladder has no virtuality: (usually) the way it looks and how we engage with it corresponds 100% to the arrangement of its parts. A building, on the other hand, has much more virtuality: the lines and shape of a building give it a mood and feel, beyond the mere structure of the bricks, cement and glass.

Food has almost no virtuality (apart from cuisine of immense artifice); the Internet, however, has almost total virtuality: the things that we do with it, the Web, email, cryptocurrency, have realities in the screen and in our imagination that are almost limitless, and the only physical things that we typically notice are the “router” box in our home, the WiFi symbol on our device, the engineer in their truck and, of course, the bill. This immense virtuality is what makes the Internet both so profound and so dangerous: there are things going on beneath the virtual that threaten our rights. You are free to the extent that you understand and control these things.

Ted Nelson explains virtuality during his TED conference speech (starting at 31:16).

4. The Internet has lots of technicalities, and the technicalities have bad branding.

All this stuff: TCP/IP, DNS, DHCP, the barrage of initialisms is hard to master and confusing, especially for those who are non-engineers or non-technical. I’m sorry, but I think we should all have to learn it or at least some of it. Not understanding something gives organizations with a growth obligation perhaps the best opportunity to extract profit or freedom from you.

5. The Internet is the best example that humanity has created of an open, interoperable system to connect people.

It is our first CNAS. As with fish and water, it is easy to forget what we have achieved in the form of the Internet: it connects people of all cultures, religions and nationalities (those that are excluded are usually so because of who governs them, not who they are), it works on practically all modern operating systems, it brings truths about the universe to those in authoritarian countries or oppressive cultures, and it connects the breadth of human thinkers together.

To see the profundity of this achievement, remember that, today, many Mac- and Windows-formatted disks are incompatible with the other system; that computer firms still attempt to lock their customers into their systems by trapping them in formats and ways of thinking that don’t work with other systems; and even that, culturally, some people refuse to use other systems or won’t permit other systems in their corporate, university or other departments.

Mac, Windows, GNU/Linux, Unix, BSD, Plan 9, you name it, it will be able to connect to the Internet; it is the best example of a system that can bridge types of technology and people. Imagine separate and incompatible websites, only for users of particular systems: this was an entirely possible outcome and we’re lucky it didn’t happen a lot more than the little it did (see Flash). The Internet, despite its failures and limitations, massively outclasses other technology on a regular basis, and is therefore something of a magnetic north, pulling worse, incompatible and closed systems along with it.

6. The Internet is part of a technological feedback loop.

As I mentioned in point 2 above, the Internet is now the main way in which we store, access and present information; the way in which we structure and present information today influences what we want to pursue in the future, the ideas we have and what, ultimately, we build. The Internet hosts and influences an innovation cycle:

  1. Available storage and presentation systems influence how we think
  2. The way we think influences our ideas
  3. Our ideas influence the technology we build, which takes us back to the start

This means that bad, inflexible, closed systems will have a detrimental effect on future systems, while open, flexible systems will engender better future systems. There is innovation, of course, but many design paradigms and ways of doing things get baked in, and sometimes are compounded. As such, I say that we ought to exert immense effort in creating virtuous Internet systems, such that these systems will compound into systems of even more virtue: much like how those who save a lot, wisely and early, are (allowing for market randomness, disaster and war) typically rewarded with comfort decades later.

Put briefly, the Internet combines being the most integrating and connecting force in history with difficulty and virtuality, all working in a feedback loop; it is the best we have, it is under constant threat, and we need to take action now.

The rest of this introduction will speak to the following topics:

  • Six imperatives for communications freedom
  • What we risk losing if we don’t shape the Internet to our values
  • Why, ultimately, I’m optimistic about technology and particularly the technology of connection
  • Why this moment of crisis tells us that we are overdue for taking action to improve the Internet and make it freer
  • What we have to gain

Six Imperatives for Communications Freedom

The technology of communication should:

  1. Be free and open source.
  2. Be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information.
  3. Have open and logical interfaces, and be interoperable where possible.
  4. Help users to understand and master it.
  5. Let users communicate in any style or format.
  6. Help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

1. The technology of communication should be free and open source.

First: what is free software? A program is free if it allows users the following:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute and make copies so you can help your neighbour.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

You will, dear reader, detect that this use of the word free relates to freedom, not merely to something being provided free of charge. Open source, although almost synonymous, is a separate concept promoted by a different organization: the Open Source Initiative promotes open source, and the Free Software Foundation, free software.

A note: I think that we should obey a robustness principle when it comes to software and licenses. Be conservative with respect to the software you access (i.e. obey the law; respect trademarks, patents and copyright; pay what you owe for software; donate to and promote projects that give you stuff without charging a fee); be liberal with respect to the software you create (i.e. make it free and open source wherever and to the extent possible).

Fundamentally, the purpose of free software is to maximize freedom, not to impoverish software creators or get free stuff; any moral system based on free software must build effective mechanisms to give developers the handsome rewards they deserve.

To dig further into the concept and its sister concept, why do we say open source? The word “source” here refers to a program’s source code, instructions usually in “high-level” languages that allow programmers to write programs in terms that are more abstract and closer to the ways in which humans think, making programming more intuitive and faster. These programs are either compiled or interpreted into instructions in binary (the eponymous zeroes and ones) that a computer’s processor can understand directly.

Having just these binary instructions is much less useful than having the source, because the binary is very hard (perhaps impossible in some cases) for humans to understand. As such, what we call open source might be termed software for which the highest level of abstraction of its workings is publicly available. Or: software that shows you how it does what it does.
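To make the source-versus-binary distinction concrete, here is a minimal sketch in Python (the function and its name are mine, purely for illustration):

```python
import dis

# Source code: abstract, close to the way humans think.
def fahrenheit_to_celsius(f):
    """Convert a temperature in Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# The compiled form: what the machine side actually runs.
# dis.dis prints the CPython bytecode for the function above;
# it computes the same thing, but is far harder for a human to read.
dis.dis(fahrenheit_to_celsius)
```

Even here the bytecode can be puzzled out with effort; with native machine code, the gap between what you have and what you can understand grows much wider.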

Point 0. matters because the technology of communication is useful to the extent that we can use it: we shouldn’t use or create technology, for example, that makes it impossible to criticise the government or religion.

Of course, one might challenge this point, asking, for example, whether or why software shouldn’t include features that prevent us from breaking the law. I have ideas and opinions on this, but will save them for another time. Suffice to say that free software has an accompanying literature as diverse and exacting as the commentary on the free speech provision of the First Amendment: there is much debate about how exactly to interpret and apply these ideas, but that doesn’t stop them from being immensely useful.

Point 1. is extremely important for any software that concerns privacy, security or, for that matter, anything important. If you can’t inspect your software’s core nature, how can you see whether it contains functions that spy on you or provide illicit access to your computer, or bugs that its creators missed that will later provide unintentional access to hackers? See the WannaCry debacle for a recent example of a costly and disastrous vulnerability in proprietary software.

Point 2. matters for communications in that when software or parts of software can be copied and distributed freely, this maximises the number of people that have access and can, thus, communicate. It matters also in that if you can see how a system works, it’s much easier to create systems that can talk to it.

However, the “free” in free software is a cause of confusion, as it makes it sound like people who create free or open source software will or can never make money. This is a mistake worth correcting:

  1. Companies can and do charge for free software. For example, Red Hat charges for its GNU/Linux operating system distro, Red Hat Enterprise Linux. The fee gets you the operating system under the exclusive Red Hat trademark, plus support and training; the operating system itself is free software (you can read up on this firm to see that they really have made money).
  2. A good many programmers are sponsored to create free software by their employers; at one point, Microsoft developers were among the biggest contributors to the Linux kernel (open source software like Linux is just too good to ignore).

Point 3. might be clearer with the help of a metaphor. Imagine if you bought a car, but, upon trying to fit a catalytic converter, or a more efficient engine, were informed that you were not permitted to do so, or even found devices that prevented you from modifying it. This is the state that one finds when trying to improve most proprietary software.

In essence, most things that make their way to us could be better, and in the realm of communication, surmounting limitations inherent in the means of communication opens new ways of expressing ourselves and connecting with others. Our minds and imaginations are constrained by the means of communication just as they are by language; the more freedom we have, the better. Look, for example, to the WordPress ecosystem and range of plugins to see what people will do given the ability to make things better.

There are names in tech that are well known among the public, most notably Bill Gates and Microsoft, Steve Jobs and Apple; we teach school children about them, and rightly so: they and those like them have done a great deal for a great many. However, I argue that there are countless other names of a very different type whose stories you should know. Here are two: Jon Postel, a pioneer of the Internet who made the sort of lives we live now possible through immense wisdom and foresight (his brand: TCP/IP); and Linus Torvalds, who created the Linux kernel, which (usually installed as the core of the GNU operating system) powers all of the top supercomputers, most servers, most smartphones and a non-trivial share of personal computers.

Richard Dawkins has an equation to evaluate the value of a theory:

value = what it explains ÷ what it assumes

Here’s my formulation but for technology:

value = what it does ÷ the restrictions accompanying it

Such restrictions include proprietary data structures, non-interoperable interfaces, and anything else that might limit the imagination.

Gates and Jobs’ innovations are considerable, but almost all of them came with a set of restrictions that separate users from users and communities from communities. Postel and Torvalds, their collaborators, and others like them in other domains not mentioned, built and build systems that are open and interoperable, and that generate wealth for the whole world by sharing new instrumentality with everyone. All I’m saying is that we should celebrate this sort of innovator a lot more.

2. The technology of communication should be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information

I will try to be brief with what risks becoming a sprawling point. In encounter after encounter, and interaction after interaction, we users sign ideas, identities, privacy and control over how we communicate away to unaccountable corporations. This is a hazard because (confining ourselves only to social media and the Web) we might pour years of work into writing and building an audience, say, on Twitter, only to have everything taken away because we stored our speech on a medium that we didn’t own; a network like Twitter also represents a single choke-point for authoritarian regimes like the government of Turkey.

On a slightly subtler note, expressing our ideas via larger sites makes us dependent on them for the conversation around those ideas. Also: conversations should accompany the original material, not live on social profiles far from it, where they are sprayed into the dustbin by the endless stream of other content.

We the users should pay for our web-hosting and set up our own sites: we already have the technology necessary to do this. If you care about it, own it.

3. The technology of communication should have open and logical interfaces, and be interoperable where possible.

What is interoperability, the supposed North Star here? I think the best way to explain interoperability is to think of it as a step above compatibility. Compatibility means that something can work properly in connection with another thing, e.g. a given USB microphone is compatible with, say, a given machine running a particular version of Windows. Interoperability takes us a step further, requiring there to be some standard (usually agreed by invested organizations and companies) which 1. is publicly available and 2. as many relevant parties as possible agree to obey. USB is a great example: all devices carrying the USB logo will be able to interface with USB equipment; these devices are interoperable with respect to this standard.

There are two main types of interoperability: syntactic and semantic. The former refers to the ability of machines to transmit data effectively: this means that there has to be a standard for how information (like images, text, etc.) is encoded into a stream of data that you can transmit, say, down a telephone line. These days, much of this is handled without us noticing or caring. If you’d like to see this in action, right-click or ⌘-click on this page and select “View Page Source”: you ought to see a little piece of code that says charset="utf-8". This is the webpage announcing which encoding standard it is using; the page is interoperable with devices and software that can use the UTF-8 standard.
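As a toy illustration of syntactic interoperability, here is the UTF-8 round trip in Python (the sample text is my own):

```python
# Two systems that both speak UTF-8 can exchange any text reliably.
text = "Walden Pond, 瓦尔登湖, Ουώλντεν"   # several writing systems in one string

encoded = text.encode("utf-8")     # sender: characters -> bytes, per the standard
decoded = encoded.decode("utf-8")  # receiver: bytes -> characters, per the same standard

assert decoded == text             # both parties recover exactly the same characters
print(len(encoded), "bytes travel the wire")
```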

Semantic interoperability is much more interesting: it builds on syntactic interoperability and adds the ability actually to do work with the information in question. Your browser has this characteristic in that (hopefully) it can take the data that came down your Internet connection and use it to present a webpage that looks the way it should.
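A toy sketch of the difference, assuming a made-up message format: both sides can parse the bytes (syntactic success), but only a shared standard guarantees they agree on what the fields mean (semantic success):

```python
import json

# Syntactic interoperability: both sides can parse the same bytes into a structure.
payload = '{"date": "04/05/2021", "amount": 100}'
message = json.loads(payload)       # parsing succeeds for everyone

# Semantic interoperability is the further step of agreeing what fields *mean*:
# a US system reads "04/05/2021" as April 5th, a UK system as the 4th of May.
# An unambiguous shared standard (e.g. ISO 8601: "2021-04-05") removes the doubt.
print(message["date"])
```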

Sounds great, right? Well, people break interoperability all the time, for a variety of reasons:

  1. Sometimes there’s no need: One-off, test or private software projects usually don’t need to be interoperable.
  2. Interoperability is hard: The industry collaboration and consortia necessary to create interoperable standards require a great deal of effort and expense. These conversations can be dry and often acrimonious: we owe a great deal to those who have them on our behalf.
  3. Some organizations create non-interoperable systems for business reasons: For example, a company might create a piece of software that saves user files in a proprietary format so that users must keep using (and paying for) the company’s software to access their information.
  4. Innovation: New approaches eventually get too far from older technology to work together; sometimes this is a legitimate reason, sometimes it’s an excuse for reason 3.

Reason three is never an excuse for breaking interoperability, reason two is contingent, and reasons one and four are fine. In cases where it is just too hard or expensive to work up a common, open standard, creators can help by making interfaces that work logically and predictably and, if possible, documenting them: this way collaborators can at least learn how to build compatible systems.

4. The technology of communication should help users to understand and master it.

Mastery of something is a necessary condition for freedom from any force that would control it. To the extent that you don’t know how to build a website or operating system or a mail server, you are a captive audience for those who will offer to do it for you. There is nothing wrong with this, per se, but I argue that the norm should be that any system that makes these things easy should be pedagogical: it should act as a tutorial to get you at least to the stage of knowing what you don’t know, rather than keeping your custom through ignorance. We should profit through assisting users in the pursuit of excellence and mastery.

Meanwhile, remember virtuality: the faulty used car might have visible rust that scares you off, or might rattle on the way home, letting you know that it’s time to have a word with the salesperson. Software that abuses your privacy or exposes your data might do so for years without your realizing: all this stuff can happen in the background. Software, therefore, should permit and encourage users to “pop the hood” and have a look around.

Users: understand your tools. Software creators: educate your users.

5. The technology of communication should let users communicate in any style or format.

Modern Internet communication systems, particularly the Web and to an extent email, beguile us with redundant and costly styling, user interfaces, images, etc. The most popular publishing platforms, website builders like WordPress and social media, force users either to adopt particular styling or to make premature or unnecessary choices in this regard. The medium is the message: forced styling changes the message; forced styling choices dilute the message.

6. The technology of communication should help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

This is a cry for action for the future: expect more at the end of this article series. Picture this: all humanity’s knowledge, artistic and other cultural creations, visibly sorted, thematically and conceptually, via sets, links and other connections, down to the smallest functional unit. This would allow any user, from a researcher to a student to someone who is curious to someone looking for entertainment, to see how any thing created by humanity relates to all other things. This system would get us a lot closer to ideisomedia. Let’s call it the Knowledge Explorer.

This is not the Web. The Web gave us the ability to publish easily and electronically, but because links on the Web point only one way, there can exist no full view of the way in which things are connected. Why? If you look at website x.com, you can quite easily see all the other websites to which it links: all you need to do is look at all the pages on that site and make a record.

Now, what if you asked which other websites link to x.com? The way the Web functions now, with links stored in the page and going one way, the only way to see what other websites link to a given site is to inspect every other site on the rest of the Web. This is why the closest things we have to an index of all links are expensive proprietary tools like Google and SEMRush. If links pointed both ways, seeing how things are connected would be trivial.
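Here is a minimal sketch of the problem, using a hypothetical three-site “web”: outbound links are cheap to read because each page stores them, but finding backlinks means inverting the whole graph:

```python
# Outbound links are stored in the page itself, so they are easy to read;
# to learn who links *to* you, you must visit every page and invert the map.

web = {  # hypothetical pages and the one-way links each of them stores
    "x.com": ["y.com", "z.com"],
    "y.com": ["x.com"],
    "z.com": ["y.com"],
}

def backlinks(target, pages):
    """Invert the link graph: requires inspecting every page, not just the target."""
    return [page for page, links in pages.items() if target in links]

print(backlinks("x.com", web))  # ['y.com'], found only by scanning the whole 'web'
```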

Jaron Lanier explains this beautifully in the video below (his explanation starts at 15:48):

Google and SEMRush are useful, but deep down it’s all a travesty: we users, companies, research groups, universities and other organizations set down information in digital form, but practically throw away useful information on how it is structured. We have already done the work to realize the vision of the Knowledge Explorer, but because we have bad tools, the work is mostly lost. Links, connections and analogies are the fuel and fire of thinking; they ought to be the common inheritance of humanity, and we should build tools that let us form and preserve them properly.

As you might have already realized, building two-way links for a hypertext system is non-trivial. All I can say is that this problem has been solved. More on this much later in this series.

This concludes the discussion of my six imperatives. Now, what happens if these ideas fail?

What Do We Have to Lose?

1. Freedom

People with ideas more profound than mine have explored the concept of freedom of expression more extensively than I can here and have been doing so for some time; there seems little point in rehearsing well-worn arguments. But, as this is my favourite topic, I will give you just one point, on error-correction. David Deutsch put it like this, in his definition of “rational:”

Attempting to solve problems by seeking good explanations; actively pursuing error correction by creating criticisms of both existing ideas and new proposals.

The generation of knowledge is principally about the culling of falsehoods rather than the accrual of facts. The extent to which we prevent discourse on certain topics, or hold certain facts or ideas to be unalterably true or free from criticism, is the extent to which we prevent error correction in those areas. This is something of a recapitulation of Popper’s idea of falsification in formal science: in essence, you can never prove that something is correct, only that it is incorrect; therefore what we hold to be correct remains so only until we find a way to disprove it.

As mentioned above with respect to the First Amendment, I’m aware of how contentious this issue is; as such, I will set out below a framework, which, I hope, simplifies the issue and provides both space for agreement and firm ground for debate. Please note that this framework is designed to be simple and generalizable, which requires generalizations: my actual opinions and the realities are more complex, but I won’t waste valuable column inches on them.

My framework for free expression online:

  • In most countries (especially the USA and those in its orbit), most spaces are either public or private; the street is public, the home is private, for example. (When I say “legal” in this section, I mean practically: incitement to violence muttered under one’s breath at home is irrelevant.)
    • In public, one can say anything legal.
    • In private, one can say anything legal and permitted by the owner.
  • Online, there are only private spaces: 1. The personal devices, servers and other storage that host people’s information (email, websites, blockchains, chat logs, etc.) are owned just like one owns one’s home; 2. Similarly, the physical infrastructure through which this information passes (fiberoptic cables, satellite links, cellular networks) is owned also, usually by private companies like ISPs; some governments or quasi-public institutions own infrastructure, but we can think of this as public only in the sense that a government building is, therefore carrying no free speech precedent.
    • Put simply, all Internet spaces are private spaces.
    • As in the case of private spaces in the physical world, one can say anything legal and permitted by the owner.

From this framework we can derive four conclusions:

  1. There is nothing analogous to a public square on the Internet: think of it instead as a variety of private homes, halls, salons, etc. You are free to the extent that you own the technology of communication or work with people who properly uphold values of freedom, hence #2 of my six imperatives. This will mean doing things that aren’t that uncommon (like getting your own hosting for your website) through to things that are very unusual (like creating our own ISPs) and more. I’m not kidding.
  2. Until we achieve imperative #2, and if you care about free expression, you should a. encrypt your communications, b. own as many pieces of the chain of communication through which your speech passes as you can, and c. collaborate and work with individuals, organizations and companies that share your values.
  3. We made a big mistake in giving so much of our lives and ideas to social networks like Twitter and Facebook, and their pretended public squares. We should build truly social and free networks, on a foundation that we actually own. Venture capitalists Balaji Srinivasan and Naval Ravikant are both exploring ideas of this sort.
  4. Prediction: in 2030, 10% of people will access the Internet, host their content, and build their networks via distributed ISPs, server solutions and social networks.

Remember, I’m not necessarily happy about any of this, but I think this is a clear view of the facts. I apologize if I sound cynical, but it’s better to put yourself in a defensible position than to rely on your not being attacked. As Hunter S. Thompson said, “Put your faith in God, but row away from the rocks.”

I am aware that this isn’t a total picture, and there are competing visions of what the CNASs can and should be; I am more than delighted to hear from and discuss with people who disagree with me on the above. I can’t do them justice, but here are some honourable mentions and thorns:

  1. The Internet (or other CNAS) as a public service. Pro: This could feasibly create a true public square. Con: It seems like it would be too tempting for any administration to use their control to victimize people.
  2. Public parts within the overall Internet or CNAS; think of the patchwork of public and private areas that exist in countries, reflected online—this might feasibly include free speech zones in public areas. See the beautifully American “free speech booth” in St. Louis Airport for a physical example.
  3. Truly distributed systems like Bitcoin and other blockchains, which are stored on the machines of all members, raise the question of whether these are truly communal or public spaces; more on this in future writings.

I think that the case I made here for freedom of expression is broadly the same when applied to privacy: one might even say that privacy is the freedom not to be observed. In essence, you are private to the extent that you control the means of communication or trust those that do. Your computer, your ISP, and any service that you use all represent snooping opportunities.

We should be prepared to do difficult and unusual things to preserve our freedom and privacy: start our own ISPs, start our own distributed Internet access systems or, better, our own CNAS. I note a sense of learned helplessness with respect to this aspect of connectivity (speaking especially for myself), but there are communities out there to support you.

Newish technology will be very helpful, too:

  • WISPs: wireless Internet service providers, which operate without the need to establish physical connections to people’s homes.
  • Wireless mesh networks: wireless networks, including among peers, wherein data is transmitted throughout a richly connected “mesh” rather than relying on a central hub.

Finally, and fascinating as it is, I simply don’t have the space to go into the discussion of how to combine our rights with the proper application of justice. For example, if everyone used encryption, it would be harder for police to monitor communications as part of their investigations. All I can say is that I support the enforcement of just laws, including through the use of communications technology, and think that the relevant parties should collaborate to support both criminal justice and our rights: this approach has served the countries that use it rather well, thus far.

2. Interoperability

To illustrate how much the Internet has done for us and how good we have it now in terms of interoperability, let’s look back to pre-Internet days. In the 70s, say, many people accessed a single computer via a terminal, often within the same building or campus, or far away via a phone line. For readers who aren’t familiar, the “Terminal” or “Command Line” program on your Mac, PC, Linux machine, etc. emulates how these terminals behaved.

These terminals varied in design between models, manufacturers and through time: most had keyboards with which to type inputs into the computer, some had printouts, some had screens, and sometimes more stuff. However, not all terminals could communicate with all computers: for example, most companies used the ASCII character encoding standard (for translating between binary and letters, numbers and punctuation), but IBM used its own proprietary EBCDIC system—as a result, it was challenging to use IBM terminals with other computers and vice-versa.
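You can still see the split today: Python ships codecs for both ASCII and EBCDIC (code page 037), and the same word comes out as mutually unintelligible bytes:

```python
# The same word, encoded under ASCII and under IBM's EBCDIC (code page 037):
word = "HELLO"

ascii_bytes = word.encode("ascii")    # ASCII bytes
ebcdic_bytes = word.encode("cp037")   # EBCDIC bytes, entirely different values

print(ascii_bytes.hex())    # '48454c4c4f'
print(ebcdic_bytes.hex())   # 'c8c5d3d3d6'

# A terminal expecting one encoding shows gibberish when fed the other:
print(ebcdic_bytes.decode("latin-1"))  # not 'HELLO' at all
```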

This is more than just inconvenient: it locked users and institutions into particular hardware and data structures, and trapped them in a universe constituted by that technology; as usual, the only people truly free were those wealthy or well-connected enough to access several sets of equipment. Actions like this break us up into groups, and prevent such groups from accessing each other’s ideas, systems, innovations, etc. Incompatibility, though sometimes an expedient in business, is a pure social ill.

To be clear, I am not saying that you have to talk to or be friends with everyone, or be promiscuous with the tech you use. If you want to be apart from someone, fine, but being apart from them because of tech is an absurd thing to permit. We need to be able to understand each other’s data structures, codes and approaches: the world is divided enough along party, religious and cultural lines without our permitting new, artificial divisions.

The most magnanimous thing about the Internet is that it is totally interoperable, based on open standards. I almost feel silly saying it: this beautiful fact is so under-appreciated that I would have to go looking to find another person making the same point. Put it this way: no other technology is as interoperable as the Internet.

It’s tempting to think of the Internet as something normal or even natural; the truth is far from it: it’s sui generis. 53% of the world’s population use it, making it bigger than the world’s greatest nations, religions and corporations; anything else of a similar scale has waged war, sought profits or sent missionaries, but the Internet has no need for any of these things.

The Internet is one of the few things actually deserving of that much overused word: unique. But it does what it does because of something much more boring: standards, as discussed above. These standards aren’t universal truths derived from the fabric of the universe, they’re created by fallible, biased people, with their own motivations and philosophical influences. Getting to the point of making a standard is not the whole story: making it good and useful depends on the character of these people.

We should care more about these people and this process. Remember, all the normal forces that pull us into cliques and break connections haven’t declared neutrality with respect to the Internet: they can’t help themselves, and would be delighted to see it broken into incompatible fiefdoms. We should therefore focus immense intellectual energy and interest:

  1. on maintaining the philosophical muscle necessary to insist that the Internet stay interoperable
  2. on proposing virtuous standards
  3. on selecting and supporting excellent people to represent us in this endeavour

The Internet feels normal and natural, even effortless in an odd way; the truth is the exact opposite: it is one of a kind, it is not just artificial but was made by just a few people, and it requires constant energy and attention. Let us give this big, strange monster the attention it deserves, lest the walls go up.

Beyond this, the fact that the Internet is our only CNAS puts us in a perilous position. We should create new CNASs with a variety of philosophies and approaches; this will afford us:

  1. Choice
  2. Some measure of antifragility, in that a variety of approaches and technologies increases the chances of survival if one or more breaks
  3. Perhaps, even, something better than what we have now

3. Ideisomorphism

Bad technology generally puts constraints on the imagination, and on the way in which we think and communicate, but articulating the effect is much harder, and the conclusions less clear-cut, than with my previous point on free expression. Put it this way: most, practically all, of us take what we are given when it comes to tools and technology; some might have ideas about things that could be better; fewer still actually insist that things really ought to be better; and the small few who have the self-belief, tenacity, good fortune and savvy to bring their ideas to market, we call innovators and entrepreneurs.

More importantly, these things influence the way we think. For example, VisiCalc, the first spreadsheet program for personal computers (and the Apple II’s killer app), made possible a whole range of mathematical and organizational functions that were impossible or painfully slow before: it opened and deepened a range of analytical and experimental thinking. Some readers will recognize what I might call “spreadsheet muscle-memory”: when a certain workflow or calculation comes to mind in a form ready to realize in a spreadsheet.

With repeated use, the brain changes shape to thicken well-worn neural pathways: and if you use computers, the available tools, interfaces and data structures train your brain. Digital tools can, therefore, be mind-expanding, but also stultifying. To borrow from an example often used by Ted Nelson, before Xerox PARC, the phrase “cut and paste” referred to the act of cutting a text on paper (printed or written) into many pieces, then re-organizing those pieces to improve the structure.

The team at PARC cast aside this act of total thinking and multiple concurrent actions, and instead gave the name “cut and paste” to a set of functions allowing the user to select just one thing and place it somewhere else. Still today, our imaginations are stunted relative to those who were familiar with the original cut and paste: if you know anything about movies, music or programming, you’ll recognize that the best work often involves many things happening at once.

This is why I argue so vehemently that we shouldn’t accept what we are given online so passively: everything you do online, especially what you do often, is training your mind to work in a certain way. What way? That depends on what you do online.

For the sake of space, I’ll confine myself to the Web. My thesis is this:

  1. The Web as it stands today is primarily focused on beguiling and distracting us.
  2. It presents us with two-dimensional worlds (yes, there is motion and simulated depth of field, but most of the time these devices gussy up a two-dimensional frame rather than expressing a multi-dimensional idea).
  3. It is weighed down with unnecessary animation and styling, leaving practically no attention (or, for that matter, bandwidth) for information.

I’m here to tell you that you need not suffer through endless tracking, bloated styling, interfaces designed to entrap or provoke tribal feelings while expressing barely any meaning. If you agree, say something. Take to heart what Nelson said: “If the button is not shaped like the thought, the thought will end up shaped like the button.” This is why we have become what we’ve become: divided, enraged, barely able to empathize with someone of a different political origin or opinion.

Then there are the more profound issues: as mentioned above, links only go one way, the Web typically makes little use of the magic of juxtaposition and parallel text, there are few robust ways of witnessing, visually, how things are connected, and for the most part, Web documents are usually one-dimensional (they have an order, start to finish) or two-dimensional (they have an order, and they have headings).

People, this is hypertext we’re dealing with: you can have as many dimensions as you like, document structures that branch, merge, move in parallel, loop, even documents that lack hierarchy altogether. Imagine a document with, instead of numbered and nested headings, the overlapping circles of a Venn diagram.

Our thinking is so confined that being 2-D is a compliment.

Digital media offered us the sophistication and multidimensionality necessary, finally, to reflect human thought, and an end to the hierarchical and either-or structures that are necessary with physical filing (you can put a file in only one folder in your filing cabinet, but with digital media, you can put it in as many as you like, or have multiple headings that contain the same sentence (not copies!)), but we got back into all our worst habits. This, to quote Christopher Hitchens, “is to throw out the ripening vintage and to reach greedily for the Kool-Aid.”
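A toy sketch of the filing point in Python, with names of my own invention: one object filed under two headings at once, with no copies made:

```python
# With digital media, one item can live under many headings without being
# copied, something a physical filing cabinet cannot do.

sentence = {"text": "Simplicity, simplicity, simplicity!"}

chapter_on_economy = [sentence]   # the same object, not a copy,
chapter_on_style = [sentence]     # filed under two headings at once

sentence["text"] += " (revised)"          # edit it once...
print(chapter_on_style[0]["text"])        # ...and both headings see the change
assert chapter_on_economy[0] is chapter_on_style[0]
```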

Some, or even you, dear reader, might object that all this multidimensionality and complexity will be too confusing for users. This is fair. But first, I want to establish a key distinction between confusion arising from unnecessary complexity introduced by the creators of the system and confusion arising from the fact that something is new and different. The former is unnecessary, and we should make all efforts to eliminate it; the latter is necessary only to the extent that new things sometimes confuse us.

It might sometimes seem that two-dimensionality is something of a ceiling for the complexity of systems or media. There is no such ceiling; for example, most musicians will perform in the following dimensions simultaneously: facets of individual notes like pitch, dynamic (akin to volume), timbre, and facets of larger sections of music that develop concurrently with the notes but at independent scales, like harmony and phrasing.

In my view, we should build massively multidimensional systems, which start as simply as possible and, pedagogically, work from simple and familiar concepts up to complex ideas well beyond the beginner. Ideisomedia will, 1. free us from the clowning condescension of the Web and 2. warrant our engaging with it by speaking to us at our level, and reward us for taking the time to learn how to use it.

Before I talk at length and through the medium of graphs about what we stand to gain by doing this thing correctly, I’d like to make two supporting points. One frames why the mood of this introduction is so imperative, the other frames technological growth and development in a way that, I think, offers us cause for optimism.

Why Now, and Why I Think We’re Up to the Task

The Connectional Imperative

Firstly, I think that we are experiencing an emergency of communication in many of our societies, particularly in the USA and its satellites. My hypothesis (which is quite similar to many others in the news at the moment) is that the technology of communication, as it is configured currently, is encouraging the development of a set of viewpoints that are non-interoperable and exclusive: this is to say that people believe things, and broach the things that they believe, in ways that are impossible to combine or that preclude their interacting productively.

Viewpoint diversity is necessary for a well-functioning society, but this diversity matters only to the extent that we can communicate. This means, firstly, actually parsing each other’s communications (which is analogous to syntactic interoperability: regardless of whether we understand the communication, do we regard it as a genuine communication and not mere noise, do we accept it or ignore it?); secondly, it means actually understanding what we’re saying to each other (which is analogous to semantic interoperability: can we reliably convey meaning to each other?).

I think that both of these facets are under threat; often people call this “polarisation,” which I think is close but not the right characterisation; I am less concerned with how far apart the poles are than whether they can interact and coordinate.

Why is this happening? I think it is because we don’t control the means of communication and, therefore, we are subject to choices and ideas about how we talk that are not in our interest. Often these choices are profit-driven (like limbic hijacks on Facebook that keep you on the page); sometimes the design is accidental or expedient (like the one-way links and two-dimensional documents that characterize the Web, as mentioned earlier). Why is it a surprise that so many of us see issues as “us versus them,” or assume that if someone thinks one thing they necessarily accept all other aspects associated with that idea, when we typically compress a fantastically multidimensional discussion (politics) onto a single dimension (left-right)?

We need multidimensional conversations, and we already have the tools to express them: we should start using them.

This really is an emergency. We don’t grow only by agreement, we grow by disagreement, error correction, via the changing of minds, and the collision of our ideas with others’: this simply won’t happen if we stay trapped in non-interoperable spaces of ideas, or worse, technology.

On the topic of technology, I am quite optimistic: everyone who uses the Internet can connect, practically seamlessly, with any other person, regardless of sex, gender, race, creed, nationality, ideology, etc. The exceptions here (please correct me if I’m wrong) always have to do with whether you’re prevented (say, by your government) from accessing the Internet, or with whether your device just isn’t supposed to connect (it was made before the Internet became relevant, or it is not a communications device (e.g. a lamp)).

TCP/IP is the true technological universal language; it can bring anyone to the table. At the table you might find enemies and confusion, but you are there and have at least the opportunity to communicate.

Therefore, I think that we should regard that which is not interoperable, not meaningfully interoperable, or at least not intentionally open and logical, with immense scepticism, and conserve what remains, especially TCP/IP, standards like this and their successors in new CNASs.

Benign Technology

I think that technology is good. Others say that technology is neutral, that people can apply it to purposes that help us or that hurt us; of course, still others say that it is overall a corrupting influence. My argument is simple. Technology forces you to do two things: 1. to the extent that whatever you create works, it will have forced you to be rational; 2. to the extent that you want your creation to function properly in concert with other devices, it will have forced you to think in terms of communication, compatibility and, at best, interoperability. I’m not saying that all tech is good, but rather that to the extent that tech works, it forces you to exhibit two virtues: rationality and openness.

In the first case, building things that work forces you to adopt a posture that accepts evidence and some decent model of reality. Obviously this is not a dead cert: people noticeably “partition” their minds into areas, one for science, another for superstitions, and so on. My claim is that going through the motions of facing reality head-on is sufficient to improve things just a little; tech that works means understanding actions and consequences. This is akin to how our knowledge of DNA trashed pseudo-scientific theories of “race” by showing us our kinship with all other humans, or how the germ theory of disease has helped to free us of our terror of plagues sent by witches or deities.

I’m not being naive here: I know that virtue isn’t written in the stars; rather, I claim that rationality is available to us as one might use a sifter: anyone can pick it up and use their philosophical values to distinguish gold (in various gradations) from rock. Technology requires us to pick up the sifter, or create it, or refine it, even if you would otherwise be disinclined.

In the case of the second virtue, openness, once you have created a technology, you can make it arbitrarily more functional by giving it the ability to talk to others like it or, better, others unlike it. Think of a computer that stands alone versus one that can communicate with any number of other computers over a telecommunication line. But in order to create machines that can connect, you have to think in terms of communication; you have to at least open yourself to, and model, the needs and function of other devices and other people. Allowing for some generalizing: the more capable the system, the more considerate the design.

Ironically, the totality of the production of technology is engaged in a tug-of-war: on one side, the need and desire to make good systems pulls us towards interoperability; on the other, short-sighted profit-seeking and the sheer difficulty of making systems that can talk pull us towards non-interoperability. Incidentally, the Internet is a wonderful forcing function here: even the usual suspects, like Apple, IBM and Microsoft, are amazingly Internet-interoperable.

Put simply, if you want to make your tech work, you have to face reality; if you want your tech to access the arbitrarily large benefits of communicating with other tech, you have to imagine the needs of other people and systems. Wherever you’re going, the road will likely take you past something virtuous.

A Rhapsody In Graphs: Up and to the Right, or What Do We Have to Gain?

To begin, remember this diagram:

Technological innovation exists in a feedback loop: so if you want virtuous systems in the future, create virtuous systems today.

You’re familiar, I hope, with Moore’s Law, which states that around every two years, the number of transistors in an integrated circuit doubles, meaning roughly that it doubles in computing power. This means that if you plot computing power against time, it looks something like this:

Moore’s law describes the immense increase in processing capacity that has facilitated a lot of the good stuff we have. Today, the general public can get computers more powerful than the one that guided the Saturn V rocket. The ARPANET (the predecessor to the Internet) used minicomputers to fulfil a role somewhat similar to that of a router today; the PDP-11 minicomputers popular for the ARPANET started at $7,700 (more than $54,000 in 2020 dollars), while most routers today are relatively cheap pieces of consumer equipment, coming in at less than $100 a piece. This graph represents more humans getting access to the mind-expanding and mind-connecting capabilities of computers, the computers themselves getting better, and the trend quickening.
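A back-of-the-envelope sketch of that curve, taking the Intel 4004 of 1971 (roughly 2,300 transistors) as a baseline and assuming a clean two-year doubling:

```python
# Moore's law as a toy model: transistor counts doubling every two years.
def transistors(year, base_year=1971, base_count=2300):
    """Estimate transistor count for a given year under a two-year doubling."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
# Each decade multiplies the count by ~32; fifty years gives a factor of ~33 million.
```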

But, one might ask, what about some of the other indexes discussed in this introduction: freedom, interoperability, ideisomorphism?

Interoperability

For the purposes of this question, we first need to find a meaningful way to think about overall interoperability. For instance, it doesn’t really matter to us that coders create totally incompatible software for their own purposes, all the time; meanwhile, as time passes, the volume of old technology that can no longer work with new technology increases due to changing standards and innovation: this matters only to the extent that we have reason to talk to that old gear (there are lots of possible reasons, if you were wondering). So, let’s put it like this:

overall meaningful interoperability (OMI) = the proportion of all devices and programs that are interoperable, excluding private and obsolete technology

This gives us an answer as a fraction:

  • 100% means that everything that we could meaningfully expect to talk to other stuff can do so.
  • 0% would mean that nothing that we could reasonably expect to talk to other stuff can do so.
  • As time passes we would expect this number to fluctuate, as corporate policy, public interest, innovation, etc. affect what sort of technology we create.
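As a minimal sketch, with all counts hypothetical, the definition above reduces to a simple ratio:

```python
# Overall meaningful interoperability (OMI), per the definition above.
def omi(interoperable, total, private, obsolete):
    """Share of meaningfully relevant technology that can talk to other technology."""
    relevant = total - private - obsolete
    return interoperable / relevant

# Hypothetical counts of devices/programs, purely for illustration:
print(f"{omi(interoperable=700, total=1000, private=150, obsolete=50):.0%}")  # 88%
```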

As mentioned above, a variety of different pressures influence overall meaningful interoperability; some companies, for example, might release non-interoperable products to lock in their customers, other firms might form consortia to create shared, open standards.

I think that, long-term, the best we can expect for interoperability would look like the below:

What are you looking at?

  • Back Then (I’m being deliberately vague here) represents a time in the past when computers were so rare, and often very bespoke, that interoperability was extremely difficult to achieve.
  • Now represents the relative present: we have mass computer adoption, and consortia and other groups give us standards like Unicode, USB, TCP/IP and more. At the same time, some groups are still doing their best to thwart interoperability to lock in their customers.
  • The Future is ours to define; I hope that through collaboration and by putting pressure on the creators of technology, we can continuously increase OMI. You’ll notice that the shape is the opposite of Moore’s Law’s exponential growth: this is, firstly, because there’s an upper limit of 100% and, secondly, because it seems fair to assume that we will reach a point where we hit diminishing returns.
  • It is theoretically possible that we might reach a future of total OMI, but perhaps it’s more realistic to assume that through accidents, difficulty and innovation, some islands of non-interoperability will remain.

Freedom

How are things looking for free software? It’s very hard to tell, because the computer world is so diverse and because the subject matter itself is so complex. For example, the growth of Android is excellent news on one level, because it is based on the open source Linux kernel; it is less good news in that much of the rest of the Android stack, as shipped, is proprietary, which muddies the picture. See the graphs below for a recent assessment of things (data from StatCounter):

Desktop:

Mobile:

I think it is imperative that we work to create and use more free tools, if for no other reason than that we as people deserve to know what the stuff in our homes, on our devices, or that is manipulating our information is doing. With the right effort, we might be able to recreate the growth of Linux among supercomputer operating systems. I am enthusiastic about this, and see the growth of free software as something as unstoppable, say, as the growth of democracy.

 

(Chart data: Wikipedia)

Ideisomorphism

First, dimensions. As mentioned above, we frequently try to express complex ideas in too few dimensions, and this hampers our ability to think and communicate. Computers are, potentially, a way for us to increase the dimensionality of our communication, but only if we use them to their full potential.

The diagram below sets out some ideas, along with their dimensions:

To be clear, I’m not making a value-judgement against lower-dimensional things. Rather, I am saying:

  • Firstly, that one should study any given thing in a manner that allows one to engage with it in the proper number of dimensions.
  • Secondly, that poor tools for studying, engaging with and communicating that which is in more than two dimensions act as a barrier, keeping more of us from learning about some very fun topics.
  • Thirdly, that high dimensionality can scare us off when it shouldn’t: e.g. if you can talk, you already know how to modulate your voice in more than five dimensions simultaneously: pitch, volume, timbre, lip position, tongue position, etc.

I think that, pedagogically and technologically, we should strive to master the higher dimensions and structures of thinking that allow us to communicate thus. However, we seem to be hitting two walls:

  1. The paper/screen wall: it’s hard to present things in more than two dimensions on paper or screens, and we get stuck with things like document structure, spreadsheets, etc., when more nuanced tools are available.
  2. The reality wall: it’s weird and sometimes scary to think in more than three dimensions, because it’s tempting to try to visualize this sort of thing as a space and, as our reality has just three spatial dimensions, this gets very confusing. This is tragic because a. we already process multiple dimensions quite easily and b. multidimensionality doesn’t have to manifest spatially, nor, if it does manifest spatially, must all dimensions manifest at once; what matters is the ability to inspect information along an arbitrary number of dimensions seamlessly.

Let us break the multidimensionality barrier! The nuance of our conversations and our thinking requires it. We should:

  1. Where possible, use tools like mind-maps and Venn diagrams (which allow for arbitrary relationships and dimensions) over strictly hierarchical or relational structures (like regular documents, spreadsheets or relational databases, which are almost always two-dimensional).
  2. Use and build systems that allow for the easy communication and sharing of these structures: it’s easy to show someone a mind-map, but quite hard to share one between systems, because there’s no standard data structure (see the sketch after this list).
  3. Remember the technological feedback loop: 2-D systems engender 2-D thinking, meaning more 2-D systems in the future; we need concerted efforts to make things better now, such that things can be better in the future.

For our last graph, I’d like to introduce a new value that combines the three concerns of this introduction (freedom, interoperability, ideisomorphism) into one; we can call it FII. Where before I expressed interoperability and freedom as proportions (e.g. the percentage of software that is interoperable), this time let’s think of these values as relative quantities with no limit on their size (e.g. let’s say that 2020 and 2030 had the same percentage of free software, but 2030 had more software doing more things: this means that 2030 is higher on the freedom scale).

So:

FII = freedom × interoperability × ideisomorphism

We should, therefore, strive to achieve something like the graph above with respect to the technology of communication; think of it as Moore’s law, but for particular aspects of technology that represent our species’ ability to endure and flourish. It’s worth noting, of course, that Moore’s law isn’t a law in the physical sense, the companies whose products follow it achieve these results through continuous, intense effort. It seems only fair that we might expend such efforts to make the technology of communication not just more powerful, but better able to serve our pursuit of virtue; to the extent that I’m right about the moral arc of technology naturally curving upward, we may be rewarded quicker than we think.

What If We Are Successful?

What might happen if we’re successful? Here’s just one of many possibilities, and to explain I will need the help of a metaphor: the Split-brain condition. This condition afflicts people who have had their corpus callosum severed; this part of the brain connects the right and left hemispheres and, without it, the hemispheres have been known to act and perceive the world independently. For example, for someone with this condition, if only one eye sees something, the hemisphere associated with the other eye will not be aware of it.

I liken this to the current condition of humanity, except that instead of two hemispheres, we have numerous overlapping groupings of different sizes: nations, religions, ideologies, technologies and more. Like split-brain patients, these parts often don’t understand what the others are doing, have trouble coordinating, or even come into conflict.

We have the opportunity to build our species’ corpus callosum, not that we might unify the parts, but that the parts might coordinate; and, in that the density of connections grows as a power of the number of nodes in the system (n nodes afford n(n-1)/2 possible pairwise links), this global brain might dwarf the achievements of history’s greatest nations with feats on a planetary scale and in its pursuit of goodness, truth and beauty.


It is time to talk about Technology differently

While tool use is manifestly found in many animal species, humanity’s ability to devise and wield intricate tools is unique in its breadth and impact. Be it part of our genetic code, a proportionally massive cranium or an elegant pair of opposable thumbs, some set of perfect conditions has allowed for the presence of a magnificent talent: our obsession with finding easier ways to achieve our diverse ends. We would do well to remember this. Technology is not an end in itself, nor is it a single, ubiquitously recognised set of means. It is a talent found in all of us, an urge to create and innovate and move past obstacles set before us.

Statistically, most of you will be readers from Europe or North America. Recently, we have been exposed to a certain idea of what “Technology” is supposed to mean. If we go by the published output of mainstream technology and business press outlets, we could easily be led into thinking that Technology is a euphemism for the “Information Technology industry”. Some of us might associate the word with a mosaic of gadgets that together form part of this vaguely coined “Fourth Industrial Revolution”: a global economy driven by automation. Why is our definition of Technology so limited?

As initially said by Robert Smith, co-founder of Seedlink and anthropology researcher, this is a “Euro-American Centric consensus”. A handful of financiers and technologists from London and San Francisco are setting the tone for how start-ups should be born and companies should be run. It is built around an obsession with the economic domination of four or five Big Tech corporations and the opinions of investors in Silicon Valley or Silicon Circle. This obsession is blinding us to the exciting developments in technology, like the midday sun outshining the moon and stars.

It is in fact a double blind. First, you are misled into thinking that IT is the most important technology, simply by merit of investment volumes and value (see CB Insights’ 2019 List of Unicorns by Industry); next, that Big Tech is an appropriate poster-child for contemporary technological development.

Let us decide to take a step back or, rather, to remove our headsets and examine the question of technology as the fruit of an anthropologically-encoded set of creative and innovative behaviours aimed at improving the human condition.

Now a gospel repeated at San Franciscan dinner tables, Moore and Grove’s balanced corporate-innovative environment at Intel in the 1970s created the foundation on which several breakthrough products, such as the DRAM memory chip and the microprocessor, were developed. This foundation, and the success that came with it, enabled Intel and several other early digital companies to create a financially supportive environment for start-ups pursuing ambitious, high-risk projects.

It is in fact quite revealing how much directional influence Moore and Grove have had on the ideological tapestry of Silicon Valley. Moore’s law dictates the technical keystone: “The number of transistors on a microchip doubles every two years.” Elsewhere, one of Grove’s laws (the exact law is subject to a great many disputes) dictates the cultural keystone: “Success breeds complacency. Complacency breeds failure. Only the paranoid survive.” (Attributed to Andy Grove in: Ciarán Parker (2006), The Thinkers 50: The World’s Most Influential Business, p. 70.) Another Grovian law is that “A fundamental rule in technology says that whatever can be done will be done.” (Attributed to Andrew S. Grove in: William J. Baumol et al. (2007), Good Capitalism, Bad Capitalism, and the Economics of Growth, p. 228.) On these two keystones was built the ideological evolution of Silicon Valley: a highly self-confident arena for microchip-based solutions to an apparently infinite plethora of identifiable problems. It explains the emergence and dominance of disruptive innovation and the unique value proposition as pillar concepts. It gives prelude to the impact left by Peter Thiel’s Zero to One, which we have already covered here.

Yet this recounting of the early days is missing key ingredients. Alongside the leaders of the Intel Corporation were Gordon French and another Moore, Fred Moore, co-founders of the Homebrew Computer Club, a club for DIY personal-computer-building enthusiasts founded in Menlo Park. This informal group of computer geeks was to all intents and purposes a digital humanist enterprise, openly inviting anyone who wanted to know more about electronics and computers to join the conversation and build with like-minded peers. Its influence on Steve Wozniak and the many Stanford University engineers who built the Valley cannot be overstated.

Technologists from across the globe have drawn inspiration from this origin story, and innovative ecosystems have cropped up in mimicry. New uses of IT, democratised and cheaper to access, have led to fascinating developments in parts of the developing world that do not enjoy California's access to investment funds. And Silicon Valley was not the only Tech story of the last 50 years (think vaccines, cancer research and environmental technologies). As more colours come to light, the grey-bland world of Euro-American financialised IT will fade back into a world of people finding new ways of solving problems, finding new problems to solve, finding new problems from ways of solving, finding new solutions to problems yet unseen.

We dove into the mission of Supriya Rai, who seeks to bring beauty and colour into hundreds of identical-looking London office buildings with Switcheroo. She is now also Wonk Bridge's CTO!

Portraits of Young Founders: Supriya Rai

We followed Muhammad and Robbie, who broke away from the London incubator scene after an initially successful agri-tech IoT prototype, radically changing their business plan to launch a logistics service company in East Africa against the wishes of their Euro-American investment mentors. Rather than launch Seedlink to improve the lives of Malawians and East Africans at large, which would have satisfied the white-saviour narrative and followed a set of Euro-American-prescribed ROIs, they sought to build a proposition that would fit this unique business climate. How can a company that connects rural farmers to urban centres ignore common practices like tipping, which are branded as bribery in the Euro-American world? What explained the gap between the London investors' expectations and the emerging strategy needed to succeed in East Africa?

Thanks to a double feature from our China correspondent Edward Zhang, we analysed how different countries used the power of their societal and political technology, and how they leveraged their national cultures, to combat Covid-19. Sometimes, technologies are a set of cultural values and political innovations developed over the course of generations.

The Chinese Tech behind the War on Coronavirus

The Technologies that will help China recover from COVID-19

We also saw how a different application of a mature information technology, the MMO video game, has helped fight autism where many other methods have failed.

Building a Haven for the Autistic on Minecraft

The real world


Photo by Namnso Ukpanah on unsplash / Edited by Yuji Develle

I am writing this article in the foothills of Mount Kilimanjaro, in the shade of a hotel not far from a bustling Tanzanian town. Here, I can observe a much healthier use of technology, less dictated by the tyranny of notifications and more driven by connection between individuals in the analog world. People here use social media and telephones regularly, but they spend the majority of their time outside and depend on cooperation between townsfolk to survive (in the absence of public utilities or a private sector).


My own photo of a Tanzanian suburb town near Arusha (Yuji Develle / December, 2020)

The Internet is available but limited to particular areas of towns and villages: Wi-Fi hotspots at restaurants, bars or the ubiquitous mobile-phone stands (Tigo, M-PESA, Vodacom).


Left: A closed Tigo kiosk, Right: A Tigo pesa customer service shop (Yuji Develle / December, 2020)

The portals to the Digital Civilization have been kept open but also restricted by the lack of internet access in people's homes (missing infrastructure and the relatively high cost of IT being the primary reasons why). This has kept IT from frenetically expanding into what it has become in the North Atlantic and East Asia.

Like an ever-expanding cloud, the Technology-Finance Nexus has taken over our global economy and replaced many institutions that served as pillars of the shape and life of the analog world.

  • Social Networks have come to replace the pub, the newspaper kiosk, the café
  • Remote-working applications, the office
  • Amazon, the brick-and-mortar store, the pharmacy, the supermarket
  • Netflix, the cinema
  • Steam or Epic Games, the playground

These analog mainstays have been taken apart, ported and reassembled into the digital world. While our Digital civilization continues to grow in volume and richness, the analog is shrinking and emptying with visible haste. The degradations provoked by these disappearances, and generated by the exclusive use of their digital replacements, are unfortunately well documented at Wonk Bridge.

Astroturfing — the sharp-end of Fake News and how it cuts through a House-Divided

Social Media and the Syndication of the ‘Friend’

A new way of covering Tech

With our most recent initiative, Wonk World, we seek to avoid the trap of overusing the same Tech stories, told through and about the same territories, as representations of Tech as a whole. We aim to shed light on the creative and exciting rest of the world.

We will be reaching out to technologists and digital humanists located far beyond Tech journalism's traditional hunting grounds: Israel, China, Costa Rica. We will be following young founders' progress through the gruelling process of entrepreneurship in our Portraits of Young Founders news series. Finally, we are looking for ways to break out of our collective echo chambers and bring new perspectives into the Wonk Bridge community, so diversity of region as well as of vision will constitute one of Wonk Bridge's credos.

So join us, wherever you are and whoever you are, beyond the four walls of your devices and into the unexplored regions of the world and dimensions of the mind, to see technology as Wonk Bridge sees it: the greatest story of humankind.

Categories
Editorial

Just do nothing: An inconvenient digital truth

As addictive and stimulating technology proliferates across society, we are losing our most ancient and coveted ability. Join us as we explore the loss of our ability to do nothing, and how stand-up comedians have become the unlikely torch-bearers of an inconvenient digital truth.


Have you ever tried sitting in a room and doing nothing? And when I say nothing, I mean absolutely nothing. Chances are you won't last very long, and that's mainly because the human brain has a ferocious appetite for information stimuli. It's why meditation is so hard and yet advocated by so many. Fundamentally, we aren't very good at quieting our brains, and the past decade of technological advancement has been anything but helpful.

According to the basic fundamentals of human computer interaction (HCI), there are three main ways or modalities by which we interact with computers:

Visual (Poses, graphics, text, UI, screens, and animations)

Auditory (Music, tones, sound effects, voice)

Physical (Hardware, buttons, haptics, real objects)

Regardless of what type of computer you are using — whether it's a smartphone or a laptop — physical inputs and audio/visual outputs dominate HCI. Indeed, these forms of interaction and feedback are the very foundation of how humans have developed computers to function alongside them.
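As a concrete (and entirely illustrative) rendering of this taxonomy, here is a minimal Python sketch that routes made-up interaction events to the modality they engage; the event names are my own, not drawn from any real UI toolkit:

```python
# A minimal sketch of the three HCI modalities described above.
# The event names are illustrative, not from any real toolkit.

MODALITIES = {
    "visual":   {"render_text", "draw_ui", "play_animation"},
    "auditory": {"play_tone", "speak_prompt"},
    "physical": {"button_press", "haptic_buzz", "touch_drag"},
}

def modality_of(event: str) -> str:
    """Return the human sense/channel an interaction event engages."""
    for modality, events in MODALITIES.items():
        if event in events:
            return modality
    raise ValueError(f"unknown event: {event}")

# Physical inputs, audio/visual outputs: the loop that dominates HCI.
for event in ("button_press", "render_text", "play_tone"):
    print(event, "->", modality_of(event))
```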

Now take into account another fundamental theme of HCI development, because every successful iteration of technology shares one defining principle: people who use technology want to capture their ideas more quickly and more accurately. Keep this in mind for later.

Whether it’s 1839s’ Joseph Jacquard who used programmable weaving looms to create a portrait of himself using 24,000 punched cards or WWII military agencies that invested in the development of the first ‘monitor’ to allow radar operators to plot aircraft movement, the development and evolution of technology is largely predicted by this theme of speed and accuracy.

Portrait of Joseph Jacquard next to his iconic weaving loom computer

So let’s go back to my introduction: doing nothing. Like I said, it’s real hard, but my hypothesis is that it’s much harder than it used to be. If you read one of my past articles on human brain development, I explored the idea of the modern brain not being so different to how it was 2,000 years ago. In other words, there simply hasn’t been enough time for evolution to weed out certain mutations of our brain genealogy. Therefore, how we develop as an individual and functioning person, is just as much nurture as it is nature.

Now, I’m aware that my argument will be formed by a series of ‘sweeping & shallow statements’, but I’d like you to picture what most modern societies of both the past and present would have done when confronted with the reality of doing nothing. Whether it’s a pilgrim town in colonial Virginia during the 1600s preparing for the harsh winter or a small present day Tibetan village nestled in the Himalayan mountains going about their usual day, both isolated societies, if not for the menial tasks of survival and hardship, are generally confronted with the reality of doing nothing on the daily.

Children playing with toys during 17th Century Colonial America

For the children of both these societies, once most chores were done, they would generally be allowed to go out and play. In doing so, they had to quickly confront the idea of doing nothing. Sure, they had games, they had toys, but the realm in which these playthings reside is largely the imagination. In fact, playing with others is, for a vast majority of mammalian species, an essential form of growth and development.

Today, we are plagued by bright screens, sharp sounds, and intruding notifications. From the very first pager beep, to the early-2000s MSN Messenger nudge (I can still hear that sound in my head), to the evolution and proliferation of the Facebook notification beep, we have slowly grown accustomed to being alerted by our technology. Most notable is the proliferation of the newsfeed, which has largely evolved to lure us into a web-like slot machine of personalized and attention-grabbing media.

MSN Messenger user interface with symbolic ‘knock’ nudge

If you reflect back on your historic usage of Facebook, it generally follows this path: status text (2007) → photo post (2010) → video/story stream (2012). Remember that earlier theme and its three modalities? I believe it has dominated the evolution and usage of our most prolific technologies, especially when it comes to sharing aspects of ourselves and others across our various digital networks.

Moreover, this digital game of carrot and stick has been greatly exacerbated by how quickly modern society has shifted its fundamental functions onto the current dominant technology. From how we consume our news, to how we monitor our work, and even how we order food, every function is now app-based and, by extension, notification-based. The consequence is that we are quickly being trained to look to our phones to understand our lives.


This is not to say that all of this is bad. As I'm sure many of you reading this are thinking, social media networks can be a great source of social good. Even a company with as bad a reputation as Facebook does not deserve all the stick (for lack of a better word) it gets. Thanks to WhatsApp and Messenger, you can communicate with your friends and loved ones no matter where you are. Google helps you gain knowledge and explore your interests by letting you quickly scan the web and find the information you are looking for. Did I mention this is all free?

But there is an inherent danger when we grow too dependent on a certain technology. Texting is great, but have you tried actively listening to a conversation? Google searching is fantastic, but have you tried reading a book from start to finish? Indeed, most of us joke about our dwindling attention spans, but I fear none of us takes it very seriously.

If our attention is to be monetized for ads by Silicon Valley, we need to start seeing it as the currency with which we learn and grow as individuals. The less attention we are willing to give, the less personal development we will get in return. From clickbait journalism to the inherent shallowness and distraction of social media, the examples for this argument are numerous and worrying.


I believe I can speak for most Generation Zers when I say we were lucky to have barely avoided the advent of social media while growing up. By and large, as children, we were forced to confront the same idea of doing nothing as most other past societies. We had to use our imaginations and our social skills to play by ourselves and with others. Of course, critics will say we had game consoles like the PlayStation and cable television from the likes of Comcast, but these weren't as enslaving. Today, video platforms like Netflix let you binge, game developers like Electronic Arts let you win, and social media companies like Facebook steal your time.

Even gaming, which I believe represents superior elements of story-telling and cooperative strategy, has been tweaked for profit by executives and developers to be addictive. Once upon a time, triple-A video games were simply great for their one- or two-player story modes. Like opening a book, you could dive into a world, play, learn, and explore, but there was no mechanism to constantly lure you back besides the gameplay itself. It was just as easy to stop as it was to begin. Today, you have loot boxes and pay-to-win features which aren't truly about the game. They're about hooking you emotionally and getting you to pay more money.

Screenshot of mobile game Jam City.

Yet I digress, because this article isn't about the exploitation of gaming as a medium, or even about how most platforms today function as social slot machines. No, this article is about how many of us are slowly becoming incapable of doing nothing. It is about how we are slowly but surely being re-wired by tech companies whose bottom line is not to make you a better or more informed person, but to keep you glued to a screen pushing advertisements and paid services.

There’s a quote in a 2001 stand-up act by the late-great comedian George Carlin who I believe really drives this point home. Although he is speaking about the proliferation of overbearing parents, I believe the same logic can be applied to my discussion. Just a quick disclaimer, there is profanity in this video but as you already know, he’s a comedian.

“You know, [talking about overly concerned parents organizing playtime] something that should be spontaneous and free is now being rigidly planned, when does a kid ever get to sit in the yard with a stick anymore, you know, just sit there with a ******* stick, do today’s kids even know what a stick is?”

— George Carlin

The idea of children no longer being taught or given the opportunity to simply sit in the yard with a stick is humorously worrying. Whether it's hyper-vigilant parents who coddle them for their safety or frustrated parents who shove a screen in their face to keep them from being annoying, children today are the victims of society's rush to quicker and more accurate technology. That theme of speed and accuracy has served us well, skyrocketing productivity from the punchcard, to the mainframe, to the PC, and now to the smartphone, but I believe there is an inherent danger in our chase for it.

I am not writing this article to give solutions. That was not my primary intent when I set out to write it. I'm not here to tell you to meditate, to stop using social media, or even to limit the use of your phone. Moreover, I am conscious that much of what I'm saying is rooted in personal hypocrisy, because I am just as much a slave to my inability to do nothing as most of you are.

But if there is one message I'd like to get across, it's that we should embrace the nothing: the idea that maybe we don't need to be stimulated by our looping relationship with the physical, visual, and audio modalities of modern technology. You can silence your phone and put it in the other room. You can sit on a train and not scroll through a newsfeed. You can stare at a wall and do nothing.

Because if you force your brain to be quiet, you’d be surprised how much it will start saying.

“The thing is, you need to build an ability to be yourself and just not be doing something. The ability to just sit there and be a person. Underneath everything in your life, there is that thing, that forever empty. That knowledge that it’s all for nothing and you’re alone…”

“And sometimes when things clear away and you’re in your car, and you start feeling it — this sadness, life is tremendously sad, just by being in it — That’s why we text and drive, people are willing to risk taking a life and ruining their own because they are unwilling to be alone for a second ”

— Louis C.K.

Categories
Editorial

When the tool uses you: How immersive tech could exploit our illusion of control

From the fax machine making information vulnerable to loss and theft to the internet making malware easy for susceptible users to download, malicious actors have always found a way to exploit our naivety to new technology. What dangers should users, businesses, and governments expect from immersive technology?


You’re sitting in a virtual meeting room. Although the marble walls and mahogany table encompassing the space appear vectored and block-like, you feel oddly at ease. As you look around the room, everything feels intuitively wrapped around your eyes. You’re surprised to find how fluid your hands feel as you gesticulate to a nearby avatar. Hovering between the two of you is a larger than life three dimensional model of your proposed project.

Snap back to the reality of your boring home-office. You’re actually on Zoom. Your computer monitor is bright but the glare from the nearby window hurts your eyes. The video-chat interface is cluttered with tiny webcams talking over one another. You’re connected to the internet but you feel disconnected from your team and although you may not see it now, you are living on the verge of a paradigm shift.

The immersive paradigm shift is a moment in time when the line between what we perceive as 'real' and what is not blurs indefinitely. This is a world where cameras are programmed to defy reality, bodies swing and walk into nothing, and eyes become sentient portals to a collective imagination.

If you haven’t guessed it by now, of course I’m talking about the trifecta of incoming immersive technology, or rather the much anticipated mass market emergence of augmented reality (AR), virtual reality (VR), and mixed reality (MR). While all three somewhat differ from one another, they share one important aspect: that is, the representation of a new dimension to human computer interaction (HCI).


That’s not to say this is our first rodeo. Over the past 25 years we’ve seen technology bring forth dramatic changes to the economic and social fabric of our society. From the internet powering our knowledge economy to mobile computing transforming how we communicate, these significant evolutions are judged not just by their technical sophistication but by their intrinsic ability to transform our lives.

Thanks to advances in computer vision — particularly in object sensing, gesture identification, and tracking — sensor fusion and artificial intelligence have furthered our interaction with computers, as well as the machines' understanding of the real world.

Moreover, advances in 3D rendering, optics — such as projections and holograms — and display technologies have made it possible to deliver highly realistic virtual-reality experiences. As a result, immersive technologies can now allow us to interact with ourselves and with machines in a completely different manner, as we will no longer be confined to a 2D screen.

As scary as that may sound, governments and businesses need to be preparing for the various modalities that will be introduced by immersive tech across their products and processes. This moment in time is no different to the shift from fax to email or the introduction of the smartphone. Moving to VR and AR will simply be the next natural step in staying relevant and competitive.

So if immersive technologies are poised to profoundly change the way we work, live, learn, and play, what ramifications should we come to expect? As speech, spatial, and biometric data are fed into artificial intelligence, new questions will emerge over the extent of our virtual privacy and security. As technology becomes more comfortable and intuitive, we risk falling under the illusion of control.


Throughout the history of computing, every significant shift in modality has brought with it new and potentially destabilising threats. If we fail to ask the right questions, the problems we will experience adjusting to this new technology may be greater than those posed by the internet and mobile computing combined. Let’s explore some examples:

It’s no secret that augmented reality technologies, which overlay virtual content on users’ perceptions of the physical world, are now a commercial reality. Recent years saw the success of AR powered camera filters such as Instagram stories, with more immersive AR technologies such as head-mounted displays and automotive AR windshields now being shipped or in development.


With over 3.4 billion AR-capable devices expected to hit the market by 2022, augmented reality is predicted to make the earliest splash amongst consumers. We should expect wearables that will let us navigate the real world through Google Maps, and camera applications that will scan the relevant objects surrounding you in a grocery store. Anticipating and addressing the security, privacy, and safety issues behind AR is therefore becoming increasingly paramount.

Buggy or malicious AR apps on an individual user's device risk:

  • Recording privacy-sensitive information from the user's surroundings (think productivity tools)
  • Leaking sensitive sensor data (e.g., images of faces or sensitive documents) to untrusted apps (think Instagram & Snapchat)
  • Disrupting the user's view of the world (e.g., by occluding oncoming vehicles or pedestrians in the road) (think Google Maps)

Multi-user AR systems can experience:

  • Vandalism, as in this incident involving augmented-reality art in Snapchat
  • Privacy risks that bystanders may face due to non-consensual recording by the devices of nearby AR users

For the most part, AR security research focuses on individual apps and use cases, mainly because many of the problems we have already experienced with internet and mobile computing are expected to cross over to the new AR medium.

For instance, when the App Store first launched, many iPhone apps were nefariously designed to siphon and package individual mobile data in the background. Security analysts expect similar issues to arise with AR; only this time it won't just be our location data they're after, but our more sensitive biometric data. More on that later.

Lastly, AR technologies may also be used by multiple users interacting with shared virtual content and/or in the same physical space. The risks here include virtual vandalism of shared virtual spaces and unauthorised access to group interactions. However, these risks have not yet been studied or addressed extensively. This will surely change as the technology hits the mainstream.

Jeff Koons’ augmented reality Snapchat artwork gets ‘vandalized’

Virtual reality is the use of computer technology to create a simulated virtual environment. As visual creatures, we have been dreaming of creating virtual environments since the inception of VR research in the early 60s. At first, commercial uses were mainly in video games and advanced training simulations (NASA), but as the technology advanced, so did our potential uses for it.

Since the Oculus Rift's 2012 Kickstarter debut, digital tools for VR have slowly begun to emerge. From Facebook's all-in approach with the collaborative, social-media-esque Horizon to Valve's newly released and highly praised virtual zombie game Half-Life: Alyx, there are plenty of examples today to show off the prowess of current-state VR. Indeed, with so many development and hardware companies competing for market share, it may feel like virtual reality has finally arrived.

However, as new tools and applications for virtual reality continue to develop, new questions are emerging over intellectual property rights. Since everything in virtual reality will be a rendered model of something, control over the aesthetics, feel, and look of a certain model may imply some form of ownership. This will become more pressing as the medium extends into other fields such as autonomous vehicles, e-commerce and even medical procedures.

For instance, items bearing a brand, recognisably shaped cars, dangerous weapons, and iconic places have appeared in video games for years. A great example is Rockstar's Grand Theft Auto series, which has fought numerous IP battles after satirically recreating the cities and environments of Miami and Los Angeles.


Once we reach a form of 'near reality' within a game environment (one of higher fidelity than the current 2D experience), we should expect intellectual property issues in virtual reality to skyrocket. For instance, a printed image of a painting from Google Images is much less of an IP issue than a virtual high-quality model of the same painting within a future virtual space.

Couple this with the fact that the depth sensors in our phones are increasingly capable of scanning real-life objects and modelling them in real time, and it follows that, in the future, anyone will be able to create a virtual model of anything and place it virtually anywhere.

Intellectual property predictions to expect:

  • IP protection of places and buildings is a growing trend, with EU lawmakers continuing to debate whether built structures that are open to the public should have rights attached to them, the so-called "freedom of panorama".
  • IP protection of experiences, such as the touch or smell of a particular store, airline or hotel chain, is possible with haptic virtual technologies. Although it is difficult to justify protecting an aesthetic today (only Apple has managed it, with its store layout), this may become more relevant in the future with VR.
  • Featuring a branded item or a well-known person is currently seen as a potential intellectual property infringement. How will this change if it is the player who is inserting self-scanned models rather than the game developer? Who is going to be liable?

This last point is the most interesting because it asks whether the platform or developer is liable even though it is the user who is placing IP-protected models into the virtual environment. It is a crossover similar to the early days of peer-to-peer technologies with Napster and LimeWire, when users shared IP-protected MP3 and video files.

In the future, VR should expect IP problems similar to those we see today. Faster computers and smarter artificial intelligence will let users and developers upload virtual objects with unprecedented ease. Add to this the idea that virtual reality will someday be as realistic as real life, and we'll have an interesting problem on our hands.

Unlike virtual reality, which immerses the end user in a completely digital environment, or augmented reality, which layers digital content on top of a physical environment, mixed reality (MR) occupies a sweet spot between the two by blending digital and real-world settings into one.


When it comes to mixed reality, biometric and environmental data is an essential yet consequential by-product of sensory technology. This is mainly because developers need access to data to tweak specific functionalities and perfect the comfort and usability behind an immersive tool.

Thus, as immersive tools enter our homes, we are at risk of digitalising and exposing our most personal information. The potential for these applications to siphon biometric data is fundamentally tied to our security and privacy. Nobody at first knew how much user data the mobile phone was collecting through our apps. Why shouldn't we expect the same with immersive devices?

The data collected will someday include:

  • Fingerprints
  • Voiceprints
  • Hand & face geometry
  • Electrical muscle activity
  • Heart rate
  • Skin response
  • Eye movement detection
  • Pupil dilation
  • Head position
  • Hand pose recognition
  • Emotion registry
  • Unconscious limb movement tracking

At its core, there is nothing more sensitive and unique than an individual's biometric data. For instance, heart rate, skin response, and eye movement within a controlled virtual environment can be collected to analyse an individual's reaction to a virtual advertisement. Thus, a feeling that is meant to be reserved for your own inner self can someday be downloaded and scrutinised by external corporate entities.

Additionally, it’s important to mention that unauthorised collection of biometric data is prohibited under article 4(14) of the GDPR. However, despite this, questions on the potential consequences of this data being mis-collected or misused remains highly relevant. Advertising will be the first to enter this space but expect greater consequences with the continued advent of the surveillance nation state.

Every major modality shift in technology has brought with it new threats and dangers. From the fax machine making information vulnerable to loss and theft to the internet making malware easy for susceptible users to download, malicious actors will always find a way to exploit our naivety and ignorance.

As users and consumers of digital technology, we need to be aware of the privacy risks involved in hooking ourselves up to sensor-laden devices. Virtual reality can be really useful and fun, but remember to make sure your biometric data isn't getting funneled to a third party.

In the business world, immersive technology will force many companies to rethink their internal and external processes. Due diligence and hiring the right people will be important, as will taking the necessary steps to protect your IP and to make sure your virtual products can't be hacked or 'vandalised'.

Lastly, governments and public institutions need to prove to the public that they can preempt the various threats immersive tech will bring to business, social well-being, and user privacy. So far, legislators and tech companies have been playing a game of cat and mouse. As we move forward, a firm hand and some much needed transparency will be key.

The future of technology will no doubt be impressive. Someday we will look up to the skies to access our information instead of down to our phones. Yet, the warning here is that comfort is never bliss. Where there is comfort, there is an opportunity for naivety and exploitation. As we gear up for the immersive paradigm shift, please remember to stay informed.

Categories
Editorial

The Neurological Conditioning of Sound

The greatest weapon in a sound designer’s arsenal is the mere fact that we listen first and react second. Join us as we briefly explore the neurological, anthropological, and digital histories behind how we interpret sound and why not everything you hear should sound like the truth.


Remember that time when you were alone in a quiet house for the first time and heard a creepy sound? Maybe it was a windy day and the floor creaked and the window bellowed. That sound you heard was clearly the logical result of wind pushing into a creaky wooden structure, yet the auditory impact is interpreted by the hypothalamus (a small but very important part of your brain that regulates fight or flight) as a threat.

Your thoughts quickly flow into scenarios: is it a ghost? Or perhaps a robber? For the first five seconds these possibilities are all you might consider. They dominate your imagination and thought processing. Until the rational side of your brain — granted some time passed without other similarly scary sounds — convinces you that the sound is nothing to be afraid of. But part of you still believes that, during those first five seconds, you actually saw, or at least heard, a spooky ghost making that sound.

If you are unlucky enough to believe you have witnessed paranormal activity, you can consider yourself conditioned. In humans, conditioning is part of a behavioural repertoire of intelligent survival mechanisms supported by our neurobiological system. These underlying mechanisms promote adaptation to changing ecologies and efficient navigation of natural dangers. In this case, you have been conditioned to associate a sound with a particular danger.

Conditioning is a big reason why our brains don't like to be surprised. Under what researchers call the Survival Optimization System (SOS), our response to most danger usually begins with a sound. This is because, as far as the human experience goes, you hear faster than you see: the brain processes auditory input in a fraction of the time it takes to process visual input, so a sound reaches awareness fast enough to modify all other input and set the stage for it.

Tree graph provided by The ecology of human fear: survival optimisation and the nervous system.

We hear first and listen second because, in this Darwinian struggle we call life, it's considerably faster and more effective for our brains to react to the possibility of a threat than to wait to validate it. Thus, the by-product of a ghostly trauma is a deep, mechanistic rewiring of our neurobiological system to that specific occurrence of sound. So, for at least the near future, any sound you hear alone in a quiet house will trigger your brain's survival mechanisms to react fearfully to the potential presence of a spooky ghost.

Yet, conditioning doesn’t only happen with things that scare us. As we said before, conditioning is a natural process the brain undergoes when faced with repetitive sensory information. It is a software-like response that codes a defence mechanism into our subconscious reactions.

In psychology, sound conditioning is defined as:
A process in which a stimulus that was previously neutral comes to evoke a particular response by being repeatedly paired with another stimulus that normally evokes the response.

A classic example of a sound-conditioning experiment is Pavlov's, which sought to establish whether salivation in dogs could be evoked by pairing feeding with the sound of a bell. Every time Pavlov rang the bell, he would feed the dogs. Repeated often enough, the pairing of food and bell established a conditioned response: when Pavlov finally removed the food and rang the bell alone, the dogs would salivate.
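As a toy illustration of this procedure, not a model of real neural mechanics, the pairing can be written in a few lines of Python; the association strength and threshold are invented numbers for the sketch:

```python
# A toy model of the pairing procedure described above: each trial that
# pairs the bell with food strengthens the association; once strong
# enough, the bell alone evokes the response. Numbers are illustrative.

association = 0.0   # strength of the bell -> food link
THRESHOLD = 0.5     # level above which the bell alone evokes salivation

def pair_bell_with_food():
    """One conditioning trial: bell and food presented together."""
    global association
    association = min(1.0, association + 0.1)

def ring_bell_alone() -> bool:
    """Does the (previously neutral) bell now evoke the response?"""
    return association > THRESHOLD

print(ring_bell_alone())   # False: the bell starts out neutral
for _ in range(8):         # repeated pairing, as in Pavlov's experiment
    pair_bell_with_food()
print(ring_bell_alone())   # True: a conditioned response has formed
```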


What the Pavlov experiment demonstrated is that most intelligent animals, including humans, given sensory repetition, are capable of acquiring a conditioned response to a conditioned stimulus. It's a big reason why people listen for cars before crossing the road, why particular songs make us remember the past, and, in a more humorous sense, why children run after the ice cream truck.

Throughout human history we have devised alarms that alert us to dangers small and large. As humans gathered into larger groups and more permanent settlements, we artificially conditioned ourselves to respond to alarms that would warn us of incoming danger of all kinds. From early fire alarms alerting a hundred people that a building is on fire, to tsunami alarm systems alerting millions to get to high ground, the story of alarms is largely the story of civilisation.

We hear dozens of conditioned alarms without even realising it: car horns, police sirens, school bells and, most pertinently, the digital songs, sounds, and notifications of our everyday consumer technology.

Today, most people wake up and listen to sounds they have been conditioned to hear. For instance, some may incorporate a specific upbeat song into a wake-up or workout routine. Others may play particular songs associated with memories of romantic thoughts and relationships. Even outdoor concerts and music venues can function as places for establishing and communicating tribal signatures such as identity and mating readiness.

A popular, catchy summer song (Daft Punk’s Get Lucky comes to mind for 2013) may define an entire summer, not just in one country, but around the world. Together these songs — whether associated most to a workout, romantic date, or summer party — represent rituals of emotional outputs or certain moods.

It’s no secret that sound designers today take great interest in our emotional conditioning to particular and ubiquitous sound. In fact, how past societies have interpreted sound historically is a large part of a sound designers inspiration to understanding how to select the right ding for your app.

For instance, the talking drum of West Africa is an interesting and unusual example. The drum was specifically designed to make a variety of sounds that emulate human speech, giving it a basic but intuitive beat-like vocabulary. This made the drum an effective signalling device for long-distance communication between remote African villages.

Upon hearing a drummed beat, other, faraway drummers would pass on the beat-like message, much as a torch runner passes a flame to another torch. It was remarkably effective, too: only weeks after Abraham Lincoln's death, news of the tragedy and its complex implications had penetrated the African interior on the drum.

From an anthropological and sound-design perspective, the drum of West Africa was far beyond any other audible communication device of its day. Communicating a wide variety of messages through rhythm, tone, and strength, its sound was designed perfectly for what it was needed for: an ideal, early, and elegant solution to a common problem villages had when communicating. Three strong beats might mean an attack was coming, conditioning other villages to mobilise together and defend an alliance.

Today, sound design has transitioned greatly in its effort to convey messages. We have gone from the drawn-out drum beats of the savannah, to the binary pings of Morse code, to the monotonous buzzes of the pager, and now to the myriad pinging sounds of our smartphones.

The outcome is that we have become conditioned to the smartphone the same way we are conditioned to a fire alarm or West African drum beat. For some, the ding of a social media message can bring forth excitement, butterflies to the stomach, or even a sigh of relief.

The video above contains a recording of an all-too-familiar sound in our current pandemic times: the Zoom incoming-call ringtone. The deliberate interchange of high-pitch and medium-pitch notes resembles a non-threatening plea for attention, which, repeated, can quickly turn into an aggressively annoying noise that must be addressed. Sounds are a major tool in the software and hardware developer's arsenal for ushering in the emotional reactions intended of the user. We respond instinctively to natural sounds, which can trigger any set of emotions. We also respond instinctively to artificial sounds, which are most effective when they mimic sounds that we are already conditioned to.
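For the curious, that alternating two-pitch pattern can be approximated in a few lines of Python; the frequencies and durations below are illustrative guesses, not Zoom's actual ringtone (requires NumPy):

```python
# A toy rendering of an alternating high/medium two-pitch ring pattern.
# Frequencies and timings are illustrative, not any real product's tones.

import numpy as np
import wave

RATE = 44100  # samples per second

def tone(freq_hz: float, seconds: float) -> np.ndarray:
    """Generate a sine tone at the given frequency and duration."""
    t = np.linspace(0, seconds, int(RATE * seconds), endpoint=False)
    return 0.5 * np.sin(2 * np.pi * freq_hz * t)

# Alternate a higher and a medium pitch: the "plea for attention" shape.
pattern = np.concatenate([tone(880, 0.4), tone(587, 0.4)] * 4)

# Write the pattern out as a 16-bit mono WAV file.
with wave.open("ring.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes((pattern * 32767).astype(np.int16).tobytes())
```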

Nowhere is emotional conditioning to sound more prevalent than in our current and historic use of social media. Take, for example, the now-retired Facebook Messenger notification. For some, hearing that sound will transport them back to 2013. Perhaps they will associate it with a lost love, creating a neurological output of emotional pain. However, for most of us today, the ding of a social media app gears the brain to expect some form of social gratification.

Indeed, before even glancing at our screens to see who liked our last photo or sent us a message, we begin to imagine the realm of possibilities for who may be trying to contact us. Is it a crush? Is it a friend inviting you to that party you wanted to go to? With every chat comes an expectation, and the stronger that expectation is emotionally, the more strongly you will be conditioned to that sound.

When distinct and repeated sensory stimuli, like UI sounds, are paired with feelings, moods, and memories, our brains build bridges between the two.

– Rachel Kraus

As devices, software applications, and apps become omnipresent, the User Interface (UI) sounds they emit — the pings, bings, and bongs vying for our attention — have also started to contribute to the sonic fabric of our lives. And just as a song has the power to take you back to a particular moment in time, the sounds emitted by our connected devices can trigger memories, thoughts, and feelings, too.

A word of advice from someone who has felt the anxiety of a message tone and the sadness of an old song: all of us should be more aware of how digital sounds can be tuned to condition our emotional lives. Like Pavlov with his dogs, Silicon Valley is conditioning us to expect social gratification from the various dings and boops of its products. We need to learn to expect these feelings from the outside world, not from the digital world inside our pockets.

Only then can we begin to clean up the noise and listen to the music.


“Who we are is not just the neurons we have,” Santiago Jaramillo, a University of Oregon neuroscientist who studies sound and the brain, said, referring to cells that transmit information. “It’s how they are connected.”

Categories
Editorial

A Short Introduction to the Mechanics of Bad Faith

A Background to the Mechanics
Six Policies to Encourage Good Faith
What Is Bad Faith?
The Trouble with Calling out Bad Faith
Game Theory, Mutually Assured Destruction and Technology
Game Theory


Daniel Mróz
Mutually Assured Destruction
Technological Esperanto


Jon Postel/Wikipedia

Six Proposals for the Promotion of Good Faith

Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Making It Easier and More Productive to Handle Bad Faith
3. Claims of bad faith should be recorded diligently.
4. All claims of bad faith should be falsifiable.


Hamilton and Burr dueling/Wikipedia
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
6. Good Faith Bonds


Daniel Mróz
Finding the Best of Humanity Expressed by Computers

* I have not found reference to the philosophy smell idea anywhere, so I believe it to be original. I will of course hand it to its rightful owner, if corrected.

Categories
Editorial Repost

The Accidental Tyranny of User Interfaces

The potential of technology to empower is being subverted by tyrannical user interface design, enabled by our data and attention.

My thesis here is that an obsession with easy, “intuitive” and perhaps even efficient user interfaces is creating a layer of soft tyranny. This layer is not unlike what I might create were I a dictator, seeking to soften up the public prior to an immense abuse of liberty in the future, by getting them so used to comical restrictions on their use of things that such bullying becomes normalised.

A note of clarification: I am not a trained user interface designer. I am just a user with opinions. I don’t write the following from the perspective of someone who thinks that they could do better; my father, a talented programmer, taught me early on that everyone thinks that they could build a good user interface, but very few actually have the talent and the attitude to do so.

As such, all the examples that I shall discuss are systems where the current user interface is much worse than a previous one. I’m not saying that I could do it better; they did it better, just in the past.

This is not new. Ted Nelson identifies how in Xerox’s much lauded Palo Alto Research Center (the birthplace of personal computing), the user was given a graphical user interface, but in return gave up the ability to form their own connections between programs, which were thereafter trapped inside “windows” — the immense potential of computing for abstraction and connection was dumbed down to “simulated paper.” If you’d like to learn more about his ideas on how computing could and should have developed, see his YouTube series, Computing for Cynics.

Moore’s law describes that computers are becoming exponentially more powerful as time passes, meanwhile our user interfaces — to the extent that they make ourselves act stupidly and humiliate ourselves — are making us more and more powerless.

YouTube’s Android app features perhaps the most egregious set of insulting user interface decisions. The first relates to individual entries for search results, subscriptions or other lists. Such a list contains a series of video previews that contain (today) an image still from the video, the title, the name of the channel, a view count, and the publishing date.

What if I want to go straight to the channel? This was possible, once. What if I want to highlight and select some of the text from the preview? I can't. Instead, the entire preview, rather than acting like an integrated combination of graphics, text and hypertext, is just one big, pretty, stupid button.

This is reminiscent of one of my favorite Dilbert cartoons. A computer salesperson presents Dilbert with their latest model, explaining that their latest user interface is so simple, friendly and intuitive that it only has one button, which they press for you when it’s shipped from the factory. We used to have choices. Now we are railroaded.


Do you remember when you could lock your phone or use another app, and listen to YouTube in the background? Not any more. YouTube took away this — my fingers hovered over the keyboard there for a moment, and I nearly typed “feature” — YouTube continuing to play in the background is not a feature, it should be the normal operation of an app of that type; the fact that it closes when you switch apps is a devious anti-feature.

YouTube, combining Soviet-Style absurdity and high-capitalist banality, offers to give you back a properly functioning app, in return for upgrading to Premium. I’m not arguing against companies making available additional features in return for an upgrade. Moreover, my father explained how certain models of IBM computers came with advanced hardware built-in — upgrading would get you a visit from an engineer to activate hardware you already had.

IBM sells you a car, you pay for the upgrade, but realize that you already had the upgraded hardware, they just suppressed it; YouTube sells you a car, then years later turns it into a clown-car, and offers you the privilege of paying extra to make it into a normal car. Imagine a custard pie hitting a human face, forever.

Obviously this simile breaks down in that the commercial relationship between YouTube and me is very different to the one between a paying customer and IBM. If you use the free version of YouTube, you pay the company in eyeballs and data — this sort of relationship lacks the clarity of a conventional transaction, and the recipient of a product or service that is supposedly free leaves themselves open to all manner of abuses and slights, being without the indignation of a paying customer.

WhatsApp used to have a simple, logical UI; this is fast degrading. As with YouTube, WhatsApp thwarts the user’s ability to engage with the contents of the program other than in railroaded ways.

Specifically, one used to be able to select and copy any amount of text from messages. Now, when one tries to select something from an individual message, the whole thing gets selected, and the standard operations are offered: delete, copy, share, etc.

What if I want to select part of a message because I only want to copy that part, or merely to highlight so as to show someone? Not any more. WhatsApp puts a barrier between you and the actual textual content of the messages you send and receive, letting you engage with them only in the ways for which it provides.

On this point I worry that I sound extreme — today I tried this point on a friend who didn’t see why this matters so much to me. Granted, in isolation, this issue is small, but it is one of a genre of such insults that are collectively degrading our tools.

That is to say that WhatsApp pretends that the messages on the screen belong to some special category, subject only to limited operations. No. It's text. It is one of the fundamental substrates of computing, and any self-respecting software company ought to run on the philosophical axiom that users should be able to manipulate it, as text.

Another quasi-aphorism from my father. We were shopping for a card for a friend or relative, in the standard Library of Congress-sized card section in the store. Looking at the choices, comprehensively labelled 60th Birthday, 18th Birthday, Sister’s Wedding, Graduation, Bereavement, etc., he commented, Why do they have to define every possible occasion? Can’t they just make a selection of cards and I write that it’s for someone’s 60th birthday?

This is about the shape of it. The Magnetic North toward which UIs appear to be heading is one in which all the things people think you might want to do are defined and given a button. To refer to the earlier automotive comparison, this would be like a car without a steering wheel or gas pedal; instead there's a button for each city people think you might want to visit.

There’s a button for Newark but not for New York City? Hit the Button for Newark then walk the rest of the way. What kind of deviant would want to go to New York City anyway, or for that matter what muddle-headed lunatic would desire to go for a drive without having first decided upon the destination?

I work in the Financial District in Manhattan. Our previous building had normal lifts: you call a lift and, inside, select your floor. This building has a newer system: you go to a panel in the lobby and select your floor, and the system tells you the number of the lift it has called for you. Inside, you find that there are no buttons for floors.

This is impractical. Firstly, there is no way to recover if you accidentally get in the wrong lift (more than once, the security guards on the ground floor have seen a colleague and me exit a lift with cups of coffee and laptops, call another, and head straight back upstairs). Meanwhile, one has to remember one's lift number in order for the system to function. I don't go to the office to memorize things; I go to the office to work. Who wants to hold their lift number in mind while trying to talk to a friend?

More importantly, and just like WhatsApp, it's like getting into a car but finding the steering wheel immovable in the grip of another person, who asks, "Where would you like to go?" What if I get in the lift and change my mind? And this says nothing of the atomizing effect the system has on people. Before, we might get into a lift and I, being closest to the control panel, would ask "which floor?" Now we're silent, and there's one fewer interruption between the glint of your phone, the building, and the glass partitions of your coworking space.

My father set up my first computer when I was 8 or 9 years old. Having successfully installed Red Hat GNU/Linux, we booted for the first time. What we saw was something not unlike this:

The boot log of a GNU/Linux system, listing each process as it starts

This is a list of the processes that the operating system has launched successfully. It runs through it every time you start up. I see more or less the same thing now, running the latest version of the same Linux. It’s a beautiful, ballsy thing, and if it ever changes I will be very sad.

Today, our software treats us to what you might call the Ambiguous Loading Icon. Instead of a loading bar, percentage progress, or list, we’re treated to a thing that moves, as if to say we’re working on it, without any indication that anything is happening in the background. This is why I like it when my computer boots and I see the processes launching: there’s no wondering what’s going on in the background, this is the background.

One of the most egregious examples of this is in the (otherwise functional and inexpensive) Google Docs suite, when you ask it to convert a spreadsheet into the Google Sheets format:

Google Docs' animated conversion screen

We’re treated to a screen with the Google Docs logo and a repeating pattern that goes through the four colors of Google’s brand. Is it doing something? Probably. Is it working properly? Maybe. Will it ever be done? Don’t know. Each time that ridiculous gimmick flips colors is a slap in the face for a self-respecting user. Every time I tolerate this, I acclimatize myself to the practice of hiding the actual function and operation of a system from the individual, or perhaps even to the idea that I don’t deserve to know. This the route of totalitarianism.

I’m not pretending that this is easy. I understand that software and user interface design is a compromise between multiple goals: feature richness (which often leads to difficult user interfaces), ease of use (which often involves compromising on features or hiding them), flexibility, and many others.

I might frame it like this:

  1. There exists an infinite set of well-formed logical operations; that is, there is no limit to the number of non-contradictory logical expressions (e.g. A ⊃ B, "the set A contains B") that one can define.
  2. Particular programming languages allow a subset of such expressions, as limited by the capabilities and power of the hardware (even if a function is possible, it might take an impractical amount of time (or forever) to complete).
  3. Systems architects, programmers and others provide for a subset of all possible operations as defined by 2. in their software.
  4. Systems architects, programmers and others create user interfaces that allow us to access a subset of 3. according to their priorities.

They have to draw the line somewhere. It feels like software creators have placed too much emphasis on prettiness and ease of use, very little on freedom, and sometimes almost no emphasis on letting the user know what’s actually happening. I’m not asking for software that provides for the totality of all practical logical operations, I’m asking for software that treats me like an adult.
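To make that narrowing concrete, here is a minimal Python sketch, with a hypothetical Message type standing in for a WhatsApp-style app: levels 2 and 3 permit arbitrary operations on the text, while the level-4 UI exposes only a fixed menu of whole-message actions. None of these names come from any real app.

```python
# A minimal sketch of the four-level narrowing described above,
# using a hypothetical Message type and a made-up UI action menu.

class Message:
    def __init__(self, body: str):
        self.body = body  # plain text: the fundamental substrate

# Levels 2 and 3: the language and the program permit arbitrary
# operations on the text.
msg = Message("Meet at the cafe at 6, then the gallery at 8.")
fragment = msg.body[:21]   # select any part: "Meet at the cafe at 6"
words = msg.body.split()   # or transform it however we like

# Level 4: a UI like the one criticised above narrows this to a menu
# of whole-message actions; partial selection simply isn't offered.
UI_ACTIONS = {
    "copy":   lambda m: m.body,   # all or nothing
    "delete": lambda m: None,
    "share":  lambda m: m.body,
}

print(UI_ACTIONS["copy"](msg))    # the only "selection" the UI permits
```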

Some recommendations:

  1. Especially for tools intended for non-experts, there seems to be a hidden assumption that the user should be able to figure it out without training, and to figure it out by thrashing around randomly when the company changes the user interface for no reason. A version of this is laudable, but it often leads to systems so simplistic that they make us feckless and impressionable. Perhaps a little training is worth it.
  2. No fig-leaves: hiding a progress message under an animated gimmick was never worth it.
  3. Perhaps the ad-funded model is a mistake, at least in some cases. As in the case of YouTube, it's hard to complain about an app for which I don't pay conventionally. The programs for which I do pay, for example Notion, are immensely less patronizing. Those for which I don't pay conventionally, but which aren't run on ads, like GNU/Linux, LibreOffice, Ardour, etc., are created by people who so value things like openness, accessibility and freedom (as in free) that they border on the fanatical. Perhaps we should pay for more stuff and be more exacting in our values. (Free / open source software is funded in myriad ways, too complex to explore here.)

All this matters because the interfaces in question do the job of the dictator and the censor, and we embrace it. More than being infuriating, they train us to accept gross restrictions in return for trifling or non-existent ease of use, or are a fig leaf covering what is actually going on.

Most people do what they think is possible, or what they think they are allowed to do. Do you think people wouldn't use a Twitter-like "share" function on Instagram, if one existed? What about recursive undo/redo functions that form a tree of possible document versions, as sketched below? Real hyperlinks that don't break when the URL for the destination changes?
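A sketch of that tree-of-versions idea, assuming nothing more than a simple node type (the names are mine, not any real editor's API): instead of a single linear history, every edit becomes a child of the current version, so "undone" branches are never lost.

```python
# A minimal sketch of tree-structured undo/redo. Every edit becomes a
# child of the current version; undoing never destroys a branch.

class VersionNode:
    def __init__(self, content: str, parent=None):
        self.content = content
        self.parent = parent
        self.children = []  # each abandoned redo path survives here

class History:
    def __init__(self, initial: str):
        self.current = VersionNode(initial)

    def edit(self, new_content: str):
        node = VersionNode(new_content, parent=self.current)
        self.current.children.append(node)
        self.current = node

    def undo(self):
        if self.current.parent:
            self.current = self.current.parent

    def redo(self, branch: int = 0):
        if self.current.children:
            self.current = self.current.children[branch]

h = History("draft")
h.edit("draft v2")
h.undo()
h.edit("draft v2 alternative")  # a second branch; v2 is still reachable
h.undo()
h.redo(0)                       # walk back down the first branch
print(h.current.content)        # -> "draft v2"
```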

We rely on innovators to expand our horizons, while in fact they are defining limited applications of almost unlimited possibilities. Programmers, systems architects, businesspeople and others make choices for us: in doing so they condition in us that which feels possible. When they do so well, they are liberators; when they do so poorly, we are stunted.

Some of these decisions appear to be getting worse over time, and they dominate some of the most popular (and useful) systems; the consciousness-expanding capabilities of technology are being steered into a humiliating pose in a cramped space, not by force, but because the space is superficially pretty, easy to access and because choices are painful.

This article was originally posted on Oliver Meredith Cox

Categories
Editorial

Satanic Panic 2: Facebook Boogaloo


McMartin Preschool in the early 1980s // Investigation Discovery
2004 study shows crime reporting dominates local TV coverage // Pew Research
Not your typical fringe conspiracy aesthetic
Notice the QAnon hashtags #greatawakening, #maga, and #painiscoming
Categories
Editorial

‘Who Watches the Watchers?’ — The Internet and the Panopticon

How the philosophical concept of the Panopticon can help us visualise the structure of the internet and our position in it.


Panopticon according to Bentham’s original design

What does the Panopticon concept offer us in terms of understanding the relationship between individual and society in today’s world and what future developments might result from this understanding?

The Internet-as-Panopticon
The individual as a ‘prisoner’ of the Panopticon


The role of the Panopticon ‘guard’ online
https://www.slashfilm.com/the-quarantine-stream-justice-for-nurse-ratched-in-one-flew-over-the-cuckoos-nest/


Presidio Modelo today