
Internet Walden: Introduction—What Is the Internet?

Internet users speak with an Internet accent, but the only manner of speaking that doesn’t sound like an accent is one’s own.

What is the Internet? Why ask a question like this? As I mentioned in this piece’s equal companion Why We Should Be Free Online, we are in the Internet like fish in water and forget both its life-giving presence and its nature. We must answer this question because, given a condition of ignorance in this domain, it is not a matter of whether but when one’s rights and pocketbook will be infringed.

What sort of answer does one expect? Put it this way: ask an American what the USA is. There are at least three styles of answer:

  1. Someone who has never considered the question before might say that this is their country, it’s where they live, etc.
  2. Another might say that America is a federal constitutional republic, bordering Canada to the North and Mexico to the South, etc.
  3. Another might talk about the country’s philosophy and its style of law and life, how, for example, the founding fathers wrote the Constitution so as to express rights in terms of what the government may not do rather than naming any entitlements, or how the USA does not have an official language.

The third is the style of answer that I’m seeking, so as to elucidate the real human implications of what we have, the system’s style and the philosophy behind it. This will tell us, in the case of the Internet as it does in the case of the USA, why we have what we have, why we are astonishingly fortunate to have it in this configuration, what is wrong, and how best to amend the system or build new systems to better uphold our rights and promote human flourishing.

In pursuit of this goal, I will address what I think are the three main mistaken identities of the Internet:

  1. The Web. The Web is the set of HTML documents, websites and the URLs connecting them; it is one of many applications which run on the Internet.
  2. Computer Network of Arbitrary Scale (CNAS). CNAS is a term of my own creation which will be explained in full later. In short: Ford is a car, the Internet is a CNAS.
  3. Something Natural, Deistic or Technical. As with many other technologies, it is tempting to believe that the way the Internet is derives from natural laws or even technical considerations; these things are relevant, but the nature of the Internet is incredibly personal to its creators and users, and derives significantly from philosophy and other fields.

Finally, I will ask a supporting and related question, Who Owns the Internet? which will bring this essay to a close.

With our attention redirected away from these erroneous ideas and back to the actual Internet, we might then better celebrate what we have, and better understand what to build next. More broadly, I think that we are building a CNAS society and haven’t quite caught up to the fact; we need to understand the civics of CNAS in order to act and represent ourselves ethically. Otherwise, we are idiots: idiots in the ancient sense of the word, meaning those who do not participate.

Pulling on that strand, I claim, and will elaborate later, that we should be students of the Civics of CNASs, in that we are citizens of a connected society; and I don’t mean merely that our pre-existing societies are becoming connected, but rather that the new connections afforded by tools like the Internet are facilitating brand new societies.

The Internet has already demonstrated the ability to facilitate communication between people, nationalities and other groups that would, in most other periods of time, have found it impossible not just to get along but to form the basis for communication through which to get along. With an active CNAS citizenry to steward our systems of communication, I expect that our achievements in creativity, innovation and compassion over the next few decades will be unimaginable.

The Internet is Not the Web

You may, dear reader, already be aware of this distinction; please do stick with me, though, as clarifying this misapprehension will clarify much else. The big difference between the Web and the Internet is this: the Internet is the global system of interconnected networks, running on the TCP/IP suite of protocols; the Web is one of the things you can do on the Internet. Other things include email, file-sharing, etc.

It’s not surprising that we confuse the two concepts, or say “go on the Internet” when we mean “go on the Web”, in that the Web is perhaps the Internet application that most closely resembles the Internet itself: people and machines, connected and communicating. This is not unlike how, as a child, I thought that the monitor was the computer, disregarding the grey box. Please don’t take this as an insult; the monitor may not be where the processing happens, but it’s where the things that actually matter to us find a way to manifest in human consciousness.

As you can see in the above diagram, the Web is one of many types of application that one can use on the Internet. It’s not even the only hypermedia or hypertext system (the HT in HTTPS stands for hypertext).

The application layer sits on top of a number of other functions that, for the most part, one barely or never notices, and rightly so. However, we ought to concern ourselves with these things because of how unique and interesting they are, so let’s go through them one by one.

Broadly, the Internet suite is based on a system of layers. As I will explore later on, Internet literature actually warns against strict layering. Caveats aside, the Internet protocol stack looks like this:

Physical

Not always included in summaries of the Internet protocol suite, the physical layer refers to the actual physical connection between machines. This can be WiFi signals, CAT-5 cables, DSL broadband lines, cellular transmissions, etc.

Link

The Link layer handles data transmission between devices. For example, the Link layer handles the transmission of data from your computer to your router, such as via WiFi or Ethernet, and then onward, say over a DSL line, to the target network (say, to a webserver from which you’re accessing a site). The Link layer was specifically designed so that it does not matter what the physical layer actually is, so long as it provides the basic necessities.

Internet

The Link layer handled the transmission between devices; the Internet layer organizes the hops between networks, in particular between Internet routers. The Link layer on its own can get the data from your computer to your router, but to get to the router for the target network, it needs the Internet layer’s help. This is why (confusingly) it’s called the Internet layer: it provides a means of interconnecting networks.

Your devices, your router, and all Internet-accessible machines are assigned the famous IP addresses, which come in the form of a 32-bit number: four bytes, written in decimal and separated by dots. My server’s IP address is 209.95.52.144.

This layer thinks in terms of getting data from one IP address to another.
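To make that concrete, here is a small sketch (Python, standard library only) showing that a dotted-quad address really is a single 32-bit number, with each of the four parts contributing one byte; `ip_to_int` is a made-up helper for illustration.

```python
import socket

# A dotted-quad IPv4 address is four bytes written in decimal and joined
# by dots; packing them into one value recovers the underlying 32-bit number.
def ip_to_int(addr: str) -> int:
    a, b, c, d = (int(part) for part in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# The standard library agrees: inet_aton packs the same four bytes.
addr = "209.95.52.144"
assert ip_to_int(addr) == int.from_bytes(socket.inet_aton(addr), "big")
print(ip_to_int(addr))  # → 3512677520
```

The same idea underlies subnetting and routing: routers compare these 32-bit numbers, not the human-friendly dotted strings.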

Transport

Now, with the Link and Internet layers buzzing away, transmitting data, the Transport layer works above them both, establishing communication between hosts. This is akin to how I have something of a direct connection with someone to whom I send a letter, even though that letter passes through letterboxes and sorting offices; the Transport layer sets up this direct communication between machines, so that they can act independently with respect to the underlying conditions of the connection itself. There are a number of Transport layer protocols, but the most famous is TCP.

One of the most recognizable facets of the Transport layer is the port number. TCP assigns numbered “ports” to identify different processes; for the Web, for example, HTTP uses port 80 and HTTPS, port 443. To see this in action, try this tool, which will check which ports are open on a given host: https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap — try it first with my server, omc.fyi.

This layer thinks in terms of passing data over a direct connection to another host.
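For a quick, self-contained illustration (Python, standard library only; no network access beyond the loopback interface): every TCP endpoint is a pair of IP address and port, and binding to port 0 asks the operating system for any free “ephemeral” port, which we can then read back.

```python
import socket

# Every TCP endpoint is (IP address, port number). Binding to port 0
# asks the OS to pick any free "ephemeral" port for us.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))   # 127.0.0.1 is the loopback address
addr, port = sock.getsockname()
print(addr, port)             # e.g. 127.0.0.1 49731 (OS-chosen port)
sock.close()

# Servers, by contrast, bind to fixed, well-known ports (80 for HTTP,
# 443 for HTTPS) so that clients know where to find them.
```

Clients typically use ephemeral ports for their end of a connection, while servers sit at the agreed-upon numbers; that convention is what lets your browser find a webserver without negotiation.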

Application

The Application layer is responsible for the practicalities associated with doing the things you want to do: HTTPS for the Web, SMTP for email, SSH for a remote command line connection, are all Application layer protocols. If it wasn’t clear by now, this is where the Web lives: it is one of many applications running on the Internet.

How it Works in Practice

Here’s an example of how all this works:

  • Assume a user has clicked on a Web link in their browser, and that the webserver has already received this signal, which manifests in the Application layer. In response, the server sends the desired webpage using HTTPS, which lives in the Application layer.
  • The Transport Layer is then responsible for establishing a connection (identified by a port) between the server and the user’s machine, through which to communicate.
  • The Internet Layer is responsible for transmitting the appropriate data between routers, such as the user’s home router and the router at the location of the Web server.
  • The Link Layer is responsible for transmitting data between the user’s machine and their router, between their router and the router for the webserver’s network, and between the webserver and its router.
  • The Physical Layer is the physical medium that connects all of this: fiberoptic cable, phone lines, electromagnetic radiation in the air.
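The flow above can be rehearsed end-to-end on a single machine. The sketch below (Python, standard library only; the “page” content and the HTTP-ish messages are made up for illustration) runs a toy webserver on the loopback interface and fetches one page from it: the Application-layer exchange rides on a Transport-layer (TCP) connection, addressed by the Internet layer’s IP address and port.

```python
import socket, threading

def serve(listener):
    # Application layer: read an HTTP-style request, send an HTTP-style reply.
    conn, _ = listener.accept()
    conn.recv(1024)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello")
    conn.close()

# Transport layer: a TCP listening socket, on an OS-chosen port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # Internet layer: an IP address
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,)).start()

# The "user's machine": connect and request the page.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
reply = client.recv(1024).decode()
client.close()
print(reply.split("\r\n\r\n", 1)[1])  # → hello
```

On the real Internet the only difference is distance: the same request would be chopped into packets and carried across many Link-layer hops and many routers, but neither endpoint needs to know that.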

Why is this interesting? Well, firstly, I think it’s interesting for its importance; as I claim in this piece’s equal counterpart on Internet freedom, the Internet is used for so much that questions of communication are practically the same as questions of the Internet in many cases. Secondly, the Internet is interesting for its peculiarity, which I will address next.

“Internet” Should Not Be Synonymous with “Computer Network of Arbitrary Scale”

When addressing the Internet as a system, there appear to be two ways in which people use the word:

  • One refers to the Internet as in the system we have now and, in particular, that runs on the TCP/IP protocol suite.
  • The other refers to the Internet as a system of interconnected machines and networks.

Put it this way: the first definition is akin to a proper noun, like Mac or Ford; the second is a common noun, like computer or car.

This is not uncommon: for years I really thought that “hoover” was a generic term, and learned only a year or so ago that TASER is a brand name (the generic term is “electroshock weapon”). Then of course we have non-generic names that are, sometimes deliberately, generic-sounding: “personal computer” causes much confusion, in that it could refer to IBM’s line of computers by that name, to something compatible with the former, or merely to a computer for an individual to use. There is of course the Web, one of many hypertext systems that allow users to navigate interconnected media at their liberty, whose name sounds merely descriptive but in fact refers to a specific system of protocols and styles. The same is true for the Internet.

For the purpose of clarifying things, I’ve coined a new term: computer network of arbitrary scale (CNAS or seenas). A CNAS is:

  1. A computer network
  2. Using any protocol, technology or sets thereof
  3. That can operate at any scale

Point three is important: we form computer networks all the time, but one of the things about the Internet is that its protocols are robust enough for it to be global. If you activate the WiFi hotspot on your phone and have someone connect, that is a network, but it’s not a CNAS because, configured thus, it would have no chance of scaling. So, not all networks are CNASs; today, the only CNAS is the thing we call the Internet, but I think this will change in a matter of years.

There’s a little wiggle room in this definition: for example, the normal Internet protocol stack cannot work in deep space (hours of delay due to the absurd distances, and connections that drop in and out because the sun gets in the way, make it hard), so one could argue that today’s Internet is not a CNAS because it can’t scale arbitrarily.

I’d rather keep this instability in the definition:

  • Firstly, because (depending on upcoming discoveries in physics) it may be possible that no network can scale arbitrarily: there are parts of the universe that light from us will never reach, because of cosmic expansion.
  • Secondly, because the overall system in which all this talk is relevant is dynamic (we update our protocols, the machines on the network change, and the networks themselves change in size and configuration); a computer network that hits growing pains at a certain size, and then surmounts them with minor protocol updates, didn’t cease to be a CNAS and then become one again.

Quite interestingly, in this RFC on “bundle protocol” (BP) for interplanetary communication (RFCs being a series of publications by the Internet Society, setting out various standards and advice) the author says the following:

BP uses the “native” internet protocols for communications within a given internet. Note that “internet” in the preceding is used in a general sense and does not necessarily refer to TCP/IP.

This is to say that people are creating new things that have the properties of networking computers, and can scale, but are not necessarily based on TCP/IP. I say that we should not use the term Internet for this sort of thing; we ought to differentiate so as to show how undifferentiated things are (on Earth).

Similarly, much of what we call the internet of things isn’t really the Internet. For example, Bluetooth devices can form networks, sometimes very large ones, but it’s only really the Internet if they connect to the actual Internet using TCP/IP, which doesn’t always happen.

I hope, dear reader, that you share with me the sense that it is absolutely absurd that our species has just one CNAS (the Internet) and one hypertext system with anything like global usage (the Web). We should make it our business to change this:

  • One, to give people some choice
  • Two, to establish some robustness (the Internet itself is robust, but relying on a single system to perform this function is extremely fragile)
  • Three, to see if we can actually make something better

At this point I’m reminded of the scene in the punchy action movie Demolition Man, in which the muscular protagonist (frozen for years and awoken in a strange future civilization) is taken to dinner by the leading lady, who explains that in the future, all restaurants are Taco Bell.

This is and should be recognized to be absurd. To be clear, I’m not saying that the Internet is anything like Taco Bell, only that we ought to have options.

The Internet is not Natural, Deistic or even that Technical

I want to rid you of a dangerous misapprehension. It is a common one, but, all the same, I can’t be sure that you suffer from it; all I can say is that, if you’ve already been through this, try to enjoy rehearsing it with me one more time.

Here goes:

Many, if not most, decisions in technology have little to do with technical considerations, or some objective standard for how things should be; for the most part they relate, at best, to personal philosophy and taste, and, at worst, ignorance and laziness.

Ted Nelson provides a lovely introduction, here:

Here’s a ubiquitous example: files and folders on your computer. Let’s say I want to save a movie, 12 Angry Men, on my machine: do I put it in my Movies that Take Place Mainly in One Room folder with The Man from Earth, or my director Sidney Arthur Lumet folder with Dog Day Afternoon? Ideally I’d put it in both, but most modern operating systems will force me to put it in just one folder. In Windows (very bad) I can make it sort of show up in more than one place with “shortcuts” that break if I move the original; with MacOS (better) I have “aliases”, which are more robust.

But why am I prevented from putting it in more than one place, ab initio? Technically, especially in Unix-influenced systems (like MacOS, Linux, BSD, etc.), there is no reason why not: it’s just that the people who created the first versions of these systems decades ago didn’t think you needed to, or thought you shouldn’t—and it’s been this way for so long that few ask why.

A single, physical file certainly can’t be in more than one place at a time, but the whole point of electronic media is the ability to structure things arbitrarily, liberating us from the physical.
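In fact, Unix-style filesystems already permit exactly this, via hard links: one file can appear under several names, in several directories, at once. A small sketch (Python, standard library only; the folder and file names are just the examples from above, and everything happens in a throwaway temporary directory):

```python
import os, tempfile

# Two "folders" for the same film, in a throwaway directory.
root = tempfile.mkdtemp()
lumet = os.path.join(root, "Sidney Lumet")
one_room = os.path.join(root, "One Room Movies")
os.mkdir(lumet)
os.mkdir(one_room)

original = os.path.join(lumet, "12 Angry Men.mp4")
with open(original, "w") as f:
    f.write("movie data")

# A hard link: the same file, listed in a second directory. Not a copy,
# and not a fragile shortcut that breaks when the original moves.
os.link(original, os.path.join(one_room, "12 Angry Men.mp4"))

# Reading via either name reaches the same underlying data.
with open(os.path.join(one_room, "12 Angry Men.mp4")) as f:
    print(f.read())  # → movie data
```

The capability has been sitting in the filesystem for decades; what’s missing is an interface that invites ordinary users to organize their files this way.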

Technology is a function of constraints—those things that hold us back, like the speed of processors, how much data can pass through a communications line, money—and values: values influence the ideas, premises, conceptual structures that we use to design and build things, and these things often reflect the nature of their creators: they can be open, closed, free, forced, messy, neat, abstract, narrow.

As you might have guessed, the creators and administrators of technology often express choices (such as how a file can’t be in two places at once) as technicalities; sometimes this is a tactic to get one’s way, sometimes just ignorance.

Why does this matter? It matters because we won’t get technology that inculcates ethical action in us and that opens the scope of human imagination by accident; we need the right people with the right ideas to build it. In the case of the Internet, we were particularly fortunate. To illustrate this, I’m going to go through two archetypical values that shaped what the Internet became, and explore how things could have been otherwise.

Robustness

See below a passage from RFC 1122. It’s on the long side, but I reproduce it in full for you to enjoy the style and vision:

At every layer of the protocols, there is a general rule whose application can lead to enormous benefits in robustness and interoperability [IP:1]:

“Be liberal in what you accept, and conservative in what you send”

Software should be written to deal with every conceivable error, no matter how unlikely; sooner or later a packet will come in with that particular combination of errors and attributes, and unless the software is prepared, chaos can ensue. In general, it is best to assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect. This assumption will lead to suitable protective design, although the most serious problems in the Internet have been caused by unenvisaged mechanisms triggered by low-probability events; mere human malice would never have taken so devious a course!

Adaptability to change must be designed into all levels of Internet host software. As a simple example, consider a protocol specification that contains an enumeration of values for a particular header field — e.g., a type field, a port number, or an error code; this enumeration must be assumed to be incomplete. Thus, if a protocol specification defines four possible error codes, the software must not break when a fifth code shows up. An undefined code might be logged (see below), but it must not cause a failure.

The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features. It is unwise to stray far from the obvious and simple, lest untoward effects result elsewhere. A corollary of this is “watch out for misbehaving hosts”; host software should be prepared, not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility.

This is not just good technical writing, this is some of the best writing. In just a few lines, Postel assures us of the “not whether but when” orientation that can be applied almost universally, which almost predicts Taleb’s Ludic Fallacy: the things that really hurt you are those for which you weren’t being vigilant, especially ones that don’t belong to familiar, mathematical-feeling or game-like scenarios. Taleb identifies another error type, not planning enough for the scale of the damage; Postel understood that in a massively interconnected environment, small errors can compound into something disastrous.

Then Postel explains one of the subtler parts of his imperative: on first reading, I had thought that Be liberal in what you accept meant “Permit communications that are not fully compliant with the standard, but which are nonetheless parseable”. It goes beyond this, meaning that one should do so while being liberal in an almost metaphorical sense: being tolerant of, and therefore not breaking down in response to, aberrant behavior.

This is stunningly imaginative: Postel set out how Internet hosts might communicate without imposing uniform versions of the software among all Internet users. Remember, as I mention in this essay’s counterpart on freedom, the Internet is stunningly interoperable: today, in 2021, you still can’t reliably switch storage media formatted for Mac and Windows, but it’s so easy to hook new devices up to the Internet that people seem to say why not, giving us Internet toothbrushes and fridges.

Finally, the latter part, calling hosts to be Conservative in what you send, is likewise more subtle than one might gather on first reading. It doesn’t mean that one should merely adhere to the standards (which is hard enough), it means do so while avoiding doing something that, while permitted, risks causing issues in other devices that are out-of-date or not set up properly. Don’t just adhere to the standard, imagine whether some part of the standard might be obscure or new enough that using it might cause errors.

This supererogation reaches out of the bounds of mere specification and into philosophy.

Postel’s Law is, of course, not dogma, and people in the Internet community have put forward proposals to move beyond it. I’m already beyond my skill and training, so can’t comment on the specifics here, but wish to show only that the Law is philosophical and beautiful, not necessarily perfect and immortal.

Simplicity

See RFC 3439:

“While adding any new feature may be considered a gain (and in fact frequently differentiates vendors of various types of equipment), but there is a danger. The danger is in increased system complexity.”

And RFC 1925:

“It is always possible to aglutenate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.”

You might not need more proof than the spelling error to understand that the Internet was not created by Gods. But if you needed more, I wish for you to take note of how these directives relate to a particular style of creation, the implication being that the Internet could have gone many other ways and would have made our lives very different.

Meanwhile, these ideas are popular but actually quite against the grain, overall. With respect to the first point, it’s quite hard to find arguments to the contrary; this seems to be because features are the only way to get machines to do things, and doing things is what machines are for. This seems to be the same as the reason why there’s no popular saying meaning “more is more” but we do have the saying “less is more”: more is actually more, but things get weird with scale.

The best proponents for features and lots of them are certainly software vendors themselves, like Microsoft here:

Again, I’m not saying that features are bad—everything your computer does is a feature. This is, however, why it’s so tempting to increase them without limit.

Deliberately limiting features, or at least spreading features among multiple self-contained programs, appears to have originated within the Unix community, and is best encapsulated by what is normally called the Unix philosophy. Here are my two favorite points (out of four, from one of the main iterations of the philosophy):

  1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
  2. Expect the output of every program to become the input to another, as yet unknown, program. Don’t clutter output with extraneous information.

The first point, there, neatly encompasses the two ideas referenced before in RFCs: don’t add too many features, don’t try to solve all your problems with one thing.

This philosophy is best expressed by the Internet Protocol layer of the stack (see the first section of this essay for our recap of the layers). It is of course tempting to have IP handle more stuff; right now, all it does is route traffic between the end users, and those users are responsible for anything more clever than that. This confers two main advantages:

  1. Simple systems mean less stuff to break; connectivity between networks is vital to the proper function of the Internet, better to lighten the load on the machinery of connection and have the devices on the edge of the network be responsible for what remains.
  2. Adding complex features to the IP layer, for example, would add new facilities that we could use; but any new feature imposes a cost on all users, whether it’s widely used or not. Again, better to keep things simple when it comes to making connections and transmitting data, and get complex on your own system and your own time.

At risk of oversimplifying: the way the Internet is derives from a combination of technical considerations, ingenuity, and many philosophies of technology. There are, one can imagine, better ways in which we could have done this; but for now I want to focus on what could have been: imagine if the Internet had been built by IBM (it would have been released in the year 2005 and would require proprietary hardware and software) or Microsoft (it would have come out at around the same time, but would be run via a centralized system that crashes all the time).

Technology is personal first, philosophical second, and technical last; corollary: understand the philosophy of tech, and see to it that you and the people that make your systems have robust and upright ideas.

Who Owns and Runs the Internet?

As seems to be the theme: there’s a good deal of confusion about who owns and runs the Internet, and our intuitions can be a little unhelpful because the Internet is an odd creature.

We have a long history of understanding who owns physical objects like our computers and phones, and if we don’t own them fully, have contractual evidence as to who does. Digital files can be more confusing, especially if stored in the cloud or on third party services like social media. See this piece’s counterpart on freedom for my call to action to own and store your stuff.

That said, a great deal of the Internet, in terms of software and conceptually, is hidden from us, or at least shows up in a manner that is confusing.

The overall picture looks something like this (from the Internet Engineering Task Force):

“The Internet, a loosely-organized international collaboration of autonomous, interconnected networks, supports host-to-host communication through voluntary adherence to open protocols and procedures defined by Internet Standards.”

Hardware

First, the hardware. Per its name, the Internet interconnects smaller networks. Those networks—like the one in your home, an office network, one at a university, or something ad hoc that you set up among friends—are controlled by the uncountable range of individuals and groups that own networks and/or the devices on them.

Don’t forget, of course, that the ownership of this physical hardware can be confusing, too: it’s my home network, but Comcast owns the router.

Then you have the physical infrastructure that connects these smaller networks: fiberoptic cables, ADSL lines, wireless (both cellular and WISP setups) which is owned by internet service providers (ISPs). Quite importantly, the term ISP says nothing about nature or organizational structure: we often know ISPs as huge companies like AT&T, but ISPs can be municipal governments, non-profits, small groups of people or even individuals.

Don’t assume that you have to settle for internet service provided by a supercorporation. There may be alternatives in your area, but their marketing budgets are likely small, so you need to look for them. Here are some information sources:

ISPs have many different roles, and transport data varying distances and in different ways. Put it this way: to get between two hosts (e.g. your computer and a webserver) the data must transit over a physical connection. But there is no one organization that owns all these connections: it’s a patchwork of different networks, of different sizes and shapes, owned by a variety of organizations.

To the user, the Internet feels like just one thing: we can’t detect when an Internet data packet has to transition between, say, AT&T’s cabling to Cogent Communications’—it acts as one thing because (usually) the ISPs coordinate to ensure that the traffic gets where it is supposed to go. The implication of this (which I only realized upon starting research for this article) is that the ISPs have to connect their hardware together, which is done at physical locations known as Internet exchange points, like the Network Access Point of the Americas, where more than 125 networks are interconnected.

Intangibles

The proper function of the Internet relies heavily on several modes of identifying machines and resources online: IP addresses and domain names. There are other things, but these are the most important and recognizable.

At the highest level, ICANN manages these intangibles. ICANN is a massively confusing and complicated organization to address, not least because it has changed a great deal, and because it delegates many of its important functions to other organizations.

I’m going to make this very quick and very simple, and for those who would like to learn more, see the Wikipedia article on Internet governance. ICANN is responsible for three of the things we care about: IP addresses, domain names, and Internet technology standards; there’s more, but we don’t want to be here all day. There must be some governance of IP addresses and domain names, if nothing else so that we ensure that no single IP address is assigned to more than one device, or one domain name assigned to more than one owner.

The first function ICANN delegates to one of several regional organizations that hand out unique IP addresses. IP addresses themselves aren’t really ownable in a normal sense; they are assigned.

The second function was once handled by ICANN itself, now by an affiliate organization, Public Technical Identifiers (PTI). Have you heard of this organization before? It is very important, but doesn’t even have a Wikipedia page.

PTI, among other things, is responsible for managing the domain name system (DNS) and for accrediting the companies and organizations that manage these domains, such as GoDaddy, VeriSign and Tucows. I might register my domain with GoDaddy, for example, but, quite importantly, I don’t own it; I just have the exclusive right to use it.

These organizations allow users to register the domains, but PTI itself manages the very core of the DNS, the root zone. The way DNS works is actually rather simple. If your computer wishes, say, to pull up a website at site.example.com:

  1. It will first ask the DNS root where to find the server responsible for the com zone.
  2. The DNS root file will tell your computer the IP address of the server responsible for com.
  3. Your machine will then go to this IP address and ask the server where to find example.com.
  4. The com server will tell you where to find example.com.
  5. And, finally, the example server will tell you where to find site.example.com.
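The walk down the hierarchy can be sketched like this (Python; `resolution_path` is a made-up helper that only lists the chain of zones consulted, it performs no real lookups):

```python
def resolution_path(hostname: str) -> list:
    """List the zones consulted, from the root down to the full name."""
    labels = hostname.split(".")
    path = ["<root>"]
    # Walk from the rightmost label (the top-level domain) inward.
    for i in range(len(labels) - 1, 0, -1):
        path.append(".".join(labels[i:]))
    path.append(hostname)
    return path

print(resolution_path("site.example.com"))
# → ['<root>', 'com', 'example.com', 'site.example.com']
```

Real resolvers cache aggressively at every step, which is why most lookups never actually touch the root; but the authority still flows from it.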

You might have noticed that this is rather centralized; it’s not fully centralized in that everything after the first lookup (where we found how to get to com) is run by different people, but it’s centralized to the extent that PTI controls the very core of the system.

Fundamentally, however, the PTI can’t prevent anyone else from providing a DNS service: computers know to go to the official DNS root zone, but can be instructed to get information from anywhere. As such, here are some alternatives and new ideas:

  • GNS, via the GNUNET project, which provides a totally decentralized name system run on radically different principles.
  • Handshake, which provides a decentralized DNS, based on a cryptographic ledger.
  • OpenNIC, which is not as radical as GNS or Handshake, but which, not being controlled by ICANN, provides a range of top-level domains not available via the official DNS (e.g. “.libre” which can be accessed by OpenNIC users only).

The Internet Engineering Task Force (IETF) handles the third function, which I will explore in the next section.

Before ICANN, Jon Postel, mentioned above, handled many of these functions personally: on a voluntary basis, if you please. ICANN, created in 1998, is a non-profit: it was originally contracted to perform these functions by the US Department of Commerce. In 2016, the Department of Commerce made it independent, performing its duties in collaboration with a “multistakeholder” community, made up of members of the Internet technical community, businesses, users, governments, etc.

I simply don’t have the column inches to go into detail on the relative merits of this, e.g. which is better, DOC control or multistakeholder, or something else? Of course, there are plenty of individuals and governments that would have the whole Internet, or at least the ICANN functions, be government controlled: I think we ought to fight this with much energy, because we can guarantee that any government with this level of control would use it to victimize its enemies.

I think I’m right in saying that in 1998 there was no way to coordinate the unique assignment of IP addresses and domain names without some central organization. Not any more: Handshake, GNUNET (see above) and others are already pioneering ways to handle these functions in a decentralized way. See subsequent articles in this series for more detail.

Dear reader, you may be experiencing a feeling somewhat similar to what I felt, such as when first discovering that there are alternative name systems. That is, coming upon the intuition that the way technology generally is set up today is not normal or natural; rather, it is done by convention and is, at that, one among many alternatives.

If you are starting to feel something like this, or already do, I encourage you to cultivate this feeling: it will make you much harder to deceive.

Standards

The Internet is very open, meaning that all you need, really, to create something for the Internet is the skill to do so; this doesn’t mean you can do anything or that anything is possible (there are legal and technical limitations). One of the many results of this openness is that no single organization is responsible for all the concepts and systems used on the Internet.

This is not unlike how, in open societies, there is no single organization responsible for all the writing that is published: you only get this sort of thing in dictatorships. Contrast this, for example, with the iPhone and its accompanying app store, for which developers must secure permission in order to list their apps. I, unlike some others, say that this is not inherently unethical: however, we are all playing a game of choose your own adventure, and the best I can do is commend the freer adventure to you.

There are, however, a few very important organizations responsible for specifying Internet systems. Before we address them, it’s worth looking at the concept of a standards organization. If you’re already familiar, please skip this.

  1. What is a standard, in this context? A standard is a description of the way things work within a particular system such that, if someone follows that standard, they will be able to create things that work with others that follow the standard. ASCII, USB, and, of course, Internet Protocol are all standards.
  2. Why does this matter? I address this question at length in this piece’s counterpart on freedom; put simply, standards are like languages, they facilitate communication. USB works so reliably, for example, because manufacturers and software makers agree to the standard, and without these agreements, we the users would have no guarantee that these tools would operate together.
  3. Who creates the standards? Anyone can create a standard, but standards matter to the extent that they are adopted by the creators of technology and used. Quite commonly, people group together for the specific purpose of creating a standard or group of standards: sometimes this is a consortium of relevant companies in the field (such as the USB Implementers Forum) or an organization specifically set up for this purpose, such as the ISO or ITU. Other times, a company might create a protocol for its own purposes, which becomes the de facto standard; this is often but not necessarily undesirable, because that firm will likely have created something to suit its own needs rather than those of the whole ecosystem. Standards like ASCII and TCP/IP, for example, are big exceptions to the popular opprobrium for things designed by committees.

In the case of the Internet, the main standards organization is the Internet Engineering Task Force (IETF); you can see their working groups page for a breakdown of who does what. Quite importantly, the IETF is responsible for specifying Internet Protocol and TCP, which, you will remember from above, represent the core of the Internet system.
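
To make the payoff of such standards concrete, here is a minimal sketch of TCP at work, using Python’s standard socket library: a tiny echo server and client talking over loopback. Because both ends follow the same IETF-specified protocol, any two conforming programs can do this, regardless of who made them.

```python
import socket
import threading

# A minimal TCP exchange over loopback: both ends follow the same
# IETF-specified protocol, which is why they can interoperate at all.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024).upper())  # echo the message back, upper-cased
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, internet")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply)  # → b'HELLO, INTERNET'
```

The same few calls work on Mac, Windows, GNU/Linux and the rest, which is rather the point.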

The IETF publishes the famous RFC publication that I have referenced frequently. The IETF itself is part of the Internet Society, a non-profit devoted to stewarding the Internet more broadly. Do you care about the direction of the Internet? Join the Internet Society: it’s free.

There are other relevant standards organizations, far too many to count; it’s incumbent upon me to mention that the World Wide Web Consortium handles the Web, one of the Internet’s many mistaken identities.

Nobody is forcing anyone to use these standards; nor is the IETF directly financially incentivized to have you use them. Where Apple makes machines that adhere to its standards and would have you buy them (and will sue anyone that violates its intellectual property), all the Internet Society can do is set the best standard it can and commend it to you, and perhaps wag its finger at things non-compliant.

If I wanted to, I could make my own, altered version of TCP/IP; the only disincentive to use it would be the risk that it wouldn’t work or, if it only played with versions of itself, that I would have no one to talk to. What I’m trying to say is that the Internet is very open, relative to most systems in use today: the adoption of its protocols is voluntary, manufacturers and software makers adhere to these standards because it makes their stuff work.

There is, of course, Internet coercion, and all the usual suspects are clamoring for control, every day: for my ideas on this subject, please refer to this piece’s counterpart on freedom.

Conclusion: We Need a Civics of Computer Networks of Arbitrary Size, or We Are Idiots

I propose a new field, or at least a sub-field: the civics of CNASs, which we might consider part of the larger field of civics and/or the digital humanities. Quite importantly, this field is distinct from some (quite interesting) discussions around “Internet civics” that are really about regular civics, just with the Internet as a medium for organization.

I’m talking about CNASs as facilitating societies in themselves, which confer rights, and demand understanding, duties, and reform. And let’s please not call this Internet Civics, which would be like founding a field of Americs or Britanics and calling our work done.

So, to recapitulate this piece in the CNAS civics mode:

  1. The subject of our study, the Internet, is often confused for the Web, not unlike the UK and England, Holland and the Netherlands. This case of mistaken identity is instrumental because it deceives people as to what they have and how they might influence it.
  2. The Internet is also confused for the class of things to which it belongs: computer networks of arbitrary scale (CNAS). This is deceptive because it robs us of the sense (as citizens of one country get by looking at another country) that things can be done differently, while having us flirt with great fragility.
  3. The Internet’s founding fathers are much celebrated and quite well known in technical circles, but their position in the public imagination is dwarfed by that of figures from the corporate consumer world, despite the fact that the Internet is arguably the most successful technology in history. Because of its obscurity, there’s the sense that the Internet’s design is just so, normal or objective, or worse, magical, when quite the opposite is true: the Internet’s founding fathers brought their own philosophies to its creation; the proper understanding of any thing can’t omit its founding and enduring philosophy.
  4. The Internet’s structure of governance, ownership and organization is so complex that it is a study unto itself. The Internet combines immense openness with a curious organizational structure that includes a range of people and interest groups, while centralizing important functions among obscure, barely-known bodies. The Internet Society, which is the main force behind Internet technology, is free to join, but has only 70,000 members worldwide; Internet users are both totally immersed in it and mostly disengaged from the idea of influencing it.

As I say in this piece’s counterpart on freedom, the Internet is a big, strange, unique monster: one that all the usual suspects would have us carve up and lobotomize for all the usual reasons; we must prevent them from doing so. This means trading in the ignorance and disengagement for knowledge and instrumentality. Concurrently, we must find new ways of connecting and structuring those connections. If we do both of these things, we might have a chance of building and nurturing the network our species deserves.

Categories
Editorial

Internet Walden: Introduction—Why We Should Be Free Online

Image credit: Marta de la Figuera

Goodness, truth, beauty. These are not common terms to encounter during a discussion of the Internet or computers; for the most part, the normal model seems to be that people can do good or bad things online, but the Internet is just technology.

This approach, I think, is one of the gravest mistakes of our age: thinking or acting as though technology is separate from fields like philosophy or literature, and/or that criticisms from other fields are either irrelevant or at best secondary to technicalities. This publication, serving the Digital Humanities, is part of a much-needed correction.

I say that technology can be just or unjust in the same sense that a law can: an unjust law doesn’t possess the sort of ethical failing available only to a sentient being, rather, it has an ethical character (such as fairness or unfairness) as does the action that it encourages in us. We should accept the burden of seeing these qualities as ways: towards or away from goodness, truth and beauty.

Such a way is akin to a method or a path, like, for example, meditation or the practice of empathy: it’s not necessarily virtuous in itself, but the idea is that with consistent application one develops one’s virtue, or undermines it. My claim is that this is especially true for the ways in which we use technology, both as individuals and collectively.

In Computer Lib, Ted Nelson describes “the mask of technology,” which serves to hide the real intentions of computer people (technicians, programmers, managers, etc.) behind pretend technical considerations. (“We can’t do it that way, the computer won’t allow it.”) There’s another mask that works in the opposite way: the mask of technological ignorance. We wear it either to avoid facing difficult ethical questions about our systems (hiding behind the fact that we don’t understand them) or as an excuse when we offload responsibilities onto others.

This essay concerns itself primarily with three ways: the secondary characteristics that lend themselves to our pursuit of goodness, truth and beauty, specifically in the technology of communication. They are freedom, interoperability, and ideisomorphism; the latter is a concept which I haven’t heard defined before, but which can be summarized thus: the quality of systems which are both flexible enough to express the complexity and nuance of human thought and which have features that lend themselves to the shape of our cognition. (Ide, as in idea, iso as in equal to, morph as in shape.)

We should care about freedom, because we require it to build and experiment with systems in pursuit of the good; interoperability, because it forces us to formulate the truth in its purest form and allows us to communicate it; ideisomorphism, because it allows us to combine maximal taste and creativity with minimal technological impositions and restrictions in our pursuit of beauty. For details on these ways, please read on.

I won’t claim that this is a complete treatment of the ethical character of machines: my subject is machines for communication, and the best I can hope for is to start well.

In short, bad communications technology causes and covers up ethical failures. Take off the mask. We have nothing to lose but convenient excuses and stand to gain firstly, tools that act as force-multipliers for our best qualities and, secondly, some of the ethical clarity that comes from freedom and diverse conversation, and, if nothing else, a better understanding of ourselves.

Oliver Meredith Cox, January 28th, 2021

An Introduction to Walden: Life on the Internet

I argue that anyone who cares about human flourishing should concern themselves with Internet freedom, interoperability and ideisomorphism; I make this claim because the ethical character of the Internet appears to be the issue which contains or casts its shadow over the greatest number of other meaningful issues, and because important facts about the nature of the Internet are ways to inculcate our highest values in ourselves.

The Internet offers us the opportunity to shed the mask of technological ignorance: by understanding its proper function we should know how to spot the lies, and how to use the technology freely. We might then transform it into the retina and brain of an ideisomorphic system that molds to and enhances, rather than constricting, our cognition.

As such, with respect to the Internet, I say that we should:

  1. Learn/understand the tools and their nature.
  2. Use, build or demand tools that lend themselves to being learned.
  3. Use, build or demand tools that promote the scale, nuance and synchrony of human imagination, alongside those that nurture people’s capacity to communicate.

This piece is one part of a two-part introduction to the series; the parts are equal in importance and may be read in either order.

The other (What Is the Internet?) is an introduction to the technology of the Internet itself, so whenever questions come up about such things, or if anything is not clear, consider either referring to that piece or reading it first. Indeed, one of the reasons why I think we should turn our attention very definitely to the Internet is the fact that most people know so little about it, categorize it incorrectly and mistake it for other things (many confuse the Internet and the Web, for example).

Prospectus

  • Part 1: Introduction
    • Why We Should be Free Online: (this article) in which I explain why you should care about Internet freedom.
    • What Is the Internet: An explanation of what the Internet is (it probably isn’t what you think it is).
  • Part 2: Diary
    • Hypertext (one of an indefinite number of articles on the most popular and important Internet technologies)
    • Email
    • Cryptocurrency
    • Your article here: want to write an article for this series? Reach out: oliver dot cox at wonk bridge dot com.
  • Part 3: Conclusion
    • What Should We Create? A manifesto for what new technology we should create for the purpose of communicating with each other.

Call to Action

Do you care about Internet freedom and ethics? Do you want to take an Internet technology, master it and use it on your own terms? Do you want to write about it? Reach out: oliver dot cox at wonk bridge dot com.

A Note on Naming

For a full discussion on naming conventions, please see this piece’s companion, What Is the Internet?. However, I must clarify something up front. Hereafter, I will use a new term: computer network of arbitrary scale (CNAS [seenas]), which refers to a network of computers and/or the technology used to facilitate it, which can achieve arbitrarily large scale.

I use this term to distinguish between 1. the Internet in the sense of a singular brand, and 2. the class of networks of which the Internet is one example. The Internet is our name for the network running on a particular set of protocols (TCP/IP); it is a CNAS, and today it is the only CNAS. Imagine if a single brand so dominated an industry, like if Ford sold 99.9 percent of cars, so that there was no word for “car” (you would just say “Ford”), and you could hardly imagine the idea of there being another company. But I predict that soon there will be more, and that they will be organized in different ways and run on different protocols.

Why the Internet Matters

First: the question of importance. Why do I think that the Internet matters relative to any other question of freedom that one might have? I know very well that many people with strong opinions think their subject the most important of all: politics, art, culture, literature, sport, cuisine, technology, engineering; if you care about something, it is likely that you think others should care, too. I know that there isn’t much more that I can do than to take a number, stand in line, and make my case that I should get an audience with you.

Here’s why I think you should care:

1. The Internet is engulfing everything we care about.

I won’t bore you with froth on the number of connected devices around or how everyone has a smartphone now; rather, numerous functions and technologies that were separate and many of which preceded the Internet are being replaced by it, taking place on it, or merging with it: telephony, mail, publishing, science, the coordination of people, commerce.

2. The Internet is the main home of both speech and information retrieval.

This is arguably part of the first point, but I think it deserves its own column inches: most speech and information exchange now happens online, most legacy channels (such as radio) are partly transmitted over the Internet, and even those media that are farthest from the digital (perhaps print) are organized using the Internet. At risk of over-reaching, I say that the question of free speech in general is swiftly becoming chiefly a question of free speech online. Or, conversely, that offline free speech is relevant to the extent that online speech isn’t free.

3. The Internet is high in virtuality.

When I claim above that this is the issue of all issues, someone might respond, “What, is it more important than food?” That is a strong point, and I am extremely radical when it comes to food: I think that people should understand what they eat, know what’s in it, hold food corporations to account, and that to the extent that we don’t know how to cook or work with food, we will always be victim to people who want to control or fleece us. However, the Internet and cuisine are almost as far apart on the scale of virtuality as it is possible to be.

Virtuality, as defined by Ted Nelson, describes how something can seem or feel a certain way, as opposed to how it actually is, physically. For example, a ladder has no virtuality: (usually) the way it looks and how we engage with it correspond 100% to the arrangement of its parts. A building, on the other hand, has much more virtuality: the lines and shape of a building give it a mood and feel, beyond the mere structure of the bricks, cement and glass.

Food has almost no virtuality (apart from cuisine with immense artifice); the Internet, however, has almost total virtuality: the things that we do with it, the Web, email, cryptocurrency, have realities in the screen and in our imagination that are almost limitless, and the only physical things that we typically notice are the “router” box in our home, the Wi-Fi symbol on our device, the engineer in their truck and, of course, the bill. This immense virtuality is what makes the Internet both so profound and so dangerous: there are things going on beneath the virtual that threaten our rights. You are free to the extent that you understand and control these things.

Ted Nelson explains virtuality during his TED conference speech (start at 31:16):

4. The Internet has lots of technicalities, and the technicalities have bad branding.

All this stuff: TCP/IP, DNS, DHCP, the barrage of initialisms is hard to master and confusing, especially for those who are non-engineers or non-technical. I’m sorry, but I think we should all have to learn it or at least some of it. Not understanding something gives organizations with a growth obligation perhaps the best opportunity to extract profit or freedom from you.

5. The Internet is the best example that humanity has created of an open, interoperable system to connect people.

It is our first CNAS. As fish with water, it is easy to forget what we have achieved in the form of the Internet: it connects people of all cultures, religions and nationalities (those that are excluded are usually so because of who governs them, not who they are), it works on practically all modern operating systems, it brings truths about the universe to those in authoritarian countries or oppressive cultures, and connects the breadth of human thinkers together.

To see the profundity of this achievement, remember that, today, many Mac- and Windows-formatted disks are incompatible with the other system, and that computer firms still attempt to lock their customers into using their systems by trapping them in formats and ways of thinking that don’t work with other systems; even, culturally, some people refuse to use other systems or won’t permit them in their corporate, university or other departments.

Mac, Windows, GNU/Linux, Unix, BSD, Plan 9, you name it, it will be able to connect to the Internet; it is the best example of a system that can bridge types of technology and people. Imagine separate and incompatible websites, only for users of particular systems: this was an entirely possible outcome and we’re lucky it didn’t happen a lot more than the little it did (see Flash). The Internet, despite its failures and limitations, massively outclasses other technology on a regular basis, and is therefore something of a magnetic North, pulling worse, incompatible and closed systems along with it.

6. The Internet is part of a technological feedback loop.

As I mentioned in point 2. above, the Internet is now the main way in which we store, access and present information; the way in which we structure and present information today influences what we want to pursue in the future, the ideas we have and what, ultimately, we build. The Internet hosts and influences an innovation cycle:

  1. Available storage and presentation systems influence how we think
  2. The way we think influences our ideas
  3. Our ideas influence the technology we build, which takes us back to the start

This means that bad, inflexible, closed systems will have a detrimental effect on future systems, while open, flexible systems will engender better ones. There is innovation, of course, but many design paradigms and ways of doing things get baked in, and sometimes are compounded. As such, I say that we ought to exert immense effort in creating virtuous Internet systems, such that these systems will compound into systems of even more virtue: much like how those who save a lot, wisely and early, are (allowing for market randomness, disaster and war) typically rewarded with comfort decades later.

Put briefly, the Internet combines the most integrating and connecting force in history with difficulty and virtuality, all working in a feedback loop; it is the best we have, it is under constant threat, and we need to take action now.

The rest of this introduction will speak to the following topics:

  • Six imperatives for communications freedom
  • What we risk losing if we don’t shape the Internet to our values
  • Why, ultimately, I’m optimistic about technology and particularly the technology of connection
  • Why this moment of crisis tells us that we are overdue for taking action to improve the Internet and make it freer
  • What we have to gain

Six Imperatives for Communications Freedom

The technology of communication should:

  1. Be free and open source.
  2. Be owned and controlled by the users, and help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information.
  3. Have open and logical interfaces, and be interoperable where possible.
  4. Help users to understand and master it.
  5. Let users communicate in any style or format.
  6. Help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

1. The technology of communication should be free and open source.

First: what is free software? A program is free if it allows users the following:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute and make copies so you can help your neighbour.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

You will, dear reader, detect that this use of the word free relates to freedom, not merely to something being provided free of charge. Open source, although almost synonymous, is a separate concept promoted by a different organization: the Open Source Initiative promotes open source and the Free Software Foundation, free.

A note: I think that we should obey a robustness principle when it comes to software and licenses. Be conservative with respect to the software you access (i.e. obey the law, respect trademarks, patents and copyright; pay what you owe for software, donate to and promote projects that give you stuff without charging a fee); be liberal with respect to the software you create (i.e. make it free and open source wherever and to the extent possible).

Fundamentally, the purpose of free software is to maximize freedom, not to impoverish software creators or get free stuff; any moral system based on free software must build effective mechanisms to give developers the handsome rewards they deserve.

To dig further into the concept and its sister concept, why do we say open source? The word “source” here refers to a program’s source code, instructions usually in “high-level” languages that allow programmers to write programs in terms that are more abstract and closer to the ways in which humans think, making programming more intuitive and faster. These programs are either compiled or interpreted into instructions in binary (the eponymous zeroes and ones) that a computer’s processor can understand directly.

Having just these binary instructions is much less useful than having the source, because the binary is very hard (perhaps impossible in some cases) for humans to understand. As such, what we call open source might be termed software for which the highest level of abstraction of its workings is publicly available. Or, software that shows you how it does what it does.
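
Python offers a rough but handy illustration of this gap. The standard library’s `dis` module shows the bytecode that the interpreter actually runs; bytecode is not machine binary, but the comparison makes the point: the compiled form is correct and executable, yet far harder for a human to reason about than the source.

```python
import dis

# The source: readable, and the names tell you the intent.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# The compiled form: what the machine runs. Try reconstructing the
# formula from this alone. (Exact output varies by Python version.)
dis.dis(fahrenheit)
```

Run it and you will see a column of opcodes in place of a one-line formula; now imagine a whole program delivered only in that form.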

Point 0. matters because the technology of communication is useful to the extent that we can use it: we shouldn’t use or create technology, for example, that makes it impossible to criticise the government or religion.

Of course, one might challenge this point, asking, for example, whether or why software shouldn’t include features that prevent us from breaking the law. I have ideas and opinions on this, but will save them for another time. Suffice to say that free software has an accompanying literature as diverse and exacting as the commentary on the free speech provision of the First Amendment: there is much debate about how exactly to interpret and apply these ideas, but that doesn’t stop them from being immensely useful.

Point 1. is extremely important for any software that concerns privacy, security or, for that matter, anything important. If you can’t inspect your software’s core nature, how can you see whether it contains functions that spy on you, provide illicit access to your computer, or bugs that its creators missed that will later provide unintentional access to hackers? See the WannaCry debacle for a recent example of a costly and disastrous vulnerability in proprietary software.

Point 2. matters for communications in that when software or parts of software can be copied and distributed freely, this maximises the number of people that have access and can, thus, communicate. It matters also in that if you can see how a system works, it’s much easier to create systems that can talk to it.

However, the “free” in free software is the cause of confusion, as it makes it sound like people that create free or open source software will or can never make money. This is a mistake worth correcting:

  1. Companies can and do charge for free software; for example, Red Hat charges for its GNU/Linux operating system distro, Red Hat Enterprise Linux. The fee gets you the operating system under the exclusive Red Hat trademark, support and training: the operating system itself is free software (you can read up on this firm to see that they really have made money).
  2. A good many programmers are sponsored to create free software by their employers; at one point, Microsoft developers were the biggest contributors to the Linux kernel (open source software like Linux is just too good to ignore).

Point 3. might be clearer with the help of a metaphor. Imagine if you bought a car, but, upon trying to fit a catalytic converter, or a more efficient engine, were informed that you were not permitted to do so, or even found devices that prevented you from modifying it. This is the state that one finds when trying to improve most proprietary software.

In essence, most things that make their way to us could be better, and in the realm of communication, surmounting limitations inherent in the means of communication opens new ways of expressing ourselves and connecting with others. Our minds and imaginations are constrained by the means of communication just as they are by language; the more freedom we have, the better. Look, for example, to the WordPress ecosystem and range of plugins to see what people will do given the ability to make things better.

There are names in tech that are well known among the public: most notably Bill Gates and Microsoft, Steve Jobs and Apple; we teach school children about them, and rightly so, for they and those like them have done a great deal for a great many. However, I argue that there are countless other names of a very different type whose stories you should know. Here are two: Jon Postel, a pioneer of the Internet who made the sort of lives we live now possible through immense wisdom and foresight, his brand: TCP/IP; and Linus Torvalds, who created the Linux kernel, which (usually installed as the core of the GNU operating system) powers all of the top supercomputers, most servers, most smart-phones and a non-trivial share of personal computers.

Richard Dawkins has an equation to evaluate the value of a theory:

value = what it explains ÷ what it assumes

Here’s my formulation but for technology:

value = what it does ÷ the restrictions accompanying it

Such restrictions include proprietary data structures, non-interoperable interfaces, and anything else that might limit the imagination.

Gates and Jobs’ innovations are considerable, but almost all of them came with a set of restrictions that separate users from users and communities from communities. Postel and Torvalds, their collaborators, and others like them in domains I haven’t mentioned, built and build systems that are open and interoperable, and that generate wealth for the whole world by sharing new instrumentality with everyone. All I’m saying is that we should celebrate this sort of innovator a lot more.

2. The technology of communication should be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information.

I will try to be brief with what risks being a sprawling point. In encounter after encounter, and interaction after interaction, we sign our ideas, identities, privacy and control over how we communicate away to unaccountable corporations. This is a hazard because (confining ourselves only to social media and the Web) we might pour years of work into writing and building an audience, say, on Twitter, only to have everything taken away because we stored our speech on a medium that we didn’t own; moreover, a network like Twitter represents a single choke-point for authoritarian regimes like the government of Turkey.

On a slightly subtler note, expressing our ideas via larger sites makes us dependent on them for the conversation around those ideas. Moreover, conversations should accompany the original material, not live on social profiles far from it, where they are sprayed into the dustbin by the endless stream of other content.

We the users should pay for our web-hosting and set up our own sites: we already have the technology necessary to do this. If you care about it, own it.

3. The technology of communication should have open and logical interfaces, and be interoperable where possible.

What is interoperability, the supposed North Star here? I think the best way to explain interoperability is to think of it as a step above compatibility. Compatibility means that something can work properly in connection with another thing, e.g. a given USB microphone is compatible with, say, a given machine running a particular version of Windows. Interoperability takes us a step further, requiring there to be some standard (usually agreed by invested organizations and companies) which 1. is publicly available and 2. as many relevant parties as possible agree to obey. USB is a great example: all devices carrying the USB logo will be able to interface with USB equipment; these devices are interoperable with respect to this standard.

There are two main types of interoperability: syntactic and semantic. The former refers to the ability of machines to transmit data effectively: this means that there has to be a standard for how information (like images, text, etc.) is encoded into a stream of data that you can transmit, say, down a telephone line. These days, much of this is handled without us noticing or caring. If you’d like to see this in action, right-click or ⌘-click on this page and select “View Page Source” — you ought to see a little piece of code that says 'charset="utf-8"' — this is the webpage announcing what encoding it is using. This page is interoperable with devices and software that can use the UTF-8 standard.
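The encoding handshake described above can be sketched in a few lines of Python (a toy snippet: the HTML fragment is made up, and real browsers use a more elaborate detection algorithm):

```python
import re

# Syntactic interoperability in miniature: a page's bytes are only useful
# if sender and receiver agree on the encoding standard.
html_bytes = '<html><head><meta charset="utf-8"></head><body>café</body></html>'.encode("utf-8")

# The receiver inspects the (ASCII-compatible) prefix for the declared charset...
match = re.search(rb'charset="([\w-]+)"', html_bytes)
encoding = match.group(1).decode("ascii")

# ...and only then can it decode the rest of the stream into text.
text = html_bytes.decode(encoding)
print(encoding)          # utf-8
print("café" in text)    # True
```

The point is that the declaration itself must sit in a part of the stream both sides can already read; that is what a shared standard buys you.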

Semantic interoperability is much more interesting: it builds on the syntactic interoperability and adds the ability to actually do work with the information in question. Your browser has this characteristic in that (hopefully) it can take the data that came down your Internet connection and use it to present a Webpage that looks the way it should.

Sounds great, right? Well, people break interoperability all the time, for a variety of reasons:

  1. Sometimes there’s no need: One-off, test or private software projects usually don’t need to be interoperable.
  2. Interoperability is hard: The industry collaboration and consortia necessary to create interoperable standards require a great deal of effort and expense. These conversations can be dry and often acrimonious: we owe a great deal to those who have them on our behalf.
  3. Some organizations create non-interoperable systems for business reasons: For example, a company might create a piece of software that saves user files in a proprietary format, and thus users must keep using (and paying for) the company’s software to access their information.
  4. Innovation: New approaches eventually get too far from older technology to work together; sometimes this is a legitimate reason, sometimes it’s an excuse for reason 3.

Reason three is never an excuse for breaking interoperability, reason two is contingent, and reasons one and four are fine. In cases where it is just too hard or expensive to work up a common, open standard, creators can help by making interfaces that work logically and predictably and, if possible, documenting them: this way collaborators can at least learn how to build compatible systems.

4. The technology of communication should help users to understand and master it.

Mastery of something is a necessary condition for freedom from whatever force that would control it. To the extent that you don’t know how to build a website or operating system or a mail server, you are a captive audience for those who will offer to do it for you—there is nothing wrong with this, per se, but I argue that the norm should be that any system that makes these things easy should be pedagogical: it should act as a tutorial to get you at least to the stage of knowing what you don’t know, rather than keeping your custom through ignorance. We should profit through assisting users in the pursuit of excellence and mastery.

Meanwhile, remember virtuality: the faulty used car might have visible rust that scares you off, or might rattle on the way home, letting you know that it’s time to have a word with the salesperson. Software that abuses your privacy or exposes your data might do so for years without you realizing: all this stuff can happen in the background. Software, therefore, should permit and encourage users to “pop the hood” and have a look around.

Users: understand your tools. Software creators: educate your users.

5. The technology of communication should let users communicate in any style or format.

Modern Internet communication systems, particularly the Web and to an extent email, beguile us with redundant and costly styling, user interfaces, images, etc. The most popular publishing platforms, website builders like WordPress and social media, force users either to adopt particular styling or to make premature or unnecessary choices in this regard. The medium is the message: forced styling changes the message; forced styling choices dilute the message.

6. The technology of communication should help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

This is a call to action for the future: expect more at the end of this article series. Picture this: all humanity’s knowledge, artistic and other cultural creations, visibly sorted, thematically and conceptually, via sets, links and other connections, down to the smallest functional unit. This would allow any user, from a researcher to a student to someone who is curious to someone looking for entertainment, to see how any thing created by humanity relates to all other things. This system would get us a lot closer to ideisomedia. Let’s call it the Knowledge Explorer.

This is not the Web. The Web gave us the ability to publish easily and electronically, but because links on the Web point only one way, there can exist no full view of the way in which things are connected. Why? For example, if you look at website x.com, you can quite easily see all the other websites to which it links: all you need to do is look at all the pages on that site and make a record.

Now, what if you asked what other websites link to x.com? The way the Web functions now, with links stored in the page and going one-way, the only way to see what other websites link to a given site is to inspect every other site on the rest of the Web. This is why the closest things we have to an index of all links are expensive proprietary tools like Google and SEMRush. If links pointed both ways, seeing how things are connected would be trivial.
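The asymmetry can be made concrete with a toy model (the sites here are hypothetical): forward links are a single local lookup, while backlinks force you to scan the entire Web.

```python
# Each site stores only its OUTBOUND links, as on the real Web.
web = {
    "x.com":   ["y.com", "z.com"],
    "y.com":   ["x.com"],
    "z.com":   ["y.com"],
    "big.com": ["x.com", "y.com"],
}

# Forward links: one lookup on the site itself.
print(web["x.com"])  # ['y.com', 'z.com']

# Backlinks: nobody stores them, so we must inspect every other site.
def backlinks(target, whole_web):
    return sorted(site for site, links in whole_web.items() if target in links)

print(backlinks("x.com", web))  # ['big.com', 'y.com']
```

With four sites this is trivial; with billions of pages, the full scan is exactly the expensive crawl that only a Google-scale operation can afford, which is the point of the paragraph above.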

Jaron Lanier explains this beautifully in the video below (his explanation starts at 15:48):

Google and SEMRush are useful, but deep down it’s all a travesty: we the users, companies, research groups, universities and other organizations set down information in digital form, but practically throw away useful information on how it is structured. We have already done the work to realize the vision of the Knowledge Explorer, but because we have bad tools, the work is mostly lost. Links, connections, analogies are the fuel and fire of thinking, and ought to be the common inheritance of humanity, and we should build tools that let us form and preserve them properly.

As you might have already realized, building two-way links for a hypertext system is non-trivial. All I can say is that this problem has been solved. More on this much later in this series.

This concludes the discussion of my six imperatives. Now, what happens if these ideas fail?

What Do We Have to Lose?

1. Freedom

People with ideas more profound than mine have explored the concept of freedom of expression more extensively than I can here and have been doing so for some time; there seems little point in rehearsing well-worn arguments. But, as this is my favourite topic, I will give you just one point, on error-correction. David Deutsch put it like this, in his definition of “rational:”

Attempting to solve problems by seeking good explanations; actively pursuing error correction by creating criticisms of both existing ideas and new proposals.

The generation of knowledge is principally about the culling of falsehoods rather than the accrual of facts. The extent to which we prevent discourse on certain topics, or hold certain facts or ideas to be unalterably true or free from criticism, is the extent to which we prevent error correction in those areas. This is something of a recapitulation of Popper’s idea of falsification in formal science: in essence, you can never prove that something is correct, only that it is incorrect; therefore, what we hold to be correct is so only until we find a way to disprove it.

As mentioned above with respect to the First Amendment, I’m aware of how contentious this issue is; as such, I will set out below a framework, which, I hope, simplifies the issue and provides both space for agreement and firm ground for debate. Please note that this framework is designed to be simple and generalizable, which requires generalizations: my actual opinions and the realities are more complex, but I won’t waste valuable column inches on them.

My framework for free expression online:

  • In most countries (especially the USA and those in its orbit), most spaces are either public or private; the street is public, the home is private, for example. (When I say “legal” in this section, I mean practically: incitement to violence muttered under one’s breath at home is irrelevant.)
    • In public, one can say anything legal.
    • In private, one can say anything legal and permitted by the owner.
  • Online, there are only private spaces: 1. The personal devices, servers and other storage that host people’s information (email, websites, blockchains, chat logs, etc.) are owned just like one owns one’s home; 2. Similarly, the physical infrastructure through which this information passes (fiberoptic cables, satellite links, cellular networks) is owned also, usually by private companies like ISPs; some governments or quasi-public institutions own infrastructure, but we can think of this as public only in the sense that a government building is, therefore carrying no free speech precedent.
    • Put simply, all Internet spaces are private spaces.
    • As in the case of private spaces in the physical world, one can say anything legal and permitted by the owner.

From this framework we can derive four conclusions:

  1. There is nothing analogous to a public square on the Internet: think of it instead as a variety of private homes, halls, salons, etc. You are free to the extent that you own the technology of communication or work with people who properly uphold values of freedom, hence #2 of my six imperatives. This will mean doing things that range from the not-that-uncommon (like getting your own hosting for your website) through to the very unusual (like creating your own ISP) and more. I’m not kidding.
  2. Until we achieve imperative #2, and if you care about free expression, you should a. encrypt your communications, b. own as many pieces of the chain of communication through which your speech passes as you can, and c. collaborate and work with individuals, organizations and companies that share your values.
  3. We made a big mistake in giving so much of our lives and ideas to social networks like Twitter and Facebook, and their pretended public squares. We should build truly social and free networks, on a foundation that we actually own. Venture capitalists Balaji Srinivasan and Naval Ravikant are both exploring ideas of this sort.
  4. Prediction: in 2030, 10% of people will access the Internet, host their content, and build their networks via distributed ISPs, server solutions and social networks.

Remember, I’m not necessarily happy about any of this, but I think this is a clear view of the facts. I apologize if I sound cynical, but it’s better to put yourself in a defensible position than to rely on your not being attacked. As Hunter S. Thompson said, “Put your faith in God, but row away from the rocks.”

I am aware that this isn’t a total picture, and there are competing visions of what the CNASs can and should be; I am more than delighted to hear from and discuss with people who disagree with me on the above. I can’t do them justice, but here are some honourable mentions and thorns:

  1. The Internet (or other CNAS) as a public service. Pro: This could feasibly create a true public square. Con: It seems like it would be too tempting for any administration to use their control to victimize people.
  2. Public parts within the overall Internet or CNAS; think of the patchwork of public and private areas that exist in countries, reflected online—this might feasibly include free speech zones in public areas. See the beautifully American “free speech booth” in St. Louis Airport for a physical example.
  3. Truly distributed systems like Bitcoin and other blockchains, which are stored on the machines of all members, raise the question of whether these are truly communal or public spaces; more on this in future writings.

I think that the case I made here for freedom of expression is broadly the same when applied to privacy: one might even say that privacy is the freedom not to be observed. In essence, you are private to the extent that you control the means of communication or trust those that do. Your computer, your ISP, and any service that you use all represent snooping opportunities.

We should be prepared to do difficult and unusual things to preserve our freedom and privacy: start our own ISPs, start our own distributed Internet access system, or, better, our own CNAS. I note (speaking especially for myself) a sense of learned helplessness with respect to this aspect of connectivity, but there are communities out there to support you.

Newish technology will be very helpful, too:

  • WISPs: wireless Internet service providers, which operate without the need to establish physical connections to people’s homes.
  • Wireless mesh networks: wireless networks, including among peers, wherein data is transmitted throughout a richly connected “mesh” rather than relying on a central hub.
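The mesh idea can be illustrated with a toy routing sketch (the peers and links are invented; real mesh protocols such as B.A.T.M.A.N. or OLSR are far more sophisticated): a message reaches its destination by hopping peer to peer along any available path, with no central hub.

```python
from collections import deque

# Hypothetical peers and their radio neighbours; note there is no hub node.
mesh = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dan"],
    "carol": ["alice", "dan"],
    "dan":   ["bob", "carol", "erin"],
    "erin":  ["dan"],
}

def route(src, dst, links):
    """Find a shortest hop path with breadth-first search."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in links[path[-1]]:
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])

print(route("alice", "erin", mesh))  # ['alice', 'bob', 'dan', 'erin']
```

Because the graph is richly connected, removing any single peer (other than the endpoints) still leaves a route: that redundancy is the resilience the bullet above describes.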

Finally, and fascinating as it is, I simply don’t have the space to go into the discussion of how to combine our rights with the proper application of justice. For example, if everyone used encryption, it would be harder for police to monitor communications as part of their investigations. All I can say is that I support the enforcement of just laws, including through the use of communications technology, and think that the relevant parties should collaborate to support both criminal justice and our rights: this approach has served the countries that use it rather well, thus far.

2. Interoperability

To illustrate how much the Internet has done for us and how good we have it now in terms of interoperability, let’s look back to pre-Internet days. In the 70s, say, many people would usually access a single computer via terminal, often within the same building or campus, or far away via a phone line. For readers who aren’t familiar, the “Terminal” or “Command Line” program on your Mac, PC, Linux machine, etc. emulates how these terminals behaved.

These terminals varied in design between models, manufacturers and through time: most had keyboards with which to type inputs into the computer, some printed their output, some had screens, and some had more besides. However, not all terminals could communicate with all computers: for example, most companies used the ASCII character encoding standard (for translating between binary and letters, numbers and punctuation), but IBM used its own proprietary EBCDIC system—as a result, it was challenging to use IBM terminals with other computers and vice-versa.
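You can still see the ASCII/EBCDIC split today, because Python ships codecs for both (cp500 is one common EBCDIC variant; IBM used several):

```python
# The same word under the two rival encodings.
ascii_bytes = "HELLO".encode("ascii")
ebcdic_bytes = "HELLO".encode("cp500")   # an EBCDIC code page

print(ascii_bytes.hex())    # 48454c4c4f
print(ebcdic_bytes.hex())   # c8c5d3d3d6

# A terminal sending EBCDIC to a machine expecting an ASCII-compatible
# encoding produces gibberish, not "HELLO":
print(ebcdic_bytes.decode("latin-1"))
```

Every single byte differs, so without a translation layer the two worlds literally could not exchange a word.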

This is more than just inconvenient: it locked users and institutions into particular hardware and data structures, and trapped them in a universe constituted by that technology—as usual, the only people truly free were those wealthy or well-connected enough to access several sets of equipment. Actions like this break us up into groups, and prevent such groups from accessing each other’s ideas, systems, innovations, etc. Incompatibility, though sometimes an expedient in business, is a pure social ill.

To be clear, I am not saying that you have to talk to or be friends with everyone, or be promiscuous with the tech you use. If you want to be apart from someone, fine, but being apart from them because of tech is an absurd thing to permit. We need to be able to understand each other’s data structures, codes and approaches: the world is divided enough along party, religious and cultural lines without adding new, artificial divisions.

The most magnanimous thing about the Internet is that it is totally interoperable, based on open standards. I almost feel silly saying it: this beautiful fact is so under-appreciated that I would have to go looking to find another person making the same point. Put it this way: no other technology is as interoperable as the Internet.

It’s tempting to think of the Internet as something normal or even natural; the truth is far from it: it’s sui generis. 53% of the world’s population uses it, making it bigger than the world’s greatest nations, religions and corporations: anything of a similar scale has waged war, sought profits or sent missionaries; the Internet has no need for any of these things.

The Internet is one of the few things actually deserving of that much overused word: unique. But it does what it does because of something much more boring: standards, as discussed above. These standards aren’t universal truths derived from the fabric of the universe, they’re created by fallible, biased people, with their own motivations and philosophical influences. Getting to the point of making a standard is not the whole story: making it good and useful depends on the character of these people.

We should care more about these people and this process: remember, all the normal forces that pull us into cliques and break connections haven’t declared neutrality with respect to the Internet; they can’t help themselves, and would be delighted to see it broken into incompatible fiefdoms. We should instead focus immense intellectual energy and interest:

  1. on maintaining the philosophical muscle necessary to insist that the Internet stay interoperable
  2. on proposing virtuous standards
  3. on selecting and supporting excellent people to represent us in this endeavour

The Internet feels normal and natural, even effortless in an odd way; the truth is the exact opposite, it is one of a kind, it is not just artificial, it was made by just a few people, and it requires constant energy and attention. Let us give this big, strange monster the attention it deserves, lest the walls go up.

Beyond this, the fact that the Internet is our only CNAS puts us in a perilous position. We should create new CNASs with a variety of philosophies and approaches; this will afford us:

  1. Choice
  2. Some measure of antifragility, in that a variety of approaches and technologies increases the chances of survival if one or more breaks
  3. Perhaps, even, something better than what we have now

3. Ideisomorphism

Bad technology generally puts constraints on the imagination, and on the way in which we think and communicate, but arguing and articulating the effect is much harder, and the conclusions less clear-cut, than with my previous point on free expression. Put it this way: most, practically all, of us take what we are given when it comes to tools and technology; some might have ideas about things that could be better; fewer still actually insist that things really ought to be better; and the small few who have the self-belief, tenacity, good fortune and savvy to bring their ideas to market, we call innovators and entrepreneurs.

More importantly, these things influence the way we think. For example, VisiCalc, the first spreadsheet program for personal computers (and the Apple II’s killer app), made possible a whole range of mathematical and organizational functions that were impossible or painfully slow before: it opened and deepened a range of analytical and experimental thinking. Some readers will recognize what I might call “spreadsheet muscle-memory”—when a certain workflow or calculation comes to mind in a form ready to realize in a spreadsheet.

With repeated use, the brain changes shape to thicken well-worn neural pathways: and if you use computers, the available tools, interfaces and data structures train your brain. Digital tools can, therefore, be mind-expanding, but also stultifying. To borrow from an example often used by Ted Nelson, before Xerox PARC, the phrase “cut and paste” referred to the act of cutting a text on paper (printed or written) into many pieces, then re-organizing those pieces to improve the structure.

The team at PARC cast aside this act of total thinking and multiple concurrent actions, and instead gave the name “cut and paste” to a set of functions allowing the user to select just one thing and place it somewhere else. Still today, our imaginations are stunted relative to those who were familiar with the original cut and paste—if you know anything about movies, music or programming, you’ll recognize that the best works do many things at once.

This is why I argue so vehemently that we shouldn’t accept what we are given online so passively: everything you do online, especially what you do often, is training your mind to work in a certain way. What way? That depends on what you do online.

For the sake of space, I’ll confine myself to the Web. My thesis is this:

  1. The Web as it stands today is primarily focused on beguiling and distracting us.
  2. It presents us with two-dimensional worlds (yes, there is motion and simulated depth of field, but most of the time these devices gussy up a two-dimensional frame rather than expressing a multi-dimensional idea).
  3. It is weighed down with unnecessary animation and styling, leaving practically no attention (or, for that matter, bandwidth) for information.

I’m here to tell you that you need not suffer through endless tracking, bloated styling, interfaces designed to entrap or provoke tribal feelings while expressing barely any meaning. If you agree, say something. Take to heart what Nelson said: “If the button is not shaped like the thought, the thought will end up shaped like the button.” This is why we have become what we’ve become: divided, enraged, barely able to empathize with someone of a different political origin or opinion.

Then there are the more profound issues: as mentioned above, links only go one way, the Web typically makes little use of the magic of juxtaposition and parallel text, there are few robust ways of witnessing, visually, how things are connected, and for the most part, Web documents are usually one-dimensional (they have an order, start to finish) or two-dimensional (they have an order, and they have headings).

People, this is hypertext we’re dealing with, you can have as many dimensions as you like, document structures that branch, merge, move in parallel, loop, even documents that lack hierarchy altogether: imagine a document with, instead of numbered and nested headings, the overlapping circles of a Venn diagram.

Our thinking is so confined that being 2-D is a compliment.

Digital media offered us the sophistication and multidimensionality necessary, finally, to reflect human thought, and an end to the hierarchical and either-or structures that are necessary with physical filing (you can put a file in only one folder in your filing cabinet, but with digital media, you can put it in as many as you like, or have multiple headings that contain the same sentence (not copies!)), but we got back into all our worst habits. This, to quote Christopher Hitchens, “is to throw out the ripening vintage and to reach greedily for the Kool-Aid.”
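The "one item under many headings, without copies" idea is easy to demonstrate in code (a minimal sketch using plain Python references; real hypertext systems would use stable identifiers rather than in-memory objects):

```python
# One underlying note...
note = {"text": "Links are the fuel and fire of thinking."}

# ...filed under two headings at once. Both hold the SAME object, not copies.
folders = {
    "Philosophy": [note],
    "Technology": [note],
}

# Edit the note once...
note["text"] += " (revised)"

# ...and every heading sees the revision, because there is only one original.
print(folders["Philosophy"][0] is folders["Technology"][0])  # True
print(folders["Technology"][0]["text"])
```

A physical filing cabinet forces a copy (and thus divergence) for each extra folder; references make the either-or choice unnecessary.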

Some, or even you, dear reader, might object that all this multidimensionality and complexity will be too confusing for users. This is fair. But first, I want to establish a key distinction between confusion arising from unnecessary complexity introduced by the creators of the system, and confusion arising from the fact that something is new and different. The former is unnecessary, and we should make all efforts to eliminate it; the latter is necessary to the extent that new things sometimes confuse us.

It might sometimes seem that two-dimensionality is something of a ceiling for the complexity of systems or media. There is no such ceiling; for example, most musicians will perform in the following dimensions simultaneously: facets of individual notes like pitch, dynamic (akin to volume), timbre, and facets of larger sections of music that develop concurrently with the notes but at independent scales, like harmony and phrasing.

In my view, we should build massively multidimensional systems, which start as simply as possible and, pedagogically, work from simple and familiar concepts up to complex ideas well beyond the beginner. Ideisomedia will, 1. free us from the clowning condescension of the Web and 2. warrant our engaging with it by speaking to us at our level, and reward us for taking the time to learn how to use it.

Before I talk at length and through the medium of graphs about what we stand to gain by doing this thing correctly, I’d like to make two supporting points. One frames why the mood of this introduction is so imperative, the other frames technological growth and development in a way that, I think, offers us cause for optimism.

Why Now, and Why I Think We’re Up to the Task

The Connectional Imperative

Firstly, I think that we are experiencing an emergency of communication in many of our societies, particularly in the USA and its satellites. My hypothesis (which is quite similar to many others in the news at the moment) is that the technology of communication, as it is configured currently, is encouraging the development of a set of viewpoints that are non-interoperable and exclusive: this is to say that people believe things, and broach the things that they believe, in ways that are impossible to combine or that preclude their interacting productively.

Viewpoint diversity is necessary for a well-functioning society, but this diversity matters to the extent that we can communicate. This means, firstly, actually parsing each other’s communications (which is analogous to syntactic interoperability: regardless of whether we understand the communication, do we regard it as a genuine communication and not mere noise? do we accept it or ignore it?); secondly, it means actually understanding what we’re saying to each other (which is analogous to semantic interoperability: can we reliably convey meaning to each other?).

I think that both of these facets are under threat; often people call this “polarisation,” which I think is close but not the right characterisation; I am less concerned with how far apart the poles are than whether they can interact and coordinate.

Why is this happening? I think it is because we don’t control the means of communication and, therefore, we are subject to choices and ideas about how we talk that are not in our interest. Often these choices are profit-driven (like limbic hijacks on Facebook that keep you on the page); sometimes they are accidental or expedient design (like the one-way links and two-dimensional documents that characterize the Web, as mentioned earlier). Why is it a surprise that so many of us see issues as “us versus them,” or assume that if someone thinks one thing they must accept everything else associated with that idea, when we typically compress a fantastically multidimensional discussion (politics) onto a single dimension (left-right)?

We need multidimensional conversations, and we already have the tools to express them: we should start using them.

This really is an emergency. We don’t grow only by agreement, we grow by disagreement, error correction, via the changing of minds, and the collision of our ideas with others’: this simply won’t happen if we stay trapped in non-interoperable spaces of ideas, or worse, technology.

On the topic of technology, I am quite optimistic: everyone who uses the Internet can connect, practically seamlessly, with any other person, regardless of sex, gender, race, creed, nationality, ideology, etc. The exceptions here (please correct me if I’m wrong) are always to do with whether you’re prevented (say, by your government) from accessing the Internet, or because your device just isn’t supposed to connect (it was made before the Internet became relevant, or it is not a communications device, e.g. a lamp).

TCP/IP is the true technological universal language, it can bring anyone to the table: when at the table you might find enemies and confusion, but you’re here and have at least the opportunity to communicate.

Therefore, I think that we should regard that which is not interoperable, not meaningfully interoperable, or at least not intentionally open and logical, with immense scepticism, and conserve what remains, especially TCP/IP, standards like this and their successors in new CNASs.

Benign Technology

I think that technology is good. Others say that technology is neutral, that people can apply it to purposes that help us or that hurt us. Of course, still others say that it is overall a corrupting influence. My argument is simple, technology forces you to do two things: 1. to the extent that whatever you create works, it will have forced you to be rational; 2. to the extent that you want your creation to function properly in concert with other devices, it will have forced you to think in terms of communication, compatibility and, at best, interoperability. I’m not saying that all tech is good, but rather that to the extent that tech works, it forces you to exhibit two virtues: rationality and openness.

In the first case, building things that work forces you to adopt a posture that accepts evidence and some decent model of reality. Obviously this is not a dead cert: people noticeably “partition” their minds into areas, one for science, another for superstitions, and so on. My claim is that going through the motions of facing reality head on is sufficient to improve things just a little; tech that works means understanding actions and consequences. This is akin to how our knowledge of DNA trashed pseudo-scientific theories of “race” by showing us our kinship with all other humans, or how the germ theory of disease has helped to free us of our terror of plagues sent by witches or deities.

I’m not being naive here: I know that virtue isn’t written in the stars. Rather, I claim that rationality is available to us the way a sifter is: anyone can pick it up and use their philosophical values to distinguish gold (in various gradations) from rock. Technology requires us to pick up the sifter, or create it, or refine it, even if we would otherwise be disinclined.

In the case of the second faculty, openness, once you have created a technology, you can make it arbitrarily more functional by giving it the ability to talk to others like it or, better, to others unlike it. Think of a computer that stands alone, versus one that can communicate with any number of other computers over a telecommunications line. But, in order to create machines that can connect, you have to think in terms of communication: you have to at least open yourself to, and model, the needs and function of other devices and other people. Allowing for some generalizing, the more capable the system, the more considerate the design.

Ironically, the totality of the production of technology is engaged in a tug-of-war: on one side, the need and desire to make good systems pulls us towards interoperability; on the other, short-sighted profit-seeking and the sheer difficulty of making systems that can talk pull us towards non-interoperability. Incidentally, the Internet is a wonderful forcing function here: even the usual suspects, like Apple, IBM and Microsoft, are amazingly Internet-interoperable.

Put simply, if you want to make your tech work, you have to face reality; if you want your tech to access the arbitrarily large benefits of communicating with other tech, you have to imagine the needs of other people and systems. Wherever you’re going, the road will likely take you past something virtuous.

A Rhapsody In Graphs: Up and to the Right, or What Do We Have to Gain?

To begin, remember this diagram:

Technological innovation exists in a feedback loop: so if you want virtuous systems in the future, create virtuous systems today.

You’re familiar, I hope, with Moore’s Law, which states that around every two years, the number of transistors in an integrated circuit doubles, meaning roughly that it doubles in computing power. This means that if you plot computing power against time, it looks something like this:

Moore’s law describes the immense increase in processing capacity that has facilitated a lot of the good stuff we have. Today, the general public can get computers more powerful than the one that guided the Saturn V rocket. The ARPANET (the predecessor to the Internet) used minicomputers to fulfil a role somewhat similar to that of a router today—the PDP-11 minicomputers popular for the ARPANET started at $7,700 (more than $54,000 in 2020 dollars); most routers today are relatively cheap pieces of consumer equipment, coming in at less than $100 apiece. This graph represents more humans getting access to the mind-expanding and mind-connecting capabilities of computers, the computers themselves getting better, and the trend quickening.
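The arithmetic behind that curve is simple enough to sketch. The starting point below is real (the Intel 4004 of 1971 had roughly 2,300 transistors); treating the two-year doubling as exact is, of course, an idealisation:

```python
def moores_law(start_count: int, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under an idealised Moore's Law:
    the count doubles once every doubling_period years."""
    return start_count * 2 ** (years / doubling_period)

# Intel 4004 (1971): roughly 2,300 transistors.
projected_2021 = moores_law(2_300, years=2021 - 1971)
print(f"{projected_2021:,.0f}")  # 77,175,193,600: tens of billions, the right order of magnitude
```

Fifty years is twenty-five doublings, i.e. a factor of over 33 million; that is the exponential the graph is gesturing at.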

But, one might ask, what about some of the other indexes discussed in this introduction: freedom, interoperability, ideisomorphism?

Interoperability

For the purposes of this question, we first need a meaningful way to think about overall interoperability. For instance, it doesn’t really matter to us that coders create totally incompatible software for their own purposes all the time. Meanwhile, as time passes, the volume of old technology that can no longer work with new technology increases due to changing standards and innovation; this matters only to the extent that we have reason to talk to that old gear (there are lots of possible reasons, if you were wondering). So, let’s put it like this:

overall meaningful interoperability (OMI) = the proportion of all devices and programs that are interoperable, excluding private and obsolete technology

This gives us an answer as a fraction:

  • 100% means that everything that we could meaningfully expect to talk to other stuff can do so.
  • 0% would mean that nothing that we could reasonably expect to talk to other stuff can do so.
  • As time passes we would expect this number to fluctuate, as corporate policy, public interest, innovation, etc. affect what sort of technology we create.

As mentioned above, a variety of different pressures influence overall meaningful interoperability; some companies, for example, might release non-interoperable products to lock in their customers, other firms might form consortia to create shared, open standards.
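As a back-of-the-envelope sketch, here is how one might compute OMI over a device inventory; the inventory itself is wholly invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    interoperable: bool
    private: bool = False    # built for one coder's own purposes
    obsolete: bool = False   # no meaningful reason left to talk to it

def omi(devices: list[Device]) -> float:
    """Overall meaningful interoperability: the interoperable fraction of
    devices, excluding private and obsolete technology."""
    meaningful = [d for d in devices if not (d.private or d.obsolete)]
    if not meaningful:
        return 0.0
    return sum(d.interoperable for d in meaningful) / len(meaningful)

inventory = [
    Device("home router", interoperable=True),
    Device("walled-garden speaker", interoperable=False),
    Device("hobby script", interoperable=False, private=True),
    Device("1980s word processor", interoperable=False, obsolete=True),
    Device("laptop", interoperable=True),
]
print(f"OMI = {omi(inventory):.0%}")  # 2 of 3 meaningful devices: 67%
```

The private script and the obsolete word processor drop out of the denominator, which is exactly what “meaningful” is doing in the definition.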

I think that, long-term, the best we can expect for interoperability would look like the below:

What are you looking at?

  • Back Then (I’m being deliberately vague here) represents a time in the past when computers were so rare, and often very bespoke, that interoperability was extremely difficult to achieve.
  • Now represents the relative present: we have mass computer adoption, and consortia and other groups give us standards like Unicode, USB, TCP/IP and more. At the same time, some groups are still doing their best to thwart interoperability to lock in their customers.
  • The Future is ours to define; I hope that through collaboration and by putting pressure on the creators of technology, we can continuously increase OMI. You’ll notice that the shape is the opposite of Moore’s Law’s exponential growth: this is, firstly, because there’s an upper limit of 100% and, secondly, because it seems fair to assume that we will reach a point where we hit diminishing returns.
  • It is theoretically possible that we might reach a future of total OMI, but perhaps it’s more realistic to assume that through accidents, difficulty and innovation, some islands of non-interoperability will remain.
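A curve with fast gains now, diminishing returns later, and a hard ceiling at 100% is roughly logistic. A sketch, with the rate and midpoint invented purely for illustration:

```python
import math

def omi_projection(year: float, midpoint: float = 2030, rate: float = 0.15) -> float:
    """A hypothetical logistic projection of OMI: near zero Back Then,
    climbing fastest around the midpoint, saturating towards 100%."""
    return 100 / (1 + math.exp(-rate * (year - midpoint)))

for year in (1970, 2020, 2070):
    print(year, f"{omi_projection(year):.1f}%")
```

Whatever the real parameters turn out to be, the shape is the point: always rising, never exceeding 100%, with the steep part being the era in which effort pays off most.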

Freedom

How are things looking for free software? It’s very hard to tell, because the computer world is so diverse and because the subject matter itself is so complex. For example, the growth of Android is excellent news on one level, because it is based on the open-source Linux kernel; it is less good news in that much of the rest of the Android stack is proprietary, which muddies the picture. See the graphs below for a recent assessment of things (data from StatCounter):

Desktop:

Mobile:

I think it is imperative that we work to create and use more free tools, for no other reason than that we as people deserve to know what the stuff in our homes, on our devices, or manipulating our information is doing. With the right effort, we might be able to recreate the growth of Linux among supercomputer operating systems. I am enthusiastic about this, and see the growth of free software as something as unstoppable, say, as the growth of democracy.

 

Source: Wikipedia

Ideisomorphism

First, dimensions. As mentioned above, we frequently try to express complex ideas in too few dimensions, and this hampers our ability to think and communicate. Computers are, potentially, a way for us to increase the dimensionality of our communication, but only if we use them to their full potential.

The diagram below sets out some ideas, along with their dimensions:

To be clear, I’m not making a value-judgement against lower-dimensional things. Rather, I am saying:

  • Firstly, that we should study any given thing in a manner that allows us to engage with it in the proper number of dimensions.
  • Secondly, that poor tools for studying, engaging with and communicating that which is in more than two dimensions act as a barrier, keeping more of us from learning about some very fun topics.
  • Thirdly, that high dimensionality can scare us off when it shouldn’t; e.g. if you can talk, you already know how to modulate your voice in more than five dimensions simultaneously: pitch, volume, timbre, lip position, tongue position, etc.

I think that, pedagogically and technologically, we should strive to master the higher dimensions and structures of thinking that allow us to communicate thus. However, we seem to be hitting two walls:

  1. The paper/screen wall: it’s hard to present things in more than two dimensions on paper or screens, and we get stuck with things like document structure, spreadsheets, etc., when more nuanced tools are available.
  2. The reality wall: it’s weird and sometimes scary to think in more than three dimensions, because it’s tempting to try to visualize this sort of thing as a space and, as our reality has just three spatial dimensions, this gets very confusing. This is tragic because a. we already process in multiple dimensions quite easily and b. multidimensionality doesn’t have to manifest spatially, nor, when it does manifest spatially, must all dimensions manifest at once; what matters is the ability to inspect information from an arbitrary number of dimensions seamlessly.

Let us break the multidimensionality barrier! The nuance of our conversations and our thinking requires it. We should:

  1. Where possible, use tools like mind-maps and Venn diagrams (which allow for arbitrary relationships and dimensions) over strictly hierarchical or relational structures (like regular documents, spreadsheets or relational databases, which are almost always two-dimensional).
  2. Use and build systems that allow for the easy communication and sharing of these structures: it’s easy to show someone a mind-map, but quite hard to share one between systems, because there’s no standard data structure.
  3. Remember the technological feedback loop: 2-D systems engender 2-D thinking, meaning more 2-D systems in the future; we need concerted efforts to make things better now, such that things can be better in the future.
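The first point is easy to state in data-structure terms: a regular document outline is a tree, in which every idea has exactly one parent, while a mind-map is a general graph, in which an idea can relate to any number of others. A minimal sketch (the topics are invented for illustration):

```python
# A strict hierarchy (tree): every idea gets exactly one parent.
outline = {
    "Internet": ["Web", "Email"],
    "Web": ["HTML"],
    "Email": [],
    "HTML": [],
}

# A mind-map (general graph) as a set of edges: "Standards" can connect
# to everything at once, which no outline can express without duplication.
mindmap = {
    ("Internet", "Web"),
    ("Internet", "Email"),
    ("Web", "HTML"),
    ("Standards", "Web"),
    ("Standards", "Email"),
    ("Standards", "HTML"),
}

def parents(node: str, edges: set[tuple[str, str]]) -> set[str]:
    """All nodes pointing at the given node."""
    return {a for (a, b) in edges if b == node}

# In the graph, "HTML" has two parents (Web and Standards);
# in the tree it could only ever have one.
print(parents("HTML", mindmap))
```

The missing standard data structure the second point laments is essentially a shared, interchangeable format for edge sets like `mindmap` above.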

For our last graph, I’d like to introduce a new value that combines the three concerns of this introduction (freedom, interoperability, ideisomorphism) into one; we can call it FII. Where before I expressed interoperability and freedom as proportions (e.g. the percentage of software that is interoperable), this time let’s think of these values as relative quantities with no limit on their size (e.g. say that 2020 and 2030 had the same percentage of free software, but 2030 had more software doing more things: this means that 2030 is higher on the freedom scale).

So:

FII = freedom x interoperability x ideisomorphism

We should, therefore, strive to achieve something like the graph above with respect to the technology of communication; think of it as Moore’s law, but for the particular aspects of technology that represent our species’ ability to endure and flourish. It’s worth noting, of course, that Moore’s law isn’t a law in the physical sense: the companies whose products follow it achieve these results through continuous, intense effort. It seems only fair that we might expend such effort to make the technology of communication not just more powerful, but better able to serve our pursuit of virtue; to the extent that I’m right about the moral arc of technology naturally curving upward, we may be rewarded sooner than we think.
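With the three values treated as unbounded relative quantities, the combination is a straightforward product; the numbers below are invented for illustration:

```python
def fii(freedom: float, interoperability: float, ideisomorphism: float) -> float:
    """FII: one index combining the three concerns. Multiplicative, so a
    collapse in any single dimension drags the whole score towards zero."""
    return freedom * interoperability * ideisomorphism

# Hypothetical: 2030 matches 2020 on proportions, but has more software
# doing more things, so each absolute score is higher.
fii_2020 = fii(freedom=10, interoperability=8, ideisomorphism=3)
fii_2030 = fii(freedom=25, interoperability=12, ideisomorphism=5)
print(fii_2020, fii_2030)  # 240 1500
```

Making FII a product rather than a sum is a design choice worth noting: a world of perfectly free but totally non-interoperable software scores poorly, which matches the argument of this introduction.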

What If We Are Successful?

What might happen if we’re successful? Here’s just one of many possibilities, and to explain I will need the help of a metaphor: the split-brain condition. This condition afflicts people who have had their corpus callosum severed; this part of the brain connects the right and left hemispheres and, without it, the hemispheres have been known to act and perceive the world independently. For example, for someone with this condition, if something is seen in only one visual field, the hemisphere serving the other field will not be aware of it.

I liken this to the current condition of humanity, except that instead of two hemispheres, we have numerous overlapping groupings of different sizes: nations, religions, ideologies, technologies, and more. Like a split-brain patient’s hemispheres, these parts often don’t understand what the others are doing, have trouble coordinating, or even come into conflict.

We have the opportunity to build our species’ corpus callosum, not that we might unify the parts, but that the parts might coordinate; and, in that the density of connections is a power function of the number of nodes in the system, this global brain might dwarf the achievements of history’s greatest nations, with feats on a planetary scale, in its pursuit of goodness, truth and beauty.
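The “power function” here is just the count of possible pairwise links: n nodes admit n(n − 1)/2 distinct connections, so connectivity grows roughly with the square of the number of participants. A quick check:

```python
def possible_links(n: int) -> int:
    """Number of distinct pairwise connections among n nodes."""
    return n * (n - 1) // 2

# From two people to (roughly) most of humanity online.
for n in (2, 10, 1_000, 5_000_000_000):
    print(f"{n:>13,} nodes -> {possible_links(n):,} possible links")
```

Ten nodes give 45 links; a thousand give nearly half a million; billions of people give a number of possible connections in the quintillions, which is the scale of the opportunity.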

Categories
Trialectic

Trialectic – Can Technology be Moral?

Trialectic

A Wonk Bridge debate format between three champions, designed to test their ability to develop their knowledge through exposure to each other and the audience, as well as to maximise the audience’s learning opportunities on a given motion.

For more information, please read our introduction to the format here, The Trialectic.

Motioning for a Trialectic Caucus

Behind every Trialectic is a motion and a first swing at the motion, which is designed to kick-start the conversation. Please find my motion for a Trialectic on the question “Can Technology be Moral?” below.

I would like to premise this with a formative belief: humans have, among many motivations, sought “a better life” or “the good life” through the invention and use of technology and tools.

From this perspective, technology and human agency have served as variables in a so-called “equation of happiness” (hedonism) or in the pursuit of another goal: power, glory, respect, access to the kingdom of Heaven.

At the risk of begging the question, I would like to premise this motion with a preambulatory statement of context. I would like to focus our contextual awareness around three societal problems.

First, the incredible transformation of the human experience through technological mediation has changed the way we see and experience the world, making most of our existing epistemological frameworks inadequate and rendering our political and cultural systems unstable, if not obsolete.

Another interpretation of this change is that parts of our world are becoming “hyperhistorical”: information and communication technologies are becoming the focal point, not a background feature, of human civilisations (Floridi, 2012).

Second, the driving force behind “the game” and the rules of “the game”, which can be generally referred to as Late Capitalism, is being put under question, with postmodern thought exposing its weaknesses and unfairness, and a growing body of climate-change thinkers documenting its unsustainability and its nefarious effect on long-term human survival. More practically, since the 2008 financial crash, Capitalism has taken a turn towards excluding human agents from the creation of wealth and towards commodifying distraction/attention. In short: the exclusion of the human from human activity.

Third, the gradual irrelevance of a growing share of humans in economic and political activity, as well as the lack of tools for both experts and regular citizens to understand the new world(s) being crafted (this “Network Society”, a hybrid of digital civilisation and a technologically mediated analog world) (Castells, 2009), has created an identity crisis at both the collective and individual levels. We know what is out there, but we have lost sight of the How and can’t even contemplate the Why anymore.

Two things can help us here:

  • A better understanding of the forces shaping our world
  • An intentional debate on defining what this collective “Why” must be

Together, these can help us find a new “True North” and begin acting morally, by designing intentional technologies built around helping us act more morally.

Introductory Thesis

I base my initial stance on this topic atop the shoulders of a modern giant of digital ethics, Peter-Paul Verbeek, and his 2011 work Moralizing Technology.

Verbeek wants us to believe that “things”, a category which includes technologies, inherently hold moral value; that we need to examine ethics not through an exclusively human-centric lens but also from a materialist angle; and that we can no longer ignore the deep interlink between humans and their tools.

There is, first, the question of technological mediation. Humans depend on their senses to develop an appreciation of the world around them. Their senses are, however, limited. Our sense of sight can be limited by myopia or other debilitating conditions. We can use eyeglasses to “correct” our vision, and develop an appreciation of our surroundings in higher definition.

This is a case of using technology to reach a similar level of sensing as our peers, perhaps because living in a society comes with its own “system requirements”? We correct our vision with eyeglasses because we want to participate in society, be in the world, and place ourselves in the best position to abide by its ethics and laws. Technology is necessary to see the world like others do, because when we see a common image of the world, we are able to draw conclusions as to how to behave within it.

When a new technology helps us develop our sense-perception even further, we can intuitively affirm that technological mediation occurs in the “definition” of ethics and values. Technologies help us see more of the world. Before the invention of the electric street-lamp system, as part of a wider programme of urban reorganisation in the 19th century, western cultures looked down on the practice of activities at night. An honest man (or woman) would not lurk in the streets of Paris or London at night.

The darkness of dimly-lit streets made it easy for criminals and malefactors to hide from the police and to harass the vulnerable. Though still seen as relatively more dangerous than moving in the light of day, it is now socially accepted (even romanticized) to ambulate under the city street-lamps and pursue a full night’s entertainment.

A technology, the street-lamp system, helped people see more of the world (literally), and our ethics grew out of the previous equilibrium and into a new one. By affecting the way we perceive reality, technology also helps shape our constructed reality, and therefore directly interferes in the moral thought-processes of both individuals and collectives.

At the pre-operative level, my thesis doesn’t diverge far from Verbeek’s or Latour’s initial propositions. It will, in terms of operative or practical applications, seek to place a greater emphasis on the responsibility of the designer.

It seems clear that Technology has a role to play in defining what can be a moral practice. The question examined in this thesis therefore seeks to go a step further in exploring whether the creation (technology) can be considered independently from its creator (inventor/designer).

Are human agents responsible for the direct and indirect effects of the tools they build?

Of course, it is clear that adopting a perspective on the morality of technology that is anchored solely in the concept of technological mediation is problematic. As Verbeek mentions in his book, the isolation of human subjects from material objects is deeply entrenched in our Modernist metaphysical schemes (cf. Latour 1993); it contextualises ethics as a solely human affair and keeps us from approaching ethics as a hybrid.

This outdated metaphysical scheme sees human beings as active and intentional, and material objects as passive and instrumental (Verbeek, 2011). Human behaviour can be assessed in moral terms (good or bad), but a technological artifact can be assessed only in terms of its functionality (functioning well or poorly) (Verbeek, 2011). Indeed, technologies have a tendency to reveal their true utility after having been used or applied, not before, as they were being created or designed.

It is also key to my argument that technologies resembling intentionality are not in themselves intentional. Science fiction relating to artificial general intelligence aside, the context within which technology is being discussed today (2021) is one where technologies operate with a semblance of autonomy, situated in a complex web of interrelated human and machine agents.

Just because the behaviour of some technologies today (e.g. Google’s search algorithms) is not decipherable does not mean that they are autonomous or intentional. What is intentional is the decision to create a system that contains no checks or balances: to build a car without brakes, or a network without an off-switch.

Technology does have the power to change our ethics.

An example Verbeek uses frequently is the pre-natal ultrasound scan that parents use to see and check whether their unborn child or fetus has any birth defects. This technology gives parents the chance, and transfers to them the responsibility, of making a potentially life-threatening or life-defining decision. It also gives them their first glimpse of what their unborn baby looks like, through the monitor.

While the birth of a child before the scan was seen, ethically, as the work of a higher power, outside of human responsibility and agency, the scanner has given parents the tools and the responsibility to make a decision. As Verbeek documents on several occasions in the book, it dramatically changes the way parents (especially fathers) label what they see through the monitor: from a fetus to an unborn child.

The whole ceremony around the scan visit, with the doctor’s briefing and the evaluation of results, creates a new moral dilemma for parents and a new moral responsibility to give life or not to a child with birth defects, rather than accepting whatever outcome is given to you at birth.

But let’s take this a step further and ask the age-old question: Who benefits?

The pre-natal ultrasound scan, and the many other tests offered by hospitals today, will serve the patients. It will give them the chance to see their unborn child and make choices about its future. But the clients of these machines are in fact hospitals and doctors; they are also, indirectly, policy-makers and healthcare institutions. The clients seek to shift responsibility away from hospitals and doctors, and onto the parents, who will have gained a newfound commitment to the unborn babies that they have had the chance to see for the first time. The reasons driving this are manifold, but hospitals and governments are financially and economically interested in births, and also in having parents commit to seeing a pregnancy through.

When considering the morality of technologies, of systems and objects that are part of those systems, it’s worth paying close attention to what Bruno Latour calls systems of morality indicators; moral messages exist everywhere in society and inform each other, from the speed bump dissuading the driver from driving fast because “the area is unsafe, and driving fast would damage the car” to the suburban house fence informing bystanders “this is my private property”.

It is also worth paying attention to who benefits from the widespread usage of said technological products. Discussions around the morality of technology tend to focus on the effects deriving from the usage or application of technologies, rather than on the financial or other benefits deriving from their adoption at large scale.

Social Media as an example

The bundle of technologies that we call social media is a clear example of why this way of thinking matters. The nefarious consequences of mass-scale social media usage, for a society and for an individual, are clear and well-documented. We have documented its effects on warping and changing our conception of reality (technological mediation), on the political sphere in our astroturfing piece, and on our social relationships in our “syndication of the friend” piece.

In our discussions responding to the acclaimed Netflix documentary The Social Dilemma, we spotted an interesting pattern in the accounts: that one man or woman was powerless to stop a system so lodged in the interweaving interests of Big Tech’s shareholders. The economic logic of social media makes acting on nefarious consequences like fake news or information echo-chambers all but impossible, because doing so would require altering social media’s ad-based business model.

The technology of social media works, and keeps being used, because it is concerned not with its side-effects but with its desired effect, which is to provide companies or interested parties (usually those with large digital marketing budgets) with panopticon-esque insights into its users (who happen to include over 80% of people living in the US, according to Pew Research Center, 2019).

Technologies are tools. I mean, this is pretty obvious and doesn’t really need further explanation in writing, but they are not always tools like hammers or pencils that would prove useful to most human beings. They are sometimes network-spanning systems of surveillance that are used by billions, only to provide actual benefit to a chosen few.

The intention of the designer is thus paramount when considering technology and morality, because the application of a technology will inevitably have an effect on the agents that encounter it, but it will also have an effect on the designer themselves. There will be a financial benefit and, more than this, ‘the financial benefit will inform future action’ (reflected Oliver Cox, upon editing this piece).

So yes, the reverse situation is also true: some technologies may be designed with a particular social mission in mind, and then used for a whole suite of unforeseen, nefarious applications.

In this case, should the designer be blamed or made responsible for the new applications of their technology, should the technology itself be the subject of moral inquisition and the designer be absolved from their ignorance, or should each application of such technology be considered “a derivative” and thus conceptually separate from the original creation?

Another titan in digital ethics, Luciano Floridi of the Oxford Internet Institute, thinks that intentions are tied to the concept of responsibility; “If you turn on a light in your house and the neighbour’s house goes BANG! It isn’t your responsibility, you did not intend for it to happen.” Yes, the BANG may have had something to do with the turning on of the light, but as he goes on to mention, “accountability is different, it is the process of cause and effect relationships that connects the action to the reaction.”

Accountability as the missing link

With this in mind, we can assume that the missing link between designing a technology and placing responsibility on designers is accountability. To hold someone accountable for their actions, one must have access to knowledge or data that provides some sort of paper trail, allowing the observer to trace the effects of a design on the environment, and the interactions of the environment with that design.

While it is indeed possible to measure the effects of a technology like social media from an external perspective, it is far easier and more informative to do so from the source. Yes, what would hold designers of technologies most accountable is for them to hold themselves accountable.

There is therefore a problem of competing priorities when it comes to accountability, derived from the problem of access to knowledge (or data).

In the three examples given (the pre-natal ultrasound scanner, social media, and the light-switch-that-turns-out-to-be-a-bomb), the intentions of the designer varied across a spectrum: from zero intention to blow up your neighbour’s house, to the pre-natal ultrasound scan, where the intention to provide parents with a choice regarding the future of their child was deliberate.

In all three cases, an element beyond intentionality plays a role: the designer is either unaware of (with the light-switch) or unwilling to investigate (with social media) the consequences of applying the technology. Behind the veil of claims of technological sophistication, designers renege on their moral duties to “control their creations”.

If the attribution of responsibility in technologies lies in both intentionality and accountability, then, deontologically, shouldn’t the designers of such technologies provide the necessary information and build the structures to allow for accountability?

The designers should be held accountable for their creations, however autonomous they may initially seem. If so, how, feasibly, can they be held accountable?

Many of these questions have been approached and tackled to some extent in the legal world, with intellectual property and copyright laws on the question of ownership of an original work. And this has also been examined to some length by the insurance industry which uses risk management frameworks to determine burden sharing of new initiatives between a principal and agent.

But in the realm of ethics and the impact of technologies on the social good, the frame that may best suit the issue we have here is the Tragedy of the Commons: the case where technologies that are widely available (as accessible as water or breathable air) have become commodities and are being used as building blocks for other purposes by any number of different actors.

The argument that technologies have inherent moral value is beside the point. The argument is that moral value should be ascribed to the ways in which technologies are used (whether those be called derivatives or original new technologies); the designers need to be inherently tied to their designs.

  1. The GDPR example: the processing of personal data represents a genus of technologies where moral value is ascribed to the processors and controllers of the personal data. The natural resource behind the technology, personal data, remains under the control of the owner of that resource.
  2. Ethics by design: the process by which technologies are designed needs to be more inclusive and considerate. A technology’s impact on stakeholders (suppliers, consumers, investors, employees, broader society, and the environment) needs to be assessed and factored in during development. That impact cannot be wholly predicted, but it can be understood and managed if approached with due care. Example: regulated industries such as Life Sciences and Aerospace have lengthy trialling processes involving many stakeholders, which makes the introduction of new products more rigorous.

Accountability as the other side of the equation

As mentioned, the emergence of new technologies such as blockchain governance systems (e.g. Ethereum smart contracts) provides clear examples of how new technologies can create new ways of holding agents accountable for their actions; actions that, without such enabling technologies, would have been considered outside of their control.

It seems that technology can work on both sides of a theoretical ethics-accountability equation: if some technologies make it easier to act outside of pre-existing ethical parameters, unseen by the panoply of accountability tools in use, then others can provide stakeholders with more tools to hold each other to account.

Can technology be moral? Yes, it can, given its ability to provide more tools to close the gap between agents’ actions and the responsibility they bear for those actions. But some technology can be immoral, and stay immoral, without an effective counterweight in place. Technology is therefore amoral as a subject, but very much moral in its role as both a medium and an object for moral actors.

Closing

It will be my honour and pleasure to debate with our two other Trialectic champions, Alice Thwaite and Jamie Woodcock. I am looking forward to what promises to be a learning experience and to update this piece accordingly after their expert takes.

Please send us a message or comment on this article if you would like to join the audience (our audience is also expected to jump-in!).

Categories
Five Minuter

Link Wars: Facebook vs. Australia

In Australia, Facebook is once again hamstrung by its business model

Last month, the Australian government made headlines with a new law forcing Big Tech platforms, namely Google and Facebook, to pay publishers for news content. The move was ostensibly meant to provide a new revenue stream supporting journalism, but the legislation is also the latest development in a succession of moves by influential News Corp CEO Rupert Murdoch to strike back at the online platforms sapping his publications’ advertising revenue.

While Google brokered a deal with News Corp and other major Australian publishers, Facebook decided instead to use a machine learning tool to delete all “news” links from Australian Facebook. Caught in the wave of takedowns were also non-news sites: public health pages providing key coronavirus information, trade unions, and a number of other entities that share, but do not produce, news content. Facebook has since backtracked, and Australian news content has been allowed back on the platform.

The fiasco illustrates a broader issue facing both the company and Big Tech in general: the spectre of regulatory action. This trend explains the influx of politically influential figures entering Facebook’s employ, like former UK Deputy Prime Minister Nick Clegg, who is now Facebook’s Vice-President of Global Affairs and Communications.

Facebook’s chronic problem isn’t the aggressive methods of its public affairs team, nor its CEO waxing poetic about free speech principles only to reverse course later. Facebook is hamstrung by its own business model, which incentivizes it to prioritize user engagement above all else.

The Australia case is reminiscent of another moment in the struggle between Big Tech and governments. In 2015, a pair of gunmen murdered fourteen people at an office holiday party in San Bernardino, CA with what were later understood to be jihadist motives. After the attack, Apple CEO Tim Cook seized the moment to solidify Apple’s brand image around privacy, publicly refusing the federal government’s requests to create a backdoor in iOS.

This principled stand was backed up by Apple’s business model, which involves selling hardware and software as a luxury brand, not selling data or behavioral insights. Cook’s move was both ethically defensible and strategically sound: he protected both users’ privacy and his brand’s image.

In the Australian case, different actors are involved. Google, like Facebook, relies on mining data and behavioral insights to generate advertising revenue. However, in the case of news, Facebook and Google have different incentives around quality. On the podcast “Pivot”, Scott Galloway of NYU pointed out that Google has a quality incentive when it comes to news: users trust Google to feed them quality results, so Google would naturally be willing to pay to access professional journalists’ content.

More people use Google than any other search engine because they trust it to lead them not just to engaging information, but to correct information. Google therefore has a vested commercial interest in its algorithms delivering the highest quality response to users’ search queries. Like Apple in 2015, Google can both take an ethical stand — compensating journalists for their work — while also playing to the incentives of its business model.

On the other hand, Facebook’s business model is based on engagement. It doesn’t need you to trust the feed; it needs you to be addicted to the feed. The News Feed is most effective at attracting and holding attention when it gives users a dopamine hit, not when it serves them higher-quality results. To Facebook, fake news and real news are indistinguishable fodder used to keep people on the platform.

In short, from Facebook’s perspective, it doesn’t matter if the site sends you a detailed article from the Wall Street Journal or a complete fabrication from a Macedonian fake news site. What matters is that the user stays on Facebook.com, interacting with content as much as possible to feed the ad targeting algorithms.
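
The incentive gap described above can be made concrete with a toy ranking function (entirely illustrative; the field names and weights are invented, not Facebook's actual algorithm): a feed that optimises engagement alone simply has no input for accuracy, so a fabricated story that draws clicks outranks a sober, well-reported one.

```python
# Toy feed ranker: scores posts purely on predicted engagement.
# There is deliberately no term for accuracy or source quality.
def engagement_score(post: dict) -> float:
    return (post["predicted_clicks"]
            + 2.0 * post["predicted_comments"]
            + 1.5 * post["predicted_shares"])

posts = [
    {"title": "WSJ investigative report", "accurate": True,
     "predicted_clicks": 120, "predicted_comments": 10, "predicted_shares": 15},
    {"title": "Fabricated outrage story", "accurate": False,
     "predicted_clicks": 300, "predicted_comments": 80, "predicted_shares": 90},
]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["title"])   # the fabricated story ranks first
```

A quality-incentivised ranker, by contrast, would need the `accurate` field (or some proxy for it) in its scoring function, which is the structural difference the article draws between Google's search incentives and Facebook's feed.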

The immediate situation in Australia has been resolved, with USD 1bn over three years pledged to support publishers. But the fundamental weakness of Facebook’s reputation is becoming obvious. Regulators are eager to take shots at the company in the wake of the Cambridge Analytica scandal, debates over political advertising, and the prominent role the site played in spreading conspiracies and coronavirus disinformation.

In short, shutting down news was a bad look. Zuckerberg may have been in the right on substance — free hyperlinking is a crucial component of an open internet. But considering the company has already attracted the ire of regulators around the world, this was likely not the ideal time to take such a stand.

In any case, Australia’s efforts, whether laudable or influenced by News Corp’s entrenched power, are largely for naught. As many observers have pointed out, the long-term problem facing journalism is the advertising duopoly of Google and Facebook. And the only way out of that problem is robust anti-trust action. Big Tech services may be used around the world, but only two legislatures have any direct regulatory power over the largest of these companies: the California State Assembly in Sacramento, and the United States Congress. Though the impact of these technologies is global, the regulatory solutions to tech issues will likely have to be American, as long as US-based companies continue to dominate the industry.

Categories
Interview

Portraits of Young Founders: Speaking from Malawi to London

This is the story of how two young entrepreneurs, Muhammad Altalib and Robert Smith, attempting to start a social enterprise in Malawi, leapt into the challenging task of founding a trans-continental business. They faced unexpected difficulties and obstacles from day one, and ultimately learned valuable lessons about starting a business in two very different countries.

It is also the story of two co-Founders: how they first met, how they grew to work together despite naturally different perspectives, and how they matured together through one hardship after the next. It is the story of why they ultimately decided to close this chapter of their lives in search of something new. It is a story of success; or rather, of success derived from accepting the failures, possibilities and limitations of their company.

In this Portrait, we explore the idea of starting a company with a co-Founder in an unfamiliar market, and map the relatively unexplored vicissitudes of how business can be run between these types of contexts: London and East Africa. In this first episode of Muhammad and Robbie’s journey, we’ll observe the process of the two co-Founders’ crystallising inspirations, and the frustrating disconnect between the Euro-American start-up theatre and the unfamiliar markets of East Africa our entrepreneurs were trying to work in.

https://cdn-images-1.medium.com/max/1440/1*O7dHm12kU1DsQ76ni9BzIA.jpeg

Muhammad Altalib

I grew up in a number of developing countries. My parents ran an NGO and we moved around a lot. My path ran across Malawi, South Africa, and Saudi Arabia. Although I grew up in relatively privileged surroundings, I was never shielded from the everyday lives of those around me.

I then came to the UK to study. Having always been into entrepreneurship, I focused a lot of my time on cultivating this interest, especially given that I was in London, one of the world’s start-up hubs. Because I was studying finance at the time, I was naturally led into FinTech. The latest technology at the time was blockchain, and I started interning at a blockchain startup trying to increase transparency within trade. Part of my role involved looking at flows of goods between Africa and Europe. There I had my first spark of inspiration, realising there could be a big business opportunity: “Hey, I come from that part of the world; I never knew there was so much trade flowing between these two continents. Maybe I can combine my regional know-how with this latest technology [blockchain]?”

It’s a very vague origin-point for my entrepreneurial beginnings, but in a sense that’s where a lot of startup ideas come from.

From auras of inspiration to a clear idea

Muhammad: Months down the line, as I continued trying to find a potential opportunity, diving into different academic papers and looking through online articles, I finally started arriving at a tangible idea: cocoa farmers in Ghana were struggling to make ends meet. There needed to be a better way to finance those farmers. What I wanted to do… [Muhammad visibly bashful] which I realise is quite a bad idea now, but at the time felt revolutionary, was to create something called Cocoa Coin. There’s huge demand for cocoa and chocolate; demand is always increasing. I was going to commoditise cocoa silos in Ghana by listing them on a blockchain exchange so that they could be traded. The idea was that you could finance farmers using the cocoa silos as an underlying physical asset, with blockchain allowing for increased transparency…

Yuji Develle: Why cocoa? From what you’re telling me, it seems like there was a jump from applying blockchain tech in Africa to focusing on cocoa. How much did you know about cocoa farming before embarking into entrepreneurial ideation?

Muhammad: That’s the other disadvantage of being so far from the actual problem, Yuji.

I was sitting here in London, looking at commercial figures; you know, commercial reports and papers. So what you have to expect is that I was seeing information solely pertaining to the trade that mattered to Europe. Naturally, those two things are cocoa and coffee. So I chose cocoa, despite never having been to Ghana before. But I thought, hey, Malawi, Ghana, what’s the difference?

I pitched this several times [in London] to tech developers in my effort to start bringing together a team. I focused on the problem of financing cocoa farmers, and heavily suggested the use of blockchain-exchange-listed silos as a solution to that problem. Naturally, [the tech developers] were super passionate about it, as the project sounds exciting on paper.

These discussions gradually led me to start considering other solutions to our problem, because we were saying, okay, how can you actually predict what the future productivity of a farm is going to be? Well, hey, there are a number of things we can do: we can look at weather patterns, extrapolate farm productivity from there, and then finance against that predicted productivity.

At this stage of building a start-up, you’re running, you’re running everywhere. You’re going to all the conferences, you’re going to get every paper, you’re following every other person on LinkedIn… and of course you don’t understand much of it.

I cobbled together diverse ideas from wildly different domains in order to make something work for me. King’s [College London], where I studied, has a really strong Geography department and they were kind enough to share some data I could work with, so I started there.

Yuji: So you had an idea to “coin” the cocoa industry, but realised you didn’t have enough live-data. So you then had the idea to design a data-collection device (IoT) that would be… distributed to the silos?

Muhammad: Exactly, as is often the case, we replaced a problem with another problem, without really solving the original problem! The problem was farmers are not getting financed. I needed to create that link between the farmer and the cocoa silos, in order for Cocoa Coin to work.

How could I create this link? Okay, you go to the farm and then you use a sensor.

So I joined the Robotics Society [at King’s College London] and we ordered a bunch of parts! When trying to build our own IoT device, my first co-Founder Marius entered the picture.

From an idea to a ‘pretotype’

Marius was an undergraduate Computer Science student at King’s. We met as rivals in a pitch competition, but came out seeing common ground around IoT. Together, we managed to build a mini prototype of a weather station.


Marius and Muhammad in front of their weather station prototype

Looks great, doesn’t it? Well, we were going to build this weather station and we were going to solve some really big problems. We were testing these devices “in the field” on some farms in the UK. On paper, they passed the tests and were cheap by UK standards, like £20 apiece. But the actual on-the-ground solution was quite a way off…

Yuji: Why was that? When did you first realise that?

Muhammad: When I went to Malawi for the first time. There were the business model issues and the logistics behind installing the devices in-country:

  • How can you just stick a bunch of devices into thousands of farms across Ghana?
  • How do you maintain these devices?
  • How do you make sure the guys respond to pings at this or that farm, and then chuck the device out when it’s no longer needed?
  • How do you ensure the data is accurate and in sync with both seasonal (macro) and daily (micro) patterns?
  • How do you make sure the devices are valuable to farmers, while also preventing them from stealing the device, and you know, using the scrap metal?

All these thoughts would go through my head and they just didn’t add up. It finally hit home when we actually went to Malawi: wow, we were really in our own world, weren’t we?

But regardless, for all that happened, this seemed like the first viable solution. I could actually pitch something tangible and we won a few startup competitions.

Pitching the pretotype

I pitched that idea to the first important character of our story, Stefan.

Yuji: I just want to note Robert’s face of disapproval here for the tape!

Robert Smith: Sorry, Stefan is a pompous white guy. It’s okay, you can take it on the record; it’s fine, and true.

Muhammad: Anyway, we pitched our idea through to the final pitch stage, and I managed to get to Stefan, who is known as a Professor of Practice at the King’s Entrepreneurship Institute: this is his official title.

Stefan was known to be somebody who was very direct and, at six-foot-five, very imposing. I think he’s a great guy, and obviously a fantastic businessman, but his demeanour can come across as intimidating.

So I went in with this mindset, and within a few minutes…we got into a shouting match!

Yuji: Why?

Muhammad: It was just my dumb luck that Stefan has a huge charity in Malawi, and it just so happens that I was pitching my idea for Malawi! And so he sort of grilled me a lot harder than he was grilling everybody else.

Robert: Look, to my knowledge, he’s been on the board of a charity there. But it’s never been beyond listening to reports from people that work in the charity there, nor has he really been involved in the direct management of the day-to-day, but he knows a lot about Malawi, I’m sure.

Yuji: I ‘think’ I can get the profile here. You know, in London, we have a lot of people who are Africa experts, who have not yet settled there. What was this shouting match about?

Muhammad: Basically, you’re pitching. He asks you a very direct question, and he says, you’re wrong. I argue with him: no, I’m right; no, you’re wrong. Everybody else in the room just sort of kept quiet while we were both pouring out our angry, passionate defences. Very fun experience.

I think here I learned a first important lesson. I came out of there with my blood boiling and a bit disheartened, because I thought I might have blown the pitch by being so combative with Stefan. But then, to my surprise, I actually won the Changemaker Award at Idea Factory!

Muhammad (middle) at the House of Lords at Westminster Parliament for the King’s College London Idea Factory

When you’re in the weeds fighting for your start-up idea, it can seem like you’re making no progress or that no one believes in you. The moment you realise someone actually believes in your idea can come as a surprise.

This gave me such a boost that it encouraged me to dive into the project full-time and begin putting a team together. I started bringing in a bunch of other people, and we became a team of five very quickly; all university students, working for free.

This was also a period of us actively participating in the startup competition circuit: UCL had a competition, Cambridge had one, and many, many different competitions I tried to sneak into.

And it’s generally a very exciting time, because you’re not actually doing anything, you’re just pitching this grand vision, which nobody actually knows much about besides that what you’re doing is something fantastic and amazing and great!

As naive as it is, and as oblivious as I was, it’s actually a good thing for entrepreneurs to be involved in this circuit, because it means that you’re motivated enough to push yourself. It’s a necessary step on the path to actually carrying on with your company and your startup.

Robert: So there is this education that people in the UK startup scene receive, you know when you’re launching a startup:

  • You have to start-off by coming up with a good idea
  • You have to solve a problem that you’ve defined: you know, find a problem online and find a way to solve it.
  • You have to pitch for investment, based on “valid assumptions” and market research.

But no moment of actually trying to sell your product in the region where it’s supposed to operate in.

Muhammad: It’s true. I think it’s slightly different depending on who you are:

  • If you are an industry veteran, those are the general steps you take, because you know your department well enough.
  • If you are not an industry veteran, which a lot of young entrepreneurs are not, which I was not, I mean, I lived in Malawi, I didn’t actually know much about the industry and know much about farming or about any of those things, then it’s a little bit different. Because then you have to actually figure it out for yourself, using third parties as research-aides of sorts.

When you’re in this period of excitement, the honeymoon period, you will be naive and people like to shoot you down and be cynical with you. But it’s very positive, because it means you’re still in a motivated state. I was very excited. This state of mind allowed me to do things that I probably would not have done in other circumstances; such as jumping on a plane as a broke student to Malawi.

Resulting from our onslaught of competition applications, we were accepted to the King’s Accelerator (a 12-month incubator program) and to the Cambridge (University) incubator (a 3-month incubator program). Things were looking good on paper, but I kept on thinking that I can’t just be sitting here in London. I mean, they’ve given me all these educational seminars and it means nothing. Reading research papers means nothing; you have to get out and into the field!

That said, joining an accelerator is a big milestone. I had already talked to my parents and got their full backing, and decided to pursue Seedlink as a full-time business after university. I was going to use my savings [to live off of].

Yuji: Was that difficult? I mean, pitching a largely unprofitable venture such as entrepreneurship to your parents can be difficult.

Muhammad: No it wasn’t actually. I suppose it depends on what mindset you are in; I had a positive mindset focused on the big potential of my work. And it wasn’t just because I was and am privileged, although my parents were willing to give me enough funding to allow me to survive the year at least.

https://cdn-images-1.medium.com/max/1440/1*GUqAaH792lQbmYff5E8MZA.jpeg

So I kicked off the King’s Accelerator program as one of twenty start-ups, at a very fancy evening designed to put you at the centre of attention and introduce all the companies. It was a big cocktail party on the top floor of Bush House in Central London, the fanciest building at King’s. They had invited speakers and successful entrepreneurs, who come up and ask you: “oh, what’s your idea? What are you doing?”. You have a little stand, with your pamphlets and demos. It’s followed by drinks at the student bar. Of course, at this point I already knew I was leaving for Malawi.

So I’m there thinking: How will I be part of two accelerators (in Cambridge, at King’s) and also in Malawi?

At the Cambridge Accelerator, they needed you to be there in person for the first few sessions. I had been accepted to this exclusive program, and they had obviously rejected a bunch of other people and startups.

So on this night at Bush House, the stars somehow aligned. I found the three people who I thought could represent me in Cambridge while I was in Malawi.

  • Person A was Robbie. I didn’t know him too well but I did know that he was doing something really cool and exciting with his previous startup, Development United, which was based in Haiti. So he was definitely a potential candidate.
  • Person B was Ryan. A close friend who I had start-up discussions with in the past.
  • Person C was Ellis. I met him on the day-of and he was building his own startup to be based in Ghana.

So I’m here on this November night and leaving for Malawi in two weeks, I have nobody to take over in Cambridge, and I’m desperately looking to find someone and the three people I think would be best suited are all in one place tonight.

I drag all three of them out of the bar and, after convincing them to have dinner together, we go to some Indian restaurant next door.

I don’t know what I should have expected but, initially, Robbie was completely disinterested. He’s like, “Why is this guy wasting my time?” I think he was smoking a cigarette, much like he is now.

Ellis is very interested, but also I’ve just met him on the night.

Robert: Yeah, I stood behind the bar and I was like “Maybe you’ll see me a little later? Who knows?”

Yuji: Little did he know that he was gonna follow Muhammad all the way to Malawi.

Robert: Mistake one was going to the dinner.

Muhammad: Yeah, exactly. Anyway, so we have dinner, and I try to pitch them the idea.

Again, Robbie is not very interested, Ellis is trying to learn more and Ryan wants to go home soon. When I finished weaving the threads of my Seedlink idea and my proposal to have someone I trust go to Cambridge on my behalf, Robbie switches on and comes out of his drunken stupor; suddenly he’s very serious.

https://cdn-images-1.medium.com/max/1440/1*6LyToozbjfqicHhcpBnmxg.jpeg

Robert Smith

Robert: Substance does that.

So Muhammad’s original idea was that this is Development, and that in itself is really interesting, because my initial hesitation was almost exactly that: it was Development…which, in my opinion, is not really what business should be doing. Not in the sense that business is not a practical solution, but in the sense that development means so many things to so many people.

Then if you want to actually apply development to a business context, nobody really knows what the hell you’re talking about. So it also often comes with just huge, you know, basically white-supremacist notions that Euro-America is responsible for fixing the world, which reifies colonialism…it’s great. That’s its contemporary form, as well.

It points to a larger context of the startup industry in London that we were finding a place within to make a relevant project. Where if you want to do social enterprise, particularly not within Euro-America, particularly within a place that does not have white people, then this is framed as like a development type project, right. You’re performing the grandeur of ‘social’ entrepreneurship in ‘Africa’ — wherever that is anyways, which is very problematic in itself.

And this leads to a problem of translation, every time that we would do something in Malawi, no one knows what the fuck we’re talking about. Because, you know, the business of Malawi is not going to be the same as what you would expect on Canary Wharf. It just doesn’t work that way.

There’s that frame. This is the context, we’re kind of negotiating with and kind of finding our place within it. I think the majority of the time, especially at the undergraduate level in universities, it’s for predominantly privileged people to try out ideas that are not fully fleshed out.

That is the essence of privilege. You cannot do that in Malawi. A lot of them don’t have the funds, but we get that in Euro-America and that is pretty different.

So when Muhammad uses the term ‘industry veteran’, that’s really interesting, because ‘industry veteran’ basically means that you have had the opportunity to float your ideas around a million different ways, with the funds. At which point you can actually even understand what a business model is. Right?

So yeah, that was my initial skepticism when I was talking to him: most of what he was telling me up to that point was just hype, hype, hype, right; fair enough.

Then Muhammad actually made a tangible business proposal, with a use-case and value proposition, a product and a customer, which was very far from the initial material technology device he had been proposing. It was now a [B-2-B-2-C] type service. It was a logistics operation, right? At least we got there.

So yeah, I heard that and I was intrigued.

Joining a start-up accelerator

Muhammad: Cambridge in itself was interesting, because this connects exactly to the context that I was trying to lay out before on an obsession with using material technology to “solve” Africa’s problems.

Robert: Muhammad was accepted to Cambridge on this material technology device, this irrigation management system, whatever. So I showed up at Cambridge with this new business proposition, this logistics operation we wanted to do in Malawi, and they kind of looked at me like, “What the hell are you talking about? This is not what we accepted here. What are you doing?”

This is not the business that Muhammad gave us; you are a double imposter.

They were very close minded and weren’t that helpful. They did not appreciate the fact that we regressed, took a step back and rebuilt the business proposition according to what we thought would be useful in Malawi.


The crew at Cambridge
Robbie (left) showing off his Cambridge access card
Muhammad (right) sharing his insights with the rest of his, focused, crew

On the actual functioning of Cambridge: it was almost inappropriate for us, in a way. The way it works is you have training sessions specific to whatever module (marketing, etc.). You also have one-to-ones with specific mentors and are supposed to be making progress each week.

We were developing ideas, and at this point Muhammad was really doing the basic ground research. The problem of translation also came up daily, given that most mentors had almost exclusively UK startup experience.

Muhammad: Yeah. Cambridge was like the epitome of that kind of mindset. So how we actually got invited to Cambridge, was when I went to a weekend hackathon in Cambridge on food security.

Robert: …and they liked you ‘cos you were in Tech.

Muhammad: At a hackathon, you basically spend the weekend coming up with potential solutions to a set problem. Our team took on the problem of quality control in the cocoa supply chain. Our solution was a swab you could use to quickly test the level of pesticide in a bag of cocoa and determine whether it passed quality control checks. That’s the idea we pitched at the hackathon, and we won.

The thing is, at a hackathon, you tell them what the problem is and what your solution is. Both are constructed by yourself. So when I started exploring the solution we came up with further, it didn’t turn out to be very practical. When we came up with the farmer financing and logistics model, that proved a lot more logical.

So after winning the hackathon and being invited to Cambridge, Robbie goes up there instead of me with this new idea of Seedlink, not the idea I originally pitched to get in.

And you know the reactions all around were… since when did Muhammad become a white guy? There was one person, who was our main mentor and who got us into the programme, I think he very much liked the idea that I (originally) pitched him, despite it being completely constructed by myself, and not very realistic.

He was just a little bit salty about it. I think.

Yuji: So is that the result of a certain ideological orientation, around prizing engineering tools?

Muhammad: I’m not sure exactly, I think everybody would say they basically live in ivory towers. I mean, that’s a little bit of exaggeration, obviously, they don’t actually do that, but there is a bit of a disconnect. And they could not appreciate that you have to pivot if you’re on the ground. I think people in Cambridge are very traditional. They’re very much engineering-focused. After all, Cambridge is a hardware engineering hub.

I still remember this. Didn’t they tell you (Robbie) once: “Why are you doing this in Malawi? I just read this thing on Malawi saying that it was the second most dangerous country in the world.”

Robert: It was so ridiculous. [The traditional institutions] want to support this idea with technology, but not when it’s actually a tangible logistics business, which will do a hell of a lot more for the economy in general. Now you’re going to pull out the Corruption Index, a pretty irrelevant statistic which you never actually used before, to tell me it’s not ‘safe’ to work there. There was no fetishised ‘Corruption Index’ when we were into tech.

Yuji: I guess what they’re trying to say is, you know, they’ve got economic models they’ve developed and those don’t work in countries with an environment different from what they are used to testing.

Max Gorynski: Metrics like the Corruption Index tend to be very enthusiastically used by large American consultancy firms. From what I understand their appraisals of these kinds of situations are, “Well, if we invest in this small venture, what is the subsequent likelihood that we’ll be able to entice stakeholders from the States to come and invest in this place?”

Robert: So ROI is not with respect to Malawi but in respect to global capital markets?

Max: Precisely.

Robert: You know, these mentoring sessions for Cambridge were held on a Thursday. Yuji, you went to King’s (King’s College London) right?

Yuji: I went to Kings, yeah.

Robert: Okay great, so you know that sports night on Wednesday usually goes quite late (note: this is an understatement).

Yuji: Walkabout Wednesdays

Robert: I would then have to get a train the next morning. So I mean, the interest really just went down rapidly as I saw intellectual ROI diminishing and sports night ROI increasing.

Muhammad: Basically, yeah.

How did you run a team across two continents?

Muhammad: Keeping a team motivated and engaged is one of the hardest parts of running a startup. It’s a skill you have to pick up, and one I wasn’t super good at when I first started building out our team. Add to this the fact that I was trying to manage two teams with very different work cultures across two continents. To give credit to the accelerators, this is a skill they try to build up in you.

Yuji: It’s also about what people should do to get the job done, and then getting them hooked on actually getting the job done…like managing and engaging, right?

Muhammad: My solution in the early days was to bring on people who were intrinsically motivated and give them the freedom to select roles they thought they were best suited for. This did mean we initially had a turnstile of people joining and leaving, but eventually the right people settled.

I will say that even then we had people floating around who were not too engaged. It takes a very specific kind of person with ‘founder’ qualities, which very few people have. I think that’s the biggest reason why Robbie and I clicked.

Finding and managing a team in Malawi was a whole other story. It was a big culture shock to me. The mindset I had come with was one of workplace equality, where there is no ‘boss’ and each team member is given the freedom to select their role. The people in Malawi did not like that, and this quickly became apparent when they very directly told me that I needed to start telling them what to do. To them, being given the ‘freedom’ to choose their tasks was extra responsibility; instead, they wanted to be given a well-defined task that they could complete and get paid for. It felt very transactional to me at first, but I quickly got the hang of it, because whenever I would slip and not provide a task, they would take advantage of that and slack on the job.

So it was interesting seeing the dichotomy of work cultures between the UK and Malawi and having to switch to very different managerial roles whenever I was addressing a specific team.

Muhammad (right) with Paul (middle) and Mkandawire (left), part of the Seedlink Malawi team.

Yuji: So if I hear this correctly, despite the obvious difficulties associated with running a multi-continental operation and being away from your core team, as well as all of the downsides of staying in the Cambridge incubator, you still saw the value of having Robbie in Cambridge and not in Malawi?

Muhammad: I mean the whole reason why I spent all this effort to get Robbie to go to the mentor sessions, which I knew were kind of useless, was to keep Cambridge happy.

We had our reputations in London, along with the financial and investment might that the City holds with its banks and institutions, at stake. We needed to keep stakeholders in Cambridge and KCL (King’s College London) happy.

The importance of the London accelerator scene

How were these stakeholders benefitting your business?

Muhammad: It was several factors. Ideally, you want the best of both worlds (London and Malawi). London has a lot of resources.

Firstly, they have technical talent. The mindset there is also a very innovative one: you keep your mind open, you always see new ideas and new technologies, and you’re always very broad. London also has the capital that we would need if we wanted to scale the business.

In Malawi, you get very ingrained in the day-to-day running of the business. So even though London [has] its technology, it has no context to apply that technology to. Combining the two is kind of my unique selling proposition (USP). This is why it was always important for me to keep this bridge between London and Malawi. That was the biggest reasoning behind keeping both stakeholders happy.

Robert: You need to make a distinction between the institutions and the individuals.

As institutions, both Cambridge and King’s are nearly useless, as evidenced by people’s responses to us trying to go to Malawi. That said, we learned some skills relevant for London, but these were not relevant for Malawi. No one would listen to what happened there.

“No, why would you go into the place you’re trying to do business?” Well, why do you think?

The only thing an institution will give you is a few formal settings where you can pitch and potentially receive funds. That’s what you get from these accelerators.

Now, individuals are important. So within each of these places, like London, there were a few individuals. I think calling up Mark Corbett was very important, because he was a huge development for us in many ways. Individuals who can actually sit down with you and be open-minded about what you’re saying are extremely important. It’s having the willingness and faith to give time. Mark had that.

And listen. I mean, Mark always had this sticky note posted on his computer: “ABC”, always be closing. It’s just at the core of what the business needs to do: you need to sell, you need to get your ROI, and you have your sustainability.

Other people, bound by their dogmatic accelerator approach, thought that if you went to enough marketing sessions you’d somehow find the path to a sustainable business. In Malawi? That’s not how it works.

Mark would actually sit down, listen to what’s going on, and really try to analyse the structure of the business to understand where sustainability comes from, what the plan is to get there, and how to maximise its fullest potential. Mark did not bind himself to how London said it should work but listened to how it actually worked from our experiences. This is one of the most essential skills for running a successful and ethical business.

It’s not being dogmatic, arrogant, or pompous. It’s not requiring us to translate. It’s not requiring us to sanitise the business realities of Malawi for this normalised vision of what business should be in London. That’s why we had such good conversations with Mark: he didn’t require that.

If you go down the same path as we did, you will see that there are these people like Mark who provided some sense of community. But they were few and far between.

Is there something from this period of incubation in London that you took with you to Malawi and actually worked?

Muhammad: I would say, less so for Malawi because that was a very different context and more so for startups in general.

I mean, this was the kick-start of my career within startups, because it gave you the startup ecosystem. It gave you access to the community, it gave you office space in central London, it gave you pitch events to go to; you were entrenched in the startup community.

I pitched loads of times! If I look at the first time I pitched compared to last time… I could probably pitch anything to you right now.

A lot of times a pitch veils what’s actually going on in your business because you create the context of the problem, and then the context of the ideal solution…but it convinces people and it’s a sales strategy that works.

After the community and pitching practice, the best part about Cambridge, King’s and all these other places: they were absolutely free. They charged me nothing and I did not need to sacrifice equity for their support of my business.

So there really is little I can complain about if you’re looking at this as a function of money or return on investment.

Robert: London taught you theatre, right?

Robbie and Muhammad at a pitch event

This is what this was: pitching is a performance.

It’s the most disconnected thing from the reality of business the majority of the time. But that performance creates hype, hope, aspiration, and it creates affect, right?

You have this affectual resonance, you feel inspired, you feel competent.

Muhammad in pitch-mode

This is what London teaches you.

Muhammad puts it in a positive way, but he’s a bit more pragmatic in the sense that realistically you do need to know these things to accrue funds in Euro-America, right?

But I’m also in the position that if you’re accruing funds from people that want you to dance on stage, rather than actually have a genuine conversation about your business, then these are not the people that you should be accruing funds from.

You could also imagine a different type of startup environment where you can be honest about your business. And even the people that we have worked with, I mean, these people were like the opposite of those in your big regal Bush House (literally the most expensive building in London).

I hate it (Bush House). I tried not to go there unless I had a meeting with someone. We’re talking about getting other people motivated, you kind of have to like feed them this potion (the potion of ‘regality’ and elitism).

So imagine a world where you don’t have to do that. Imagine a world where you don’t have to dance. You don’t have to motivate people by lollipops.

What if it’s really just about the business and what you’re doing?

It’s a bloody shame. This is why the majority of young people are going to start up something, steal, and get on this hype train to get the fame of running a startup. “I’m 21. I’m a CEO.” Ooh la la!

Yuji: Well, it’s a sort of fantasy right? The millennial fantasy of leapfrogging your career by becoming a Founder of something. An illusion for yourself, telling yourself that you’re better… than the little fish at big companies.

Muhammad: Yeah, I totally agree. Actually, I think there are two problems. When you think of the people that are running accelerators, they are marketeers, not business people. Many have not even run a startup before.

That means that on the whole, they’re looking for status.

So they don’t want you to tell them that “my business is not doing well”; they will ask you questions like “how many users do you have?”, so they can put that nice little statistic in their marketing brochures: “our startups have raised this amount of money”, “our startups have this many users”, and everything is going great.

If you don’t report the right statistics to them, no matter how irrelevant those statistics are to your business, they’ll see you as a failure.

Robert: We obviously don’t have the same metrics of success.

Muhammad: I mean, they’re not thinking very long term. I find investors are sometimes digging their own grave, because they always talk about the principal-agent problem. Startups are not showing the right information, but at the same time, investors create the context that pushes startups towards doing that and, as Robbie said, dancing on the stage.

Robert: Theatre, startup is theatre, everything’s a performance.

Yuji: Yeah. So basically, you ask, ‘How do I get the girl?’ And then they told you how to dance. But then you find out she doesn’t really like dancing.

Muhammad: Yeah.

Max: The whole theatrical experience of startup culture, there’s a kind of maniacal optimism around everything. Quite often, even in terms of how the really difficult parts of it are contextualised, there tends to be this kind of mania for every single aspect of the process, and very little of it is ever appraised in any way other than in the most positive possible terms. That you should be so frank with us about it is highly valuable.

Robert: It’s important to say this is not a critique of innovation, because that’s the response to this: people will say that innovation is actually a solution to all social problems.

But what do you want out of innovation? In reality, we have most of the solutions, right? If you really want to fix this food logistics problem in Malawi, you don’t need innovation, you need political and economic analysis. Famine does not have to exist in the world. There are enough resources, and we should not be fooled for a second that famine is anything other than a political choice.

You need funding from the government: national and international commitment to goals that are defined by common people, not global governance. That’s not on the table within the private sector. You can have government collaborations, but that’s very different from actually having governments commit to funding and prioritising things. Innovation, whatever that actually means, only really goes so far.

Look at the number of startups that actually sustain themselves in East Africa outside of Nairobi: it’s small! So, you know, innovation is not a solution.

It’s a theatrical performance which we enjoy. We enjoy it! And that is the colonialist aspect of the present: just like colonialists enjoyed resource extraction, we are enjoying the hype of innovation, you know, while only accomplishing so much. So Muhammad and I did enjoy it. But we learned that we, and especially London, needed to think farther.


Thanks for reading Part 1 of a multi-part Portrait of Young Founders Muhammad Altalib and Robert Smith. In Part 2, we will jump into Mo and Robbie’s time on the ground in Malawi and how they faced the realities of being an entrepreneur in East Africa.


It is time to talk about Technology differently

While toolmaking is manifestly found in many animal species, humanity’s ability to devise and wield intricate tools is unique in its breadth and impact. Be it part of our genetic code, a proportionally massive cranium or an elegant pair of opposable thumbs, some set of perfect conditions has allowed for the presence of a magnificent talent: our obsession with finding easier ways to achieve our diverse ends. We would do well to remember this. Technology is not an end in itself, nor is it a single, ubiquitously recognised set of means. It is a talent found in all of us, an urge to create, innovate and move past the obstacles set before us.

Statistically, most of you will be readers from Europe or North America. Recently, we have been exposed to a certain idea of what “Technology” is supposed to mean. If we go by the published output of mainstream technology and business press outlets, we could easily be led into thinking that Technology is a euphemism for the “Information Technology industry”. Some of us might associate the word with a mosaic of gadgets that together form part of the vaguely coined “Fourth Industrial Revolution”: a global economy driven by automation. Why is our definition of Technology so limited?

As Robert Smith, Co-Founder of Seedlink and anthropology researcher, first put it, this is a “Euro-American Centric consensus”. A handful of financiers and technologists from London and San Francisco are setting the tone for how start-ups should be born and companies should be run. It is built around an obsession with the economic domination of four or five Big Tech corporations and the opinions of investors in Silicon Valley or Silicon Roundabout. This obsession is blinding us to the exciting developments in technology, like the midday sun outshining the moon and stars.

It is in fact a double bind. First, you are misled into thinking that IT is the most important technology, simply by merit of investment volumes and value (see CB Insights’ 2019 List of Unicorns by Industry). Next, that Big Tech is an appropriate poster-child for contemporary technological development.

Let us decide to take a step back, or rather, to remove our headsets and examine the question of technology as the fruit of an anthropologically encoded set of creative and innovative behaviours aimed at improving the human condition.

Now a gospel repeated at San Franciscan dinner tables, Moore and Grove’s balanced corporate-innovative environment at Intel in the 1970s created the foundation on which breakthrough products like the DRAM chip and the microprocessor were developed. This foundation, and the success that came with it, enabled Intel and several other early digital companies to create a financially supportive environment for start-ups to pursue ambitious high-risk projects.

It is in fact quite revealing how much directional influence Moore and Grove have had on the ideological tapestry of Silicon Valley. Moore’s law dictates the technical keystone: “The number of transistors on a microchip doubles every two years.” Elsewhere, one of Grove’s laws (the exact law is subject to a great many disputes) dictates the cultural keystone: “Success breeds complacency. Complacency breeds failure. Only the paranoid survive.” (Attributed to Andy Grove in: Ciarán Parker (2006) The Thinkers 50: The World’s Most Influential Business. p. 70). Another Grovian law is that “A fundamental rule in technology says that whatever can be done will be done.” (Attributed to Andrew S. Grove in: William J. Baumol et al (2007) Good Capitalism, Bad Capitalism, and the Economics of Growth. p. 228). On these two keystones, Silicon Valley evolved ideologically into a highly self-confident arena for microchip-based solutions to an apparently infinite plethora of identifiable problems. This explains the emergence and dominance of disruptive innovation and the unique value proposition as pillar concepts. It is also a prelude to the impact left by Peter Thiel’s Zero to One, which we have already covered here.
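To make the technical keystone concrete, here is a back-of-the-envelope sketch of the doubling rule, taking the Intel 4004 (roughly 2,300 transistors, released in 1971) as a starting point. The figures are illustrative only, not a precise industry model:

```python
# Moore's law as arithmetic: transistor counts double roughly every two years.
def projected_transistors(year: int, base_year: int = 1971, base_count: int = 2300) -> float:
    """Project a transistor count forward from the Intel 4004 (~2,300 transistors, 1971)."""
    return base_count * 2 ** ((year - base_year) / 2)

# One projection per decade since the 4004.
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```

Run over five decades, the simple exponential lands in the tens of billions of transistors by the 2020s, which is the right order of magnitude for modern flagship chips; the striking thing is how a two-line formula became an industry-wide planning assumption.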

This recounting of the early days is missing key ingredients. Alongside the leaders of the Intel Corporation were Gordon French and another Moore, Fred Moore. French and Moore were co-founders of the Homebrew Computer Club, a club for DIY personal-computer-building enthusiasts founded in Menlo Park. This informal group of computer geeks was for all intents and purposes a digital humanist enterprise, openly inviting anyone who sought to know more about electronics and computers to join the conversation and build with like-minded peers. Its great influence on Steve Wozniak and the many Stanford University engineers who built the Valley cannot be overstated.

Technologists from across the globe have drawn inspiration from this origin story, and innovative ecosystems have cropped up in mimicry. New uses of IT, democratised and cheaper to access, have led to fascinating developments in parts of the developing world that do not enjoy California’s access to investment funds. And there is also the fact that Silicon Valley was not the only tech story of the last 50 years (think vaccines, cancer research and environmental technologies). As more colours come to light, the grey-bland world of Euro-American financialised IT will fade back into a world of people finding new ways of solving problems, finding new problems to solve, finding new problems from ways of solving, finding new solutions to problems yet unseen.

We dove into the mission of Supriya Rai, who seeks to bring beauty and colour into hundreds of identical-looking London office buildings with Switcheroo. She is now also Wonk Bridge’s CTO!

Portraits of Young Founders: Supriya Rai

We followed Muhammad and Robbie, who broke away from the London incubator scene after an initially successful agri-tech IoT prototype, radically changing their business plan to launch a logistics service company in East Africa against the wishes of their Euro-American investment mentors. Rather than launch Seedlink to improve the lives of Malawians and East Africans at large, which would have entirely satisfied the white-saviour narrative and followed a set of Euro-American-prescribed ROIs, they sought to build a proposition that would fit this unique business climate. How can a company that connects rural farmers to urban centres ignore common practices like tipping, which are branded as bribery in the Euro-American world? What explained the gap between the London investors’ expectations and the emerging strategy needed to succeed in East Africa?

Thanks to a double feature from our China correspondent Edward Zhang, we analysed how different countries used the power of their societal and political technology, and how they leveraged their national cultures, to combat Covid-19. Sometimes, technologies are a set of cultural values and political innovations developed over the course of generations.

The Chinese Tech behind the War on Coronavirus

The Technologies that will help China recover from COVID-19

We also saw how a different application of a mature information technology such as the MMO video-game has helped fight autism where many other methods have failed.

Building a Haven for the Autistic on Minecraft

The real world

 

Photo by Namnso Ukpanah on unsplash / Edited by Yuji Develle

I am writing this article in the foothills of Mount Kilimanjaro, in the shade of a hotel not far from a bustling Tanzanian town. Here, I can observe a much healthier use of technology, less dictated by the tyranny of notifications and more driven by connection between individuals in the analog world. People here use social media and telephones regularly, but they spend the majority of their time outside and depend on cooperation between townsfolk to survive (in the absence of public utilities or a private sector).

 

My own photo of a Tanzanian suburb town near Arusha (Yuji Develle / December, 2020)

The Internet is available but limited to particular areas of towns and villages: Wi-Fi hotspots at restaurants, bars or the ubiquitous mobile-phone stands (Tigo, M-PESA, Vodacom).

 

Left: A closed Tigo kiosk, Right: A Tigo pesa customer service shop (Yuji Develle / December, 2020)

The portals to the Digital Civilization have been kept open but also restricted by the lack of Internet access in people’s homes (missing infrastructure and the relatively high cost of IT being the primary reasons why). This has kept IT from frenetically expanding into what it has become in the North Atlantic and East Asia.

Like an ever-expanding cloud, the Technology-Finance Nexus has taken over our global economy and replaced many institutions that served as pillars of the shape and life of the analog world.

  • Social Networks have come to replace the pub, the newspaper kiosk, the café
  • Remote-working applications, the office
  • Amazon, the brick-and-mortar store, the pharmacy, the supermarket
  • Netflix, the cinema
  • Steam or Epic Games, the playground

These analog mainstays have been taken apart, ported and reassembled into the digital world. While the size of our Digital Civilization continues to grow in volume and richness, the analog is shrinking and emptying with visible haste. The degradation that these disappearances provoke, and that the exclusive use of their digital alternatives generates, is unfortunately well documented at Wonk Bridge.

Astroturfing — the sharp-end of Fake News and how it cuts through a House-Divided

Social Media and the Syndication of the ‘Friend’

A new way of covering Tech

With our most recent initiative, Wonk World, we seek to avoid falling into the trap of overdiscussing and overusing the same Tech stories, told through and about the same territories, as representations of Tech as a whole. We aim to shed light into the creative and exciting rest-of-the-world.

We will be reaching out to technologists and digital humanists located far beyond Tech journalism’s traditional hunting grounds: Israel, China, Costa Rica. We will be following young Founders’ progress through the gruelling process of entrepreneurship in our Portraits of Young Founders series. Finally, we are looking for ways to break out of our collective echo-chambers and bring new perspectives into the Wonk Bridge community, so diversity of region as well as vision will constitute one of Wonk Bridge’s credos.

So join us, wherever you are and however you are, beyond the four walls of your devices and into the unexplored regions of the world and dimensions of the mind to see technology as Wonk Bridge sees it: the greatest story of humankind.


Just do nothing: An inconvenient digital truth

As addictive and stimulating technology proliferates across society, we are losing our most ancient and coveted ability. Join us as we explore the loss of our ability to do nothing and how stand-up comedians have become the unlikely torch bearer of an inconvenient digital truth.


Have you ever tried sitting in a room and doing nothing? And when I say nothing, I mean absolutely nothing. Chances are you won’t last very long, and that’s mainly because the human brain has a ferocious appetite for information stimuli. It’s why meditation is so hard and yet advocated by so many. Fundamentally, we aren’t very good at quieting our brains, and the past decade of technological advancement has been anything but helpful.

According to the basic fundamentals of human computer interaction (HCI), there are three main ways or modalities by which we interact with computers:

Visual (Poses, graphics, text, UI, screens, and animations)

Auditory (Music, tones, sound effects, voice)

Physical (Hardware, buttons, haptics, real objects)

Regardless of what computer type you are using — whether it’s a smart phone or laptop — physical inputs and audio/visual outputs dominate HCI. Indeed, these forms of interaction and feedback are the very foundation of how humans have developed computers to function alongside them.

Now take into account another fundamental theme of HCI development: every successful iteration of technology shares a main defining principle, namely that people who use technology want to capture their ideas more quickly and more accurately. Keep this in mind for later.

Whether it’s Joseph Jacquard, whose programmable weaving loom was used in 1839 to weave a portrait of him from 24,000 punched cards, or the WWII military agencies that invested in the development of the first ‘monitor’ to allow radar operators to plot aircraft movement, the development and evolution of technology is largely driven by this theme of speed and accuracy.

Portrait of Joseph Jacquard next to his iconic weaving loom computer

So let’s go back to my introduction: doing nothing. Like I said, it’s really hard, but my hypothesis is that it’s much harder than it used to be. In one of my past articles on human brain development, I explored the idea that the modern brain is not so different from how it was 2,000 years ago. In other words, there simply hasn’t been enough time for evolution to weed out certain mutations in our brain genealogy. Therefore, how we develop as individual, functioning persons is just as much nurture as it is nature.

Now, I’m aware that my argument will be formed by a series of ‘sweeping and shallow statements’, but I’d like you to picture what most societies of both the past and present would have done when confronted with the reality of doing nothing. Whether it’s a pilgrim town in colonial Virginia during the 1600s preparing for the harsh winter, or a small present-day Tibetan village nestled in the Himalayan mountains going about its usual day, both isolated societies, if not for the menial tasks of survival and hardship, are generally confronted with the reality of doing nothing on a daily basis.

Children playing with toys during 17th Century Colonial America

For the children of both these societies, once most chores were done, they would generally be allowed to go out and play. In doing so, they had to quickly confront the idea of doing nothing. Sure, they had games, they had toys, but the realm in which these pastimes reside is largely the imagination. In fact, for the vast majority of mammalian species, playing with others is an essential form of growth and development.

Today, we are plagued by bright screens, sharp sounds, and intruding notifications. From the very first pager beep, to the early-2000s MSN Messenger nudge (I can still hear that sound in my head), to the evolution and proliferation of the Facebook notification beep, we have slowly grown accustomed to being alerted by our technology. Most notable is the proliferation of the newsfeed, which has largely evolved to lure us into a web-like slot machine of personalized and attention-grabbing media.

MSN Messenger user interface with symbolic ‘knock’ nudge

If you reflect back on your historic usage of Facebook, it generally follows this path: status text (2007) → photo post (2010) → video/story stream (2012). Remember that earlier theme and its three modalities? I believe it has dominated the evolution and usage of our most prolific technologies, especially when it comes to sharing aspects of ourselves and others across our various digital networks.

Moreover, this digital game of carrot and stick has been greatly exacerbated by how quickly modern society has shifted its fundamental functions onto the current dominant technology. From how we consume our news, to how we monitor our work, and even how we order food, every function is now app-based and, by extension, notification-based. The consequence is that we are quickly being trained to look to our phones to understand our lives.


This is not to say that all of this is bad. As I’m sure many of you reading this are thinking, social media networks can be a great source of social good. Even a company with as bad a reputation as Facebook does not deserve all the ‘shtick’ (for lack of a better word) it gets. Thanks to WhatsApp and Messenger, you can communicate with your friends and loved ones no matter where you are. Google helps you gain knowledge and explore your interests by allowing you to quickly scan the web and find the information you are looking for. Did I mention this is all for free?

But there is an inherent danger when we grow too dependent on a certain technology. Texting is great but have you tried actively listening to a conversation? Google searching is fantastic but have you tried reading a book from start to finish? Indeed, most of us joke about our dwindling attention spans but I fear none of us take it very seriously.

If our attention is to be monetized for ads by Silicon Valley, we need to also start seeing it as our currency for how we learn and grow as individuals. The less attention we are willing to give, the less personal development we will get in return. From clickbait journalism to the inherent shallowness and distraction of social media, the examples for this argument are numerous and worrying.


I believe I can speak for most Generation Zers when I say we were lucky to have barely avoided the advent of social media while growing up. By and large, as children, we were forced to confront the same idea of doing nothing as most other past societies. We had to use our imaginations and our social skills to play by ourselves and with others. Of course, critics will say we had game consoles like the PlayStation and cable television like Comcast, but it wasn’t as enslaving. Today, video platforms like Netflix let you binge, game developers like Electronic Arts let you win, and social media companies like Facebook steal your time.

Even gaming, which I believe represents superior elements of story-telling and cooperative strategy, has been tweaked for profit by executives and developers to be addictive. Once upon a time, triple-A video games were simply great for their one- or two-player story modes. Like opening a book, you could dive into a world, play, learn, and explore, but there was no mechanism to constantly lure you back besides the gameplay itself. It was just as easy to stop as it was to begin. Today, you have loot boxes and pay-to-win features which aren’t truly about the game. They’re about hooking you emotionally and getting you to pay more money.

Screenshot of mobile game Jam City.

Yet I digress, because this article isn’t about the exploitation of gaming as a medium, or even about how most platforms today function as social slot machines. No, this article is about how many of us are slowly becoming incapable of doing nothing. It is about how we are slowly but surely being re-wired by tech companies whose bottom line is not to make you a better or more informed person, but to keep you glued to a screen and push advertisements and paid services.

There's a quote from a 2001 stand-up act by the late, great comedian George Carlin that I believe really drives this point home. Although he is speaking about the proliferation of overbearing parents, the same logic can be applied to this discussion. A quick disclaimer: there is profanity in the video, but as you already know, he's a comedian.

“You know [talking about overly concerned parents organizing playtime], something that should be spontaneous and free is now being rigidly planned. When does a kid ever get to sit in the yard with a stick anymore? You know, just sit there with a ******* stick. Do today’s kids even know what a stick is?”

— George Carlin

The idea of children no longer being taught or given the opportunity to simply sit in the yard with a stick is humorously worrying. Whether it's hypervigilant parents who coddle them for their safety or frustrated parents who shove a screen in their faces to keep them from being annoying, children today are the victims of society's rush toward quicker and more accurate technology. Although the theme of speed and accuracy has served us well, skyrocketing productivity from the punch card, to the mainframe, to the PC, and now to the smartphone, I believe there is an inherent danger in this chase.

I am not writing this article to offer solutions; that was not my primary intent when I set out to write it. I'm not here to tell you to meditate, to stop using social media, or even to limit the use of your phone. Moreover, I am conscious enough to realise that much of what I'm saying is rooted in personal hypocrisy, because I am just as much a slave to my inability to do nothing as most of you are.

But if there is one message I’d like to get across, it’s that we should embrace the nothing. The idea that maybe we don’t need to be stimulated by our looping relationship with the physical, visual, and audio modalities of modern technology. You can silence your phone and put it in the other room. You can sit in a train and not scroll through a newsfeed. You can stare at a wall and do nothing.

Because if you force your brain to be quiet, you’d be surprised how much it will start saying.

“The thing is, you need to build an ability to be yourself and just not be doing something. The ability to just sit there and be a person. Underneath everything in your life, there is that thing, that forever empty. That knowledge that it’s all for nothing and you’re alone…”

“And sometimes when things clear away and you’re in your car, and you start feeling it — this sadness; life is tremendously sad, just by being in it — that’s why we text and drive. People are willing to risk taking a life and ruining their own because they are unwilling to be alone for a second.”

— Louis C.K.


When the tool uses you: How immersive tech could exploit our illusion of control

From the fax machine making information vulnerable to loss and theft to the internet making malware easy for susceptible users to download, malicious actors have always found a way to exploit our naivety about new technology. What dangers should users, businesses, and governments expect from immersive technology?

You're sitting in a virtual meeting room. Although the marble walls and mahogany table encompassing the space appear vectored and block-like, you feel oddly at ease. As you look around the room, everything feels intuitively wrapped around your eyes. You're surprised by how fluid your hands feel as you gesticulate to a nearby avatar. Hovering between the two of you is a larger-than-life three-dimensional model of your proposed project.

Snap back to the reality of your boring home office. You're actually on Zoom. Your computer monitor is bright, but the glare from the nearby window hurts your eyes. The video-chat interface is cluttered with tiny webcams talking over one another. You're connected to the internet but you feel disconnected from your team, and although you may not see it now, you are living on the verge of a paradigm shift.

The immersive paradigm shift is a moment in time when the line between what we perceive as 'real' and what is not will blur indefinitely. This is a world where cameras are programmed to defy reality, bodies swing and walk into nothing, and eyes become sentient portals to a collective imagination.

If you haven't guessed it by now, I'm of course talking about the trifecta of incoming immersive technology, or rather the much-anticipated mass-market emergence of augmented reality (AR), virtual reality (VR), and mixed reality (MR). While the three somewhat differ from one another, they share one important aspect: each represents a new dimension of human-computer interaction (HCI).

That’s not to say this is our first rodeo. Over the past 25 years we’ve seen technology bring forth dramatic changes to the economic and social fabric of our society. From the internet powering our knowledge economy to mobile computing transforming how we communicate, these significant evolutions are judged not just by their technical sophistication but by their intrinsic ability to transform our lives.

Thanks to advances in computer vision — particularly in object sensing, gesture identification, and tracking — sensor fusion and artificial intelligence have furthered our interaction with computers, as well as machines' understanding of the real world.

Moreover, advances in 3D rendering, optics — such as projections, and holograms — and display technologies have made it possible to deliver highly realistic virtual reality experiences. As a result, immersive technologies can now allow us to interact with ourselves and machines in a completely different manner as we will no longer be confined to a 2D screen.

As scary as that may sound, governments and businesses need to be preparing for the various modalities that will be introduced by immersive tech across their products and processes. This moment in time is no different to the shift from fax to email or the introduction of the smartphone. Moving to VR and AR will simply be the next natural step in staying relevant and competitive.

So if immersive technologies are poised to profoundly change the way we work, live, learn, and play, what ramifications should we come to expect? As speech, spatial, and biometric data are fed into artificial intelligence, new questions will emerge over the extent of our virtual privacy and security. As technology becomes more comfortable and intuitive, we risk falling under the illusion of control.

Throughout the history of computing, every significant shift in modality has brought with it new and potentially destabilising threats. If we fail to ask the right questions, the problems we will experience adjusting to this new technology may be greater than those posed by the internet and mobile computing combined. Let’s explore some examples:

It’s no secret that augmented reality technologies, which overlay virtual content on users’ perceptions of the physical world, are now a commercial reality. Recent years saw the success of AR powered camera filters such as Instagram stories, with more immersive AR technologies such as head-mounted displays and automotive AR windshields now being shipped or in development.

With over 3.4 billion AR-capable devices expected to hit the market by 2022, augmented reality is predicted to make the earliest splash amongst consumers. We should expect wearables that let us navigate the real world through Google Maps, and camera applications that scan the relevant objects surrounding us in a grocery store. Anticipating and addressing the security, privacy, and safety issues behind AR is therefore becoming increasingly paramount.

Buggy or malicious AR apps on an individual user's device risk:

  • Recording privacy-sensitive information from the user's surroundings (e.g., productivity tools)
  • Leaking sensitive sensor data, such as images of faces or sensitive documents, to untrusted apps (e.g., Instagram and Snapchat)
  • Disrupting the user's view of the world, for example by occluding oncoming vehicles or pedestrians in the road (e.g., Google Maps)

Multi-user AR systems can experience:

  • Vandalism, such as this incident involving augmented reality art in Snapchat
  • Privacy risks that bystanders may face due to non-consensual recording by the devices of nearby AR users

For the most part, AR security research focuses exclusively on individual apps and use cases, mainly because many problems we have already experienced with internet and mobile computing are expected to cross over to the new AR medium.

For instance, when the App Store first launched, many iPhone apps were nefariously designed to siphon and package individual mobile data in the background. Security analysts expect similar issues to arise with AR; only this time it won't just be our location data they're after, but our more sensitive biometric data. More on that later.

Lastly, AR technologies may also be used by multiple users interacting with shared virtual content and/or in the same physical space. This includes virtual vandalism of shared virtual spaces and unauthorised access to group interactions. However, these risks have not yet been studied or addressed extensively. This will surely change as the technology hits the mainstream.

Jeff Koons’ augmented reality Snapchat artwork gets ‘vandalized’

Virtual reality is the use of computer technology to create a simulated virtual environment. As visual creatures, humanity has been dreaming of creating virtual environments since the inception of VR developmental research in the early 60s. At first, commercial uses were mainly in video games and advanced training simulations (NASA) but as the technology advanced, so did our potential for using it.

Since the 2012 launch of the Oculus Rift, digital tools for VR have slowly begun to emerge. From Facebook's all-in approach with the virtually collaborative, social-media-esque Horizon to Valve's newly released and highly praised virtual zombie game Half-Life: Alyx, there are plenty of examples today showing off the prowess of current-state VR. Indeed, with so many development and hardware companies competing for market share, it may feel like virtual reality has finally arrived.

However, as new tools and applications for virtual reality continue to develop, new questions are emerging over intellectual property rights. Since everything in virtual reality will be a rendered model of something, control over the aesthetics, feel, and look of a certain model may imply some form of ownership. This will become more pressing as the medium extends into other fields such as autonomous vehicles, e-commerce, and even medical procedures.

For instance, items bearing a brand, recognisably shaped cars, dangerous weapons, and iconic places have appeared in video games for years. A great example is Rockstar's Grand Theft Auto series, which has faced numerous IP battles after satirically recreating the cities and environments of Miami and Los Angeles.

Once we reach a form of 'near reality' within a game environment (one of higher fidelity than the current 2D experience), we should expect intellectual property issues in virtual reality to skyrocket. For instance, a printed image of a painting from Google Images is much less of an IP issue than a virtual high-quality model of the same painting within a future virtual space.

Couple this with the fact that the depth sensors in our phones are increasingly capable of scanning real-life objects and modelling them in real time, and in the future anyone will be able to create a virtual model of anything and place it virtually anywhere.

Intellectual property predictions to expect:

  • IP protection of places and buildings is a growing trend, with EU lawmakers continuing to debate whether built structures that are open to the public should have rights attached to them. This debate is known as "freedom of panorama".
  • IP protection of experiences such as touching or smelling a particular store, airline or hotel chain is possible with haptic virtual technologies. Although it is difficult to justify protecting an aesthetic today (only Apple has managed with its store layout), this may be more relevant in the future with VR.
  • Featuring a branded item or a well-known person is currently seen as a potential intellectual property infringement. How will this change if it is the player who is inserting self-scanned models rather than the game developer? Who is going to be liable?

This last point is most interesting because it asks whether the platform or developer is liable even though it is the user who places IP-protected models into the virtual environment. It is a crossover similar to the early days of peer-to-peer technologies such as Napster and LimeWire, when users uploaded IP-protected MP3 and video files to shared servers.

In the future, VR should expect IP problems similar to those we see today. Faster computers and smarter artificial intelligence will let users and developers upload virtual objects with unprecedented ease. Add to this the idea that virtual reality will someday be as realistic as real life, and we'll have an interesting problem on our hands.

Unlike virtual reality which immerses the end user in a completely digital environment, or augmented reality which layers digital content on top of a physical environment, mixed reality (MR) occupies a sweet spot between the two by blending digital and real world settings into one.

When it comes to mixed reality, biometric and environmental data is an essential yet consequential by-product of sensory technology. This is mainly because developers need access to data to tweak specific functionalities and perfect the comfort and usability behind an immersive tool.

Thus, as immersive tools enter our homes, we are at risk of digitalising and exposing our most personal information. The potential by-product of these applications siphoning biometric data is fundamentally tied to our security and privacy. Nobody at first knew how much user data the mobile phone was collecting through our apps. Why shouldn't we expect the same with immersive devices?

The data collected will someday include:

  • Fingerprints
  • Voiceprints
  • Hand & face geometry
  • Electrical muscle activity
  • Heart-rate
  • Skin response
  • Eye movement detection
  • Pupil dilation
  • Head position
  • Hand pose recognition
  • Emotion registry
  • Unconscious limb movement tracking

At its core, there is nothing more sensitive and unique than an individual’s biometric data. For instance, heart rate, skin response, and eye movement within a controlled virtual environment can be collected to potentially analyse an individual’s reaction to a virtual advertisement. Thus, a feeling that is meant to be reserved for your own inner-self can someday be downloaded and scrutinised by external corporate entities.

Additionally, it's important to mention that biometric data is defined under Article 4(14) of the GDPR, and its unauthorised processing is prohibited. Despite this, questions about the potential consequences of this data being mis-collected or misused remain highly relevant. Advertising will be the first to enter this space, but expect greater consequences with the continued advent of the surveillance nation state.

Every major modality shift in technology has brought with it new threats and dangers. From the fax machine making information vulnerable to loss and theft to the internet making malware easy for susceptible users to download, malicious actors will always find a way to exploit our naivety and ignorance.

As users and consumers of digital technology, we need to be aware of the privacy risks involved when hooking ourselves up to sensor-laden devices. Virtual reality can be really useful and fun, but make sure your biometric data isn't getting funneled to a third party.

In the business world, immersive technology will force many companies to rethink their internal and external processes — due diligence and hiring the right people will be important. So will taking the necessary steps toward protecting your IP and making sure your virtual products can't be hacked or 'vandalised'.

Lastly, governments and public institutions need to prove to the public that they can preempt the various threats immersive tech will bring to business, social well-being, and user privacy. So far, legislators and tech companies have been playing a game of cat and mouse. As we move forward, a firm hand and some much needed transparency will be key.

The future of technology will no doubt be impressive. Someday we will look up to the skies to access our information instead of down to our phones. Yet, the warning here is that comfort is never bliss. Where there is comfort, there is an opportunity for naivety and exploitation. As we gear up for the immersive paradigm shift, please remember to stay informed.


The Neurological Conditioning of Sound

The greatest weapon in a sound designer’s arsenal is the mere fact that we listen first and react second. Join us as we briefly explore the neurological, anthropological, and digital histories behind how we interpret sound and why not everything you hear should sound like the truth.

Remember that time when you were alone in a quiet house for the first time and heard a creepy sound? Maybe it was a windy day and the floor creaked and the windows rattled. That sound was clearly the logical result of wind pushing on a creaky wooden structure, yet its auditory impact is interpreted by the hypothalamus (a small but very important part of your brain that regulates fight or flight) as a threat.

Your thoughts quickly flow into scenarios: is it a ghost? Or perhaps a robber? For the first five seconds these possibilities are all you might consider. They dominate your imagination and thought processing, until the rational side of your brain — granted some time has passed without other similarly scary sounds — convinces you that the sound is nothing to be afraid of. But part of you still believes that, during those first five seconds, you actually saw, or at least heard, a spooky ghost making that sound.

If you are unlucky enough to believe you have witnessed paranormal activity, you can consider yourself conditioned. In humans, conditioning is part of a behavioural repertoire of intelligent survival mechanisms supported by our neurobiological system. These underlying mechanisms promote adaptation to changing ecologies and efficient navigation of natural dangers. In this case, you have been conditioned to be aware of a sound attached to a particular danger.

Conditioning is a big reason why our brains don't like to be surprised. Under what researchers call the Survival Optimization System (SOS), our response to most danger usually begins with a sound. As far as the human experience goes, we hear faster than we see: the brain processes auditory input more quickly than visual input, so sound gets into the ear fast enough to modify all other input and set the stage for it.

Tree graph provided by The ecology of human fear: survival optimisation and the nervous system.

We hear first and listen second because, in this Darwinian struggle we call life, it's considerably faster and more effective for our brains to react to the possibility of a threat than to wait to validate it. Thus, the by-product of a ghostly trauma is a deep mechanistic rewiring of our neurobiological system to that specific occurrence of sound. So, for at least the near future, any sound you hear alone in a quiet house will trigger your brain's survival mechanisms to react fearfully to the potential presence of a spooky ghost.

Yet, conditioning doesn’t only happen with things that scare us. As we said before, conditioning is a natural process the brain undergoes when faced with repetitive sensory information. It is a software-like response that codes a defence mechanism into our subconscious reactions.

In psychology, sound conditioning is defined as:
A process in which a stimulus that was previously neutral, comes to evoke a particular response, by being repeatedly paired with another stimulus that normally evokes the response.

A classic example of sound conditioning is Pavlov's experiment, which sought to establish whether salivation in dogs could be triggered by pairing it with the sound of a bell. Every time Pavlov rang the bell, he would feed the dogs. After repeating this pairing, the dogs developed a conditioned response: when Pavlov rang the bell without presenting food, they salivated anyway.
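
The learning curve behind this kind of pairing is often modelled with the Rescorla-Wagner rule, in which each bell-food pairing nudges the associative strength toward a ceiling in proportion to the remaining prediction error. Here is a minimal sketch; the learning rate and ceiling values are illustrative assumptions, not measured parameters:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength after each bell-food pairing.

    alpha: learning rate (illustrative), lam: the maximum strength
    the unconditioned stimulus (food) can support.
    """
    strength, history = 0.0, []
    for _ in range(trials):
        # Learning is driven by the prediction error (lam - strength):
        # early pairings teach a lot, later ones add little.
        strength += alpha * (lam - strength)
        history.append(strength)
    return history

curve = rescorla_wagner(10)
```

The curve rises steeply at first and then plateaus, mirroring how the dogs' salivation response stops strengthening once the bell fully predicts the food.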

What the Pavlov experiment demonstrated is that most intelligent animals, including humans, are capable of developing a conditioned response to a conditioned stimulus given sensory repetition. It's a big reason why people listen for cars before crossing the road, why particular songs make us remember the past, and, in a more humorous sense, why children run after the ice cream truck.

Throughout human history we have devised alarms that alert us to small and large dangers. As humans gathered into larger groups and more permanent settlements, we artificially conditioned ourselves to respond to alarms that would warn us of incoming danger of all kinds. From early fire alarms alerting a hundred people that a building is on fire, to tsunami alarm systems alerting millions to get to high ground, the story of alarms is largely the story of civilisation.

We hear dozens of conditioned alarms without even realising it: car horns, police sirens, school bells, and, most pertinently, the digital songs, sounds, and notifications of our everyday consumer technology.

Today, most people wake up and listen to sounds they have been conditioned to hear. For instance, some may incorporate a specific upbeat song into a wake-up or workout routine. Others may play particular songs associated with the memories of romantic thoughts and relationships. Even outdoor concerts and music venues can function as places for establishing and communicating tribal signatures such as identity and mating readiness.

A popular, catchy summer song (Daft Punk’s Get Lucky comes to mind for 2013) may define an entire summer, not just in one country, but around the world. Together these songs — whether associated most to a workout, romantic date, or summer party — represent rituals of emotional outputs or certain moods.

It's no secret that sound designers today take great interest in our emotional conditioning to particular and ubiquitous sounds. In fact, how past societies interpreted sound is a large part of a sound designer's inspiration when learning how to select the right ding for your app.

For instance, the talking drum of West Africa is an interesting and unusual example. The drum was specifically designed to make a variety of sounds that emulate human speech, giving it a basic but intuitive beat-like vocabulary. This made the drum an effective signalling device for long-distance communication between remote African villages.

Upon hearing a beat, faraway drummers would pass on the beat-like message, similar to how a torch runner passes a flame to another torch. Perfectly sophisticated, too; only weeks after Abraham Lincoln's death, news of the tragedy and its complex implications had penetrated the African interior on the drum.

From an anthropological and sound design perspective, the drum of West Africa was far beyond any other audible communication device of its day. By communicating a wide variety of messages based on rhythm, tone, and strength, its sound was designed perfectly for its purpose. It was an ideal, early, and elegant solution to a common communication problem between villages. Three strong beats, for example, meant an attack was coming, conditioning other villages to prepare to mobilize together and defend an alliance.

Today sound design has transitioned greatly in its effort to convey messages. We have gone from the drawn out drum beats of the Savannah, to the binary pings of morse code, to the monotonous buzzes of the pager, and now to the myriad of pinging sounds from our smartphones.

The outcome is that we have become conditioned to the smartphone the same way we are conditioned to a fire alarm or West African drum beat. For some, the ding of a social media message can bring forth excitement, butterflies to the stomach, or even a sigh of relief.

The video above contains a recording of an all too familiar sound in our current pandemic times: the Zoom incoming call ringtone. The deliberate alternation of high- and medium-pitched notes resembles a non-threatening plea for attention, which, repeated, can quickly turn into an aggressively annoying noise that must be addressed. Sounds are a major tool in the software and hardware developer's arsenal for ushering in the emotional reactions they intend from the user. We respond instinctively to natural sounds, which can trigger any set of emotions. We also respond instinctively to artificial sounds, which are most effective when they mimic sounds we are already conditioned to.
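
To see how little machinery such a ringtone needs, here is a minimal sketch that synthesizes an alternating two-pitch signal as raw audio samples. The frequencies and the quarter-second switching interval are assumptions for illustration, not Zoom's actual tones:

```python
import math

SAMPLE_RATE = 8000  # samples per second

def two_tone_ring(high_hz=880.0, mid_hz=440.0, seconds=1.0):
    """Alternate between a high and a medium pitch every 0.25 s."""
    samples = []
    for n in range(int(seconds * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        # Switch pitch every quarter second: high, medium, high, ...
        freq = high_hz if int(t / 0.25) % 2 == 0 else mid_hz
        samples.append(math.sin(2 * math.pi * freq * t))
    return samples

ring = two_tone_ring()
```

Written to a WAV file and looped, even this crude alternation produces the insistent, pleading quality that calling tones exploit.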

Nowhere is emotional conditioning to sound more prevalent than in our current and historic use of social media. Take, for example, the now-retired Facebook Messenger notification. For some, hearing that sound will transport them back to 2013. Perhaps they will associate it with a lost love, creating a neurological output of emotional pain. For most of us today, however, the ding of a social media app gears the brain to expect some form of social gratification.

Indeed, before even glancing at our screens to see who liked our last photo or sent us a message, we begin to imagine the realm of possibilities for who may be trying to contact us. Is it a crush? Is it a friend inviting you to that party you wanted to go to? With every chat comes an expectation, and the stronger that expectation is emotionally, the stronger you will be conditioned emotionally to that sound.

When distinct and repeated sensory stimuli, like UI sounds, are paired with feelings, moods, and memories, our brains build bridges between the two.

– Rachel Kraus

As devices, software applications, and apps become omnipresent, the User Interface (UI) sounds they emit — the pings, bings, and bongs vying for our attention — have also started to contribute to the sonic fabric of our lives. And just as a song has the power to take you back to a particular moment in time, the sounds emitted by our connected devices can trigger memories, thoughts, and feelings, too.

A word of advice from someone who has felt the anxiety of a message tone and the sadness of an old song: I believe all of us should be more aware of how digital sounds can be tuned to condition our emotional lives. Like Pavlov's dogs, we are being conditioned by Silicon Valley to expect social gratification from the various dings and boops of our devices. We need to learn to expect these feelings from the outside world, not from the digital world inside our pockets.

Only then can we begin to clean up the noise and listen to the music.

“Who we are is not just the neurons we have,” Santiago Jaramillo, a University of Oregon neuroscientist who studies sound and the brain, said, referring to cells that transmit information. “It’s how they are connected.”


A Short Introduction to the Mechanics of Bad Faith

A Background to the Mechanics
Six Policies to Encourage Good Faith
What Is Bad Faith?
The Trouble with Calling out Bad Faith
Game Theory, Mutually Assured Destruction and Technology
Game Theory

Daniel Mróz
Mutually Assured Destruction
Technological Esperanto

Jon Postel/Wikipedia

Six Proposals for the Promotion of Good Faith

Reducing False Positives
1. All fora should publish lists of logical fallacies for people to avoid.
2. All fora should publish lists of philosophy smells or philosophical anti-patterns.
Making It Easier and More Productive to Handle Bad Faith
3. Claims of bad faith should be recorded diligently.
4. All claims of bad faith should be falsifiable.

Hamilton and Burr dueling/Wikipedia
Incentivizing Good Faith and Disincentivizing Frivolous Accusations
5. All claims of bad faith should be reciprocal.
6. Good Faith Bonds

Daniel Mróz
Finding the Best of Humanity Expressed by Computers

* I have not found reference to the philosophy smell idea anywhere, so I believe it to be original. I will, of course, hand it to its rightful owner if corrected.