Categories
Editorial

Internet Walden: Introduction—Why We Should Be Free Online

Anyone who cares about human flourishing should concern themselves with Internet freedom.

Image credit: Marta de la Figuera

Goodness, truth, beauty. These are not common terms to encounter during a discussion of the Internet or computers; for the most part, the normal model seems to be that people can do good or bad things online, but the Internet is just technology.

This approach, I think, is one of the gravest mistakes of our age: thinking or acting as though technology is separate from fields like philosophy or literature, and/or that criticisms from other fields are either irrelevant or at best secondary to technicalities. This publication, serving the Digital Humanities, is part of a much-needed correction.

I say that technology can be just or unjust in the same sense that a law can: an unjust law doesn’t possess the sort of ethical failing available only to a sentient being; rather, it has an ethical character (such as fairness or unfairness), as does the action that it encourages in us. We should accept the burden of seeing these qualities as ways: towards or away from goodness, truth and beauty.

Such a way is akin to a method or a path, such as meditation or the practice of empathy: it’s not necessarily virtuous in itself, but the idea is that with consistent application one develops one’s virtue, or undermines it. My claim is that this is especially true for the ways in which we use technology, both as individuals and collectively.

In Computer Lib, Ted Nelson describes “the mask of technology,” which serves to hide the real intentions of computer people (technicians, programmers, managers, etc.) behind pretend technical considerations. (“We can’t do it that way, the computer won’t allow it.”) There’s another mask that works in the opposite way: the mask of technological ignorance. We wear it either to avoid facing difficult ethical questions about our systems (hiding behind the fact that we don’t understand them) or as an excuse when we offload responsibilities onto others.

This essay concerns itself primarily with three ways: the secondary characteristics that lend themselves to our pursuit of goodness, truth and beauty, specifically in the technology of communication. They are freedom, interoperability, and ideisomorphism; the latter is a concept which I haven’t heard defined before, but which can be summarized thus: the quality of systems which are both flexible enough to express the complexity and nuance of human thought and which have features that lend themselves to the shape of our cognition. (Ide, as in idea; iso, as in equal to; morph, as in shape.)

We should care about freedom, because we require it to build and experiment with systems in pursuit of the good; interoperability, because it forces us to formulate the truth in its purest form and allows us to communicate it; ideisomorphism, because it allows us to combine maximal taste and creativity with minimal technological impositions and restrictions in our pursuit of beauty. For details on these ways, please read on.

I won’t claim that this is a complete treatment of the ethical character of machines; my subject is machines for communication, and the best I can hope for is to start well.

In short, bad communications technology causes and covers up ethical failures. Take off the mask. We have nothing to lose but convenient excuses, and stand to gain, firstly, tools that act as force-multipliers for our best qualities; secondly, some of the ethical clarity that comes from freedom and diverse conversation; and, if nothing else, a better understanding of ourselves.

Oliver Meredith Cox, January 28th, 2021

An Introduction to Walden: Life on the Internet

I argue that anyone who cares about human flourishing should concern themselves with Internet freedom, interoperability and ideisomorphism; I make this claim because the ethical character of the Internet appears to be the issue which contains or casts its shadow over the greatest number of other meaningful issues, and because important facts about the nature of the Internet are ways to inculcate our highest values in ourselves.

The Internet offers us the opportunity to shed the mask of technological ignorance: by understanding its proper function we should know how to spot the lies, and how to use the technology freely. We might then transform it into the retina and brain of an ideisomorphic system that molds to and enhances, rather than constricting, our cognition.

As such, with respect to the Internet, I say that we should:

  1. Learn/understand the tools and their nature.
  2. Use, build or demand tools that lend themselves to being learned.
  3. Use, build or demand tools that promote the scale, nuance and synchrony of human imagination, alongside those that nurture people’s capacity to communicate.

This piece is one part of a two-part introduction to the series; the two parts are equal in importance, and may be read in either order.

The other (What Is the Internet?) is an introduction to the technology of the Internet itself, so whenever questions come up about such things or if anything is not clear, consider either referring to that piece or reading it first; specifically, one of the reasons why I think we should turn our attention very definitely to the Internet is the fact that most people know so little about it, categorize it incorrectly and mistake it for other things (many confuse the Internet and the Web, for example).

Prospectus

  • Part 1: Introduction
    • Why We Should Be Free Online: (this article) in which I explain why you should care about Internet freedom.
    • What Is the Internet: An explanation of what the Internet is (it probably isn’t what you think it is).
  • Part 2: Diary
    • Hypertext (one of an indefinite number of articles on the most popular and important Internet technologies)
    • Email
    • Cryptocurrency
    • Your article here: want to write an article for this series? Reach out: oliver dot cox at wonk bridge dot com.
  • Part 3: Conclusion
    • What Should We Create? A manifesto for what new technology we should create for the purpose of communicating with each other.

Call to Action

Do you care about Internet freedom and ethics? Do you want to take an Internet technology, master it and use it on your own terms? Do you want to write about it? Reach out: oliver dot cox at wonk bridge dot com.

A Note on Naming

For a full discussion on naming conventions, please see this piece’s companion, What Is the Internet?. However, I must clarify something up front. Hereafter, I will use a new term: computer network of arbitrary scale (CNAS [seenas]), which refers to a network of computers and/or the technology used to facilitate it, which can achieve arbitrarily large scale.

I use this term to distinguish between 1. the Internet in the sense of a singular brand, and 2. the class of networks of which the Internet is one example. The Internet is our name for the network running on a particular set of protocols (TCP/IP); it is a CNAS, and today it is the only CNAS. Imagine if a single brand so dominated an industry (as if Ford sold 99.9 percent of cars) that there was no word for “car” (you would just say “Ford”), and you could hardly imagine the idea of there being another company. But I predict that soon there will be more, and that they will be organized in different ways and run on different protocols.

Why the Internet Matters

First: the question of importance. Why do I think that the Internet matters relative to any other question of freedom that one might have? I know very well that many people with strong opinions think their subject the most important of all: politics, art, culture, literature, sport, cuisine, technology, engineering; if you care about something, it is likely that you think others should care, too. I know that there isn’t much more that I can do than to take a number, stand in line, and make my case that I should get an audience with you.

Here’s why I think you should care:

1. The Internet is engulfing everything we care about.

I won’t bore you with froth on the number of connected devices around or how everyone has a smartphone now; rather, numerous functions and technologies that were separate and many of which preceded the Internet are being replaced by it, taking place on it, or merging with it: telephony, mail, publishing, science, the coordination of people, commerce.

2. The Internet is the main home of both speech and information retrieval.

This is arguably part of the first point, but I think it deserves its own column inches: most speech and information exchange now happens online, most legacy channels (such as radio) are partly transmitted over the Internet, and even those media that are farthest from the digital (perhaps print) are organized using the Internet. At risk of over-reaching, I say that the question of free speech in general is swiftly becoming chiefly a question of free speech online. Or, conversely, that offline free speech is relevant to the extent that online speech isn’t free.

3. The Internet is high in virtuality.

When I claim above that this is the issue of all issues, someone might respond, “What, is it more important than food?” That is a strong point, and I am extremely radical when it comes to food: I think that people should understand what they eat, know what’s in it, and hold food corporations to account, and that to the extent that we don’t know how to cook or work with food, we will always be victim to people who want to control or fleece us. However, the Internet and cuisine are almost as far apart on the scale of virtuality as it is possible to be.

Virtuality, as defined by Ted Nelson, describes how something can seem or feel a certain way, as opposed to how it actually is, physically. For example, a ladder has no virtuality: (usually) the way it looks and how we engage with it corresponds 100% to the arrangement of its parts. A building, on the other hand, has much more virtuality: the lines and shape of a building give it a mood and feel, beyond the mere structure of the bricks, cement and glass.

Food has almost no virtuality (apart from cuisine with immense artifice); the Internet, however, has almost total virtuality: the things that we do with it (the Web, email, cryptocurrency) have realities on the screen and in our imagination that are almost limitless, and the only physical things that we typically notice are the “router” box in our home, the Wifi symbol on our device, the engineer in their truck and, of course, the bill. This immense virtuality is what makes the Internet so profound, but also so dangerous: there are things going on beneath the virtual that threaten our rights. You are free to the extent that you understand and control these things.

Ted Nelson explains virtuality during his TED conference speech (start at 31:16):

4. The Internet has lots of technicalities, and the technicalities have bad branding.

All this stuff (TCP/IP, DNS, DHCP, the barrage of initialisms) is hard to master and confusing, especially for non-engineers. I’m sorry, but I think we should all learn it, or at least some of it. Not understanding something gives organizations with a growth obligation perhaps the best opportunity to extract profit or freedom from you.

5. The Internet is the best example that humanity has created of an open, interoperable system to connect people.

It is our first CNAS. As fish with water, it is easy to forget what we have achieved in the form of the Internet: it connects people of all cultures and religions, and nationalities (those that are excluded are usually so because of who governs them, not who they are), it works on practically all modern operating systems, it brings truths about the universe to those in authoritarian countries or oppressive cultures, and connects the breadth of human thinkers together.

To see the profundity of this achievement, remember that, today, many Mac- and Windows-formatted disks are incompatible with the other system, and that computer firms still attempt to lock their customers into using their systems by trapping them in formats and ways of thinking that don’t work with other systems, or that, even, culturally, some people refuse to use other systems or won’t permit other systems in their corporate, university or other departments.

Mac, Windows, GNU/Linux, Unix, BSD, Plan 9, you name it: it will be able to connect to the Internet; the Internet is the best example of a system that can bridge types of technology and people. Imagine separate and incompatible websites, only for users of particular systems: this was an entirely possible outcome, and we’re lucky it didn’t happen a lot more than the little it did (see Flash). The Internet, despite its failures and limitations, massively outclasses other technology on a regular basis, and is therefore something of a magnetic North, pulling worse, incompatible and closed systems along with it.

6. The Internet is part of a technological feedback loop.

As I mentioned in point 2. above, the Internet is now the main way in which we store, access and present information; the way in which we structure and present information today influences what we want to pursue in the future, the ideas we have and what, ultimately, we build. The Internet hosts and influences an innovation cycle:

  1. Available storage and presentation systems influence how we think
  2. The way we think influences our ideas
  3. Our ideas influence the technology we build, which takes us back to the start

This means that bad, inflexible, closed systems will have a detrimental effect on future systems, while open, flexible systems will engender better ones. There is innovation, of course, but many design paradigms and ways of doing things get baked in, and sometimes are compounded. As such, I say that we ought to exert immense effort in creating virtuous Internet systems, such that these systems will compound into systems of even more virtue: much like how those who save a lot, wisely and early, are (allowing for market randomness, disaster and war) typically rewarded with comfort decades later.

Put briefly, the Internet combines the most integrating and connecting force in history with difficulty and virtuality, all working in a feedback loop; it is the best we have, it is under constant threat, and we need to take action now.

The rest of this introduction will speak to the following topics:

  • Six imperatives for communications freedom
  • What we risk losing if we don’t shape the Internet to our values
  • Why, ultimately, I’m optimistic about technology and particularly the technology of connection
  • Why this moment of crisis tells us that we are overdue for taking action to improve the Internet and make it freer
  • What we have to gain

Six Imperatives for Communications Freedom

The technology of communication should:

  1. Be free and open source.
  2. Be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information.
  3. Have open and logical interfaces, and be interoperable where possible.
  4. Help users to understand and master it.
  5. Let users communicate in any style or format.
  6. Help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

1. The technology of communication should be free and open source.

First: what is free software? A program is free if it allows users the following:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute and make copies so you can help your neighbour.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

You will, dear reader, detect that this use of the word free relates to freedom, not merely to something being provided free of charge. Open source, although almost synonymous, is a separate concept promoted by a different organization: the Open Source Initiative promotes open source, and the Free Software Foundation, free software.

A note, I think that we should obey a robustness principle when it comes to software and licenses: Be conservative with respect to the software you access (i.e. obey the law, respect trademarks, patents and copyright; pay what you owe for software, donate to and promote projects that give you stuff without charging a fee); be liberal with respect to the software you create (i.e. make it free and open source wherever and to the extent possible).

Fundamentally, the purpose of free software is to maximize freedom, not to impoverish software creators or get free stuff; any moral system based on free software must build effective mechanisms to give developers the handsome rewards they deserve.

To dig further into the concept and its sister concept, why do we say open source? The word “source” here refers to a program’s source code, instructions usually in “high-level” languages that allow programmers to write programs in terms that are more abstract and closer to the ways in which humans think, making programming more intuitive and faster. These programs are either compiled or interpreted into instructions in binary (the eponymous zeroes and ones) that a computer’s processor can understand directly.

Having just these binary instructions is much less useful than having the source, because the binary is very hard (perhaps impossible in some cases) for humans to understand. As such, what we call open source might be termed: software for which the highest level of abstraction of its workings is publicly available. Or: software that shows you how it does what it does.
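
A hedged illustration (using Python here, though the same distinction holds for compiled languages): we can watch a few readable lines of source turn into the lower-level instructions the machine actually runs, and see why having only the latter tells you very little:

```python
import dis

# High-level source: close to how a human thinks about the task.
def greet(name):
    return "Hello, " + name

# The interpreter compiles this source into bytecode: low-level
# instructions that are far harder for a person to read than the
# two lines of source above.
for instruction in dis.Bytecode(greet):
    print(instruction.opname)
```

Distributing a program without its source is roughly like distributing only this opcode listing: it runs, but its workings are opaque.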

Point 0. matters because the technology of communication is useful to the extent that we can use it: we shouldn’t use or create technology, for example, that makes it impossible to criticise the government or religion.

Of course, one might challenge this point, asking, for example, whether or why software shouldn’t include features that prevent us from breaking the law. I have ideas and opinions on this, but will save them for another time. Suffice to say that free software has an accompanying literature as diverse and exacting as the commentary on the free speech provision of the First Amendment: there is much debate about how exactly to interpret and apply these ideas, but that doesn’t stop them from being immensely useful.

Point 1. is extremely important for any software that concerns privacy, security or, for that matter, anything important. If you can’t inspect your software’s core nature, how can you see whether it contains functions that spy on you, provide illicit access to your computer, or bugs that its creators missed that will later provide unintentional access to hackers? See the WannaCry debacle for a recent example of a costly and disastrous vulnerability in proprietary software.

Point 2. matters for communications in that when software or parts of software can be copied and distributed freely, this maximises the number of people that have access and can, thus, communicate. It matters also in that if you can see how a system works, it’s much easier to create systems that can talk to it.

However, the “free” in free software is the cause of confusion, as it makes it sound like people that create free or open source software will or can never make money. This is a mistake worth correcting:

  1. Companies can and do charge for free software; for example, Red Hat charges for its GNU/Linux operating system distro, Red Hat Enterprise Linux. The fee gets you the operating system under the exclusive Red Hat trademark, plus support and training: the operating system itself is free software (you can read up on this firm to see that they really have made money).
  2. A good many programmers are sponsored to create free software by their employers; at one point, Microsoft developers were the biggest contributors to the Linux kernel (open source software like Linux is just too good to ignore).

Point 3. might be clearer with the help of a metaphor. Imagine if you bought a car, but, upon trying to fit a catalytic converter, or a more efficient engine, were informed that you were not permitted to do so, or even found devices that prevented you from modifying it. This is the state that one finds when trying to improve most proprietary software.

In essence, most things that make their way to us could be better, and in the realm of communication, surmounting limitations inherent in the means of communication opens new ways of expressing ourselves and connecting with others. Our minds and imaginations are constrained by the means of communication just as they are by language; the more freedom we have, the better. Look, for example, to the WordPress ecosystem and range of plugins to see what people will do given the ability to make things better.

There are names in tech that are well known among the public: most notably Bill Gates and Microsoft, and Steve Jobs and Apple; we teach school children about them, and rightly so: they and those like them have done a great deal for a great many. However, I argue that there are countless other names of a very different type whose stories you should know. Here are two: Jon Postel, a pioneer of the Internet who made the sort of lives we live now possible through immense wisdom and foresight (his brand: TCP/IP); and Linus Torvalds, who created the Linux kernel, which (usually installed as the core of the GNU operating system) powers all of the top supercomputers, most servers, most smartphones and a non-trivial share of personal computers.

Richard Dawkins has an equation to evaluate the value of a theory:

value = what it explains ÷ what it assumes

Here’s my formulation but for technology:

value = what it does ÷ the restrictions accompanying it

Such restrictions include proprietary data structures, non-interoperable interfaces, and anything else that might limit the imagination.

Gates and Jobs’ innovations are considerable, but almost all of them came with a set of restrictions that separate users from users and communities from communities. Postel and Torvalds, their collaborators, and others like them in other domains not mentioned, built and build systems that are open and interoperable, and that generate wealth for the whole world by sharing new instrumentality with everyone. All I’m saying is that we should celebrate this sort of innovator a lot more.

2. The technology of communication should be owned and controlled by the users, and should help the rightful entity, whether an individual, group or the collective, to maintain ownership over their information and their modes of organizing information

I will try to be brief with what risks being a sprawling point. In encounter after encounter, and interaction after interaction, we users sign away our ideas, identities, privacy and control over how we communicate to unaccountable corporations. This is a hazard because (confining ourselves only to social media and the Web) we might pour years of work into writing and building an audience, say, on Twitter, only to have everything taken away because we stored our speech on a medium that we didn’t own; a network like Twitter also represents a single choke-point for authoritarian regimes like the government of Turkey.

On a slightly subtler note, expressing our ideas via larger sites makes us dependent on them for the conversation around those ideas. Moreover, conversations should accompany the original material, not live on social profiles far from it, where they are sprayed into the dustbin by the endless stream of other content.

We the users should pay for our web-hosting and set up our own sites: we already have the technology necessary to do this. If you care about it, own it.

3. The technology of communication should have open and logical interfaces, and be interoperable where possible.

What is interoperability, the supposed North Star here? I think the best way to explain interoperability is to think of it as a step above compatibility. Compatibility means that something can work properly in connection with another thing, e.g. a given USB microphone is compatible with, say, a given machine running a particular version of Windows. Interoperability takes us a step further, requiring there to be some standard (usually agreed upon by invested organizations and companies) which 1. is publicly available and 2. as many relevant parties as possible agree to obey. USB is a great example: all devices carrying the USB logo will be able to interface with USB equipment; these devices are interoperable with respect to this standard.

There are two main types of interoperability: syntactic and semantic. The former refers to the ability of machines to transmit data effectively: this means that there has to be a standard for how information (like images, text, etc.) is encoded into a stream of data that you can transmit, say, down a telephone line. These days, much of this is handled without us noticing or caring. If you’d like to see this in action, right-click or ⌘-click on this page and select “View Page Source”: you ought to see a little piece of code that says charset="utf-8", which is the page announcing what encoding it is using. This page is interoperable with devices and software that can use the UTF-8 standard.
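
A minimal sketch of syntactic interoperability, using Python’s built-in UTF-8 support (the sample text is my own): two parties that agree on the standard can turn text into bytes for transmission and back again without loss:

```python
# Syntactic interoperability in miniature: sender and receiver agree
# that text is turned into bytes (and back) using the UTF-8 standard.
message = "Interopérabilité ↔ 相互運用性"

encoded = message.encode("utf-8")   # sender: text to bytes for the wire
decoded = encoded.decode("utf-8")   # receiver: bytes back to text

assert decoded == message           # nothing was lost in transit
```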

Semantic interoperability is much more interesting: it builds on syntactic interoperability and adds the ability to actually do work with the information in question. Your browser has this characteristic in that (hopefully) it can take the data that came down your Internet connection and use it to present a Webpage that looks the way it should.
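
To sketch the difference (a hypothetical exchange, with JSON standing in for any shared format): the syntactic layer gets the bytes across intact, while the semantic layer lets the receiver act on their meaning:

```python
import json

# Syntactic layer: both sides agree the bytes carry UTF-8 text.
raw = '{"title": "Walden", "year": 1854}'.encode("utf-8")

# Semantic layer: both sides also agree the text is JSON, so the
# receiver can act on the meaning, not merely decode the characters.
record = json.loads(raw.decode("utf-8"))
print(record["title"], record["year"])
```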

Sounds great, right? Well, people break interoperability all the time, for a variety of reasons:

  1. Sometimes there’s no need: One-off, test or private software projects usually don’t need to be interoperable.
  2. Interoperability is hard: The industry collaboration and consortia necessary to create interoperable standards require a great deal of effort and expense. These conversations can be dry and often acrimonious: we owe a great deal to those who have them on our behalf.
  3. Some organizations create non-interoperable systems for business reasons: For example, a company might create a piece of software that saves user files in a proprietary format so that users must keep using (and paying for) the company’s software to access their information.
  4. Innovation: New approaches eventually get too far from older technology to work together; sometimes this is a legitimate reason, sometimes it’s an excuse for reason 3.

Reason three is never an excuse for breaking interoperability, reason two is contingent, and reasons one and four are fine. In cases where it is just too hard or expensive to work up a common, open standard, creators can help by making interfaces that work logically and predictably and, if possible, documenting them: this way collaborators can at least learn how to build compatible systems.

4. The technology of communication should help users to understand and master it.

Mastery of something is a necessary condition for freedom from whatever force would control it. To the extent that you don’t know how to build a website, an operating system or a mail server, you are a captive audience for those who will offer to do it for you. There is nothing wrong with this, per se, but I argue that the norm should be that any system that makes these things easy should be pedagogical: it should act as a tutorial to get you at least to the stage of knowing what you don’t know, rather than keeping your custom through ignorance. We should profit through assisting users in the pursuit of excellence and mastery.

Meanwhile, remember virtuality: the faulty used car might have visible rust that scares you off, or might rattle on the way home, letting you know that it’s time to have a word with the salesperson. Software that abuses your privacy or exposes your data might do so for years without you realizing, all this stuff can happen in the background; software, therefore, should permit and encourage users to “pop the hood” and have a look around.

Users: understand your tools. Software creators: educate your users.

5. The technology of communication should let users communicate in any style or format.

Modern Internet communication systems, particularly the Web and to an extent email, beguile us with redundant and costly styling, user interfaces, images, etc. The most popular publishing platforms, website builders like WordPress and social media, force users either to adopt particular styling or to make premature or unnecessary choices in this regard. The medium is the message: forced styling changes the message; forced styling choices dilute the message.

6. The technology of communication should help users to work towards a system that facilitates the storage, transmission and presentation of both the totality of knowledge and of the ways in which it is organized.

This is a cry for action for the future: expect more at the end of this article series. Picture this: all humanity’s knowledge, artistic and other cultural creations, visibly sorted, thematically and conceptually, via sets, links and other connections, down to the smallest functional unit. This would allow any user, from a researcher to a student to someone who is curious to someone looking for entertainment, to see how any thing created by humanity relates to all other things. This system would get us a lot closer to ideisomedia. Let’s call it the Knowledge Explorer.

This is not the Web. The Web gave us the ability to publish easily and electronically, but because links on the Web point only one way, there can exist no full view of the way in which things are connected. Why? If you look at website x.com, you can quite easily see all the other websites to which it links: all you need to do is look at all the pages on that site and make a record.

But what if you asked what other websites link to x.com? The way the Web functions now, with links stored in the page and going one way, the only way to see what other websites link to a given site is to inspect every other site on the rest of the Web. This is why the closest things we have to an index of all links are expensive proprietary tools like Google and SEMRush. If links pointed both ways, seeing how things are connected would be trivial.
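
A toy sketch of the asymmetry (the sites and links here are made up): forward links are a single lookup in the page itself, while backlinks can only be recovered by scanning every page we know about:

```python
# Hypothetical crawl results: each site mapped to the links found on it.
pages = {
    "x.com": ["y.com", "z.com"],
    "y.com": ["x.com"],
    "z.com": ["x.com", "y.com"],
}

# Forward question ("what does x.com link to?") is one lookup.
outbound = pages["x.com"]

# Reverse question ("who links to x.com?") forces a scan of everything:
# this is, in miniature, the Web-scale work that crawlers must do.
backlinks = {}
for site, links in pages.items():
    for target in links:
        backlinks.setdefault(target, []).append(site)

print(sorted(backlinks["x.com"]))   # the sites pointing at x.com
```

With two-way links stored alongside the pages, the reverse question would be as cheap as the forward one.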

Jaron Lanier explains this beautifully in the video below (his explanation starts at 15:48):

Google and SEMRush are useful, but deep down it’s all a travesty: we the users, companies, research groups, Universities and other organizations set down information in digital form, but practically throw away useful information on how it is structured. We have already done the work to realize the vision of the Knowledge Explorer, but because we have bad tools, the work is mostly lost. Links, connections, analogies are the fuel and fire of thinking, and ought to be the common inheritance of humanity, and we should build tools that let us form and preserve them properly.

As you might have already realized, building two-way links for a hypertext system is non-trivial. All I can say is that this problem has been solved. More on this much later in this series.

This concludes the discussion of my six imperatives. Now, what happens if these ideas fail?

What Do We Have to Lose?

1. Freedom

People with ideas more profound than mine have explored the concept of freedom of expression more extensively than I can here and have been doing so for some time; there seems little point in rehearsing well-worn arguments. But, as this is my favourite topic, I will give you just one point, on error-correction. David Deutsch put it like this, in his definition of “rational:”

Attempting to solve problems by seeking good explanations; actively pursuing error correction by creating criticisms of both existing ideas and new proposals.

The generation of knowledge is principally about the culling of falsehoods rather than the accrual of facts. The extent to which we prevent discourse on certain topics, or hold certain facts or ideas to be unalterably true or free from criticism, is the extent to which we prevent error correction in those areas. This is something of a recapitulation of Popper’s idea of falsification in formal science: in essence, you can never prove that something is correct, only that it is incorrect; therefore, what we hold to be correct remains so only until we find a way to disprove it.

As mentioned above with respect to the First Amendment, I’m aware of how contentious this issue is; as such, I will set out below a framework, which, I hope, simplifies the issue and provides both space for agreement and firm ground for debate. Please note that this framework is designed to be simple and generalizable, which requires generalizations: my actual opinions and the realities are more complex, but I won’t waste valuable column inches on them.

My framework for free expression online:

  • In most countries (especially the USA and those in its orbit), most spaces are either public or private; the street is public, the home is private, for example. (When I say “legal” in this section, I mean practically: incitement to violence muttered under one’s breath at home is irrelevant.)
    • In public, one can say anything legal.
    • In private, one can say anything legal and permitted by the owner.
  • Online, there are only private spaces: 1. The personal devices, servers and other storage that host people’s information (email, websites, blockchains, chat logs, etc.) are owned just like one owns one’s home; 2. Similarly, the physical infrastructure through which this information passes (fiberoptic cables, satellite links, cellular networks) is owned also, usually by private companies like ISPs; some governments or quasi-public institutions own infrastructure, but we can think of this as public only in the sense that a government building is, therefore carrying no free speech precedent.
    • Put simply, all Internet spaces are private spaces.
    • As in the case of private spaces in the physical world, one can say anything legal and permitted by the owner.

From this framework we can derive four conclusions:

  1. There is nothing analogous to a public square on the Internet: think of it instead as a variety of private homes, halls, salons, etc. You are free to the extent that you own the technology of communication or work with people who properly uphold values of freedom, hence #2 of my six imperatives. This will mean doing things that range from the not-uncommon (like getting your own hosting for your website) to the very unusual (like creating your own ISP) and more. I’m not kidding.
  2. Until we achieve imperative #2, and if you care about free expression, you should a. encrypt your communications, b. own as many pieces as you can of the chain of communication through which your speech passes, and c. collaborate with individuals, organizations and companies that share your values.
  3. We made a big mistake in giving so much of our lives and ideas to social networks like Twitter and Facebook, and their pretended public squares. We should build truly social and free networks, on a foundation that we actually own. Venture capitalists Balaji Srinivasan and Naval Ravikant are both exploring ideas of this sort.
  4. Prediction: in 2030, 10% of people will access the Internet, host their content, and build their networks via distributed ISPs, server solutions and social networks.

Remember, I’m not necessarily happy about any of this, but I think this is a clear view of the facts. I apologize if I sound cynical, but it’s better to put yourself in a defensible position than to rely on your not being attacked. As Hunter S. Thompson said, “Put your faith in God, but row away from the rocks.”

I am aware that this isn’t a total picture, and there are competing visions of what the CNASs can and should be; I am more than delighted to hear from and discuss with people who disagree with me on the above. I can’t do them justice, but here are some honourable mentions and thorns:

  1. The Internet (or other CNAS) as a public service. Pro: This could feasibly create a true public square. Con: It seems like it would be too tempting for any administration to use their control to victimize people.
  2. Public parts within the overall Internet or CNAS; think of the patchwork of public and private areas that exist in countries, reflected online—this might feasibly include free speech zones in public areas. See the beautifully American “free speech booth” in St. Louis Airport for a physical example.
  3. Truly distributed systems like Bitcoin and other blockchains, which are stored on the machines of all participants, raise the question of whether these are truly communal or public spaces; more on this in future writings.

I think that the case I made here for freedom of expression is broadly the same when applied to privacy: one might even say that privacy is the freedom not to be observed. In essence, you are private to the extent that you control the means of communication or trust those that do. Your computer, your ISP, and any service that you use all represent snooping opportunities.

We should be prepared to do difficult and unusual things to preserve our freedom and privacy: start our own ISPs, start our own distributed Internet access system, or, better, our own CNAS. I note a sense of learned helplessness with respect to this aspect of our connectivity (speaking especially for myself), but there are communities out there to support you.

Newish technology will be very helpful, too:

  • WISPs: wireless Internet service providers, which operate without the need to establish physical connections to people’s homes.
  • Wireless mesh networks: wireless networks, including among peers, wherein data is transmitted throughout a richly connected “mesh” rather than relying on a central hub.

Finally, and fascinating as it is, I simply don’t have the space to go into the discussion of how to combine our rights with the proper application of justice. For example, if everyone used encryption, it would be harder for police to monitor communications as part of their investigations. All I can say is that I support the enforcement of just laws, including through the use of communications technology, and think that the relevant parties should collaborate to support both criminal justice and our rights: this approach has served the countries that use it rather well, thus far.

2. Interoperability

To illustrate how much the Internet has done for us and how good we have it now in terms of interoperability, let’s look back to pre-Internet days. In the 1970s, say, many people would access a single computer via a terminal, often within the same building or campus, or far away via a phone line. For readers who aren’t familiar, the “Terminal” or “Command Line” program on your Mac, PC, Linux machine, etc. emulates how these terminals behaved.

These terminals varied in design between models, manufacturers and through time: most had keyboards with which to type input into the computer, some printed their output on paper, some had screens, and some had more besides. However, not all terminals could communicate with all computers: for example, most companies used the ASCII character encoding standard (for translating between binary and letters, numbers and punctuation), but IBM used its own proprietary EBCDIC system; as a result, it was challenging to use IBM terminals with other computers and vice-versa.
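The clash is easy to demonstrate today, since Python still ships an EBCDIC codec (code page 500; real IBM machines used various code pages, so treat this as illustrative): the same text produces entirely different bytes under the two encodings.

```python
text = "HELLO"

ascii_bytes = text.encode("ascii")   # what most terminals sent
ebcdic_bytes = text.encode("cp500")  # IBM's EBCDIC (code page 500)

print(ascii_bytes.hex())   # 48454c4c4f
print(ebcdic_bytes.hex())  # c8c5d3d3d6

# A non-IBM machine reading EBCDIC bytes as if they were ASCII-compatible
# sees gibberish rather than "HELLO":
print(ebcdic_bytes.decode("latin-1"))  # ÈÅÓÓÖ
```

Every byte differs, so without an explicit translation step the two worlds simply could not read each other’s text.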

This is more than just inconvenient: it locked users and institutions into particular hardware and data structures, and trapped them in a universe constituted by that technology—as usual, the only people truly free were those wealthy or well-connected enough to access several sets of equipment. Actions like this break us up into groups, and prevent such groups from accessing each other’s ideas, systems, innovations, etc. Incompatibility, though sometimes an expedient in business, is a pure social ill.

To be clear, I am not saying that you have to talk to or be friends with everyone or be promiscuous with the tech you use. If you want to be apart from someone, fine, but being apart from them because of tech is an absurd thing to permit. We need to be able to understand each other’s data structures, codes and approaches: the world is divided enough along party, religious and cultural lines without our permitting new, artificial divisions.

The most magnanimous thing about the Internet is that it is totally interoperable, based on open standards. I almost feel silly saying it: this beautiful fact is so under-appreciated that I would have to go looking to find another person making the same point. Put it this way, no other technology is as interoperable as the Internet.

It’s tempting to think of the Internet as something normal or even natural; the truth is far from it: it’s sui generis. 53% of the world population use it, making it bigger than the world’s greatest nations, religions and corporations: anything else of a similar scale has waged war, sought profits or sent missionaries; the Internet has no need for any of these things.

The Internet is one of the few things actually deserving of that much overused word: unique. But it does what it does because of something much more boring: standards, as discussed above. These standards aren’t universal truths derived from the fabric of the universe, they’re created by fallible, biased people, with their own motivations and philosophical influences. Getting to the point of making a standard is not the whole story: making it good and useful depends on the character of these people.

We should care more about these people and this process: remember, all the normal forces that pull us into cliques and break connections haven’t declared neutrality with respect to the Internet: they can’t help themselves, and would be delighted to see it broken into incompatible fiefdoms. We should therefore focus immense intellectual energy and interest:

  1. on maintaining the philosophical muscle necessary to insist that the Internet stay interoperable
  2. on proposing virtuous standards
  3. on selecting and supporting excellent people to represent us in this endeavour

The Internet feels normal and natural, even effortless in an odd way; the truth is the exact opposite: it is one of a kind, it is not just artificial but was made by just a few people, and it requires constant energy and attention. Let us give this big, strange monster the attention it deserves, lest the walls go up.

Beyond this, the fact that the Internet is our only CNAS puts us in a perilous position. We should create new CNASs with a variety of philosophies and approaches; this will afford us:

  1. Choice
  2. Some measure of antifragility, in that a variety of approaches and technologies increases the chances of survival if one or more breaks
  3. Perhaps, even, something better than what we have now

3. Ideisomorphism

Bad technology generally puts constraints on the imagination, and on the way in which we think and communicate, but articulating and arguing for this effect is much harder, and the conclusions are less clear-cut, than with my previous point on free expression. Put it this way: most, practically all, of us take what we are given when it comes to tools and technology; some might have ideas about things that could be better; fewer still actually insist that things really ought to be better; and the small few who have the self-belief, tenacity, good fortune and savvy to bring their ideas to market, we call innovators and entrepreneurs.

More importantly, these things influence the way we think. For example, VisiCalc, the first spreadsheet program for personal computers (and the Apple II’s killer app), made possible a whole range of mathematical and organizational functions that were impossible or painfully slow before: it opened and deepened a range of analytical and experimental thinking. Some readers will recognize what I might call “spreadsheet muscle-memory”—when a certain workflow or calculation comes to mind in a form ready to realize in a spreadsheet.

With repeated use, the brain changes shape to thicken well-worn neural pathways: and if you use computers, the available tools, interfaces and data structures train your brain. Digital tools can, therefore, be mind-expanding, but also stultifying. To borrow from an example often used by Ted Nelson, before Xerox PARC, the phrase “cut and paste” referred to the act of cutting a text on paper (printed or written) into many pieces, then re-organizing those pieces to improve the structure.

The team at PARC cast aside this act of total thinking and multiple concurrent actions, and instead gave the name “cut and paste” to a set of functions allowing the user to select just one thing and place it somewhere else. Still today, our imaginations are stunted relative to those who were familiar with the original cut and paste—if you know anything about movies, music or programming, you’ll recognize that in many of the best things, more than one thing happens at a time.

This is why I argue so vehemently that we shouldn’t accept what we are given online so passively: everything you do online, especially what you do often, is training your mind to work in a certain way. What way? That depends on what you do online.

For the sake of space, I’ll confine myself to the Web. My thesis is this:

  1. The Web as it stands today is primarily focused on beguiling and distracting us.
  2. It presents us with two-dimensional worlds (yes, there is motion and simulated depth of field, but most of the time these devices gussy up a two-dimensional frame rather than expressing a multi-dimensional idea).
  3. It is weighed down with unnecessary animation and styling, leaving practically no attention (or, for that matter, bandwidth) for information.

I’m here to tell you that you need not suffer through endless tracking, bloated styling, interfaces designed to entrap or provoke tribal feelings while expressing barely any meaning. If you agree, say something. Take to heart what Nelson said: “If the button is not shaped like the thought, the thought will end up shaped like the button.” This is why we have become what we’ve become: divided, enraged, barely able to empathize with someone of a different political origin or opinion.

Then there are the more profound issues: as mentioned above, links only go one way, the Web typically makes little use of the magic of juxtaposition and parallel text, there are few robust ways of witnessing, visually, how things are connected, and for the most part, Web documents are one-dimensional (they have an order, start to finish) or two-dimensional (they have an order, and they have headings).

People, this is hypertext we’re dealing with, you can have as many dimensions as you like, document structures that branch, merge, move in parallel, loop, even documents that lack hierarchy altogether: imagine a document with, instead of numbered and nested headings, the overlapping circles of a Venn diagram.

Our thinking is so confined that being 2-D is a compliment.

Digital media offered us the sophistication and multidimensionality necessary, finally, to reflect human thought, and an end to the hierarchical, either-or structures that physical filing makes necessary (you can put a file in only one folder in your filing cabinet, but with digital media you can put it in as many as you like, or have multiple headings that contain the same sentence (not copies!)); yet we slipped back into all our worst habits. This, to quote Christopher Hitchens, “is to throw out the ripening vintage and to reach greedily for the Kool-Aid.”

Some, or even you, dear reader, might object that all this multidimensionality and complexity will be too confusing for users. This is fair. But first, I want to establish a key distinction, between confusion arising from unnecessary complexity introduced by the creators of the system and confusion arising from the fact that something is new and different. The former is unnecessary and we should make every effort to eliminate it; the latter is necessary to the extent that new things sometimes confuse us.

It might sometimes seem that two-dimensionality is something of a ceiling for the complexity of systems or media. There is no such ceiling; for example, most musicians will perform in the following dimensions simultaneously: facets of individual notes like pitch, dynamic (akin to volume), timbre, and facets of larger sections of music that develop concurrently with the notes but at independent scales, like harmony and phrasing.

In my view, we should build massively multidimensional systems, which start as simply as possible and, pedagogically, work from simple and familiar concepts up to complex ideas well beyond the beginner. Ideisomedia will 1. free us from the clowning condescension of the Web and 2. warrant our engaging with it by speaking to us at our level and rewarding us for taking the time to learn how to use it.

Before I talk at length and through the medium of graphs about what we stand to gain by doing this thing correctly, I’d like to make two supporting points. One frames why the mood of this introduction is so imperative, the other frames technological growth and development in a way that, I think, offers us cause for optimism.

Why Now, and Why I Think We’re Up to the Task

The Connectional Imperative

Firstly, I think that we are experiencing an emergency of communication in many of our societies, particularly in the USA and its satellites. My hypothesis (which is quite similar to many others in the news at the moment) is that the technology of communication, as it is configured currently, is encouraging the development of a set of viewpoints that are non-interoperable and exclusive: this is to say that people believe things, and broach the things that they believe, in ways that are impossible to combine or that preclude their interacting productively.

Viewpoint diversity is necessary for a well-functioning society, but this diversity matters to the extent that we can communicate; this means, firstly, actually parsing each other’s communications (which is analogous to syntactic interoperability: regardless of whether we understand the communication, do we regard it as a genuine communication and not mere noise, do we accept it or ignore it?); secondly, it means actually understanding what the other is saying (which is analogous to semantic interoperability: can we reliably convey meaning to each other?).

I think that both of these facets are under threat; often people call this “polarisation,” which I think is close but not the right characterisation; I am less concerned with how far apart the poles are than whether they can interact and coordinate.

Why is this happening? I think that it is because we don’t control the means of communication and, therefore, we are subject to choices and ideas about how we talk that are not in our interest. Often these choices are profit-driven (like the limbic hijacks on Facebook that keep you on the page). Sometimes it’s accidental, sometimes expedient design (like the one-way links and two-dimensional documents that characterize the Web, as mentioned earlier). Why is it a surprise that so many of us see issues as “us versus them,” or assume that if someone thinks one thing they necessarily accept all other ideas associated with it, when we typically compress a fantastically multidimensional discussion (politics) onto a single dimension (left-right)?

We need multidimensional conversations, and we already have the tools to express them: we should start using them.

This really is an emergency. We don’t grow only by agreement, we grow by disagreement, error correction, via the changing of minds, and the collision of our ideas with others’: this simply won’t happen if we stay trapped in non-interoperable spaces of ideas, or worse, technology.

On the topic of technology, I am quite optimistic: everyone that uses the Internet can connect, practically seamlessly, with any other person, regardless of sex, gender, race, creed, nationality, ideology, etc. The exceptions here (please correct me if I’m wrong) are always to do with whether you’re prevented (say by your government) from accessing the Internet or because your device just isn’t supposed to connect (it was made before the Internet became relevant, or it is not a communications device (e.g. a lamp)).

TCP/IP is the true technological universal language: it can bring anyone to the table. At the table you might find enemies and confusion, but you’re there, and have at least the opportunity to communicate.

Therefore, I think that we should regard that which is not interoperable, not meaningfully interoperable, or at least not intentionally open and logical, with immense scepticism, and conserve what remains, especially TCP/IP, standards like this and their successors in new CNASs.

Benign Technology

I think that technology is good. Others say that technology is neutral, that people can apply it to purposes that help us or that hurt us. Of course, still others say that it is overall a corrupting influence. My argument is simple: technology forces you to do two things: 1. to the extent that whatever you create works, it will have forced you to be rational; 2. to the extent that you want your creation to function properly in concert with other devices, it will have forced you to think in terms of communication, compatibility and, at best, interoperability. I’m not saying that all tech is good, but rather that to the extent that tech works, it forces you to exhibit two virtues: rationality and openness.

In the first case, building things that work forces you to adopt a posture that accepts evidence and some decent model of reality: obviously this is not a dead cert; people noticeably “partition” their minds into areas, one for science, another for superstitions, and so on. My claim is that going through the motions of facing reality head-on is sufficient to improve things just a little; tech that works means understanding actions and consequences. This is akin to how our knowledge of DNA trashed pseudo-scientific theories of “race” by showing us our kinship with all other humans, or how the germ theory of disease has helped to free us of our terror of plagues sent by witches or deities.

I’m not being naive here: I know that virtue isn’t written in the stars; rather, I claim that rationality is available to us the way a sifter is: anyone can pick it up and use their philosophical values to distinguish gold (in various gradations) from rock. Technology requires us to pick up the sifter, or create it, or refine it, even if we would otherwise be disinclined.

In the case of the second faculty, openness, once you have created a technology, you can make it arbitrarily more functional by giving it the ability to talk to others like it or, better, others unlike it. Think of a computer that stands alone, versus one that can communicate with any number of other computers over a telecommunications line. But in order to create machines that can connect, you have to think in terms of communication: you have to at least open yourself to, and model, the needs and functions of other devices and other people. Allowing for some generalizing, the more capable the system, the more considerate the design.

Ironically, the totality of the production of technology is engaged in a tug-of-war: on one side, the need and desire to make good systems pulls us towards interoperability; on the other, short-sighted profit-seeking and the fact that it’s hard to make systems that can talk pull us towards non-interoperability. Incidentally, the Internet is a wonderful forcing function here: the usual suspects like Apple, IBM and Microsoft are amazingly Internet-interoperable.

Put simply, if you want to make your tech work, you have to face reality; if you want your tech to access the arbitrarily large benefits of communicating with other tech, you have to imagine the needs of other people and systems. Wherever you’re going, the road will likely take you past something virtuous.

A Rhapsody In Graphs: Up and to the Right, or What Do We Have to Gain?

To begin, remember this diagram:

Technological innovation exists in a feedback loop: so if you want virtuous systems in the future, create virtuous systems today.

You’re familiar, I hope, with Moore’s Law, which states that around every two years the number of transistors in an integrated circuit doubles, meaning, roughly, that computing power doubles too. This means that if you plot computing power against time, it looks something like this:

Moore’s law describes the immense increase in processing capacity that has facilitated a lot of the good stuff we have. Today, the general public can get computers more powerful than the one that guided the Saturn V rocket. The ARPANET (the predecessor to the Internet) used minicomputers to fulfil a role somewhat similar to that of a router today—the PDP-11 minicomputers popular for the ARPANET started at $7,700 (more than $54,000 in 2020 dollars); most routers today are relatively cheap pieces of consumer equipment, coming in at less than $100 a piece. This graph represents three things: more humans getting access to the mind-expanding and mind-connecting capabilities of computers, the computers themselves getting better, and this trend quickening.
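The compounding behind that curve is easy to underestimate; a quick sketch of the rule of thumb (doubling every two years):

```python
def transistors(years_elapsed, start=1.0):
    """Moore's law as a rule of thumb: capacity doubles every two years."""
    return start * 2 ** (years_elapsed / 2)

# Relative growth over five decades, e.g. 1970 -> 2020:
print(transistors(50))  # 33554432.0, i.e. 2**25: a ~33.5-million-fold increase
```

Fifty years of doubling is what turns a $54,000 minicomputer’s role into a sub-$100 consumer router.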

But, one might ask, what about some of the other indexes discussed in this introduction: freedom, interoperability, ideisomorphism?

Interoperability

For the purposes of this question, we first need to find a meaningful way to think about overall interoperability. For instance, it doesn’t really matter to us that coders create totally incompatible software for their own purposes, all the time; meanwhile, as time passes, the volume of old technology that can no longer work with new technology increases due to changing standards and innovation: this matters only to the extent that we have reason to talk to that old gear (there are lots of possible reasons, if you were wondering). So, let’s put it like this:

overall meaningful interoperability (OMI) = the proportion of all devices and programs that are interoperable, excluding private and obsolete technology

This gives us an answer as a fraction:

  • 100% means that everything that we could meaningfully expect to talk to other stuff can do so.
  • 0% would mean that nothing that we could reasonably expect to talk to other stuff can do so.
  • As time passes we would expect this number to fluctuate, as corporate policy, public interest, innovation, etc. affect what sort of technology we create.
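As a back-of-the-envelope illustration (all numbers here are hypothetical), the definition above reduces to a simple ratio:

```python
def omi(interoperable, total, private=0, obsolete=0):
    """Overall meaningful interoperability, as defined above: the share of
    devices and programs that interoperate, after excluding private and
    obsolete technology from the denominator."""
    meaningful = total - private - obsolete
    return interoperable / meaningful

# Hypothetical inventory: 1000 systems, of which 100 are private tools and
# 150 are obsolete; 600 of the remaining 750 can talk to each other.
print(f"{omi(600, 1000, private=100, obsolete=150):.0%}")  # 80%
```

The exclusions matter: counting a coder’s throwaway private scripts or long-dead gear against the total would make the figure look worse without telling us anything meaningful.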

As mentioned above, a variety of different pressures influence overall meaningful interoperability; some companies, for example, might release non-interoperable products to lock in their customers, other firms might form consortia to create shared, open standards.

I think that, long-term, the best we can expect for interoperability would look like the below:

What are you looking at?

  • Back Then (I’m being deliberately vague here) represents a time in the past when computers were so rare, and often very bespoke, that interoperability was extremely difficult to achieve.
  • Now represents the relative present: we have mass computer adoption, and consortia and other groups give us standards like Unicode, USB, TCP/IP and more. At the same time, some groups are still doing their best to thwart interoperability to lock in their customers.
  • The Future is ours to define; I hope that through collaboration and by putting pressure on the creators of technology, we can continuously increase OMI. You’ll notice that the shape is the opposite of Moore’s Law’s exponential growth: this is, firstly, because there’s an upper limit of 100% and, secondly, because it seems fair to assume that we will reach a point where we hit diminishing returns.
  • It is theoretically possible that we might reach a future of total OMI, but perhaps it’s more realistic to assume that through accidents, difficulty and innovation, some islands of non-interoperability will remain.

Freedom

How are things looking for free software? It’s very hard to tell, because the computer world is so diverse and because the subject matter itself is so complex. For example, the growth of Android is excellent news on one level, because it is based on the open source Linux kernel; it is less good news in that the rest of Android is totally proprietary, which makes it confusing. See the graphs below for a recent assessment of things (data from statcounter):

Desktop:

Mobile:

I think it is imperative that we work to create and use more free tools, for no other reason than that we as people deserve to know what the stuff in our homes, on our devices, or manipulating our information is doing. With the right effort, we might be able to recreate the growth of Linux among supercomputer operating systems. I am enthusiastic about this, and see the growth of free software as something as unstoppable as, say, the growth of democracy.

 

Wikipedia

Ideisomorphism

First, dimensions. As mentioned above, we frequently try to express complex ideas in too few dimensions, and this hampers our ability to think and communicate. Computers are, potentially, a way for us to increase the dimensionality of our communication, but only if we use them to their full potential.

The diagram below sets out some ideas, along with their dimensions:

To be clear, I’m not making a value-judgement against lower-dimensional things. Rather, I am saying:

  • Firstly, that we should study any given thing in a manner that allows us to engage with it in the proper number of dimensions.
  • Secondly, that poor tools for studying, engaging with and communicating that which has more than two dimensions act as a barrier, keeping more of us from learning about some very fun topics.
  • Thirdly, high dimensionality can scare us off when it shouldn’t, e.g. if you can talk you already know how to modulate your voice in more than five dimensions simultaneously: pitch, volume, timbre, lip position, tongue position, etc.

I think that, pedagogically and technologically, we should strive to master the higher dimensions and structures of thinking that allow us to communicate thus. However, we seem to be hitting two walls:

  1. The paper/screen wall: it’s hard to present things in more than two dimensions on paper or screens, and we get stuck with things like document structure, spreadsheets, etc., when more nuanced tools are available.
  2. The reality wall: it’s weird and sometimes scary to think in more than three dimensions, because it’s tempting to try to visualize this sort of thing as a space and, as our reality has just three spatial dimensions, this gets very confusing. This is tragic because a. we already process in multiple dimensions quite easily and b. multidimensionality doesn’t have to manifest spatially, nor, when it does, must all dimensions manifest at once; what matters is the ability to inspect information along an arbitrary number of dimensions seamlessly.

Let us break the multidimensionality barrier! The nuance of our conversations and our thinking requires it. We should:

  1. Where possible, use tools like mind-maps and Venn diagrams (which allow for arbitrary relationships and dimensions) over strictly hierarchical or relational structures (like regular documents, spreadsheets or relational databases, which are almost always two-dimensional).
  2. Use and build systems that allow for the easy communication and sharing of these structures: it’s easy to show someone a mind-map, but quite hard to share one between systems, because there’s no standard data structure.
  3. Remember the technological feedback loop: 2-D systems engender 2-D thinking, meaning more 2-D systems in the future; we need concerted efforts to make things better now, such that things can be better in the future.
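As a sketch of what a shareable mind-map structure might look like (no such standard exists, and the node names here are invented), one could represent nodes plus labelled links and serialize them as JSON for exchange between systems:

```python
import json

# Hypothetical minimal interchange format for a mind-map:
# nodes plus labelled links, rather than a strict hierarchy.
mind_map = {
    "nodes": [
        {"id": "internet", "label": "Internet freedom"},
        {"id": "interop", "label": "Interoperability"},
        {"id": "ideo", "label": "Ideisomorphism"},
    ],
    "links": [
        {"from": "internet", "to": "interop", "label": "requires"},
        {"from": "internet", "to": "ideo", "label": "enables"},
    ],
}

# Serialize for sharing, then read it back losslessly:
shared = json.dumps(mind_map)
restored = json.loads(shared)
assert restored == mind_map
```

Because links are arbitrary labelled pairs rather than parent–child rows, the structure isn’t forced into the two dimensions of a document outline or a spreadsheet.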

For our last graph, I’d like to introduce a new value that combines the three concerns of this introduction (freedom, interoperability, ideisomorphism) into one; we can call it FII. Where before I expressed interoperability and freedom as proportions (e.g. the percentage of software that is interoperable), this time let’s think of these values as relative quantities with no limit on their size (e.g. suppose 2020 and 2030 had the same percentage of free software, but 2030 had more software doing more things: then 2030 sits higher on the freedom scale).

So:

FII = freedom × interoperability × ideisomorphism
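As a toy computation with entirely made-up index values (the numbers illustrate only the multiplicative shape of FII, not any real measurement):

```python
def fii(freedom, interoperability, ideisomorphism):
    """Combined index: the product of three unbounded quantities."""
    return freedom * interoperability * ideisomorphism

# Invented values: 2030 has the same *proportion* of free software
# as 2020, but more software overall, so its absolute quantities
# are higher, and FII rises accordingly.
print(fii(10, 5, 2))   # hypothetical 2020 -> 100
print(fii(30, 15, 6))  # hypothetical 2030 -> 2700
```

Because the three quantities multiply, progress on all fronts compounds, while a collapse in any one of them drags the whole index down.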

We should, therefore, strive to achieve something like the graph above with respect to the technology of communication; think of it as Moore’s law, but for particular aspects of technology that represent our species’ ability to endure and flourish. It’s worth noting, of course, that Moore’s law isn’t a law in the physical sense: the companies whose products follow it achieve these results through continuous, intense effort. It seems only fair that we might expend such effort to make the technology of communication not just more powerful, but better able to serve our pursuit of virtue; and to the extent that I’m right about the moral arc of technology naturally curving upward, we may be rewarded more quickly than we think.

What If We Are Successful?

What might happen if we’re successful? Here’s just one of many possibilities, and to explain it I will need the help of a metaphor: the split-brain condition. This condition afflicts people whose corpus callosum has been severed; this part of the brain connects the right and left hemispheres and, without it, the hemispheres have been known to act and perceive the world independently. For example, if something appears only in one half of a split-brain patient’s visual field, the hemisphere that doesn’t receive input from that half will not be aware of it.

I liken this to the current condition of humanity, except that instead of two hemispheres we have numerous overlapping groupings of different sizes: nations, religions, ideologies, technologies, and more. Like the hemispheres of a split-brain patient, these parts often don’t understand what the others are doing, have trouble coordinating, or even come into conflict.

We have the opportunity to build our species’ corpus callosum, not that we might unify the parts, but that the parts might coordinate; and, since the number of possible connections grows quadratically with the number of nodes in the system, this global brain might dwarf the achievements of history’s greatest nations with feats on a planetary scale, in its pursuit of goodness, truth and beauty.
