Categories: Editorial Repost

The Accidental Tyranny of User Interfaces

The potential of technology to empower is being subverted by tyrannical user interface design, enabled by our data and attention.

My thesis here is that an obsession with easy, “intuitive” and perhaps even efficient user interfaces is creating a layer of soft tyranny. This layer is not unlike what I might create were I a dictator, seeking to soften up the public prior to an immense abuse of liberty in the future, by getting them so used to comical restrictions on their use of things that such bullying becomes normalised.

A note of clarification: I am not a trained user interface designer. I am just a user with opinions. I don’t write the following from the perspective of someone who thinks that they could do better; my father, a talented programmer, taught me early on that everyone thinks that they could build a good user interface, but very few actually have the talent and the attitude to do so.

As such, all the examples that I shall discuss are systems where the current user interface is much worse than a previous one. I’m not saying that I could do it better; they did it better, just in the past.

This is not new. Ted Nelson identifies how, in Xerox’s much-lauded Palo Alto Research Center (the birthplace of personal computing), the user was given a graphical user interface but in return gave up the ability to form their own connections between programs, which were thereafter trapped inside “windows” — the immense potential of computing for abstraction and connection was dumbed down to “simulated paper.” If you’d like to learn more about his ideas on how computing could and should have developed, see his YouTube series, Computing for Cynics.

Moore’s law observes that computers become exponentially more powerful as time passes; meanwhile our user interfaces — to the extent that they make us act stupidly and humiliate ourselves — are making us more and more powerless.

YouTube’s Android app features perhaps the most egregious set of insulting user interface decisions. The first relates to individual entries for search results, subscriptions or other lists. Such a list contains a series of video previews, each of which (today) contains a still image from the video, the title, the name of the channel, a view count, and the publishing date.

What if I want to go straight to the channel? This was possible, once. What if I want to highlight and select some of the text from the preview? I can’t. Instead, the entire preview, rather than acting like an integrated combination of graphics, text and hypertext, is just one big, pretty, stupid button.

This is reminiscent of one of my favorite Dilbert cartoons. A computer salesperson presents Dilbert with the latest model, explaining that its user interface is so simple, friendly and intuitive that it has only one button, which they press for you when it’s shipped from the factory. We used to have choices. Now we are railroaded.


Do you remember when you could lock your phone or switch to another app, and keep listening to YouTube in the background? Not any more. YouTube took this away — my fingers hovered over the keyboard there for a moment, and I nearly typed “feature”. Audio continuing to play in the background is not a feature; it should be the normal operation of an app of this type. The fact that playback stops when you switch apps is a devious anti-feature.

YouTube, combining Soviet-style absurdity and high-capitalist banality, offers to give you back a properly functioning app in return for upgrading to Premium. I’m not arguing against companies making additional features available in return for an upgrade. Indeed, my father explained how certain models of IBM computers came with advanced hardware built in — paying for the upgrade got you a visit from an engineer to activate hardware you already had.

IBM sells you a car, you pay for the upgrade, and you realize that you had the upgraded hardware all along; they had merely suppressed it. YouTube sells you a car, then years later turns it into a clown car, and offers you the privilege of paying extra to make it a normal car again. Imagine a custard pie hitting a human face, forever.

Obviously this simile breaks down in that the commercial relationship between YouTube and me is very different from the one between a paying customer and IBM. If you use the free version of YouTube, you pay the company in eyeballs and data. This sort of relationship lacks the clarity of a conventional transaction, and the recipient of a supposedly free product or service leaves themselves open to all manner of abuses and slights, without the standing of a paying customer to feel indignant.

WhatsApp used to have a simple, logical UI; this is fast degrading. As with YouTube, WhatsApp thwarts the user’s ability to engage with the contents of the program other than in railroaded ways.

Specifically, one used to be able to select and copy any amount of text from messages. Now, when one tries to select something from an individual message, the whole thing gets selected, and the standard operations are offered: delete, copy, share, etc.

What if I want to select part of a message because I only want to copy that part, or merely to highlight so as to show someone? Not any more. WhatsApp puts a barrier between you and the actual textual content of the messages you send and receive, letting you engage with them only in the ways for which it provides.

On this point I worry that I sound extreme — today I tried this point on a friend who didn’t see why this matters so much to me. Granted, in isolation, this issue is small, but it is one of a genre of such insults that are collectively degrading our tools.

That is to say, WhatsApp pretends that the messages on the screen belong to some special category, subject only to limited operations. No. It’s text. Text is one of the fundamental substrates of computing, and any self-respecting software company ought to run on the philosophical axiom that users should be able to manipulate it, as text.

Another quasi-aphorism from my father. We were shopping for a card for a friend or relative, in the standard Library of Congress-sized card section in the store. Looking at the choices, comprehensively labelled 60th Birthday, 18th Birthday, Sister’s Wedding, Graduation, Bereavement, etc., he commented, “Why do they have to define every possible occasion? Can’t they just make a selection of cards and let me write that it’s for someone’s 60th birthday?”

This is about the shape of it. The Magnetic North toward which UIs appear to be heading is one in which all the things people think you might want to do are defined and given a button. To return to the earlier automotive comparison, this would be like a car without a steering wheel or gas pedal; instead there’s a button for each city people think you might want to visit.

There’s a button for Newark but not for New York City? Hit the button for Newark, then walk the rest of the way. What kind of deviant would want to go to New York City anyway, or, for that matter, what muddle-headed lunatic would desire to go for a drive without having first decided upon the destination?

I work in the Financial District in Manhattan. Our previous building had normal lifts: you call a lift and, inside, select your floor. This building has a newer system: you go to a panel in the lobby and select your floor, and the system tells you the number of the lift that it has called for you. Inside, you find that there are no buttons for floors.

This is impractical. Firstly, there is no way to recover if you accidentally get into the wrong lift (more than once, the security guards on the ground floor have seen a colleague and me exit a lift with cups of coffee and laptops, call another, and head straight back upstairs). Secondly, one has to remember one’s assigned lift for the system to function. I don’t go to the office to memorize things; I go to the office to work. Who wants to hold a lift number in mind while trying to talk to a friend?

More importantly, and just like WhatsApp, it’s like getting into a car and finding the steering wheel immovable in the grip of another person, who asks, “Where would you like to go?” What if I get in the lift and change my mind? And this says nothing of the atomizing effect it has on people. Before, we might get into a lift and I, being closest to the control panel, would ask, “Which floor?” Now we’re silent, and there’s one fewer interruption between the glint of your phone, the building, and the glass partitions of your coworking space.

My father set up my first computer when I was 8 or 9 years old. Having successfully installed Red Hat GNU/Linux, we booted for the first time. What we saw was something not unlike this:

[Image: Linux boot messages listing launched processes]

This is a list of the processes that the operating system has launched successfully; the system runs through it every time you start up. I see more or less the same thing now, running the latest version of the same Linux. It’s a beautiful, ballsy thing, and if it ever changes I will be very sad.

Today, our software treats us to what you might call the Ambiguous Loading Icon. Instead of a loading bar, percentage progress, or list, we’re treated to a thing that moves, as if to say “we’re working on it”, without any indication that anything is actually happening in the background. This is why I like it when my computer boots and I see the processes launching: there’s no wondering what’s going on in the background; this is the background.

One of the most egregious examples of this is in the (otherwise functional and inexpensive) Google Docs suite, when you ask it to convert a spreadsheet into the Google Sheets format:

[Image: Google Docs conversion loading screen]

We’re treated to a screen with the Google Docs logo and a repeating pattern that cycles through the four colors of Google’s brand. Is it doing something? Probably. Is it working properly? Maybe. Will it ever be done? Don’t know. Each time that ridiculous gimmick flips colors is a slap in the face for a self-respecting user. Every time I tolerate this, I acclimatize myself to the practice of hiding the actual function and operation of a system from the individual, or perhaps even to the idea that I don’t deserve to know. This is the route to totalitarianism.

I’m not pretending that this is easy. I understand that software and user interface design is a compromise between multiple goals: feature richness (which often leads to difficult user interfaces), ease of use (which often involves compromising on features or hiding them), flexibility, and many others.

I might frame it like this:

  1. There exists an infinite set of well-formed logical operations; that is, there is no limit to the number of non-contradictory logical expressions (e.g. A ⊃ B, “the set A contains B”) that one can define.
  2. Particular programming languages allow a subset of such expressions, as limited by the capabilities and power of the hardware (even if a function is possible, it might take an impractical amount of time (or forever) to complete).
  3. Systems architects, programmers and others provide for a subset of all possible operations as defined by 2. in their software.
  4. Systems architects, programmers and others create user interfaces that allow us to access a subset of 3. according to their priorities.
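
As a toy illustration of this narrowing, here is a minimal sketch in Python; the set members are hypothetical operation names, chosen to echo the YouTube examples above:

```python
# A toy model of the four layers above: each layer exposes only a
# subset of the one before it. All operation names are hypothetical.
conceivable = {"open_video", "go_to_channel", "select_preview_text",
               "play_in_background", "copy_text"}   # 1. well-formed operations
expressible = set(conceivable)                      # 2. what language/hardware permit
implemented = expressible - {"go_to_channel"}       # 3. what the programmers wrote
ui_exposed = implemented - {"select_preview_text",
                            "play_in_background"}   # 4. what the interface offers

# What was lost between layers 1 and 4: exactly the freedoms
# this essay complains about.
print(conceivable - ui_exposed)
assert ui_exposed <= implemented <= expressible <= conceivable
```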

They have to draw the line somewhere. It feels like software creators have placed too much emphasis on prettiness and ease of use, very little on freedom, and sometimes almost no emphasis on letting the user know what’s actually happening. I’m not asking for software that provides for the totality of all practical logical operations, I’m asking for software that treats me like an adult.

Some recommendations:

  1. Especially for tools intended for non-experts, there seems to be a hidden assumption that the user should be able to figure it out without training, and to figure it out by thrashing around randomly when the company changes the user interface for no reason. A version of this is laudable, but it often leads to systems so simplistic that they make us feckless and impressionable. Perhaps a little training is worth it.
  2. No fig-leaves: hiding a progress message under an animated gimmick was never worth it.
  3. Perhaps the ad-funded model is a mistake, at least in some cases. As in the case of YouTube, it’s challenging to complain about an app for which I don’t pay conventionally. The programs for which I do pay, for example Notion, are immensely less patronizing. Those for which I don’t pay conventionally, but which aren’t run on ads, like GNU/Linux, LibreOffice, Ardour, etc., are created by people who so value things like openness, accessibility and freedom (as in free) that they border on the fanatical. Perhaps we should pay for more stuff and be more exacting in our values. (Free / open source software is funded in myriad ways, too complex to explore here.)

All this matters because the interfaces in question do the job of the dictator and the censor, and we embrace it. More than being infuriating, they train us to accept gross restrictions in return for trifling or non-existent ease of use, or are a fig leaf covering what is actually going on.

Most people do what they think is possible, or what they think they are allowed to do. Do you think people wouldn’t use a Twitter-like “share” function on Instagram, if one existed? What about recursive undo/redo functions that form a tree of possible document versions? Real hyperlinks that don’t break when the URL for the destination changes?
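
The undo tree, at least, is not hypothetical as a technique: Vim’s undo branches and the Emacs undo-tree package both keep history as a tree, so that undoing and then editing forks a new branch instead of destroying the undone states. A minimal sketch of the data structure, with invented names:

```python
# Minimal sketch of a tree-shaped undo history (names are invented,
# not any particular editor's API). Editing after an undo forks a
# sibling branch instead of discarding the undone states.
class UndoNode:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []

class UndoTree:
    def __init__(self, initial_state):
        self.current = UndoNode(initial_state)

    def edit(self, new_state):
        node = UndoNode(new_state, parent=self.current)
        self.current.children.append(node)  # old branches survive
        self.current = node

    def undo(self):
        if self.current.parent is not None:
            self.current = self.current.parent
        return self.current.state

    def redo(self, branch=-1):
        if self.current.children:
            self.current = self.current.children[branch]
        return self.current.state

doc = UndoTree("")
doc.edit("draft A")
doc.undo()
doc.edit("draft B")   # "draft A" still exists as a sibling branch
```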

We rely on innovators to expand our horizons, while in fact they are defining limited applications of almost unlimited possibilities. Programmers, systems architects, businesspeople and others make choices for us: in doing so they condition in us that which feels possible. When they do so well, they are liberators; when they do so poorly, we are stunted.

Some of these decisions appear to be getting worse over time, and they dominate some of the most popular (and useful) systems; the consciousness-expanding capabilities of technology are being steered into a humiliating pose in a cramped space, not by force, but because the space is superficially pretty, easy to access and because choices are painful.

This article was originally posted on Oliver Meredith Cox

Categories: Editorial

Satanic Panic 2: Facebook Boogaloo

[Image: McMartin Preschool in the early 1980s // Investigation Discovery]
[Image: 2004 study shows crime reporting dominates local TV coverage // Pew Research]
[Image: Not your typical fringe conspiracy aesthetic]
[Image: Notice the QAnon hashtags #greatawakening, #maga, and #painiscoming]
Categories: Editorial

‘Who Watches the Watchers?’ — The Internet and the Panopticon

How the philosophical concept of the Panopticon can help us visualise the structure of the internet and our position in it.

[Image: Panopticon according to Bentham’s original design]

What does the Panopticon concept offer us in terms of understanding the relationship between individual and society in today’s world and what future developments might result from this understanding?

The Internet-as-Panopticon
The individual as a ‘prisoner’ of the Panopticon


The role of the Panopticon ‘guard’ online
[Image: https://www.slashfilm.com/the-quarantine-stream-justice-for-nurse-ratched-in-one-flew-over-the-cuckoos-nest/]

[Image: Presidio Modelo today]
Categories: Roundtable

The Unintentional Comedy: A Commentary on The Social Dilemma

Wonk Bridge Reviews the Year’s Funniest Warning

You cannot accuse the Western populace of the Early Digital period of not having a sense of humour about the rather extensive list of predicaments facing down their collective bid for a peaceful and productive digital life — to mark an age which, in the totality of its mental state if not its material circumstances, feels thoroughly dystopian, they have also inaugurated a “golden age of dystopian fiction.”

Whether or not The Social Dilemma is fiction is hard to say — both with respect to the melodrama it self-consciously makes out of its perils-of-social-media thesis, and in terms of the questionable cuts of soap opera it slides between its earnest galleries of talking-and-postulating heads. It is, however, certainly dystopian in feel. In fact, it could be suggested that for a documentary about the questionable sociopolitical impact of big technology and social media, it rather seems to enjoy exulting in its dystopian premise too much, preferring the warmth of the gradually boiling water it is immersed in to formulating and prescribing robust solutions to the problems it surfaces.

Whatever The Social Dilemma is, and however good or bad, it can only hope to explore a few of the avenues of its vast subject (which is, for all intents and purposes, the future itself) in its 90-minute run-time. A good many of its points, its virtues, and its loose-ends merit further elaboration. That’s why a bunch of Wonk Bridge staffers got around the table for a commentary.

Elaborate, we said we must. And elaborate, we most certainly did. We took five representative timestamps from the documentary, which you can watch here, and discussed the events depicted. With you for this commentary are Wonk Bridge’s co-founder Yuji Develle, head of UX and product design Alessandro Penno, contributor Hamish Pottinger, and myself, co-founder Max Gorynski.

Timestamp 8:16 — “My email to Google”

Max: I thought this clip constituted some of the funniest minutes of TV I’ve seen all year. All the way through it’s leading up to a moment of almost perfect bathos: the obsessive regard over minute facets of product design which collectively connote very little to anyone who uses the product; the way in which this whole trope of ‘Google, the miracle of agility and efficiency,’ is invoked sympathetically even though Google is supposed to be one of our antagonists; our protagonist’s hubris, “I thought I was starting a revolution” with a soothingly platitudinous email, and his having seemingly failed to notice that he’s working for ‘What You Most Want to Hear Inc.’; and then the way it just dissolves into apathy afterwards. Even though the point of this episode is not supposed to be comic, the way it landed was exquisitely comic in a very bleak way.

Yuji: Yeah, the way he focuses on something so mundane, and then when it comes to solving a big issue, he doesn’t have many ideas.

Max: That, and the way in which the language which tends to surround big tech is very self-consciously revolutionary, in what is usually a rather superficial or even facile sense — and when it comes to commuting that intent into reformative action, it rarely seems to achieve any real lift-off, once the ‘Founder’s Excitement’ around the original spike phase has passed.

Alessandro: And the way it was framed, ‘I lit the torch, this is it!’ It gets back to the idea [that we discussed prior to taping] that the documentary sometimes goes over its own head, and emotionally exaggerates what it’s trying to do. The documentary ends up obscuring its own point by trying to be a movie, being so absurdly dramatic — you made a Powerpoint, you got the ball moving maybe, but why act like it was some big thing?

Hamish: I think throughout the documentary as a whole, every user is treated as a completely passive entity; the role of the user, and their awareness…I don’t think it comes in at all. And Tristan Harris is building up Gmail as something that can totally brainwash you…it all seems a little ramped-up to me.

Yuji: I might have an unpopular opinion here, but when the commentator said ‘We need to make email more boring,’ my reaction was — ‘Hold up, I spend 8 hours a day on email. I want my app to be engaging, I don’t want it to be even more boring, and it is very boring.’ We talk about gamification of work often in industry, making work a little more fun — and believe me, it needs to be more on the fun end of things, not less.

The problem here is not the tool — it’s the culture of work, of always being ‘on’, which Silicon Valley has been quite responsible for bringing about.

Max: There’s a real serious perspectival flaw in the documentary in general, which is that almost all the voices you hear from — and I admit that on a cursory level, it’s a daring conceit, having the craftspeople of these technologies come in to, ostensibly, fill [those technologies] full of lead — all the voices you hear from possess the same narrowness of perspective that helped shape the issues concerned. As you say Yuj, the idea of Tim being addicted to email is almost entirely related to his own personal proclivities:

a) As a programmer, and someone involved in the design of the product itself, and…

b) As someone who is naturally an evangelist of that kind of creed of work.

It’s a criticism you can generalise in other directions, and it has been expressed elsewhere, that there are a lot of voices that exist apart from this ecosystem, and come from a variety of disciplinary backgrounds…people like Meredith Broussard [Alice Thwaite, Nick Diakopoulos and more] who can broaden the argument on ethics and technology, who have different points of geopolitical reference, different educational backgrounds. Remember that stat, that some 40% of all founders of American startups attended either Harvard or Stanford [sic: the correct statistic is that 40% of all venture capitalists active in America went to these two schools].

Alessandro: Email as a function, an app, is so primitive. It’s based around a refresh engine, with chronological listing and folders. Notifications — very basic too. “A new email’s here!” Oh, well, great. To use that as an example of addictive technology [shows a way in which the documentary is out-of-touch].

Yuji: It’s cute isn’t it?

Alessandro: You could’ve talked about Facebook, with its much more sophisticated notification system, how it works the user emotionally through friends and relationships; or Instagram with stories etc. Email? The least addictive thing.

Yuji: One thing might be that they wanted to start out with email, the most mundane tool, as a way of saying ‘Oh wow, if he’s addicted to this, then the rest is going to be really scary.’ Not sure if that’s the strongest approach rhetorically.

Max: I would love to believe that the doc works on levels that are that subtle, Yuj, but I think a lot of the less valuable aspects of it which we’ve intuited owe to the fact that it sets about its subject like a sledgehammer, rather than a finely honed chisel.

Hamish: If we take it broader — it’s interesting to explore what’s driving the need to use our phones so much. If it’s an addiction, something’s driving it, there’s a human aspect there. We understand the human aspect of drug addiction now, but The Social Dilemma was more like one of those 50s anti-drug PSAs, where the kid takes one toke of weed and jumps out of his window. I’d have liked for the documentary to explore more what is actually driving us to overuse these tools.

Timestamp 27:18 — “Growth Tactics, Manipulation, and the Lab Rat”

Alessandro: What I really found interesting about this timestamp is how, yes, we are like lab rats, and these apps in general are testing us on a mass scale… These companies can so quickly iterate on the results of [psychometric] A/B testing, about what drives the most traffic, the most reactions. No other product can conjure such precise change in what you’re looking for.

Yuji: The problem I have with these kinds of mass-media documentaries is that they tend to insult the viewer’s intelligence and dumb down a lot of the technical concepts and mechanics. Breaking this all down to A/B testing is like saying that all governments create economic plans through ‘statistics’, and that ‘statistics’ are the reason you pay this much in taxes. There’s more to it than that. A/B testing can be used for good, and it has to be done…we’re not talking about ‘Do you prefer blue or red?’-type A/B testing. We’re talking about ‘Does including the word terrorist in an immigration article drive more clicks?’-type A/B testing.

Alessandro: Yeah. I mean…

Imagine you and I are both reading the same book; or rather, we believe we’re buying and reading the same book, but based on how we act after reading it, the book changes.

We need to remember that not all experiences of these products are the same. A/B testing is a very valid, reasonable way to test your product — but it’s interesting that the product that’s pushed is the one most beneficial to that company, and they can choose [the narrative] based on how people react. Not everyone is getting the same thing, even though we all think we are.

Max: What do you think is the future of regulating this?

Alessandro: We should know. We should know more. I know Facebook sometimes tells you — ‘Are you okay to be a user tester in this group?’ But it should be abundantly clear [at all times] what is going on, who is being observed. I hope that transparency movement becomes stronger.

Yuji: I would agree, and add also that, while everyone says ‘Education [about this] is important’, people are not aware that their reality could be different, in what they’re exposed to, from other people’s reality. That goes in the real world, too. It’s very easy to vulgarise [your conception of other people’s reality]. There needs to be awareness that there’s this second round of information seeking — after you’ve seen what you’ve seen, you must ask ‘What do others see?’ That’s needed.

Hamish: I do wonder what the difference is between these approaches and typical, manipulative top-down real-world advertising. It’s another point of bathos in the documentary; Tristan is at a live conference and he brings up that point. ‘It’s [the same set of tactics] used in [advertising and in other things].’ And then a cut to him in a room saying ‘No, it’s completely different.’ It never really answers the question.

In terms of power and scale, I’ll accept, we’ve never seen anything like it; but from the 50s and 60s, we’ve been on this course, developing means of tapping into your psychology to sell you things. I kept thinking of Mad Men — the guy who plays Pete Campbell in Mad Men is the embodiment of A.I. in this. I wonder if that was deliberate?

Timestamp 33:24 — “Social media is a drug and a biological imperative”

Max: The clip cut short before an excerpt from the year’s scariest horror movie, in which a family is ruthlessly disintegrated by the scourge of push notifications.

Yuji: We’re joking about it, but it does happen.

Max: True, I suppose.

Yuji: I think as well we should salute the Netflix formula of combining the fictional narrative and the documentary elements; it makes it very watchable, and it’s best to get people to watch till the end. Makes it more contextual.

Hamish: I’m not mad about the dramatisations, whether they’re intentionally funny or not.

Alessandro: The question I want to ask at this timestamp is — after millions of years of evolution have shaped our internal drive for social connection and understanding, we are not wired to handle the abundance of data and connection given to us by social media. So how can we go back and fix this? If we ever can? How can we pull ourselves out of this addiction to social media?

It goes especially for those who grew up with this advanced, sophisticated abundance of social media all through their childhood. Can we pull back from this direction?

Max: I think the subsequent question that that first question unfolds into is ‘Is a solution to this primarily technological, or is it primarily counter-technological? Are software fixes or UX changes going to be the thing that reverses this social engineering? Or is it the solution to that likely to be found in a more unfashionable set of circles?’ Because my feeling is if you could use the exact same market imperatives to create an adequate solution, one would already have been created.

I think one of the reasons it can be hard to push this conversation we’re having at the minute to a scale that involves a very large share of the public, is because a lot of people aren’t all that interested in the idea that, in this world of supreme convenience created by these technologies, the first-stage solution to moderating their use is, in fact, to moderate their use, using the human machinery as opposed to a plug-in. Is the solution analogue, or is it digital?

Alessandro: To add to that: the whole plot symbol of the safe. I thought that was funny, because with all this technology, the family ends up resorting to a simple safe that locks.

Hamish: I can’t see a solution that would be technological.

I think [the solution comes] with the public reaching digital maturity. We’ll become aware of all these negative effects, which will create a bubble [of dissonant feeling] which will then burst, and we’ll all realise that we’re, in fact, not that fussed by social media.

I think it’ll come down to that. We’ll be more educated about these things. Regulation might’ve come in by then, reining in how much political influence these things can wield.

Alessandro: Let me just counter that, Hamish. If broadcast media hasn’t been able to regulate itself, has gotten more and more polarised, and abides by the same principles, how can we expect the same of social media? I feel regulations could work, but to what end?

Max: I think it’d be useful to interject at this point to recall that there were certain provisions made for the integrity of the media, in the States at least; FCC regulations, specifically the Fairness Doctrine, designed to prevent the rise of what became the Rush Limbaugh school of radio and televisual reporting, were repealed during the Reagan era to enlarge the potential market the media could reach. A lot of major movements in media you can see coming to their fullest fruition or flower now, were seeded to an extent in the 80s, and it’s just been a case of the process of vertical integration moving, lumbering as inevitably as a planet towards completion, and then being junctioned to this raft of fabulous new technologies that we have now.

Timestamp 55:30 — “2.7 billion Truman Shows”

Yuji: This is the echo-chamber problem, that we know well at Wonk Bridge and which I hold very close to my heart, personally. Instead of recapping what an echo chamber is, I wanted to touch on this assumption, prevalent in Silicon Valley, and in Tristan Harris’ writings, which I follow closely — that your view of the world is shaped primarily by digital means, that you as an individual will take in most or all of your information from your devices. I’d dispute it in a sense — you’re still more likely to be influenced by friends, family, even employers online, than strangers.

Max: Although those people do comprise the inner circle of one’s social network, presumably.

Hamish: There was a point in the documentary where it subsides into — ‘Echo chambers, yes, we know this, common knowledge etc.’, nothing new. Then I saw the Google results [being tailored and shaped to preference] bit and I thought ‘Crap!’ We tend to be critical of our sources, if we’re smart; that information from Instagram comes from one [milieu], Facebook from another. But Google is your encyclopedia, where you go to find truth, and where people’s opinions will be shaped. The assertion that we all break down into purely atomised individuals, I don’t buy into. I did some research on this recently, and it turns out that Facebook friends are still very likely to be your real friends; you might have an echo chamber in real life anyway. On the other hand, planted sources of information that are coming to you — that’s new.

Yuji: I’d have liked the documentary to, at this point, show us not how political polarisation occurs, which we’re quite familiar with already — the Pew report was brilliant — but rather, ‘How has consumption of information changed?’ Do people still rely on links being sent over by their friends or people who they, in an analog space, would trust? Or are there new people involved? Influencers are the unknown quantity here, in my mind. It kind of suggests nonetheless a willingness to follow perceived authenticity.

Max: I think that a phrase, concerning the way in which social media apportions out information, which I’ve never heard used but which I think is very useful in helping delineate the bounds of the problem we’re talking about here, is ‘decentralised knowledge’. These networks specialise in decentralised knowledge sharing. If the dissemination of knowledge does not come from a set of centralised points, if it’s just coming from your associates’ feeds, then you have few means of orientating yourself around something which puts those pieces of information in order. Obviously the orienting principle here is not distinguishable from the ‘bottom-line’ principle — that bottom line principle being, ‘what kind of news is going to be most profitable?’

The most profitable news, the most ‘perfect’ news, is the kind that keeps you in an endless waltz, keeps you burrowing further into a feed because it doesn’t allow you an answer or an option to resolve that piece of distressing information. This keys into the less scrupulous methods of news dissemination through the years, from William Randolph Hearst’s yellow journalism onwards. What’s different now is that Facebook allows that information to be spread from no point of centrality, removing means of plausible contest that are easier when the point of origin of a piece of information is clearly visible — beforehand, you knew it had come from this paper or this writer, and thus you could, through becoming familiar with the proclivities of the writer or paper, make an informed hypothesis of where its flaws might be. That’s not possible on a news feed.

Furthermore, owing to the way your mind is conditioned to work when poring over a social media feed, you’re focused on the immediately accessible information, not on seeking an alternative to check and contrast what you’ve just heard about. You’re focused on the tactile sensation of using the interface. I think what we’re seeing pretty irrefutably is that, without that centralising principle to help you make sense of so much decentralised information, the thing you’ll fall back on, as you’re alluding to Yuji with your point about influencers, is a vision of yourself. You’ll fall back, in other words, on prejudice.

Yuji: Touching on the centralisation point — I’d like to add a level of detail to that. If you think of the kind of ‘supply chain’ for centralised knowledge, I’d agree that knowledge production and consumption is complicated by being decentralised. However, the bottleneck has moved, to the filtration of knowledge — news aggregation, who decides what piece of information you read. That in my view has centralised quality, and those who’ve centralised it have no incentive for quality, just the maximisation of ad spend.

Max: A practical centralisation, but not an intellectual or editorial centralisation, whatever term you’d prefer to use. Very much the opposite of the News Series concept we’ll be flying with on the new site.

Timestamp 1:29:23 — Credits/ “The Big Fightback”

Max: I think a common bone of contention in a lot of responses to this documentary is the approach to a solution which it takes. I think it’s telling in and of itself that the solution chapter is pressed up against the credits sequence, as though it were an afterthought, and I thought a lot of those solutions mooted were weak. ‘Unsubscribe from app x’, ‘Disable all push notifications’, ‘Renounce email after 6pm’, ‘Give yourself a digital holiday’. That all sounds like addict-speak to me, someone trying to rationalise their overuse to themselves, but for a program that sets up its issues with such grandeur, and rightfully so, and positing such wide-ranging social reach, I found the solutions more than a little impotent.

Hamish: A good point, and I think they do acknowledge it — like at the beginning, where the interviewer asks them all serially “What’s the problem?” and they all hesitate for a really long time. It’s the same with the solution — it’s social media, such a mind-boggling thing, how do you rein it in? Putting reins on it at all does seem far-fetched, but I keep having this idea in my head that once everyone’s a bit scared of the consequences of this stuff, whether or not they want the powers that be to do something, that maybe is where the collective action comes from. Did you see the Black Lives Matter ‘Blackout’ on Instagram? It’s the first and only time I’ve seen people, through collective action, use the associated algorithms to hijack social media, and disrupt content — although what they made was itself content. I remember scrolling through, ten black screens and then an ad, over and over again, thinking ‘Wow’. Alighting on a cause, then using social media for it.

Yuji: I don’t know that using social media would be the best course of action for this. I think this [documentary represents] the early stages of a movement. The early stages of environmentalism were — ‘Recycle’, ‘Be More Conscious’. Now people are on the street, writing to their elected representatives, and serious conversations are being had at a representative political level. Where’s that level of action?

Max: Yeah, I think the major question which faces us down as we come to the end of this feature — is whether or not the response to this comes about by organic means or by structured, contrived means. When you look more historically at the annals of quiet revolution, these things have tended to come from positions wherein mercantile interests and the collective bottom-line were threatened with disturbance, as in the Glorious Revolution in 1688…obviously we’re unlikely to see disturbance of profit motive here, as it’s precisely what’s fuelling this phenomenon. The other kind of quiet revolution is managed by educational principle, beginning a gradual and long-unfolding reform of thought from the grassroots up —

Yuji: And you could make the parallel that many point towards increased education and quality of living as being responsible for, for instance, the fall of the Soviet Union.

Max: Precisely, and of course it never came to flower, but the same thing could be said of Tiananmen; the Renaissance was essentially accomplished by the instantiation of a new system of humanist education, codified in manual educational tracts. It all boils down then to why we feel the need to use these platforms to begin with. As we’ve touched on, they’re not without intrinsic value or potential value as tools; the question we’re really asking here, is why we’ve elected to value them so much higher than what is their substantive output. That’s the question, whether an organic counter-movement does develop or something more contrived and structural and grounded in historical precedent takes the mantle of resisting the ill effects of these technologies, we fundamentally need to ask:

“Why is it that we feel the need in our daily lives to allow these things to monopolise our attentions, and guide us towards ends that are ostensibly, collectively, if not necessarily to an individual (and entirely contingent upon use), bad?”