Categories
Editorial Repost

The Accidental Tyranny of User Interfaces

The potential of technology to empower is being subverted by tyrannical user interface design, enabled by our data and attention.

My thesis here is that an obsession with easy, “intuitive” and perhaps even efficient user interfaces is creating a layer of soft tyranny. This layer is not unlike what I might create were I a dictator, seeking to soften up the public prior to an immense abuse of liberty in the future, by getting them so used to comical restrictions on their use of things that such bullying becomes normalised.

A note of clarification: I am not a trained user interface designer. I am just a user with opinions. I don’t write the following from the perspective of someone who thinks that they could do better; my father, a talented programmer, taught me early on that everyone thinks that they could build a good user interface, but very few actually have the talent and the attitude to do so.

As such, all the examples that I shall discuss are systems where the current user interface is much worse than a previous one. I’m not saying that I could do it better; they did it better, just in the past.

This is not new. Ted Nelson identifies how in Xerox’s much lauded Palo Alto Research Center (the birthplace of personal computing), the user was given a graphical user interface, but in return gave up the ability to form their own connections between programs, which were thereafter trapped inside “windows” — the immense potential of computing for abstraction and connection was dumbed down to “simulated paper.” If you’d like to learn more about his ideas on how computing could and should have developed, see his YouTube series, Computing for Cynics.

Moore’s law describes how computers become exponentially more powerful as time passes; meanwhile our user interfaces — to the extent that they make us act stupidly and humiliate ourselves — are making us more and more powerless.

YouTube’s Android app features perhaps the most egregious set of insulting user interface decisions. The first relates to individual entries for search results, subscriptions or other lists. Such a list contains a series of video previews, each comprising (today) a still image from the video, the title, the name of the channel, a view count, and the publishing date.

What if I want to go straight to the channel? This was possible, once. What if I want to highlight and select some of the text from the preview? I can’t. Instead, the entire preview, rather than acting like an integrated combination of graphics, text and hypertext, is just one big, pretty, stupid button.

This is reminiscent of one of my favorite Dilbert cartoons. A computer salesperson presents Dilbert with their latest model, explaining that its user interface is so simple, friendly and intuitive that it only has one button, which they press for you before it ships from the factory. We used to have choices. Now we are railroaded.


Do you remember when you could lock your phone or use another app, and listen to YouTube in the background? Not any more. YouTube took away this — my fingers hovered over the keyboard there for a moment, and I nearly typed “feature” — YouTube continuing to play in the background is not a feature, it should be the normal operation of an app of that type; the fact that it closes when you switch apps is a devious anti-feature.

YouTube, combining Soviet-style absurdity and high-capitalist banality, offers to give you back a properly functioning app in return for upgrading to Premium. I’m not arguing against companies making additional features available in return for an upgrade. Moreover, my father explained how certain models of IBM computers came with advanced hardware built-in — upgrading would get you a visit from an engineer to activate hardware you already had.

IBM sells you a car, you pay for the upgrade, but realize that you already had the upgraded hardware, they just suppressed it; YouTube sells you a car, then years later turns it into a clown-car, and offers you the privilege of paying extra to make it into a normal car. Imagine a custard pie hitting a human face, forever.

Obviously this simile breaks down in that the commercial relationship between YouTube and me is very different to the one between a paying customer and IBM. If you use the free version of YouTube, you pay the company in eyeballs and data — this sort of relationship lacks the clarity of a conventional transaction, and the recipient of a product or service that is supposedly free leaves themselves open to all manner of abuses and slights, being without the indignation of a paying customer.

WhatsApp used to have a simple, logical UI; this is fast degrading. As with YouTube, WhatsApp thwarts the user’s ability to engage with the contents of the program other than in railroaded ways.

Specifically, one used to be able to select and copy any amount of text from messages. Now, when one tries to select something from an individual message, the whole thing gets selected, and the standard operations are offered: delete, copy, share, etc.

What if I want to select part of a message because I only want to copy that part, or merely to highlight so as to show someone? Not any more. WhatsApp puts a barrier between you and the actual textual content of the messages you send and receive, letting you engage with them only in the ways for which it provides.

On this point I worry that I sound extreme — today I tried this point on a friend who didn’t see why this matters so much to me. Granted, in isolation, this issue is small, but it is one of a genre of such insults that are collectively degrading our tools.

That is to say that WhatsApp pretends that the messages on the screen belong to some special category, subject only to limited operations. No. It’s text. It is one of the fundamental substrates of computing, and any self-respecting software company ought to run on the philosophical axiom that users should be able to manipulate it, as text.
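
The restriction is a design choice, not a technical necessity. Here is a minimal sketch in Python’s tkinter (the widget choice and strings are my own, purely illustrative): whether on-screen text is selectable typically comes down to a single decision.

```python
import tkinter as tk

root = tk.Tk()
root.title("Two ways to render a message")

# Choice 1: the message as an inert widget. It looks like text, but no part
# of it can be selected or copied -- the railroaded approach described above.
tk.Label(root, text="Label: you cannot select this text").pack(padx=12, pady=6)

# Choice 2: the message as real text. Read-only, yet selecting and copying
# any substring still works.
msg = tk.Text(root, height=1, width=40)
msg.insert("1.0", "Text widget: select any part of me")
msg.configure(state="disabled")  # read-only, but selection/copy still allowed
msg.pack(padx=12, pady=6)

root.mainloop()
```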

Another quasi-aphorism from my father. We were shopping for a card for a friend or relative, in the standard Library of Congress-sized card section in the store. Looking at the choices, comprehensively labelled 60th Birthday, 18th Birthday, Sister’s Wedding, Graduation, Bereavement, etc., he commented: “Why do they have to define every possible occasion? Can’t they just make a selection of cards and let me write that it’s for someone’s 60th birthday?”

This is about the shape of it. The Magnetic North toward which UIs appear to be heading is one in which all the things people think you might want to do are defined and given a button. To refer to the earlier automotive comparison, this would be like a car without a steering wheel or gas pedal. Instead there’s a button for each city people think you might want to visit.

There’s a button for Newark but not for New York City? Hit the button for Newark, then walk the rest of the way. What kind of deviant would want to go to New York City anyway, or, for that matter, what muddle-headed lunatic would desire to go for a drive without having first decided upon the destination?

I work in the Financial District in Manhattan. Our previous building had normal lifts: you call a lift and, inside, select your floor. This building has a newer system: you go to a panel in the lobby and select your floor, and the system tells you the number of the lift it has called for you. Inside, you find that there are no buttons for floors.

This is impractical. Firstly, there is no way to recover if you accidentally get in the wrong lift (more than once, the security guards on the ground floor have seen a colleague and me exit a lift with cups of coffee and laptops, call another, and head straight back upstairs). Meanwhile, one has to remember one’s assigned lift in order for the system to function. I don’t go to the office to memorize things, I go to the office to work. Who wants to try to hold the number of their lift in mind while trying to talk to a friend?

More importantly, and just like WhatsApp, it’s like getting into a car but finding the steering wheel immovable in the grip of another person, who asks, “Where would you like to go?” What if I get in the lift and change my mind? And this is to say nothing of the atomizing effect the system has on people. Before, we might get into a lift and I, being closest to the control panel, would ask “which floor?” Now we’re silent, and there’s one fewer interruption between the glint of your phone, the building, and the glass partitions of your coworking space.

My father set up my first computer when I was 8 or 9 years old. Having successfully installed Red Hat GNU/Linux, we booted for the first time. What we saw was something not unlike this:

[Image: a Linux boot log, listing each process as it starts]

This is a list of the processes that the operating system has launched successfully. It runs through it every time you start up. I see more or less the same thing now, running the latest version of the same Linux. It’s a beautiful, ballsy thing, and if it ever changes I will be very sad.

Today, our software treats us to what you might call the Ambiguous Loading Icon. Instead of a loading bar, percentage progress, or list, we’re treated to a thing that moves, as if to say we’re working on it, without any indication that anything is happening in the background. This is why I like it when my computer boots and I see the processes launching: there’s no wondering what’s going on in the background, this is the background.

One of the most egregious examples of this is in the (otherwise functional and inexpensive) Google Docs suite, when you ask it to convert a spreadsheet into the Google Sheets format:

[Image: the Google Docs loading screen]

We’re treated to a screen with the Google Docs logo and a repeating pattern that goes through the four colors of Google’s brand. Is it doing something? Probably. Is it working properly? Maybe. Will it ever be done? Don’t know. Each time that ridiculous gimmick flips colors is a slap in the face for a self-respecting user. Every time I tolerate this, I acclimatize myself to the practice of hiding the actual function and operation of a system from the individual, or perhaps even to the idea that I don’t deserve to know. This is the route to totalitarianism.
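
The alternative is not technically difficult. A toy sketch in Python (the task names are invented) contrasting motion-without-information with the boot-log style of feedback, which names each step as it actually happens:

```python
import sys
import time

TASKS = ["parse spreadsheet", "convert formulas", "write output file"]  # invented names

def ambiguous_spinner(seconds):
    """Motion without information: the screen changes, but reveals nothing."""
    start = time.time()
    while time.time() - start < seconds:
        for glyph in "|/-\\":
            sys.stdout.write("\r" + glyph)
            sys.stdout.flush()
            time.sleep(0.1)
    sys.stdout.write("\rdone (but done doing *what*?)\n")

def honest_progress(tasks):
    """The boot-log alternative: name each step as it actually happens."""
    for i, task in enumerate(tasks, start=1):
        print(f"[{i}/{len(tasks)}] {task}... OK")
        time.sleep(0.5)  # stand-in for real work

ambiguous_spinner(2)
honest_progress(TASKS)
```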

I’m not pretending that this is easy. I understand that software and user interface design is a compromise between multiple goals: feature richness (which often leads to difficult user interfaces), ease of use (which often involves compromising on features or hiding them), flexibility, and many others.

I might frame it like this:

  1. There exists an infinite set of well-formed logical operations; that is, there is no limit to the number of non-contradictory logical expressions (e.g. A ⊃ B, “the set A contains B”) that one can define.
  2. Particular programming languages allow a subset of such expressions, as limited by the capabilities and power of the hardware (even if a function is possible, it might take an impractical amount of time (or forever) to complete).
  3. Systems architects, programmers and others provide for a subset of all possible operations as defined by 2. in their software.
  4. Systems architects, programmers and others create user interfaces that allow us to access a subset of 3. according to their priorities (a toy sketch of layers 3 and 4 follows this list).
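
Here is that toy sketch in Python (every name is invented for illustration): the software implements a set of operations on message text, and the interface wires buttons to only a subset of them.

```python
# Layer 3: the software implements a set of operations on message text.
class MessageStore:
    def __init__(self, messages):
        self.messages = messages

    def copy_whole(self, i):
        """Copy an entire message."""
        return self.messages[i]

    def copy_part(self, i, start, end):
        """Copy any substring: implemented, cheap, perfectly possible."""
        return self.messages[i][start:end]

# Layer 4: the user interface wires buttons to only a subset of layer 3.
UI_BUTTONS = {"Copy": MessageStore.copy_whole}  # copy_part gets no button

store = MessageStore(["meet at noon by the fountain"])
print(UI_BUTTONS["Copy"](store, 0))  # the one railroaded path the UI permits
print(store.copy_part(0, 8, 12))     # -> 'noon'; exists one layer down, hidden by the UI
```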

They have to draw the line somewhere. It feels like software creators have placed too much emphasis on prettiness and ease of use, very little on freedom, and sometimes almost no emphasis on letting the user know what’s actually happening. I’m not asking for software that provides for the totality of all practical logical operations, I’m asking for software that treats me like an adult.

Some recommendations:

  1. Especially for tools intended for non-experts, there seems to be a hidden assumption that the user should be able to figure it out without training, and figure it out by thrashing around randomly when the company changes the user interface for no reason. A version of this is laudable, but it often leads to systems so simplistic that they make us feckless and impressionable. Perhaps a little training is worth it.
  2. No fig-leaves: hiding a progress message under an animated gimmick was never worth it.
  3. Perhaps the ad-funded model is a mistake, at least in some cases. As in the case of YouTube, it’s challenging to complain about an app for which I don’t pay conventionally. The programs for which I do pay, for example Notion, are immensely less patronizing. Those for which I don’t pay conventionally, but which aren’t run on ads, like GNU/Linux, LibreOffice, Ardour, etc., are created by people who so value things like openness, accessibility and freedom (as in free) that they border on the fanatical. Perhaps we should pay for more stuff and be more exacting in our values. (Free / open source software is funded in myriad ways, too complex to explore here.)

All this matters because the interfaces in question do the job of the dictator and the censor, and we embrace it. More than being infuriating, they train us to accept gross restrictions in return for trifling or non-existent ease of use, or are a fig leaf covering what is actually going on.

Most people do what they think is possible, or what they think they are allowed to do. Do you think people wouldn’t use a Twitter-like “share” function on Instagram, if one existed? What about recursive undo/redo functions that form a tree of possible document versions? Real hyperlinks that don’t break when the URL for the destination changes?

We rely on innovators to expand our horizons, while in fact they are defining limited applications of almost unlimited possibilities. Programmers, systems architects, businesspeople and others make choices for us: in doing so they condition in us that which feels possible. When they do so well, they are liberators; when they do so poorly, we are stunted.

Some of these decisions appear to be getting worse over time, and they dominate some of the most popular (and useful) systems; the consciousness-expanding capabilities of technology are being steered into a humiliating pose in a cramped space, not by force, but because the space is superficially pretty, easy to access and because choices are painful.

This article was originally posted on Oliver Meredith Cox

Categories
Editorial

Satanic Panic 2: Facebook Boogaloo

[Image: McMartin Preschool in the early 1980s // Investigation Discovery]

[Image: 2004 study shows crime reporting dominates local TV coverage // Pew Research]

[Image: Not your typical fringe conspiracy aesthetic]

[Image: Notice the QAnon hashtags #greatawakening, #maga, and #painiscoming]
Categories
Editorial

‘Who Watches the Watchers?’ — The Internet and the Panopticon

How the philosophical concept of the Panopticon can help us visualise the structure of the internet and our position in it.

[Image: Panopticon according to Bentham’s original design]

What does the Panopticon concept offer us in terms of understanding the relationship between individual and society in today’s world and what future developments might result from this understanding?

The Internet-as-Panopticon
The individual as a ‘prisoner’ of the Panopticon

The role of the Panopticon ‘guard’ online
[Image (source: https://www.slashfilm.com/the-quarantine-stream-justice-for-nurse-ratched-in-one-flew-over-the-cuckoos-nest/)]

[Image: Presidio Modelo today]
Categories
Roundtable

The Unintentional Comedy: A Commentary on The Social Dilemma

Wonk Bridge Reviews the Year’s Funniest Warning

You cannot accuse the Western populace of the Early Digital period of not having a sense of humour about the rather extensive list of predicaments facing down their collective bid for a peaceful and productive digital life — to mark an age which, in the totality of its mental state if not its material circumstances, feels thoroughly dystopian, they have also inaugurated a “golden age of dystopian fiction.”

Whether or not The Social Dilemma is fiction is hard to say — both with respect to the melodrama it self-consciously makes out of its perils-of-social-media thesis, and in terms of the questionable cuts of soap opera it slides between its earnest galleries of talking-and-postulating heads. It is, however, certainly dystopian in feel. In fact, it could be suggested that for a documentary about the questionable sociopolitical impact of big technology and social media, it rather seems to enjoy exulting in its dystopian premise too much, preferring the warmth of the gradually boiling water it is immersed in to formulating and prescribing robust solutions to the problems it surfaces.

Whatever The Social Dilemma is, and however good or bad, it can only hope to explore a few of the avenues of its vast subject (which is, for all intents and purposes, the future itself) in its 90-minute run-time. A good many of its points, its virtues, and its loose-ends merit further elaboration. That’s why a bunch of Wonk Bridge staffers got around the table for a commentary.

Elaborate, we said we must. And elaborate, we most certainly did. We took five representative timestamps from the documentary, which you can watch here, and discussed the events depicted. With you for this commentary are Wonk Bridge’s co-founder Yuji Develle, head of UX and product design Alessandro Penno, contributor Hamish Pottinger, and myself, co-founder Max Gorynski.

Timestamp 8:16 — “My email to Google”

Max: I thought this clip constituted some of the funniest minutes of TV I’ve seen all year. All the way through it’s leading up to a moment of almost perfect bathos: the obsessive regard over minute facets of product design which collectively connote very little to anyone who uses the product; the way in which this whole trope of ‘Google, the miracle of agility and efficiency,’ is invoked sympathetically even though Google is supposed to be one of our antagonists; our protagonist’s hubris, “I thought I was starting a revolution” with a soothingly platitudinous email, and his having seemingly failed to notice that he’s working for ‘What You Most Want to Hear Inc.’; and then the way it just dissolves into apathy afterwards. Even though the point of this episode is not supposed to be comic, the way it landed was exquisitely comic in a very bleak way.

Yuji: Yeah, the way he focuses on something so mundane, and then when it comes to solving a big issue, he doesn’t have many ideas.

Max: That, and the way in which the language which tends to surround big tech is very self-consciously revolutionary, in what is usually a rather superficial or even facile sense — and when it comes to commuting that intent into reformative action, it rarely seems to achieve any real lift-off, once the ‘Founder’s Excitement’ around the original spike phase has passed.

Alessandro: And the way it was framed, ‘I lit the torch, this is it!’ It gets back to the idea [that we discussed prior to taping] that the documentary sometimes goes over its own head, and emotionally exaggerates what it’s trying to do. The documentary ends up obscuring its own point by trying to be a movie, being so absurdly dramatic — you made a PowerPoint, you got the ball moving maybe, but why act like it was some big thing?

Hamish: I think throughout the documentary as a whole, every user is treated as a completely passive entity; the role of the user, and their awareness…I don’t think it comes in at all. And Tristan Harris is building up Gmail as something that can totally brainwash you…it all seems a little ramped-up to me.

Yuji: I might have an unpopular opinion here, but when the commentator said ‘We need to make email more boring,’ my reaction was — ‘Hold up, I spend 8 hours a day on email. I want my app to be engaging, I don’t want it to be even more boring, and it is very boring.’ We talk about gamification of work often in industry, making work a little more fun — and believe me, it needs to be more on the fun end of things, not less.

The problem here is not the tool — it’s the culture of work, of always being ‘on’, which Silicon Valley has been quite responsible for bringing about.

Max: There’s a real serious perspectival flaw in the documentary in general, which is that almost all the voices you hear from — and I admit that on a cursory level, it’s a daring conceit, having the craftspeople of these technologies come in to, ostensibly, fill [those technologies] full of lead — but all the voices you hear from possess the same narrowness of perspective that helped shape the issues concerned. As you say Yuj, the idea of Tim being addicted to email is almost entirely related to his own personal proclivities relating to him:

a) As a programmer, and someone involved in the design of the product itself, and…

b) As someone who is naturally an evangelist of that kind of creed of work.

It’s a criticism you can generalise in other directions, and it has been expressed elsewhere, that there are a lot of voices that exist apart from this ecosystem, and come from a variety of disciplinary backgrounds…people like Meredith Broussard [Alice Thwaite, Nick Diakopoulos and more] who can broaden the argument on ethics and technology, who have different points of geopolitical reference, different educational backgrounds. Remember that stat, that some 40% of all founders of American startups attended either Harvard or Stanford [sic: the correct statistic is that 40% of all venture capitalists active in America went to these two schools].

Alessandro: Email as a function, an app, is so primitive. It’s based around a refresh engine, with chronological listing and folders. Notifications — very basic too. “A new email’s here!” Oh, well, great. To use that as an example of addictive technology [shows a way in which the documentary is out-of-touch].

Yuji: It’s cute isn’t it?

Alessandro: You could’ve talked about Facebook, with its much more sophisticated notification system, how it works the user emotionally through friends and relationships; or Instagram with stories etc. Email? The least addictive thing.

Yuji: One thing might be that they wanted to start out with email, the most mundane tool, as a way of saying ‘Oh wow, if he’s addicted to this, then the rest is going to be really scary.’ Not sure if that’s the strongest approach rhetorically.

Max: I would love to believe that the doc works on levels that are that subtle, Yuj, but I think a lot of the less valuable aspects of it which we’ve intuited owe to the fact that it sets about its subject like a sledgehammer, rather than a finely honed chisel.

Hamish: If we take it broader — it’s interesting to explore what’s driving the need to use our phones so much. If it’s an addiction, something’s driving it, there’s a human aspect there. We understand the human aspect of drug addiction now, but The Social Dilemma was more like one of those 50s anti-drug PSAs, where the kid takes one toke of weed and jumps out of his window. I’d have liked for the documentary to explore more what is actually driving us to overuse these tools.

Timestamp 27:18 — “Growth Tactics, Manipulation, and the Lab Rat”

Alessandro: What I really found interesting about this timestamp is how, yes, we are like lab rats, and these apps in general are testing us on a mass scale…These companies can so quickly iterate on the results of [psychometric] A/B testing, about what drives the most traffic, the most reactions. No other product can conjure such precise change in what you’re looking for.

Yuji: The problem I have with these kinds of mass media documentaries is that they tend to insult viewer intelligence and dumb down a lot of the technical concepts and mechanics. Breaking this all down to A/B testing is like saying that all governments create economic plans through ‘statistics’, and that ‘statistics’ are the reason you pay this much in taxes. There’s more to it than that. A/B testing can be used for good, it has to be done…we’re not talking about ‘Do you prefer blue or red?’-type A/B testing. We’re talking about ‘Does including the word terrorist in an immigration article drive more clicks?’-type A/B testing.

Alessandro: Yeah. I mean…

Imagine you and I are both reading the same book; or rather, we believe we’re buying and reading the same book, but based on how we act after reading it, the book changes.

We need to remember that not all experiences of these products are the same. A/B testing is a very valid, reasonable way to test your product — but it’s interesting that the product that’s pushed is the one most beneficial to that company, and they can choose [the narrative] based on how people react. Not everyone is getting the same thing, even though we all think we are.

Max: What do you think is the future of regulating this?

Alessandro: We should know. We should know more. I know Facebook sometimes tells you — ‘Are you okay to be a user tester in this group?’ But it should be abundantly clear [at all times] what is going on, who is being observed. I hope that transparency movement becomes stronger.

Yuji: I would agree, and add also that, while everyone says ‘Education [about this] is important’, people are not aware that their reality could be different, in what they’re exposed to, from other people’s reality. That goes in the real world, too. It’s very easy to vulgarise [your conception of other people’s reality]. There needs to be awareness that there’s this second round of information seeking — after you’ve seen what you’ve seen, you must ask ‘What do others see?’ That’s needed.

Hamish: I do wonder what the difference is between these approaches and typical, manipulative top-down real-world advertising. It’s another point of bathos in the documentary; Tristan is at a live conference and he brings up that point. ‘It’s [the same set of tactics] used in [advertising and in other things].’ And then a cut to him in a room saying ‘No, it’s completely different.’ It never really answers the question.

In terms of power and scale, I’ll accept, we’ve never seen anything like it; but from the 50s and 60s, we’ve been on this course, developing means of tapping into your psychology to sell you things. I kept thinking of Mad Men — the guy who plays Pete Campbell in Mad Men is the embodiment of A.I. in this. I wonder if that was deliberate?

Timestamp 33:24 — “Social media is a drug and a biological imperative”

Max: The clip cut short before an excerpt from the year’s scariest horror movie, in which a family is ruthlessly disintegrated by the scourge of push notifications.

Yuji: We’re joking about it, but it does happen.

Max: True, I suppose.

Yuji: I think as well we should salute the Netflix formula of combining the fictional narrative and the documentary elements; it makes it very watchable, and it’s best to get people to watch till the end. Makes it more contextual.

Hamish: I’m not mad about the dramatisations, whether they’re intentionally funny or not.

Alessandro: The question I want to ask at this timestamp is this — millions of years of evolution have shaped our internal drive for social connection and understanding, and we are not wired to handle the abundance of data and connection given to us by social media. So how can we go back and fix this? If we ever can? How can we pull ourselves out of this addiction to social media?

It goes especially for those who grew up with this advanced, sophisticated abundance of social media all through their childhood. Can we pull back from this direction?

Max: I think the subsequent question that that first question unfolds into is ‘Is a solution to this primarily technological, or is it primarily counter-technological? Are software fixes or UX changes going to be the thing that reverses this social engineering? Or is it the solution to that likely to be found in a more unfashionable set of circles?’ Because my feeling is if you could use the exact same market imperatives to create an adequate solution, one would already have been created.

I think one of the reasons it can be hard to push this conversation we’re having at the minute to a scale that involves a very large share of the public, is because a lot of people aren’t all that interested in the idea that, in this world of supreme convenience created by these technologies, the first-stage solution to moderating their use is, in fact, to moderate their use, using the human machinery as opposed to a plug-in. Is the solution analogue, or is it digital?

Alessandro: To add to that, I thought the whole plot symbol of the safe was funny, because with all this technology, the family ends up resorting to a simple safe that locks.

Hamish: I can’t see a solution that would be technological.

I think [the solution comes] with the public reaching digital maturity. We’ll become aware of all these negative effects, which will create a bubble [of dissonant feeling] which will then burst, and we’ll all realise that we’re, in fact, not that fussed by social media.

I think it’ll come down to that. We’ll be more educated about these things. Regulation might’ve come in, reining in how much political influence these things can wield.

Alessandro: Let me just counter that, Hamish. If broadcast media hasn’t been able to regulate itself, has gotten more and more polarised, and abides by the same principles, how can we expect the same of social media? I feel regulations could work, but to what end?

Max: I think it’d be useful to interject at this point to recall that there were certain provisions made for the integrity of the media, in the States at least; FCC regulations, specifically the Fairness Doctrine, designed to prevent the rise of what became the Rush Limbaugh school of radio and televisual reporting, were repealed during the Reagan era to enlarge the potential market the media could reach. A lot of major movements in media you can see coming to their fullest fruition or flower now, were seeded to an extent in the 80s, and it’s just been a case of the process of vertical integration moving, lumbering as inevitably as a planet towards completion, and then being junctioned to this raft of fabulous new technologies that we have now.

Timestamp 55:30 — “2.7 billion Truman Shows”

Yuji: This is the echo-chamber problem, that we know well at Wonk Bridge and which I hold very close to my heart, personally. Instead of recapping what an echo chamber is, I wanted to touch on this assumption, prevalent in Silicon Valley, and in Tristan Harris’ writings, which I follow closely — that your view of the world is shaped primarily by digital means, that you as an individual will take in most or all of your information from your devices. I’d dispute it in a sense — you’re still more likely to be influenced by friends, family, even employers online, than strangers.

Max: Although those people do comprise the inner circle of one’s social network, presumably.

Hamish: There was a point in the documentary where it subsides into — ‘Echo chambers, yes, we know this, common knowledge etc.’, nothing new. Then I saw the Google results [being tailored and shaped to preference] bit and I thought ‘Crap!’ We tend to be critical of our sources, if we’re smart; that information from Instagram comes from one [milieu], Facebook from another. But Google is your encyclopedia, where you go to find truth, and where people’s opinions will be shaped. The assertion that we all break down into purely atomised individuals, I don’t buy into. I did some research on this recently, and it turns out that Facebook friends are still very likely to be your real friends; you might have an echo chamber in real life anyway. On the other hand, planted sources of information that are coming to you — that’s new.

Yuji: I’d have liked the documentary to, at this point, show us not how political polarisation occurs, which we’re quite familiar with already — the Pew report was brilliant — but rather, ‘How has consumption of information changed?’ Do people still rely on links being sent over by their friends or people who they, in an analog space, would trust? Or are there new people involved? Influencers are the unknown quantity here, in my mind. It kind of suggests nonetheless a willingness to follow perceived authenticity.

Max: I think that a phrase, concerning the way in which social media apportions out information, which I’ve never heard used but which I think is very useful in helping delineate the bounds of the problem we’re talking about here, is ‘decentralised knowledge’. These networks specialise in decentralised knowledge sharing. If the dissemination of knowledge does not come from a set of centralised points, if it’s just coming from your associates’ feeds, then you have few means of orientating yourself around something which puts those pieces of information in order. Obviously the orienting principle here is not distinguishable from the ‘bottom-line’ principle — that bottom line principle being, ‘what kind of news is going to be most profitable?’

The most profitable news, the most ‘perfect’ news, is the kind that keeps you in an endless waltz, keeps you burrowing further into a feed because it doesn’t allow you an answer or an option to resolve that piece of distressing information. This keys into the less scrupulous methods of news dissemination through the years, from William Randolph Hearst’s yellow journalism onwards. What’s different now is that Facebook allows that information to be spread from no point of centrality, removing means of plausible contest that are easier when the point of origin of a piece of information is clearly visible — beforehand, you knew it had come from this paper or this writer, and thus you could, through becoming familiar with the proclivities of the writer or paper, make an informed hypothesis of where its flaws might be. That’s not possible on a news feed.

Furthermore, owing to the way your mind is conditioned to work when poring over a social media feed, you’re focused on the immediately accessible information, not on seeking an alternative to check and contrast what you’ve just heard about. You’re focused on the tactile sensation of using the interface. I think what we’re seeing pretty irrefutably is that, without that centralising principle to help you make sense of so much decentralised information, the thing you’ll fall back on, as you’re alluding to Yuji with your point about influencers, is a vision of yourself. You’ll fall back, in other words, on prejudice.

Yuji: Touching on the centralisation point — I’d like to add a level of detail to that. If you think of the kind of ‘supply chain’ for centralised knowledge, I’d agree that knowledge production and consumption is complicated by being decentralised. However, the bottleneck has moved to the filtration of knowledge — news aggregation, who decides what piece of information you read. That, in my view, has become centralised, and those who’ve centralised it have no incentive for quality, just the maximisation of ad spend.

Max: A practical centralisation, but not an intellectual or editorial centralisation, whatever term you’d prefer to use. Very much the opposite of the News Series concept we’ll be flying with on the new site.

Timestamp 1:29:23 — Credits/ “The Big Fightback”

Max: I think a common bone of contention in a lot of responses to this documentary is the approach to a solution which it takes. I think it’s telling in and of itself that the solution chapter is pressed up against the credits sequence, as though it were an afterthought, and I thought a lot of those solutions mooted were weak. ‘Unsubscribe from app x’, ‘Disable all push notifications’, ‘Renounce email after 6pm’, ‘Give yourself a digital holiday’. That all sounds like addict-speak to me, someone trying to rationalise their overuse to themselves, but for a program that sets up its issues with such grandeur, and rightfully so, and positing such wide-ranging social reach, I found the solutions more than a little impotent.

Hamish: A good point, and I think they do acknowledge it — like at the beginning, where the interviewer asks them all serially “What’s the problem?” and they all hesitate for a really long time. It’s the same with the solution — it’s social media, such a mind-boggling thing, how do you rein it in? Putting reins on it at all does seem far-fetched, but I keep having this idea in my head that once everyone’s a bit scared of the consequences of this stuff, whether or not they want the powers that be to do something, that maybe is where the collective action comes from. Did you see the Black Lives Matter ‘Blackout’ on Instagram? It’s the first and only time I’ve seen people, through collective action, use the associated algorithms to hijack social media, and disrupt content — although what they made was itself content. I remember scrolling through, ten black screens and then an ad, over and over again, thinking ‘Wow’. Alighting on a cause, then using social media for it.

Yuji: I don’t know that using social media would be the best course of action for this. I think this [documentary represents] the early stages of a movement. The early stages of environmentalism were — ‘Recycle’, ‘Be More Conscious’. Now people are on the street, writing to their elected representatives, and serious conversations are being had at a representative political level. Where’s that level of action?

Max: Yeah, I think the major question which faces us down as we come to the end of this feature — is whether or not the response to this comes about by organic means or by structured, contrived means. When you look more historically at the annals of quiet revolution, these things have tended to come from positions wherein mercantile interests and the collective bottom-line were threatened with disturbance, as in the Glorious Revolution in 1688…obviously we’re unlikely to see disturbance of profit motive here, as it’s precisely what’s fuelling this phenomenon. The other kind of quiet revolution is managed by educational principle, beginning a gradual and long-unfolding reform of thought from the grassroots up —

Yuji: And you could make the parallel that many point towards increased education and quality of living as being responsible for, for instance, the fall of the Soviet Union.

Max: Precisely, and of course it never came to flower, but the same thing could be said of Tiananmen; the Renaissance was essentially accomplished by the instantiation of a new system of humanist education, codified in manual educational tracts. It all boils down then to why we feel the need to use these platforms to begin with. As we’ve touched on, they’re not without intrinsic value or potential value as tools; the question we’re really asking here, is why we’ve elected to value them so much higher than what is their substantive output. That’s the question, whether an organic counter-movement does develop or something more contrived and structural and grounded in historical precedent takes the mantle of resisting the ill effects of these technologies, we fundamentally need to ask:

“Why is it that we feel the need in our daily lives to allow these things to monopolise our attentions, and guide us towards ends that are ostensibly, collectively, if not necessarily to an individual (and entirely contingent upon use), bad?”

Categories
Editorial

Who Are You? A Short History of Identity Online

What social media does with us and what we do with it…

The Internet is a bit like gravity, immune systems, or verbal communication — a humble miracle which we abuse, and which becomes more improbable and complicated the more we try to comprehend it. And despite bookending our waking days and being the topic and medium of so much conversation, how much do we actually know about the effect of Mark Zuckerberg’s apps — through which a huge part of our contribution to and understanding of the world occurs — on our identities?

Social media is* an era-defining phenomenon because it has reinvented communication; up to the beginning of any adult’s lifetime, human interaction took place predominantly in a single time or place — mostly both. The TV news was shown at a specific time in your country, and if it made you want to call Tony Blair a liar you probably said it to your partner or cat. Now, anything can be viewed by anyone, anywhere if content is made accessible to the public. Private communication can also occur in any place and at any time. This turns out to be quite a big deal when you consider how much of our lives — work, music, politics, family, and religion — is mediated by apps. So, how exactly are our identities being reshaped?

An initial line of thinking might be this: the Internet breaks down national barriers and gives the world a more singular identity. In some respects, it seems intuitive — there are plenty of examples of cultural practices spreading rapidly with the movement of people and things, from the ubiquitous consumption of coffee and potatoes to the birth of such precious international music genres as Turkish psychedelic rock or Nigerian synth funk. In fact, culture no longer has to spread across borders, but already exists in its own delocalised and often anonymous world online. This is why we see a kind of universal culture emerging synchronously through the likes of Instagram and TikTok, manifested in the form of dances, clothes, images (memes), interests (likes and follows), and crazes.

Global village, global pillage, local knowledge

Are we becoming homogenised, then? While it is tempting to assume so, the more we explore the idea of homogenisation, the less certain it seems. Yes, a given human being will likely have a Samsung or Apple device somewhere on her person. Yes, democracy more or less reigns in all continents (I’m counting the social dynamics of penguin huddles). But economies are one thing and culture another; in terms of our values and practices, while there are bound to be similarities emerging, in many ways we are still different.

In the UK and the US we aren’t adopting foreign styles of dressing or dancing — and are mostly discouraged from doing so. It also seems obvious that nationalism and wilful ignorance of the experience of others is rife; on the morning of writing, the BBC headlines include two separate disputes over national waters, in the Aegean and the Taiwan Strait, and a MAGA rally that defied Covid-19 regulations — each of them backed by no small number of nationalists. Also in the news today is the emergence of ‘LGBT-free zones’, the result of a movement whose leader cites Western sexual ideology as a threat to Polish traditions. The movement distributes stickers that show an ‘X’ running through the rainbow flag, placing itself in opposition to an image that is understood globally as a symbol of tolerance. So much for mutual understanding, then.


Nevertheless, like with any complex phenomenon, the natural human reaction is to latch onto a belief to make sense of it all — hence ‘globalisation’ is taken for granted as something that has happened over the last 20 or so years. Yet evidence that globalisation is not a straightforward process can be found in the collective belief that it is. In other words, the mere perception that Western culture is taking over leads to acts of resistance; anti-English language and anti-cultural appropriation sentiments express understandable fears that traditions which define identities are being broken down and/or illegitimately consumed. Naturally, people from minority cultures might therefore place more importance on their identities and practices, and use the global resource of social media to do so.

But, you might say, those interested in foreign cultural practices can engage with them on their phones. Absolutely — our window to the outside world has rapidly moved from collective mass media consumption to personalised feeds, and if I’m into capoeira I will see Brazilians doing flips when I open up Instagram. My social media feeds are like no one else’s in the world — I define them and they define me. But would this indicate that we are homogenised, or that we are fragmented? Are we now defined by our membership of microcultural groups, as gamers, korfball players, yoga practitioners, and horticulturalists? Rather than all being the same, we could now be infinitely different. Given that we are now in control of our media, and it makes no difference whether our interlocutors are sitting in the house next door or in Paraguay, aren’t locality and nationality irrelevant to our identities?

Not quite — life online is still bound to geographical location. Our Facebook friends are still mostly made up of people physically close to us and our news consumption is probably national or local. Also, if social media can strengthen ties for those with niche cultural interests, it can do so for those with an interest in their nation as well. Moments of collective national experience — royal weddings and elections — are still published, consumed, and discussed with disgust, zeal, and venom on Twitter. We may process the world in a delocalised cyberspace, but that is still a place with frontiers.

What’s more, those frontiers may be policed by nation-states that (try to) ban certain apps and exploit the popularity of others. Cambridge Analytica certainly made a good go of harvesting our contributions to the world while manipulating the aforementioned personalised flow of information to push not-so-personalised nationalist rhetoric on the receiver. We are not in control of our feeds after all. Social media has not quite signalled the ‘death of the state’ as some claim. Rather, governments maintain the power to promote ideal values and identities from both above and below, at both macro-levels of communication (by manipulating advertising/posting content on official accounts) and micro-level ones (generating shares, comments, and discussion).

The verdict

So, which is it, fragmentation or homogenisation?

Hmm. It is probably impossible to measure at the level of academic research, though it seems that these two ideas, although contradictory, are not mutually exclusive. A curious and convincing answer from Prof. Stig Hjarvard suggests that various things are happening at once, and it cannot be said that new technologies move society in any single direction. Crudely summarised, he describes how, in two separate ways, our cultural identity flows freely across borders:

Global identity:
  • Fortnite dances, dabbing, bottle flips, things generally done by adolescent boys who drink Monster.
  • Cardi B, Despacito, Kim Kardashian, and other phenomena that are the topic of conversation for millions (billions?) of people worldwide.
Individual identity:
  • The ability to be part of transnational ‘microcultures’ of your choosing — gamers, knitters, samba for Satanists, etc.
  • Your personal feed (when not violated by Dominic Cummings)
  • Uploading content of yourself for public consumption

And in two ways, our identities are still connected to where we live:

National identity:
  • Collective ripping of Prince Andrew on Twitter
  • Captain Tom’s fundraiser
  • (Possibly sneaky) social media activity of political parties
Local identity:
  • Neighbourhood Facebook groups
  • Political separatist media

It would be extremely wanting to conclude that life is just the same as before, just online. A world where you can figuratively walk up to the global elite and call them dinguses is not the same as a world with one radio station and no television. I think we would all agree that social media has changed us in some way, and I would argue that it does so variably in these four ways, and sometimes simultaneously. Sometimes, social media allows us to extend our preexisting, physical lives, but it can also expose us to all manner of novelties and take us away from our surroundings. Where the balance is drawn is impossible to tell and will inevitably vary depending on the person. And perhaps that’s just it — which of these four identities predominates will depend on what we believe, who and what we’re influenced by, what we (dis)like, and who we want to be. In the true spirit of the age of personalised content, it is tailor-made for each one of us.


* Yes, I shall be using ‘media’ in the singular — please don’t @ me.

Categories
Five Minuter

Astroturfing — the sharp-end of Fake News and how it cuts through a House-Divided

 

A 5-minute introduction to Political Astroturfing.

Dear Reader,

At Wonk Bridge, among our broader ambitions is a fuller understanding of our “Network Society”[1]. In today’s article, we’re aiming to connect several important nodes in that broader ambition. Our more seasoned readers will already see how Political Astroturfing simultaneously plays on both the online and offline to ultimately damage the individual’s ability to mindfully navigate in-between dimensions.

Definition

Political Astroturfing is a form of manufactured and deceptive activity initiated by political actors who seek to mimic bottom-up (or grassroots) activity by autonomous individuals. (Slightly modified from Kovic et al. 2018’s definition, which we found most accurate and concise.)

While we will focus on astroturfing conducted exclusively by digital means, do keep in mind that this mischievous political practice is as old as human civilisation. People have always sought to “Manufacture Consent” through technologically-facilitated mimicry, and have good reason to continue resorting to the prevalent communications technologies of the Early Digital age to do so. And without belabouring the obvious, mimicry has always been a popular tactic in politics because people continue to distrust subjectivity from parties who are not friends/family/ “of the same tribe”.

Our America Correspondent and Policy-columnist Jackson Oliver Webster wrote a piece about how astroturfing was used to stir and then organise the real-life anti-COVID lockdown protests across the United States last April. Several actors began the astroturfing campaign by opening a series of “Re-open” website URLs and then connecting said URLs to “Operation Gridlock” type Groups on Facebook. Some of these Groups then organised real-life events calling for civil unrest in Pennsylvania, Wisconsin, Ohio, Minnesota, and Iowa.

The #Re-Open protests are a great example of the unique place astroturfing has in our societal make-up. They work best when taking advantage of already volatile or divisive real-world situations (such as the Covid-19 lockdowns, which were controversial amongst a slice of the American population), but are initiated and sped up by mischievous actors with intentions unaligned with those of the protesters themselves. In Re-open’s case, one family of conspirators — the Dorr Brothers — had used the websites to harvest data from website visitors and to push anti-lockdown and pro-gun apparel to them. The intentions of the astroturfers can thus be manifold, from a desire to stir up action to fuelling political passions for financial gain.

The sharp-end of Fake news

Astroturfing will often find itself in the same conversational lexicon as Fake News. Both astroturfing and fake news are seen as ways to artificially shape peoples’ appreciation of “reality” via primarily digital means.

21st century citizenship, concerning medium/large scale political activity and discourse in North America and Europe, is supported by infrastructure on social networking sites. The beerhalls and market-squares have emptied, in favour of Facebook Groups, Twitter feeds and interest-based fora where citizens can spread awareness of political issues and organise demonstrations. At the risk of igniting a philosophical debate in the comments, I would suggest that the controversy surrounding Fake news at the moment is deeply connected with the underlying belief that citizens today are unprepared/unable to critically appraise or reason with the information circulated on digital political infrastructure, as well as they might have been able to offline. Indeed, the particularity of astroturfing lies in its manipulation of our in-built information filtration mechanism, or what Wait But Why refers to as a “Reason Bouncer”.

For a more complete essay on how we developed said mechanism, please refer to their “The Story of Us” series.

Our information filtration mechanism is a way of deciding which information from both virtual and real dimensions is worth considering as “fact” or “truth” and which should be discarded/invalidated. As described in “The Story of Us”, information that appeals to an individual’s primal motivations, values or morals tends to be accepted more easily by the “Reason Bouncer”, just like information coming from “trustworthy sources” such as friends, family or other “in-group individuals”. Of course, just like teenagers using fake IDs to sneak into nightclubs, astroturfing seeks to get past your “Reason Bouncer” by mimicking the behaviour and appealing to the motivations of your “group”.

The effectiveness of this information filtration “exploit” can be seen in the 2016 Russian astroturfing attack in Houston, Texas. Russian actors, operating from thousands of kilometers away, created two conflicting communities on Facebook, one called “Heart of Texas” (right-wing, conservative, anti-Muslim) and the other called the “United Muslims of America” (Islamic). They then organised concurrent protests on the question of Islam in the same city: one called “Save Islamic Knowledge” and another called “Stop the Islamification of Texas”, right in front of the Islamic Da’wah Center of Houston. The key point here is that the astroturfing campaign was conducted in two stages: infiltration and activation. Infiltration was key to get past the two Texan communities’ “Reason Bouncers”, by establishing credibility over several months through the creation, population and curation of the Facebook communities; all that was then required to “activate” both communities was the appropriate time, place and occasion.
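
The two-stage logic can be stated almost mechanically. A toy model in Python (every name and rule here is invented for illustration, not taken from “The Story of Us”):

```python
# Toy model of the "Reason Bouncer": a claim is admitted if it comes from an
# in-group source or flatters the group's values. Astroturfing does not argue
# past the bouncer; it impersonates the group.

def reason_bouncer(source, appeals_to_group_values, in_group):
    """Admit a claim if its source is trusted or its content is flattering."""
    return source in in_group or appeals_to_group_values

in_group = {"family", "church_newsletter", "local_facebook_group"}

print(reason_bouncer("unknown_stranger", False, in_group))     # False: bounced
print(reason_bouncer("local_facebook_group", True, in_group))  # True: admitted

# Stage 1, infiltration: months of credible posts add the fake account to the
# in-group. Stage 2, activation: anything it says now walks past the bouncer.
in_group.add("astroturfed_community_page")
print(reason_bouncer("astroturfed_community_page", False, in_group))  # True
```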

The “Estonian Solution”

Several examinations of the astroturfing issue have pointed out that, rather than the government or military, ordinary citizens are often the targets of disinformation and disruption campaigns using the astroturfing technique. Steven L. Hall and Stephanie Hartell rightfully point to the Estonian experience with Russian disinformation campaigns as a possible starting point for improving societal resilience to astroturfing campaigns.

As one of the first Western countries to have experienced a coordinated disinformation campaign, in 2007, the people of Estonia rallied around the need for a coordinated Clausewitzian response (Government, Army, and People) to Russian aggression: “Not only government or military, but also citizens must be prepared”. Hall and Hartell note the amazing (by American standards) civilian response to Russian disinformation, including the creation of a popular volunteer-run fact-checking blog/website called PropaStop.org.

Since 2016, the anti-fake news and fact-checking industry in the United States has been booming — with more than 200 fact-checking organisations active as of December 2019. The fight against disinformation and the methods that make astroturfing possible is indeed alive and well in the United States.

Where I disagree with Hall and Hartell, who recommend initiatives similar to those by Estonia in the USA, is that disinformation and astroturfing cannot meaningfully be reduced in the USA without addressing the internal political and social divisions which make the job all too easy and effective. The United States is a divided country, along both Governmental and popular lines. How can the united action of Estonia be replicated when two out of the three axes (Government, Military and People) are compromised?

This — possibly familiar — Pew Research data visualisation (click here for the research) shows just how much this division has deepened over time. Astroturfing campaigns like the ones in Houston in 2016 comfortably operate in tribal environments, where suspicion of the internal “Other” (along racial, religious and political lines) trumps that of the true “Other” — found at the opposite end of the globe. In divided environments, fact-checking enterprises also suffer from weakened credibility and the suspicion of the very people they seek to protect.

In such environments, short of addressing the issues that divide a country, the best technologists can perhaps do is create new tools transparently and openly, so as to avoid suspicion and invite inspection, and to seek as many opportunities as possible to work in partnership with Government, the Military and all citizens, with the objective of arming the latter with the ability to critically evaluate information online and understand what digital tools and platforms actually do.

[1] A society in which an individual interacts with a complex interplay of online and offline stimuli to formulate his or her more holistic experience of the world we live in. The term was coined by Spanish sociologist Manuel Castells.

Categories
Editorial

The Forsythia-Industrial Complex

In Steven Soderbergh’s newly rediscovered 2011 film Contagion, a hypothetical novel virus called MEV-1 causes a global pandemic, which has to be stopped by the film’s protagonists, epidemiologists working for the Centers for Disease Control.

While the film contains all sorts of exposition sequences that are legitimately educational about the nature and spread of airborne viruses, the real nugget of gold is its main subplot. Jude Law plays Alan Krumwiede, an Alex Jones-type conspiracy entrepreneur. He spends his days chasing down sensational stories and posting ranting videos on his website, “Truth Serum”.

Despite the contemporary irrelevance of the “blogosphere”, the Truth Serum subplot is as pertinent today as it was in 2011. In many ways, its implications are more frightening now than ever. In 2011, algorithm-driven social media sites did not have the same preponderance over the information environment that they enjoy today.

Studies indicate that media consumption patterns have changed rapidly over the last decade. While internet-based news consumption was widespread by 2011, there were two key differences from the information environment of today. Firstly, online news consumption skewed young; today, American adults of all ages consume much if not most of their news online. Secondly, news items spread via a number of media, including email chains and blogs. The dynamics of email and blogs as media are fundamentally different from algorithm-driven platforms like Facebook, Twitter, and YouTube. Blog followings and email chains are linear — a person recommends a blog or forwards an email to certain people at their own discretion. Information from these media therefore spreads less virally than content on algorithmic sites. In fact, the entire concept of “virality” essentially cannot exist without the engagement-based recommendation and news-feed algorithms behind our now-dominant social media machines.
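To make that contrast concrete, here is a toy sketch in Python. Everything in it (the forwarding rate, the amplification factor, the audience sizes) is invented for illustration and is not drawn from the studies cited above; the point is only the shape of the growth, additive for linear media versus multiplicative for algorithmic feeds.

```python
# Toy model contrasting "linear" media (blogs, email chains) with
# engagement-driven amplification. All parameters are invented for
# illustration; this is not a calibrated model of any real platform.

def linear_reach(audience: int, forward_rate: float, hops: int) -> float:
    """Blog/email dynamics: at each hop, a shrinking fraction of readers
    forwards the item onward, so total reach grows additively and plateaus."""
    reach = float(audience)
    forwards = audience * forward_rate
    for _ in range(hops):
        reach += forwards
        forwards *= forward_rate  # each generation forwards less
    return reach

def viral_reach(audience: int, amplification: float, hops: int) -> float:
    """Algorithmic-feed dynamics: the platform re-surfaces engaging items
    to fresh audiences, so reach grows multiplicatively per hop."""
    reach = float(audience)
    for _ in range(hops):
        reach *= amplification
    return reach

print(linear_reach(1_000, forward_rate=0.1, hops=5))   # ~1,111: a plateau
print(viral_reach(1_000, amplification=3.0, hops=5))   # 243,000: a cascade
```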

In the second and third acts of the film (*spoilers*, sorry), Krumwiede begins pushing a homeopathic cure, Forsythia, on his website and eventually on TV. He posts a series of videos on Truth Serum in which he has apparently caught the MEV-1 virus and nurses himself back to health using the substance. His endorsement of the drug to his loyal followers causes desperate Americans to loot pharmacies in search of Forsythia. Krumwiede is eventually arrested, and after an antibody test reveals he never actually had the virus, he is charged with fraud. By this time, he has moved from Forsythia to anti-vaccination, claiming the CDC and WHO are “in bed” with big pharma, and that that is why they are trying to vaccinate the entire population. He ends the film urging his millions of loyal online followers not to take the crucial MEV-1 vaccine being produced by American and French pharmaceutical companies.


Forsythia today

While the Forsythia subplot serves as a slight diversion from our main protagonists’ investigations and research, it has turned out to be the part of the movie that holds up best in light of our current novel coronavirus pandemic: most obviously in its resemblance to the hydroxychloroquine craze in the US and France. More worryingly, it has become obvious that our contemporary information environment is more vulnerable to the Krumwiedes of the world than it would have been in 2011.

American author and critic Kurt Andersen’s 2017 book Fantasyland argues that the viral, platform-based contemporary Internet is the crucial focal point of our conspiracy-laden politics. Common criticisms of modern social and online media focus on their tendency to create self-reinforcing echo chambers where individuals are only exposed to information that bolsters their existing views. Andersen takes this idea further and turns it on its head, arguing that social media causes the cross-pollination of information silos that would otherwise have remained separate.

This phenomenon is evident in the fact that believing in one conspiracy theory dramatically increases the likelihood you’ll believe in others. In the days before mass access to the internet, an individual with an easily-debunked belief would have been relatively isolated; today, they can connect with like-minded individuals across the globe. Anti-vaxxers can virtually intermingle with chemtrails theorists, 9/11 truthers, and anti-semites. Without this cross-pollination, expansive crowd-sourced conspiracy narratives like Q-Anon would simply not be possible.

In Contagion’s 2011-based universe, Krumwiede is a lone crusader, harassing reporters and officials, pushing his homeopathic scams, and broadcasting to millions from a webcam as a one-man information army. In 2020, there is a whole parallel information ecosystem across several internet platforms where conspiracy theorists, activists, influence bots, grifters, and extremists can exchange and reinforce each other’s beliefs. Today, Krumwiede would be one of thousands of viral content creators “flooding the zone” with conspiracies, untruths, partial truths, and unverified and misleading claims.

Imagining better media bubbles

Could the Internet become a less toxic place? Maybe, but it’s a difficult problem. In a November 2019 interview with Vox, tech entrepreneur Anil Dash reminisces about “the Internet we lost” with the rise of the social media giants. He points out the weaknesses in the “free speech” arguments made by Zuckerberg and other tech moguls, arguing that free discourse can exist without virality and engagement metrics. He says that a “trust network” model, looking more like the blogosphere of the 90s and 00s, is perhaps more conducive to civil discourse than our platform-centric information environment today. Bloggers, like TV anchors or op-ed columnists, have to slowly gain the trust of their audience over time. Without virality’s constant interruptions, a stronger bond forms between content producer and content consumer, leaving less room for loud interlopers with wild claims to wedge their way into the discourse.

The issue with this concept is that, in a trust network model, the aforementioned trusted information sources are difficult to dislodge once their network has been established. A new video “owning” the old champion relies on virality to dethrone the incumbent, and depends on a news feed or recommendation algorithm to find its way into the incumbent audience’s media diet. While the kind of decentralized trust network Dash and other ex-bloggers are nostalgic for would perhaps address the problems of influence bots, viral information blitzes, and other issues caused by engagement-based algorithmic media, it could also exacerbate the silo-ization of our information environment by entrenching a certain set of existing sources.
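As a rough illustration of that trade-off, here is a minimal sketch of the two ranking regimes. It is my own toy model, not Dash’s proposal: the Item structure, the scoring rules and the example numbers are all invented.

```python
# Minimal sketch: engagement-ranked feeds versus "trust network" feeds.
# Data structures and scoring rules are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Item:
    author: str
    engagement: int  # likes/shares; what platform algorithms optimise for
    timestamp: int

def engagement_feed(items: list[Item]) -> list[Item]:
    """Platform-style feed: the most engaging item wins, whoever posted it."""
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def trust_feed(items: list[Item], subscriptions: set[str]) -> list[Item]:
    """Blog-style feed: only sources the reader explicitly trusts appear,
    newest first. A new source stays invisible until the reader subscribes,
    which is the entrenchment problem described above."""
    trusted = [i for i in items if i.author in subscriptions]
    return sorted(trusted, key=lambda i: i.timestamp, reverse=True)

items = [
    Item("incumbent_blogger", engagement=40, timestamp=2),
    Item("loud_interloper", engagement=9_000, timestamp=3),
    Item("new_quality_source", engagement=15, timestamp=4),
]
print([i.author for i in engagement_feed(items)])  # interloper ranks first
print([i.author for i in trust_feed(items, {"incumbent_blogger"})])
```

Run it and the engagement feed surfaces the loud interloper first, while the trust feed never shows the new source at all until the reader subscribes: each regime fails in its own characteristic way.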

Categories
Editorial

How Will the Coronavirus Impact Xi’s ‘Made in China 2025’ Plan?

‘Made in China 2025’

First things first, what is ‘Made in China 2025’? Introduced in 2015, the grand plan seeks to transform China’s manufacturing base from being a low-end manufacturer to becoming a high-end, high-tech producer. The plan prioritises key sectors: information technology, telecommunications (5G), advanced robotics, artificial intelligence, and new energy vehicles, as well as aerospace engineering, emerging biomedicine and high-end rail infrastructure. Mirroring Germany’s Industry 4.0 programme, these sectors are pivotal to the ‘4th industrial revolution’, which refers to the integration of cloud computing, emerging technologies and big data into manufacturing supply chains.

How will China achieve its goals?

Beijing is devoting resources and intensifying centralized policy planning to coordinate across government, academia and private enterprise. The ambition is to tap into China’s growing middle-class consumer base, which is demanding higher-quality goods and services, as well as the value-added global sourcing segment. The measures implemented include:

  • Foreign acquisitions and investments. Chinese companies, both state-owned and private, have been encouraged to invest in foreign companies, notably semiconductor firms, to gain access to advanced technology. In 2016, the value of Chinese acquisitions in the United States alone amounted to over $45 billion.
  • Joint venture schemes. China’s strict commercial laws dictate that foreign companies wishing to do business or invest in China must enter into joint ventures with Chinese companies. There are both pros and cons to the scheme. Pros: foreign companies can invest in businesses that are otherwise restricted by the Party, and can draw on the Chinese partner’s “guanxi” (connections) and existing experience in China. Cons: under these terms, the foreign company is required to share sensitive intellectual property. For example, when China developed its high-speed rail network, it leveraged foreign concepts and designs, notably Japan’s Shinkansen.
  • Cultivating state-owned and private companies. Despite the Jiang administration’s economic reforms of the 1990s, which reduced the role of state firms in the economy, such firms still account for a third of gross domestic product (GDP) and an estimated two-thirds of China’s outbound investment. Beijing has increased direct support for Chinese enterprises through state funding, low-interest loans and tax breaks. The government has also championed home-grown companies by publishing a list of private companies expected to help drive its 2025 goal; the list includes the likes of Huawei, ZTE, Alibaba, Tencent, DJI, Xiaomi and Baidu.
Economic activity slows but does not shut down

Undoubtedly, China has been hit economically. Wuhan, the epicentre of the outbreak, is a key region for the country’s automotive industry and an integral part of the Party’s vision for 2025. The Nikkei Asian Review reports that between January and March much of its industrial activity was halted. Yet despite two-thirds of the country’s economy being forced to shut down in January, China’s strategic sectors were still operating. Xi’s grand dream has not been forgotten.

What are the geopolitical consequences?

Of greater concern than the immediate economic impact of the pandemic is China’s relationship with its Western counterparts in the post-pandemic era, which could affect ‘Made in China 2025’ both positively and negatively. This depends wholly on Beijing’s actions. Foreign Direct Investment (FDI) had already been strained by the US-China trade war before the Coronavirus pandemic. Trump’s remark about a “man-made virus”, and his administration’s deflection of responsibility by pointing the finger at China, will strain the two powers’ relationship further. Between January and February 2020, the Coronavirus dragged China’s FDI down a further 8.6%, to a total of $19.2 billion, according to China’s Ministry of Commerce. As mentioned above, one of the key drivers of the 2025 goal is encouraging foreign investment and establishing joint ventures. With the global economic slowdown resulting from the pandemic, this downward trajectory of FDI in China is likely to continue.
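As a quick sanity check on those figures (my own back-of-envelope arithmetic, not a number from the Ministry of Commerce release), the reported level and decline imply a baseline of roughly $21 billion for the same period in 2019:

```python
# Back-of-envelope check on the FDI figures quoted above. The $19.2bn and
# 8.6% come from the article; the implied 2019 baseline is my own arithmetic.
fdi_jan_feb_2020 = 19.2   # billions of USD, per China's Ministry of Commerce
yoy_decline = 0.086       # the reported 8.6% year-on-year drop

implied_jan_feb_2019 = fdi_jan_feb_2020 / (1 - yoy_decline)
print(f"Implied Jan-Feb 2019 FDI: ${implied_jan_feb_2019:.1f}bn")  # ~$21.0bn
```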

Positives for China?

The outbreak, by highlighting China’s reliance on foreign technology and global supply chains, may “spur the government to further intensify its efforts to promote domestic innovation” and double down on its ‘Made in China 2025’ plan, says Eswar Prasad, a China expert at Cornell University.

Conclusion

Externally, western democracies are likely to reassess their relationships and economic ties with China, and foreign investment in China is likely to decrease as a result of that reassessment and of the dire global economic situation. Internally, Beijing will need to rely more than ever on its growing middle-class population to buy into the 2025 plan. Countries may re-evaluate their supply chains and economic ties with China, but ultimately, as Willy Shih (Professor at Harvard Business School) summarises, “The World is dependent on China for manufacturing”. This goes beyond medical equipment: it is about textiles, furniture, toys and electronics, amounting to over a trillion dollars of imports each year.

Categories
Repost

Why We Cannot Trust Big Tech to be Apolitical

What Google’s whistleblowers and walk-outs have revealed about working in a politicised information duopoly, and why we cannot expect neutrality even in tackling COVID-19.

It is 7:30am in the lobby of an independent news agency, and a man of scruffy bearing and sweaty palms keeps glancing anxiously at his wristwatch. He isn’t typically so prone to stress, but the last few weeks have given him good reason to look over his shoulder.

A few weeks ago, he was a nondescript employee at one of the world’s most prominent and influential tech organisations. He had begun to find himself bothered by a series of decisions and practices that sharply clashed with his moral compass, so he quietly collected evidence in preparation for his big day. When that day came, however, he became a condemned man: condemned to forever roam under the all-pervasive threat of vengeance from the most powerful technology company in the world. He received an anonymous letter making demands, including cease-and-desist action by a specific date. Then the police showed up at his door on the specious grounds of concern for his “mental health”. Knowing his life might be on the line, he created a kill-switch: “kill me and everything I have on you will be released to the public”.

Did you enjoy my screenplay for this summer’s next action-thriller hit? Well, I have a confession to make: it’s based on a true story. This is the beginning of the tale of Zachary Vorhies — the latest in a long line of Google whistleblowers.

If you want to skip to the conclusion and its link to COVID-19, please click here. If you want to hear the whole sorry saga then, please, read on.

Google as puppet-master?

For those of you well-acquainted with Big Tech’s ethical mishaps, you may have already heard of the Google Data Dump. It was, in short, a substantial leak of internal Google documents, policies and guidance designed to demonstrate Google’s willingness to deliberately shape the information landscape in favour of a certain conception of reality. Said conception of reality seems to exclude right-wing media outlets, and seeks to promote a socially-liberal agenda. As the old Stalinist adage goes: “It’s not the people who vote that count, but the people who count the votes” — put another way, it’s not the facts that matter so much as how you interpret and arrange those facts (and does Google ever interpret and arrange facts, with its 90% search engine market share!).

However shocking the suggestions here may be, we need to read the coverage of this data dump with a critical eye. As a journalist, whenever I approach such a leak, I like to go through the thought-process of the actors involved. Why did Zachary leak the documents? What drove the decision to leak those documents at a specific date? How did he hear about Project Veritas, why did he provide them with a scoop, and what does the recipient of such data gain?

The leaked documents were shared with Project Veritas, an independent whistleblowing outlet which pledges exuberantly on its front page to “Help Expose Corruption!”. Most of its branded content seems to derive from shocking whistleblower revelations, served with the site’s own flavour of sensationalist titling and conspiratorial imagery.

On the site, Zachary’s story plays comfortably into Project Veritas’ audience expectations. The audience in question is ambiguous and unknown to this writer; my reflections here are largely based on the platform’s content-reel. The implications are first allowed to fester, then spread as part of a bigger conspiracy of liberal Google executives forcing coders to prevent the spread of right-wing populism (as encapsulated by Donald Trump). The data dump itself isn’t intrinsically shocking so much as it becomes shocking when used to support a particular vision of reality. I discussed this topic with several Google insiders working at the company’s Colorado and Ireland offices. They tell me that most of the information in the data dump is easily accessible and circulated frequently amongst Google employees. While it is frowned upon to discuss such matters openly, the data dump only began gaining significant traction once it landed on the doorstep of Project Veritas, who knew how to use the data to reinforce a fiery conspiratorial narrative.

Google as game designer?

Google’s convenient counter-narrative to these revelations runs along the lines that, as Genmai says, “Google got screwed over in 2016”. It is clear that, during the 2016 presidential race, a series of right-wing media outlets managed to navigate the ludicrously arcane Google and Facebook traffic algorithms, and successfully gamed them to the point at which both publishers had to change the rules of their game. There is a strong sense of enmity at Google about how a handful of Albanian or Macedonian fake-news artists managed to “out-hack” Google. Designers at heart, Googlers are uncomfortable with the idea that certain content pieces are able to “outperform” without directly benefitting Google or Facebook financially. Indeed, the only content that is meant to over-perform is paid or sponsored content.

What these whistleblower scandals and recent walk-outs have proven is that we cannot see Big Tech as monolithic, as a set of corporations acting solely to further the interests of shareholders or of ad revenues. Google is a group of individuals, spanning a plethora of political ideologies and socio-ethnic backgrounds. It is a company full of engineers and designers who are aware of their impact on politics and society through their quasi-duopoly on the information space. With this awareness comes a confidence and agency inherited from the “Googler” mindset: a perpetual journey to solve problems, even when alone against all odds. Add these ingredients together and what you have is an unstable cocktail of ideas that sometimes leads to breakthrough innovation, and sometimes to conflicts over how best to wield technology to change the world (for the better).

Google as a microcosm of society

Perhaps the tech commentariat should have seen it all coming. Big Tech has accumulated a staggering amount of political power through information-market dominance and financial success. Cries to regulate Big Tech “monopolies” have reached fever pitch. In the meantime, governments the world over have urged Big Tech to build solutions to deal with some of the societal and political ramifications of their tools.

Who builds these solutions? Googlers do. The very same Googlers endowed with ideological responsibility, who voice their political and social views so that they may have a say in the way society is ultimately run.

So the more interesting questions lie a level below the accusations of the whistleblowers or of the walkouts:

  • Can we trust people to build/design apolitical/non-ideological tools?
  • Should we, as subjective and emotive people, always be neutral/objective?
  • When we choose to become subjective, and become actors in the socio-political world, what accountability and responsibilities come with that decision?

The charge for Big Tech is two-fold, therefore:

  1. We cannot trust Google to be apolitical or non-ideological. Despite claims to impartiality, there is little evidence of a system in place to keep partiality and individual agency out of the tools designed. In fact, there doesn’t seem to be any desire to remain impartial, as demonstrated by the contents of the data dumps and the history of industrial action (walkouts and whistleblowers).
  2. We cannot know Google’s editorial line. We all know that MSNBC leans to the Left. We all know Fox News tends to the Right. Publishers in print, radio, television and magazines have remained informative and respected news outlets while also recognising their own biases. This is called adopting an editorial line. Masquerading behind the label of a “technology company”, Google and the rest of Big Tech have relinquished their responsibility to identify their inherent bias and to communicate it to their readers and users (or to take deliberate steps to correct it).

Our next piece on the topic will look at how useful the comparison between editorial lines and product design-bias at Google and other Big Tech companies can be.

Feel free to read through Wonk Bridge’s detailed insights from the Project Veritas data leak below. A bientôt!

Project Veritas Case Study

Core claims:

  • Senior executives claimed that they wanted to “Scope the information landscape” to redefine what was “objectively true” ahead of the elections.
  • Google found out what Vorhies had done and sent him an unsolicited letter making a threat and several demands, including a “request” to cease and desist, to comply by a certain date, and to scrape the data (but by then Vorhies had already sent the data to a legal entity).
  • Vorhies launched a “Dead Man’s Switch” in case he “was killed”, which would release all the documents. The next day, the police were called on him on grounds of “mental health” (something Google apparently does frequently with its whistleblowers).

From the data dump, an oft-cited passage:

“If a representation is factually accurate, can it still be algorithmic unfairness?” This screenshot, from a document of Google’s “Humane tech” initiative, was used by Vorhies to argue that facts were being twisted to manipulate reality into “promoting the far-left wing’s social justice agenda”.


From the Data dump

Whether or not it does so deliberately, the leaked blacklists point to a preference for slamming the ban-hammer on right-wing conservative or populist content, if our scope is limited to US content.

Screenshot; you can find the rest of the leaked list here

As a publisher, Google has no clear obligation to be objective here, just as MSNBC and Fox News are quite open about their ideological stances. The issue is that Google presents itself as a neutral tool.

What is the solution to an overly ideological publishing monopoly? Generally, the answer is the creation of competitors, which has not occurred here. Perhaps it’s early days, but there are enough conservative coders and programmers out there, and enough right-wing capital in circulation, to create a rival to Google. Just speculation here.

https://spectator.org/what-should-conservatives-do-about-google/

Google’s response to the Project Veritas leak is much more damning, however. The case here is that freedom of speech and social activism should be permitted in both camps (Google’s and Project Veritas’). Do a) the removal of the video from YouTube and b) the threatening of the whistleblower qualify as an abuse of power?

The crackdown on whistleblowers extends beyond this case. Organisers of industrial action in the Google Walkout decried discrimination similar, in reverse, to that alleged by right-wing conservative Googlers. “I identify as a LatinX female and I experienced blatant racist and sexist things from my coworker. I reported it up to where my manager knew, my director knew, the coworker’s manager knew and our HR representative knew. Nothing happened. I was warned that things will get very serious if continued,” one Googler wrote. “I definitely felt the theme of ‘protect the man’ as we so often hear about. No one protected me, the victim. I thought Google was different.” The claim there was that Google wasn’t doing enough to protect social justice at work (and in the products it makes); the claim here is that Google doesn’t respond convincingly to these allegations.

In a message posted to many internal Google mailing lists Monday, Meredith Whittaker, who leads Google’s Open Research, said that after the company disbanded its external AI ethics council on April 4, she was told that her role would be “changed dramatically.” Whittaker said she was told that, in order to stay at the company, she would have to “abandon” her work on AI ethics and her role at AI Now Institute, a research center she cofounded at New York University.

Now, it is easy to fit these events into a broader narrative of a whistleblower crackdown, but it is clear that perspective plays a huge role in how you view them. The disbanding of the external AI ethics council (which Wonk Bridge has discussed in a previous podcast) was also largely influenced by the council’s misalignment with the values of a majority of Googlers. Meredith Whittaker may have tried to be balanced in her running of the council, but that didn’t sit too well with the rest of the company body.

Claire Stapleton, another walkout organizer and a 12-year veteran of the company, said in the email that two months after the protest she was told she would be demoted from her role as marketing manager at YouTube and lose half her reports. After escalating the issue to human resources, she said she faced further retaliation. “My manager started ignoring me, my work was given to other people, and I was told to go on medical leave, even though I’m not sick,” Stapleton wrote. After she hired a lawyer, the company conducted an investigation and seemed to reverse her demotion. “While my work has been restored, the environment remains hostile and I consider quitting nearly every day,” she wrote.

Google as a Public Service Provider

The fates of Claire Stapleton, Meredith Whittaker and Zachary Vorhies all demonstrate a common moral dilemma posed by large and influential corporations: the balance between the corporate interest, the sum-total interest of employees, and the “public interest”. These interests are often in conflict with each other, as is the case with the question: “How should we manage access to controversial and/or potentially fake content?”

The reason corporations like Google are not well placed to answer such questions is that they are unable to align their interests with the public interest in any accountable way. Well-functioning democracies are better placed to provide answers, as their interests align directly with the public interest (in theory). Elected officials are mandated by “the People” to represent “the People” and fulfil the “Public Interest” in a representative capacity. As long as trust in elected officials and their capacity to fulfil the public interest is strong, the social contract continues to align the institutional and the public interest.

As we look to our most influential actors (governments, large corporations, influential people) to show us the way through the COVID-19 crisis, the Public’s reaction will largely depend on the key question of whether they see their interests as aligned with those of the institutions in question. When Google and similar corporations involve themselves in seemingly gratuitous philanthropy, they should not be surprised by negative push-back. It is an objectively good thing for Google to use its vast wealth of data to help curb the growth of Coronavirus; but the Public will still doubt whether it is a good thing for Google to be the one to actually do it.

 

Useful sources
Google Document (Data) Dump

https://citizentruth.org/google-whistleblower-reveals-database-of-censored-blacklisted-websites/

https://www.realclearpolitics.com/2019/06/25/google_whistleblower_reveals_plot_for_us_elections_478526.html

Vorhies Whistleblower story

https://www.projectveritas.com/news/google-machine-learning-fairness-whistleblower-goes-public-says-burden-lifted-off-of-my-soul/

Project Maven

https://venturebeat.com/2019/04/24/how-google-treats-meredith-whittaker-is-important-to-potential-ai-whistleblowers/

Retaliation against employees who whistleblow

https://www.vice.com/en/article/vb53wy/45-google-employees-explain-how-they-were-retaliated-against-for-reporting-abuse

An academic paper explaining the impact of search-engine manipulation on election outcomes.

https://www.pnas.org/content/112/33/E4512