
The Accidental Tyranny of User Interfaces

The potential of technology to empower is being subverted by tyrannical user interface design, enabled by our data and attention.

My thesis here is that an obsession with easy, “intuitive” and perhaps even efficient user interfaces is creating a layer of soft tyranny. This layer is not unlike what I might create were I a dictator, seeking to soften up the public prior to an immense abuse of liberty in the future, by getting them so used to comical restrictions on their use of things that such bullying becomes normalised.

A note of clarification: I am not a trained user interface designer. I am just a user with opinions. I don’t write the following from the perspective of someone who thinks that they could do better; my father, a talented programmer, taught me early on that everyone thinks that they could build a good user interface, but very few actually have the talent and the attitude to do so.

As such, all the examples that I shall discuss are systems where the current user interface is much worse than a previous one. I’m not saying that I could do it better; they did it better, just in the past.

This is not new. Ted Nelson identifies how in Xerox’s much-lauded Palo Alto Research Center (the birthplace of personal computing), the user was given a graphical user interface, but in return gave up the ability to form their own connections between programs, which were thereafter trapped inside “windows” — the immense potential of computing for abstraction and connection was dumbed down to “simulated paper.” If you’d like to learn more about his ideas on how computing could and should have developed, see his YouTube series, Computing for Cynics.

Moore’s law describes how computers become exponentially more powerful as time passes; meanwhile our user interfaces — to the extent that they make us act stupidly and humiliate ourselves — are making us more and more powerless.

YouTube’s Android app features perhaps the most egregious set of insulting user interface decisions. The first relates to individual entries for search results, subscriptions or other lists. Such a list contains a series of video previews, each comprising (today) a still image from the video, the title, the name of the channel, a view count, and the publishing date.

What if I want to go straight to the channel? This was possible, once. What if I want to highlight and select some of the text from the preview? I can’t. Instead, the entire preview, rather than acting like an integrated combination of graphics, text and hypertext, is just one big, pretty, stupid button.

This is reminiscent of one of my favorite Dilbert cartoons. A computer salesperson presents Dilbert with their latest model, explaining that its user interface is so simple, friendly and intuitive that it has only one button, which they press for you before it ships from the factory. We used to have choices. Now we are railroaded.


Do you remember when you could lock your phone or use another app, and listen to YouTube in the background? Not any more. YouTube took this away — my fingers hovered over the keyboard there for a moment, and I nearly typed “feature” — YouTube continuing to play in the background is not a feature; it should be the normal operation of an app of that type. The fact that playback stops when you switch apps is a devious anti-feature.

YouTube, combining Soviet-style absurdity and high-capitalist banality, offers to give you back a properly functioning app in return for upgrading to Premium. I’m not arguing against companies making additional features available in return for an upgrade. Indeed, my father explained how certain models of IBM computers came with advanced hardware built in: upgrading got you a visit from an engineer to activate hardware you already had.

IBM sells you a car, you pay for the upgrade, but realize that you already had the upgraded hardware, they just suppressed it; YouTube sells you a car, then years later turns it into a clown-car, and offers you the privilege of paying extra to make it into a normal car. Imagine a custard pie hitting a human face, forever.

Obviously this simile breaks down in that the commercial relationship between YouTube and me is very different to the one between a paying customer and IBM. If you use the free version of YouTube, you pay the company in eyeballs and data — this sort of relationship lacks the clarity of a conventional transaction, and the recipient of a product or service that is supposedly free leaves themselves open to all manner of abuses and slights, being without the indignation of a paying customer.

WhatsApp used to have a simple, logical UI; this is fast degrading. As with YouTube, WhatsApp thwarts the user’s ability to engage with the contents of the program other than in railroaded ways.

Specifically, one used to be able to select and copy any amount of text from messages. Now, when one tries to select something from an individual message, the whole thing gets selected, and the standard operations are offered: delete, copy, share, etc.

What if I want to select part of a message because I only want to copy that part, or merely to highlight so as to show someone? Not any more. WhatsApp puts a barrier between you and the actual textual content of the messages you send and receive, letting you engage with them only in the ways for which it provides.

On this point I worry that I sound extreme — today I tried this point on a friend who didn’t see why this matters so much to me. Granted, in isolation, this issue is small, but it is one of a genre of such insults that are collectively degrading our tools.

That is to say, WhatsApp pretends that the messages on the screen belong to some special category, subject only to limited operations. No. It’s text. Text is one of the fundamental substrates of computing, and any self-respecting software company ought to run on the philosophical axiom that users should be able to manipulate it, as text.

Another quasi-aphorism from my father. We were shopping for a card for a friend or relative, in the standard Library of Congress-sized card section in the store. Looking at the choices, comprehensively labelled 60th Birthday, 18th Birthday, Sister’s Wedding, Graduation, Bereavement, etc., he commented: “Why do they have to define every possible occasion? Can’t they just make a selection of cards and let me write that it’s for someone’s 60th birthday?”

This is about the shape of it. The Magnetic North toward which UIs appear to be heading is one in which all the things people think you might want to do are defined and given a button. To refer to the earlier automotive comparison, this would be like a car without a steering wheel or gas pedal. Instead there’s a button for each city people think you might want to visit.

There’s a button for Newark but not for New York City? Hit the button for Newark, then walk the rest of the way. What kind of deviant would want to go to New York City anyway? Or, for that matter, what muddle-headed lunatic would desire to go for a drive without having first decided upon the destination?

I work in the Financial District in Manhattan. Our previous building had normal lifts: you call a lift and, inside, select your floor. This building has a newer system: you go to a panel in the lobby and select your floor, and the system tells you the number of the lift it has called for you. Inside, you find that there are no buttons for floors.

This is impractical. Firstly, there is no way to recover if you accidentally get in the wrong lift (more than once, the security guards on the ground floor have seen a colleague and me exit a lift with cups of coffee and laptops, call another, and head straight back upstairs). Secondly, one has to remember one’s assigned lift for the system to function. I don’t go to the office to memorize things, I go to the office to work. Who wants to hold the number of their lift in mind while trying to talk to a friend?

More importantly, and just like WhatsApp, it’s like getting into a car but finding the steering wheel immovable in the grip of another person, who asks, “Where would you like to go?” What if I get in the lift and change my mind? And this says nothing of the atomizing effect the system has on people. Before, we might get into a lift and I, being closest to the control panel, would ask “which floor?” Now we’re silent, and there’s one fewer interruption between the glint of your phone, the building, and the glass partitions of your coworking space.

My father set up my first computer when I was 8 or 9 years old. Having successfully installed Red Hat GNU/Linux, we booted for the first time. What we saw was something not unlike this:

[Image: a Linux boot log, listing each process as it starts]

This is a list of the processes that the operating system has launched successfully. It runs through it every time you start up. I see more or less the same thing now, running the latest version of the same Linux. It’s a beautiful, ballsy thing, and if it ever changes I will be very sad.

Today, our software treats us to what you might call the Ambiguous Loading Icon. Instead of a loading bar, a percentage, or a list of steps, we’re treated to a thing that moves, as if to say “we’re working on it,” without any indication that anything is actually happening in the background. This is why I like it when my computer boots and I see the processes launching: there’s no wondering what’s going on in the background, this is the background.
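
To make the contrast concrete, here is a minimal sketch in Python. None of it comes from any real boot system or app, and the task names are invented; it simply shows how cheap it is to report real progress rather than mere motion.

```python
import time

TASKS = ["Mounting filesystems", "Starting network", "Launching display server"]

def boot_style_progress():
    """Boot-log style: name each step and report its outcome."""
    for i, task in enumerate(TASKS, start=1):
        print(f"[{i}/{len(TASKS)}] {task} ... ", end="", flush=True)
        time.sleep(0.2)  # stand-in for real work
        print("OK")

def ambiguous_spinner(seconds=1.0):
    """Spinner style: motion that carries no information at all."""
    frames = "|/-\\"
    deadline = time.time() + seconds
    i = 0
    while time.time() < deadline:
        print(f"\r{frames[i % len(frames)]} working...", end="", flush=True)
        time.sleep(0.1)
        i += 1
    print("\rdone.      ")

boot_style_progress()
ambiguous_spinner()
```

The first function costs no more effort to write than the second, which is rather the point.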

One of the most egregious examples of this is in the (otherwise functional and inexpensive) Google Docs suite, when you ask it to convert a spreadsheet into the Google Sheets format:

[Image: the Google Docs conversion screen, showing the logo and a looping animation in Google’s four brand colors]

We’re treated to a screen with the Google Docs logo and a repeating pattern that cycles through the four colors of Google’s brand. Is it doing something? Probably. Is it working properly? Maybe. Will it ever be done? Don’t know. Each time that ridiculous gimmick flips colors is a slap in the face to a self-respecting user. Every time I tolerate this, I acclimatize myself to the practice of hiding the actual function and operation of a system from the individual, or perhaps even to the idea that I don’t deserve to know. This is the route to totalitarianism.

I’m not pretending that this is easy. I understand that software and user interface design is a compromise between multiple goals: feature richness (which often leads to difficult user interfaces), ease of use (which often involves compromising on features or hiding them), flexibility, and many others.

I might frame it like this (a sketch in code follows the list):

  1. There exists an infinite set of well-formed logical operations; that is, there is no limit to the number of non-contradictory logical expressions (e.g. A ⊃ B, “the set A contains the set B”) that one can define.
  2. Particular programming languages allow a subset of such expressions, as limited by the capabilities and power of the hardware (even if a function is expressible, it might take an impractical amount of time, or forever, to complete).
  3. Systems architects, programmers and others provide for a subset of the operations expressible under 2. in their software.
  4. Systems architects, programmers and others create user interfaces that allow us to access a subset of 3. according to their priorities.
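
Here is a minimal, hypothetical sketch of that funnel in Python (the operation names are invented, not taken from any real app). Each layer can only narrow the one beneath it, and an operation reaches the user only if the interface chooses to surface it.

```python
# Layer 3: operations the programmers actually implemented in the software.
SOFTWARE_OPERATIONS = {
    "open_video", "open_channel", "copy_title", "select_any_text", "share",
}

# Layer 4: operations the user interface chooses to surface.
UI_OPERATIONS = {"open_video", "share"}

# The UI can only narrow, never widen, what the software provides.
assert UI_OPERATIONS <= SOFTWARE_OPERATIONS

def available_to_user(operation: str) -> bool:
    """An operation reaches the user only if every layer lets it through."""
    return operation in UI_OPERATIONS

print(available_to_user("open_video"))       # True: there is a button for this
print(available_to_user("select_any_text"))  # False: implemented, but not exposed
```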

They have to draw the line somewhere. It feels like software creators have placed too much emphasis on prettiness and ease of use, very little on freedom, and sometimes almost no emphasis on letting the user know what’s actually happening. I’m not asking for software that provides for the totality of all practical logical operations, I’m asking for software that treats me like an adult.

Some recommendations:

  1. Especially for tools intended for non-experts, there seems to be a hidden assumption that the user should be able to figure it out without training, and figure it out by thrashing around randomly when the company changes the user interface for no reason. A version of this is laudable, but it often leads to systems so simplistic that they make us feckless and impressionable. Perhaps a little training is worth it.
  2. No fig-leaves: hiding a progress message under an animated gimmick was never worth it.
  3. Perhaps the ad-funded model is a mistake, at least in some cases. As in the case of YouTube, it’s hard to complain about an app for which I don’t pay conventionally. The programs for which I do pay, for example Notion, are immensely less patronizing. Those for which I don’t pay conventionally, but which aren’t run on ads, like GNU/Linux, LibreOffice, Ardour, etc., are created by people who so value things like openness, accessibility and freedom (as in free) that they border on the fanatical. Perhaps we should pay for more stuff and be more exacting in our values. (Free / open source software is funded in myriad ways, too complex to explore here.)

All this matters because the interfaces in question do the job of the dictator and the censor, and we embrace it. More than being infuriating, they train us to accept gross restrictions in return for trifling or non-existent ease of use, or are a fig leaf covering what is actually going on.

Most people do what they think is possible, or what they think they are allowed to do. Do you think people wouldn’t use a Twitter-like “share” function on Instagram, if one existed? What about recursive undo/redo functions that form a tree of possible document versions? Real hyperlinks that don’t break when the URL for the destination changes?
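
The undo tree deserves a quick illustration. Below is a minimal, hypothetical sketch (the class and method names are invented; no shipping editor is being described): instead of a linear stack, where a new edit after an undo destroys the redo history, every edit forks a branch, so no version is ever lost.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Version:
    text: str
    parent: Version | None = None
    children: list[Version] = field(default_factory=list)

class UndoTree:
    def __init__(self, initial: str = ""):
        self.current = Version(initial)

    def edit(self, text: str) -> None:
        """Create a new version branching off the current one."""
        node = Version(text, parent=self.current)
        self.current.children.append(node)
        self.current = node

    def undo(self) -> None:
        if self.current.parent is not None:
            self.current = self.current.parent

    def redo(self, branch: int = 0) -> None:
        """Redo along a chosen branch; a linear stack offers no such choice."""
        if self.current.children:
            self.current = self.current.children[branch]

doc = UndoTree("hello")
doc.edit("hello world")
doc.undo()
doc.edit("hello there")  # forks: "hello world" survives on a sibling branch
doc.undo()
print([c.text for c in doc.current.children])  # ['hello world', 'hello there']
```

Vim, for one, has shipped exactly this structure (undo branches) for years; most mainstream interfaces simply decline to expose it.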

We rely on innovators to expand our horizons, while in fact they are defining limited applications of almost unlimited possibilities. Programmers, systems architects, businesspeople and others make choices for us: in doing so they condition in us that which feels possible. When they do so well, they are liberators; when they do so poorly, we are stunted.

Some of these decisions appear to be getting worse over time, and they dominate some of the most popular (and useful) systems; the consciousness-expanding capabilities of technology are being steered into a humiliating pose in a cramped space, not by force, but because the space is superficially pretty, easy to access and because choices are painful.

This article was originally posted on Oliver Meredith Cox


Why We Cannot Trust Big Tech to be Apolitical

What Google’s whistleblowers and walk-outs have revealed about working in a politicised information duopoly, and why we cannot expect neutrality even in tackling COVID-19.

7:30am, in the lobby of an independent news agency, a man of scruffy bearing and sweaty palms keeps looking anxiously at his wristwatch. He isn’t typically so prone to stress, but the last few weeks have given him good reason to look over his shoulder.

A few weeks ago, he was a nondescript employee at one of the world’s most prominent and influential tech organisations. He had begun to find himself bothered by a series of decisions and practices that sharply clashed with his moral compass, so he quietly collected evidence in preparation for his big day. When that day came, however, he became a condemned man; condemned to forever roam under the all-pervasive threat of vengeance from the most powerful technology company in the world. He received an anonymous letter demanding, among other things, cease-and-desist action by a specific date. Then the police showed up at his door on the specious grounds of concern about his “mental health”. Knowing his life might be on the line, he created a kill-switch: “kill me and everything I have on you will be released to the public”.

Did you enjoy my screenplay for this summer’s next action-thriller hit? Well, I have a confession to make: it’s based on a true story. This is the beginning of the tale of Zachary Vorhies — the latest in a long line of Google whistleblowers.

If you want to skip to the conclusion and its link to COVID-19, please click here. If you want to hear the whole sorry saga then, please, read on.

Google as puppet-master?

For those of you well-acquainted with Big Tech’s ethical mishaps, you may have already heard of the Google Data Dump. It was, in short, a substantial leak of internal Google documents, policies and guidance designed to demonstrate Google’s willingness to deliberately shape the information landscape in favour of a certain conception of reality. Said conception of reality seems to exclude right-wing media outlets, and seeks to promote a socially-liberal agenda. As the old Stalinist adage goes: “It’s not the people who vote that count, but the people who count the votes” — put another way, it’s not the facts that matter so much as how you interpret and arrange those facts (and does Google ever interpret and arrange facts, with its 90% search engine market share!).

However shocking the suggestions here may be, we need to read through the coverage of this data dump with a critical eye. As a journalist, whenever I approach such a leak, I like to go through the thought-process of the actors involved. Why did Zachary leak the documents? What drove the decision to leak those documents at a specific date? How did he hear about Project Veritas, why did he provide them with a scoop, and what does the recipient of such data gain?

The leaked documents were shared to Project Veritas, an independent whistleblowing outlet which pledges exuberantly on its front page to “Help Expose Corruption!”. Most of its brand content seems to derive from shocking whistleblower revelations that come with the site’s own flavour of sensationalist titling and conspiratorial imagery.

On the site, Zachary’s story plays comfortably into Project Veritas’ audience-expectations. The audience in question is ambiguous and unknown to the writer, and the reflections made are largely based on the platform’s content-reel. The implications are first allowed to fester, and then spread as part of a bigger conspiracy of liberal Google executives forcing coders to prevent the spread of right-wing populism (as encapsulated by Donald Trump). The data dump itself isn’t intrinsically shocking so much as it is when used to support a particular vision of reality.

I discussed this topic with several Google insiders working at the company’s Colorado and Ireland offices. They tell me that most of the information in the data dump is easily accessible and circulated frequently amongst Google employees. They also tell me that, while it is frowned upon to discuss such matters openly, the data dump only began gaining significant traction once it landed on the doorstep of Project Veritas, who knew how to use the data to reinforce a fiery conspiratorial narrative.

Google as game designer?

Google’s convenient counter-narrative to these revelations runs along the lines that, as Genmai says, “Google got screwed over in 2016”. It is clear that, during the 2016 presidential race, a series of right-wing media outlets managed to navigate the ludicrously arcane Google and Facebook traffic algorithms, and successfully gamed them to the point at which both publishers had to change the rules of their game. There is a strong sense of enmity at Google about how a handful of Albanian or Macedonian fake-news artists managed to “out-hack” Google. Designers at heart, Googlers are uncomfortable with the idea that certain content pieces are able to “outperform” without directly benefitting Google or Facebook financially. Indeed, the only content that is meant to over-perform is paid/sponsored content.

What these whistleblower scandals and recent walk-outs have proven is that we cannot see Big Tech as monolithic, as a set of corporations acting solely to further the interest of shareholders or of ad revenues. Google is a group of individuals, made up of a plethora of political ideologies and socio-ethnic representations. It is a company full of engineers and designers who are aware of their impact on politics and society through their quasi-duopoly on the information space. With this awareness comes a confidence and agency inherited from the “Googler” mindset: a perpetual journey to solve problems, even when alone against all odds. Add these ingredients together and what you have is an unstable cocktail of ideas that sometimes leads to breakthrough innovation, and sometimes to conflicts over how best to wield technology to change the world (for the better).

Google as a microcosm of society

Perhaps the tech commentariat should have seen it all coming. Big Tech has accumulated a staggering amount of political power, through information-market dominance and financial success. Cries to regulate Big Tech “monopolies” have reached fever pitch. In the meantime, governments the world over have urged Big Tech to build solutions to deal with some of the societal and political ramifications of their tools.

Who builds these solutions? Googlers do. The very same Googlers endowed with ideological responsibility, voicing their political and social views so that they may have a say in the way society is ultimately run.

So the more interesting questions lie a level below the accusations of the whistleblowers or of the walkouts:

  • Can we trust people to build/design apolitical/non-ideological tools?
  • Should we, as subjective and emotive people, always be neutral/objective?
  • When we choose to become subjective, when we become actors in the socio-political world, what accountability and responsibilities come with this decision?

The charge for Big Tech is two-fold, therefore:

  1. We cannot trust Google to be apolitical or non-ideological. Despite claims to impartiality, there is little evidence of a system in place to keep partiality and individual agency out of the tools designed. In fact, there doesn’t seem to be any desire to remain impartial, as demonstrated by the contents of the data dumps and the history of industrial action (walkouts and whistleblowers).
  2. We cannot know Google’s editorial line. We all know that MSNBC leans to the Left. We all know Fox News tends to the Right. Publishers on paper, radio, television and in magazines have all remained informative and respected news outlets while recognising their own biases. This is called adopting an editorial line. Masquerading behind the label of “technology company”, Google and the rest of Big Tech have relinquished their responsibility to identify inherent bias and to communicate that bias (or take deliberate steps to remove it) to their readers and users.

Our next piece on the topic will look at how useful the comparison between editorial lines and product design-bias at Google and other Big Tech companies can be.

Feel free to read through the detailed insights of Wonk Bridge’s read-through of the Project Veritas data leak below. À bientôt!

Project Veritas Case Study

Core claims:

  • Senior executives made claims that they wanted to “Scope the information landscape” to redefine what was “objectively true” ahead of the elections
  • Google found out what he did and sent him an unsolicited letter making a threat and several demands, including a “request” to cease and desist, to comply by a certain date, and to scrape the data (but by then Vorhies had already sent the data to a legal entity)
  • A “Dead Man’s Switch” was launched in case “I was killed”, which would release all the documents. The next day, the police were called on grounds of “mental health” (something Google apparently does frequently with its whistleblowers)

From the data dump, an oft-cited passage:

“If a representation is factually accurate, can it still be algorithmic unfairness?” This screenshot from a “Humane tech” document (a Google initiative) was used by Vorhies to argue that facts were being twisted to manipulate reality into “promoting the far-left wing’s social justice agenda”.

[Image: screenshot from the data dump]

Whether or not it does so deliberately, the leaked blacklists point to a preference for slamming the ban-hammer on right-wing conservative or populist content, if our scope is limited to US content.

[Image: screenshot of the leaked blacklist; the rest of the leaked list can be found here]

As a publisher, Google has no clear reason to be objective here, just as MSNBC and Fox News are quite clear in their ideological stances. The issue is that Google presents itself as a neutral tool.

What is the solution to an overly ideological publishing monopoly? Generally, the answer translates into the creation of competitors, which has not occurred. Perhaps it’s early days, but there are enough conservative coders and programmers out there, and enough right-wing capital in circulation, to create a rival to Google. Just speculation here.

https://spectator.org/what-should-conservatives-do-about-google/

Google’s response to the Project Veritas leak is much more damning, however. The case here is that freedom of speech and social activism should be permitted in both cases (Google’s and Project Veritas’). Consider a) the removal of the video from YouTube and b) the threatening of the whistleblower… does this qualify as abuse of power?

The crackdown reaches beyond whistleblowers: organisers of industrial action in the Google Walkout decried discrimination that mirrors, in reverse, that alleged by right-wing conservative Googlers. “I identify as a LatinX female and I experienced blatant racist and sexist things from my coworker. I reported it up to where my manager knew, my director knew, the coworker’s manager knew and our HR representative knew. Nothing happened. I was warned that things will get very serious if continued,” one Googler wrote. “I definitely felt the theme of ‘protect the man’ as we so often hear about. No one protected me, the victim. I thought Google was different.” The claim there was that Google wasn’t doing enough to protect social justice at work (and in the products it makes); the claim here is that Google doesn’t respond convincingly to these allegations.

In a message posted to many internal Google mailing lists Monday, Meredith Whittaker, who leads Google’s Open Research, said that after the company disbanded its external AI ethics council on April 4, she was told that her role would be “changed dramatically.” Whittaker said she was told that, in order to stay at the company, she would have to “abandon” her work on AI ethics and her role at AI Now Institute, a research center she cofounded at New York University.

Now, it is easy to fit these events into a broader narrative of the whistleblower crackdown, but it is clear that perspective plays a huge role in how you view these events. The disbanding of the External AI Ethics Council (which Wonk Bridge has discussed in a previous podcast) was also largely influenced by the Council’s misalignment with the values of a majority of Googlers. Meredith Whittaker may have tried to be balanced in her running of the Council, but that didn’t sit too well with the rest of the company body.

Claire Stapleton, another walkout organizer and a 12-year veteran of the company, said in the email that two months after the protest she was told she would be demoted from her role as marketing manager at YouTube and lose half her reports. After escalating the issue to human resources, she said she faced further retaliation. “My manager started ignoring me, my work was given to other people, and I was told to go on medical leave, even though I’m not sick,” Stapleton wrote. After she hired a lawyer, the company conducted an investigation and seemed to reverse her demotion. “While my work has been restored, the environment remains hostile and I consider quitting nearly every day,” she wrote.

Google as a Public Service Provider

The fates of Claire Stapleton, Meredith Whittaker and Zachary Vorhies all demonstrate a common moral dilemma posed by large and influential corporations: the balance between the corporate interest, the sum-total interest of employees, and the “public interest”. These interests are often in conflict with one another, as with the question: “How should we manage access to controversial and/or potentially fake content?”

The reason corporations like Google are not well-placed to answer such questions is that they are unable to align their interests with the public interest in any accountable way. Well-functioning democracies are better placed to provide answers, as their interests align directly with the public interest (in theory). Elected officials are mandated by “the People” to represent “the People” and fulfil the “Public Interest” in a representative capacity. As long as trust in elected officials and their capacity to fulfil the public interest is strong, the social contract continues to align the institutional and public interests.

As we look to our most influential actors (governments, large corporations, influential people) to show us the way through the COVID-19 crisis, the Public’s reaction will largely depend on whether they see their interests as aligned with the institutions in question. When Google and similar corporations involve themselves in seemingly gratuitous philanthropy, they should not be surprised by negative push-back. It is an objectively good thing for Google to use its vast wealth of data to help curb the growth of Coronavirus. But the Public will still doubt whether it is a good thing for Google to actually do this.

 

Useful sources
Google Document (Data) Dump

https://citizentruth.org/google-whistleblower-reveals-database-of-censored-blacklisted-websites/

https://www.realclearpolitics.com/2019/06/25/google_whistleblower_reveals_plot_for_us_elections_478526.html

Vorhies Whistleblower story

https://www.projectveritas.com/news/google-machine-learning-fairness-whistleblower-goes-public-says-burden-lifted-off-of-my-soul/

Project Maven

https://venturebeat.com/2019/04/24/how-google-treats-meredith-whittaker-is-important-to-potential-ai-whistleblowers/

Retaliation against employees who whistleblow

https://www.vice.com/en/article/vb53wy/45-google-employees-explain-how-they-were-retaliated-against-for-reporting-abuse

An academic paper explaining the impact of search-engine manipulation on election outcomes.

https://www.pnas.org/content/112/33/E4512