
Just do nothing: An inconvenient digital truth

As addictive and stimulating technology proliferates across society, we are losing our most ancient and coveted ability. Join us as we explore the loss of our ability to do nothing and how stand-up comedians have become the unlikely torch bearers of an inconvenient digital truth.


Have you ever tried sitting in a room and doing nothing? And when I say nothing, I mean absolutely nothing. Chances are you won’t last very long, and that’s mainly because the human brain has a ferocious appetite for stimulation. It’s why meditation is so hard and yet advocated by so many. Fundamentally, we aren’t very good at quieting our brains, and the past decade of technological advancement has been anything but helpful.

According to the fundamentals of human-computer interaction (HCI), there are three main ways, or modalities, by which we interact with computers:

Visual (Poses, graphics, text, UI, screens, and animations)

Auditory (Music, tones, sound effects, voice)

Physical (Hardware, buttons, haptics, real objects)

Regardless of what type of computer you are using — whether it’s a smartphone or a laptop — physical inputs and audio/visual outputs dominate HCI. Indeed, these forms of interaction and feedback are the very foundation of how humans have developed computers to function alongside them.

Now take into account another fundamental theme of HCI development, because every successful iteration of technology shares one defining principle: people who use technology want to capture their ideas more quickly and more accurately. Keep this in mind for later.

Whether it’s the 1839 portrait of Joseph Jacquard, woven on a programmable loom from 24,000 punched cards, or the WWII military agencies that funded the first ‘monitors’ so radar operators could plot aircraft movement, the development and evolution of technology is largely predicated on this theme of speed and accuracy.

[Image: Portrait of Joseph Jacquard next to his iconic weaving loom]

So let’s go back to my introduction: doing nothing. Like I said, it’s really hard, but my hypothesis is that it’s much harder than it used to be. In one of my past articles on human brain development, I explored the idea that the modern brain is not so different from how it was 2,000 years ago. In other words, there simply hasn’t been enough time for evolution to weed out certain mutations in our brain genealogy. Therefore, how we develop as individual, functioning people is just as much nurture as it is nature.

Now, I’m aware that my argument rests on a series of ‘sweeping and shallow statements’, but I’d like you to picture what most societies, past and present, would have done when confronted with the reality of doing nothing. Whether it’s a pilgrim town in colonial Virginia in the 1600s preparing for a harsh winter or a small present-day Tibetan village nestled in the Himalayas going about its usual day, both isolated societies, once the menial tasks of survival and hardship are done, are confronted with the reality of doing nothing on a daily basis.

[Image: Children playing with toys in 17th-century colonial America]

The children of both these societies, once most chores were done, would generally be allowed to go out and play. In doing so, they had to quickly confront the idea of doing nothing. Sure, they had games and toys, but the realm in which these pastimes reside is largely the imagination. In fact, for the vast majority of mammalian species, playing with others is an essential form of growth and development.

Today, we are plagued by bright screens, sharp sounds, and intruding notifications. From the very first pager beep, to the early-2000s MSN Messenger nudge (I can still hear that sound in my head), to the evolution and proliferation of the Facebook notification beep, we have slowly grown accustomed to being alerted by our technology. Most notable is the proliferation of the newsfeed, which has evolved to lure us into a web-like slot machine of personalized, attention-grabbing media.

[Image: MSN Messenger user interface with the symbolic ‘knock’ nudge]

If you reflect back on your historic usage of Facebook, it generally follows this path: status text (2007) → photo post (2010) → video/story stream (2012). Remember that earlier theme and the three modalities? I believe they have dominated the evolution and usage of our most prolific technologies, especially when it comes to sharing aspects of ourselves and others across our various digital networks.

Moreover, this digital game of carrot and stick has been greatly exacerbated by how quickly modern society has shifted its fundamental functions onto the dominant technology of the day. From how we consume news, to how we monitor our work, to how we order food, every function is now app-based and, by extension, notification-based. The consequence is that we are quickly being trained to look to our phones to understand our lives.


This is not to say that all of this is bad. As I’m sure many of you reading this are thinking, social media networks can be a great source of social good. Even a company with as bad a reputation as Facebook does not entirely deserve the flak it gets. Thanks to WhatsApp and Messenger, you can communicate with your friends and loved ones no matter where you are. Google helps you gain knowledge and explore your interests by letting you quickly scan the web and find the information you are looking for. Did I mention this is all free?

But there is an inherent danger when we grow too dependent on a certain technology. Texting is great but have you tried actively listening to a conversation? Google searching is fantastic but have you tried reading a book from start to finish? Indeed, most of us joke about our dwindling attention spans but I fear none of us take it very seriously.

If our attention is to be monetized for ads by Silicon Valley, we also need to start seeing it as the currency with which we learn and grow as individuals. The less attention we are willing to give, the less personal development we will get in return. From clickbait journalism to the inherent shallowness and distraction of social media, the examples supporting this argument are numerous and worrying.


I believe I can speak for most Generation Zers when I say we were lucky to have barely avoided the advent of social media while growing up. Because, by and large, as children, we were forced to confront the same idea of doing nothing as most other past societies. We had to use our imaginations and our social skills to play by ourselves and with others. Of course, critics will say we had game consoles like the PlayStation and cable television like Comcast, but they weren’t as enslaving. Today, video platforms like Netflix let you binge, game developers like Electronic Arts let you win, and social media companies like Facebook steal your time.

Because even gaming, which I believe represents superior elements of storytelling and cooperative strategy, has been tweaked for profit by executives and developers to be addictive. Once upon a time, triple-A video games were simply great for their one- or two-player story modes. Like opening a book, you could dive into a world, play, learn, and explore, but there was no mechanism to constantly lure you back besides the gameplay itself. It was just as easy to stop as it was to begin. Today, you have loot boxes and pay-to-win features which aren’t truly about the game; they’re about hooking you emotionally and getting you to pay more money.

[Image: Screenshot of the mobile game Jam City]

Yet I digress, because this article isn’t about the exploitation of gaming as a medium, or even about how most platforms today function as social slot machines. No, this article is about how many of us are slowly becoming incapable of doing nothing. It is about how we are slowly but surely being rewired by tech companies whose bottom line is not to make you a better or more informed person, but to keep you glued to a screen and push advertisements and paid services.

There’s a quote from a 2001 stand-up act by the late, great comedian George Carlin that I believe really drives this point home. Although he is speaking about the proliferation of overbearing parents, the same logic applies to my discussion. Just a quick disclaimer: the original clip contains profanity, but as you already know, he’s a comedian.

“You know, [talking about overly concerned parents organizing playtime] something that should be spontaneous and free is now being rigidly planned, when does a kid ever get to sit in the yard with a stick anymore, you know, just sit there with a ******* stick, do today’s kids even know what a stick is?”

— George Carlin

The idea of children no longer being taught, or given the opportunity, to simply sit in the yard with a stick is humorously worrying. Whether it’s hyper-vigilant parents who coddle them for their safety or frustrated parents who shove a screen in their faces to keep them quiet, children today are the victims of society’s rush towards quicker and more accurate technology. Although the theme of speed and accuracy has served us well, skyrocketing productivity from the punch card, to the mainframe, to the PC, and now to the smartphone, I believe there is an inherent danger in this chase.

I am not writing this article to give solutions; that was never my primary intent. I’m not here to tell you to meditate, to stop using social media, or even to limit the use of your phone. Moreover, I am conscious enough to realise that much of what I’m saying is rooted in personal hypocrisy, because I am just as much a slave to my inability to do nothing as most of you are.

But if there is one message I’d like to get across, it’s that we should embrace the nothing. The idea that maybe we don’t need to be stimulated by our looping relationship with the physical, visual, and audio modalities of modern technology. You can silence your phone and put it in the other room. You can sit in a train and not scroll through a newsfeed. You can stare at a wall and do nothing.

Because if you force your brain to be quiet, you’d be surprised how much it will start saying.

“The thing is, you need to build an ability to be yourself and just not be doing something. The ability to just sit there and be a person. Underneath everything in your life, there is that thing, that forever empty. That knowledge that it’s all for nothing and you’re alone…”

“And sometimes when things clear away and you’re in your car, and you start feeling it — this sadness, life is tremendously sad, just by being in it — That’s why we text and drive, people are willing to risk taking a life and ruining their own because they are unwilling to be alone for a second ”

— Louis C.K.


When the tool uses you: How immersive tech could exploit our illusion of control

From the fax machine making information vulnerable to loss and theft to the internet making malware easy for susceptible users to download, malicious actors have always found a way to exploit our naivety to new technology. What dangers should users, businesses, and governments expect from immersive technology?


You’re sitting in a virtual meeting room. Although the marble walls and mahogany table encompassing the space appear vectored and block-like, you feel oddly at ease. As you look around the room, everything feels intuitively wrapped around your eyes. You’re surprised to find how fluid your hands feel as you gesticulate to a nearby avatar. Hovering between the two of you is a larger-than-life, three-dimensional model of your proposed project.

Snap back to the reality of your boring home-office. You’re actually on Zoom. Your computer monitor is bright but the glare from the nearby window hurts your eyes. The video-chat interface is cluttered with tiny webcams talking over one another. You’re connected to the internet but you feel disconnected from your team and although you may not see it now, you are living on the verge of a paradigm shift.

The immersive paradigm shift is a moment in time when the line between what we perceive as ‘real’ and what is not will blur indefinitely. This is a world where cameras are programmed to defy reality, bodies swing and walk into nothing, and eyes become sentient portals to a collective imagination.

If you haven’t guessed it by now, I’m talking about the trifecta of incoming immersive technology, or rather the much-anticipated mass-market emergence of augmented reality (AR), virtual reality (VR), and mixed reality (MR). While the three differ somewhat from one another, they share one important aspect: each represents a new dimension of human-computer interaction (HCI).


That’s not to say this is our first rodeo. Over the past 25 years we’ve seen technology bring forth dramatic changes to the economic and social fabric of our society. From the internet powering our knowledge economy to mobile computing transforming how we communicate, these significant evolutions are judged not just by their technical sophistication but by their intrinsic ability to transform our lives.

Thanks to advances in computer vision — particularly in object sensing, gesture identification, and tracking — sensor fusion and artificial intelligence have furthered our interaction with computers, as well as the machine’s understanding of the real world.

Moreover, advances in 3D rendering, optics — such as projections and holograms — and display technologies have made it possible to deliver highly realistic virtual reality experiences. As a result, immersive technologies now allow us to interact with one another and with machines in a completely different manner, as we will no longer be confined to a 2D screen.

As scary as that may sound, governments and businesses need to prepare for the various modalities that immersive tech will introduce across their products and processes. This moment is no different from the shift from fax to email or the introduction of the smartphone. Moving to VR and AR will simply be the next natural step in staying relevant and competitive.

So if immersive technologies are poised to profoundly change the way we work, live, learn, and play, what ramifications should we expect? As speech, spatial, and biometric data are fed into artificial intelligence, new questions will emerge over the extent of our virtual privacy and security. As technology becomes more comfortable and intuitive, we risk falling under the illusion of control.


Throughout the history of computing, every significant shift in modality has brought with it new and potentially destabilising threats. If we fail to ask the right questions, the problems we will experience adjusting to this new technology may be greater than those posed by the internet and mobile computing combined. Let’s explore some examples:

It’s no secret that augmented reality technologies, which overlay virtual content on users’ perceptions of the physical world, are now a commercial reality. Recent years have seen the success of AR-powered camera filters such as Instagram Stories, with more immersive AR technologies such as head-mounted displays and automotive AR windshields now shipping or in development.


With over 3.4 billion AR-capable devices expected to hit the market by 2022, augmented reality is predicted to make the earliest splash among consumers. We should expect wearables that let us navigate the real world through Google Maps and camera applications that scan the objects surrounding us in a grocery store. Therefore, anticipating and addressing the security, privacy, and safety issues behind AR is becoming increasingly paramount.

Buggy or malicious AR apps on an individual user’s device risk:

  • Recording privacy-sensitive information from the user’s surroundings (example: productivity tools)
  • Leaking sensitive sensor data, such as images of faces or confidential documents, to untrusted apps (example: Instagram and Snapchat)
  • Disrupting the user’s view of the world, for instance by occluding oncoming vehicles or pedestrians in the road (example: Google Maps)

Multi-user AR systems can experience:

  • Vandalism, such as the incident involving augmented reality art in Snapchat
  • Privacy risks that bystanders may face due to non-consensual recording by the devices of nearby AR users

For the most part, AR security research focuses on individual apps and use cases, mainly because many of the problems we have already experienced with internet and mobile computing are expected to cross over to the new AR medium.

For instance, when the App Store first launched, many iPhone apps were nefariously designed to siphon and package users’ mobile data in the background. Security analysts expect similar issues to arise with AR; only this time it won’t just be our location data they’re after, but our more sensitive biometric data. More on that later.

Lastly, AR technologies may also be used by multiple users interacting with shared virtual content and/or in the same physical space. This includes virtual vandalism of shared virtual spaces and unauthorised access to group interactions. However, these risks have not yet been studied or addressed extensively. This will surely change as the technology hits the mainstream.

[Image: Jeff Koons’ augmented reality Snapchat artwork gets ‘vandalized’]

Virtual reality is the use of computer technology to create a simulated virtual environment. As visual creatures, we have been dreaming of creating virtual environments since the inception of VR research in the early ’60s. At first, commercial uses were mainly in video games and advanced training simulations (NASA), but as the technology advanced, so did our potential for using it.

Since the Oculus Rift’s debut in 2012, digital tools for VR have slowly begun to emerge. From Facebook’s all-in approach with its collaborative, social-media-esque Horizon to Valve’s newly released and highly praised Half-Life: Alyx, there are plenty of examples today that show off the prowess of current-state VR. Indeed, with so many development and hardware companies competing for market share, it may feel like virtual reality has finally arrived.

However, as new tools and applications for virtual reality continue to develop, new questions are emerging over intellectual property rights. Since everything in virtual reality will be a rendered model of something, control over the aesthetics, feel, and look of a given model may imply some form of ownership. This will become more pressing as the medium extends into other fields such as autonomous vehicles, e-commerce, and even medical procedures.

For instance, branded items, recognisably shaped cars, dangerous weapons, and iconic places have appeared in video games for years. A great example is Rockstar’s Grand Theft Auto series, which has fought numerous IP battles after satirically recreating the cities and environments of Miami and Los Angeles.


Once we reach a form of ‘near reality’ within a game environment (one of higher fidelity than the current 2D experience), we should expect intellectual property issues in virtual reality to skyrocket. For instance, a printed image of a painting from Google Images is much less of an IP issue than a high-quality virtual model of the same painting within a future virtual space.

Couple this with the fact that the depth sensors in our phones are increasingly capable of scanning real-life objects and modelling them in real time, and in the future anyone will be able to create a virtual model of anything and place it virtually anywhere.

Intellectual property predictions to expect:

  • IP protection of places and buildings is a growing trend, with EU lawmakers continuing to debate whether built structures open to the public should have rights attached to them, a debate over the so-called “freedom of panorama”.
  • IP protection of experiences, such as the feel or smell of a particular store, airline, or hotel chain, becomes possible with haptic virtual technologies. Although it is difficult to justify protecting an aesthetic today (only Apple has managed it, with its store layout), this may become more relevant with VR.
  • Featuring a branded item or a well-known person is currently seen as a potential intellectual property infringement. How will this change if it is the player who is inserting self-scanned models rather than the game developer? Who is going to be liable?

This last point is the most interesting because it asks whether the platform or developer is liable even though it may be the user who is placing IP-protected models into the virtual environment. It is a crossover similar to the early days of peer-to-peer technologies such as Napster and LimeWire, when users shared IP-protected MP3 and video files with one another.

In the future, VR should expect IP problems similar to those we see today. Faster computers and smarter artificial intelligence programs will allow users and developers to upload virtual objects with unprecedented ease. Add to this the idea that virtual reality will someday be as realistic as real life and we’ll have an interesting problem on our hands.

Unlike virtual reality which immerses the end user in a completely digital environment, or augmented reality which layers digital content on top of a physical environment, mixed reality (MR) occupies a sweet spot between the two by blending digital and real world settings into one.


When it comes to mixed reality, biometric and environmental data is an essential yet consequential by-product of sensory technology. This is mainly because developers need access to data to tweak specific functionalities and perfect the comfort and usability behind an immersive tool.

Thus, as immersive tools enter our homes, we are at risk of digitalising and exposing our most personal information. The potential for these applications to siphon biometric data is fundamentally tied to our security and privacy. Nobody knew at first how much user data the mobile phone was collecting through our apps. Why shouldn’t we expect the same with immersive devices?

The data collected will someday include:

  • Fingerprints
  • Voiceprints
  • Hand & face geometry
  • Electrical muscle activity
  • Heart-rate
  • Skin response
  • Eye movement detection
  • Pupil dilation
  • Head position
  • Hand pose recognition
  • Emotion registry
  • Unconscious limb movement tracking

At its core, there is nothing more sensitive and unique than an individual’s biometric data. For instance, heart rate, skin response, and eye movement within a controlled virtual environment can be collected to analyse an individual’s reaction to a virtual advertisement. Thus, a feeling meant to be reserved for your own inner self can someday be downloaded and scrutinised by external corporate entities.

Additionally, it’s important to mention that biometric data is defined under Article 4(14) of the GDPR, and processing it to identify individuals is, by default, prohibited under Article 9. Despite this, questions about the potential consequences of this data being mis-collected or misused remain highly relevant. Advertising will be the first to enter this space, but expect greater consequences with the continued rise of the surveillance state.

Every major modality shift in technology has brought with it new threats and dangers. From the fax machine making information vulnerable to loss and theft to the internet making malware easy for susceptible users to download, malicious actors will always find a way to exploit our naivety and ignorance.

As users and consumers of digital technology, we need to be aware of the privacy risks involved in hooking ourselves up to sensor-laden devices. Virtual reality can be useful and fun, but make sure your biometric data isn’t being funneled to a third party.

In the business world, immersive technology will force many companies to rethink their internal and external processes — due diligence and hiring the right people will be important, as will taking the necessary steps to protect your IP and to make sure your virtual products can’t be hacked or ‘vandalised’.

Lastly, governments and public institutions need to prove to the public that they can preempt the various threats immersive tech will bring to business, social well-being, and user privacy. So far, legislators and tech companies have been playing a game of cat and mouse. As we move forward, a firm hand and some much needed transparency will be key.

The future of technology will no doubt be impressive. Someday we will look up to the skies to access our information instead of down to our phones. Yet, the warning here is that comfort is never bliss. Where there is comfort, there is an opportunity for naivety and exploitation. As we gear up for the immersive paradigm shift, please remember to stay informed.


The Neurological Conditioning of Sound

The greatest weapon in a sound designer’s arsenal is the mere fact that we listen first and react second. Join us as we briefly explore the neurological, anthropological, and digital histories behind how we interpret sound and why not everything you hear should sound like the truth.


Remember the first time you were alone in a quiet house and heard a creepy sound? Maybe it was a windy day and the floor creaked and the window rattled. The sound you heard was clearly the logical result of wind pushing against a creaky wooden structure, yet the auditory signal is interpreted by the amygdala (a small but very important part of your brain that helps trigger the fight-or-flight response) as a threat.

Your thoughts quickly flow into scenarios: is it a ghost? Or perhaps a robber? For the first five seconds these possibilities are all you might consider. They dominate your imagination and thought processing. Until the rational side of your brain — granted some time passed without other similarly scary sounds — convinces you that the sound is nothing to be afraid of. But part of you still believes that, during those first five seconds, you actually saw, or at least heard, a spooky ghost making that sound.

If you are unlucky enough to believe you have witnessed paranormal activity, you can consider yourself conditioned. In humans, conditioning is part of a behavioural repertoire of intelligent survival mechanisms supported by our neurobiological system. These underlying mechanisms promote adaptation to changing ecologies and efficient navigation of natural dangers. In this case, you have been conditioned to be aware of a sound attached to a particular danger.

Conditioning is a big reason why our brains don’t like to be surprised. Under what researchers call the Survival Optimization System (SOS), our response to most danger usually begins with a sound. This is because, as far as human experience goes, you hear faster than you see: the brain processes auditory signals in a fraction of the time it needs for visual ones, so sound modifies all other input and sets the stage for it.

[Image: Tree graph from “The ecology of human fear: survival optimisation and the nervous system”]

We hear first and listen second because, in this Darwinian struggle we call life, it’s considerably faster and more effective for our brains to react to the possibility of a threat than to wait to confirm it. Thus, the by-product of a ghostly scare is a deep, mechanistic rewiring of our neurobiological system to that specific occurrence of sound. So, for at least the near future, any sound you hear alone in a quiet house will trigger your brain’s survival mechanisms to react fearfully to the potential presence of a spooky ghost.

Yet, conditioning doesn’t only happen with things that scare us. As we said before, conditioning is a natural process the brain undergoes when faced with repetitive sensory information. It is a software-like response that codes a defence mechanism into our subconscious reactions.

In psychology, sound conditioning is defined as:
A process in which a previously neutral stimulus comes to evoke a particular response by being repeatedly paired with another stimulus that normally evokes that response.

A classic example of sound conditioning is Pavlov’s experiment, which sought to establish whether salivation in dogs could be triggered by pairing a bell sound with food. Every time Pavlov rang the bell, he would feed the dogs. After this pairing was repeated enough times, the dogs developed a conditioned response: when Pavlov removed the food and rang the bell alone, the dogs still salivated.
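To make the pairing mechanism concrete, here is a minimal, illustrative sketch (my addition, not from the article) using the Rescorla-Wagner rule, one standard formal model of this kind of conditioning. The bell's associative strength climbs while bell and food are paired, then decays once the food is withheld; the learning rate and trial counts are arbitrary choices for illustration.

```python
# Illustrative sketch of classical conditioning via the Rescorla-Wagner rule.
# V is the associative strength of the bell (how strongly it predicts food).

def rescorla_wagner(trials, v=0.0, alpha=0.3):
    """Update associative strength V over a sequence of trials.

    trials: list of booleans, True = bell paired with food, False = bell alone
    alpha:  learning rate (salience of the bell), chosen arbitrarily here
    """
    history = []
    for food_present in trials:
        lam = 1.0 if food_present else 0.0   # outcome actually delivered this trial
        v += alpha * (lam - v)               # prediction-error update
        history.append(round(v, 3))
    return history

if __name__ == "__main__":
    acquisition = [True] * 10   # bell + food: conditioning
    extinction = [False] * 10   # bell alone: extinction
    print(rescorla_wagner(acquisition + extinction))
    # V climbs toward 1.0 during pairing, then decays back toward 0.0
```

Running it shows the familiar acquisition-then-extinction curve: rapid learning while the pairing holds, gradual unlearning once it stops.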


What the Pavlov experiment demonstrated is that most intelligent animals, including humans, given sensory repetition, are capable of developing a conditioned response to a conditioned stimulus. It’s a big reason why people listen for cars before crossing the road, why particular songs make us remember the past, and, in a more humorous sense, why children run after the ice cream truck.

Throughout human history we have devised alarms that alert us to dangers small and large. As humans gathered into larger groups and more permanent settlements, we artificially conditioned ourselves to respond to alarms that would warn us of incoming danger of all kinds. From early fire alarms alerting a hundred people that a building is on fire, to tsunami warning systems alerting millions to get to high ground, the story of alarms is largely the story of civilisation.

We hear dozens of conditioned alarms without even realising it: car horns, police sirens, school bells, and, most pertinently, the digital songs, sounds, and notifications of our everyday consumer technology.

Today, most people wake up and listen to sounds they have been conditioned to hear. For instance, some may incorporate a specific upbeat song into a wake-up or workout routine. Others may play particular songs associated with memories of romance and relationships. Even outdoor concerts and music venues can function as places for establishing and communicating tribal signals such as identity and mating readiness.

A popular, catchy summer song (Daft Punk’s Get Lucky comes to mind for 2013) may define an entire summer, not just in one country, but around the world. Together these songs — whether associated most to a workout, romantic date, or summer party — represent rituals of emotional outputs or certain moods.

It’s no secret that sound designers today take great interest in our emotional conditioning to particular, ubiquitous sounds. In fact, how past societies interpreted sound is a large part of a sound designer’s inspiration for understanding how to select the right ding for your app.

For instance, the talking drum of West Africa is an interesting and unusual example. The drum was specifically designed to make a variety of sounds that emulate human speech, giving it a basic but intuitive beat-like vocabulary. This made the drum an effective signalling device for long-distance communication between remote African villages.

When a beat was drummed, faraway drummers would hear the signal and pass the message along, much as a torch runner passes a flame to the next torch. The system was remarkably sophisticated, too; only weeks after Abraham Lincoln’s death, news of the tragedy and its complex implications had penetrated the African interior on the drum.

From an anthropological and sound design perspective, the talking drum was far beyond any other audible communication device of its day. By communicating a wide variety of messages through rhythm, tone, and strength, its sound was designed perfectly for what it was needed for: an ideal, early, and elegant solution to the problem of communicating between villages. Three strong beats, for instance, might mean an attack was coming, conditioning other villages to mobilise together and defend an alliance.

Today, sound design has evolved greatly in its effort to convey messages. We have gone from the drawn-out drum beats of the savannah, to the binary pings of Morse code, to the monotonous buzzes of the pager, and now to the myriad pinging sounds of our smartphones.

The outcome is that we have become conditioned to the smartphone the same way we are conditioned to a fire alarm or West African drum beat. For some, the ding of a social media message can bring forth excitement, butterflies to the stomach, or even a sigh of relief.

Take the Zoom incoming call ringtone, an all too familiar sound in our current pandemic times. The deliberate alternation of high- and medium-pitched notes resembles a non-threatening plea for attention, which, repeated, can quickly turn into an aggressively annoying noise that must be addressed. Sounds are a major tool in the software and hardware developer’s arsenal for eliciting the emotional reactions they intend from the user. We respond instinctively to natural sounds, which can trigger any number of emotions. We also respond instinctively to artificial sounds, which are most effective when they mimic sounds we are already conditioned to.
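To show how simple such a cue is to construct, here is a minimal sketch, an assumption-laden approximation rather than Zoom's actual ringtone: it alternates a higher and a lower tone with short pauses (the 880 Hz and 660 Hz pitches and timings are arbitrary, illustrative choices) and writes the result to a WAV file using only Python's standard library.

```python
# Minimal two-tone "incoming call" style alert, written to alert.wav.
# Pitches, durations, and pauses are illustrative guesses, not any product's real tones.

import math
import struct
import wave

RATE = 44100  # samples per second

def tone(freq_hz, seconds, volume=0.4):
    """Generate one sine-wave tone as a list of 16-bit sample values."""
    n = int(RATE * seconds)
    return [int(volume * 32767 * math.sin(2 * math.pi * freq_hz * i / RATE))
            for i in range(n)]

def alert(cycles=3):
    """Alternate a high and a medium pitch, separated by short silences."""
    samples = []
    for _ in range(cycles):
        samples += tone(880, 0.25)           # higher note
        samples += tone(660, 0.25)           # medium note
        samples += [0] * int(RATE * 0.15)    # brief pause before repeating
    return samples

if __name__ == "__main__":
    data = alert()
    with wave.open("alert.wav", "w") as f:
        f.setnchannels(1)      # mono
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(struct.pack("<" + "h" * len(data), *data))
```

Looped a few times, even this crude pair of notes starts to take on the insistent, answer-me quality described above, which is precisely the point: very little audio engineering is needed to make a sound that demands attention.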

Nowhere is emotional conditioning to sound more prevalent than in our current and historic use of social media. Take, for example, the now-retired Facebook Messenger notification. For some, hearing that sound will transport them back to 2013. Perhaps they will associate that sound with a lost love, creating a neurological output of emotional pain. However, for most of us today, the ding of a social media app gears the brain to expect some form of social gratification.

Indeed, before even glancing at our screens to see who liked our last photo or sent us a message, we begin to imagine the realm of possibilities for who may be trying to contact us. Is it a crush? Is it a friend inviting you to that party you wanted to go to? With every chat comes an expectation, and the stronger that expectation is emotionally, the more strongly you will be conditioned to that sound.

When distinct and repeated sensory stimuli, like UI sounds, are paired with feelings, moods, and memories, our brains build bridges between the two.

– Rachel Kraus

As devices, software applications, and apps become omnipresent, the User Interface (UI) sounds they emit — the pings, bings, and bongs vying for our attention — have also started to contribute to the sonic fabric of our lives. And just as a song has the power to take you back to a particular moment in time, the sounds emitted by our connected devices can trigger memories, thoughts, and feelings, too.

A word of advice from someone who has felt the anxiety of a message tone and the sadness of an old song: I believe all of us should be more aware of how digital sounds can be tuned to condition our emotional lives. Like Pavlov’s dogs, we are being conditioned by Silicon Valley to expect social gratification from the various dings and boops of our devices. We need to learn to expect these feelings from the outside world, not from the digital world inside our pockets.

Only then can we begin to clean up the noise and listen to the music.


“Who we are is not just the neurons we have,” Santiago Jaramillo, a University of Oregon neuroscientist who studies sound and the brain, said, referring to cells that transmit information. “It’s how they are connected.”