We’ve given computers our minds.
We now need to lend them our bodies.
Computers are ubiquitous. Aided by the internet, they have infiltrated our individual, social, and political lives. With the expansion of artificial intelligence and machine learning, they are poised to have even greater impact. But we are at a crossroads.
Put simply, there is a problem in placing increasing trust in our machines. Never mind that the average human, and even the educated one, doesn’t understand computers so well—computers don’t understand humans so well.
Machines are increasingly tackling real-world issues without real-world inputs, and this greatly affects their capacity to be healthy contributors to our society. To address this, we need a massive shift in the way that we—engineers, technologists, sociologists, journalists, ethicists, all of us—approach discussing, designing, researching, and building computational devices.
We need to give them the wisdom of embodiment.
About ten years ago, I woke up one morning and couldn’t walk. I had developed chronic low back and hip problems from working at my computer, and on this day only a few years after I’d finished college, the symptoms were at their worst. Why would my use of technology harm me physically?
The long road of chronic pain management sparked my interest in this relationship. It led eventually to New York University’s Interactive Telecommunications Program, where I recently completed three years of experimentation and research at the intersection of the body and computing.
At NYU, I created both interactive performances and creative tools. I read voraciously. The more I learned, the more I understood the vast gulf between the way people interact with the real world and the way they interact with virtual ones. My intellect began to embrace what the rest of me had realized long ago: by and large, computing focuses on the mind and ignores the body. This affects our physical, mental, and social well-being.
It doesn’t have to be this way. To get past abstractions, picture a Thursday morning in the not-so-distant future.
Your alarm goes off, and you feel refreshed. It’s about an hour later than you thought you would wake, but you don’t worry. The alarm knows your patterns, and rings when you complete your sleep cycle; your calendar has been reworked to accommodate the extra shut-eye. Hungry, you go to the kitchen, where your grocery delivery has been optimized for your biology, the time of year, and your upcoming weekend schedule. You shower, get dressed, then head outside to the driverless car that arrives just in time to whisk you to work. On the way, you rest your head and review some op-eds and policy papers for voting in your local primaries. Nearly imperceptible bands around your wrists allow your fingers to control images on contact lenses and sound through a tiny listening device behind your ear. Your intelligent OS has collected everything you asked for last night, checked its facts and credibility, and built an audiovisual briefing just for you. You complete your ballot by voice and sign it with your unique hand gesture.
Here you have every tech optimist’s muse: a world where technology is seamlessly integrated into our lives, making us more productive, more balanced. Giving us superpowers without detracting from our day-to-day lives, from our ability to do meaningful work, or from spending time with the people we love. Making us better, singly and collectively.
To be sure, if we’ve learned nothing else in our relationship with tech, it’s that there are unanticipated effects and applications, not all of them good. Indeed, this world is also every science fiction writer’s muse, the foundation for dystopia.
The utopian vision depends on technology operating in the best interests of each user and of society as a whole. That’s a tall order, harder to envision than the acknowledged potential for abuse. Nonetheless, let’s play the optimist for a moment. Imagine that we want to begin working toward tech utopia. How might we understand even one user’s best interest?
Conventional design wisdom suggests we study the user, her behavior, her needs.
Today’s computer boils her down to a set of disconnected usage numbers input through a keyboard and mouse, maybe a two-dimensional webcam. But she’s complex. Drawing in an instant on billions of years of evolutionary learning, she processes and stores information from multiple senses, multiple times per second. She then uses the data to inform her personal and societal decisions.
If we want a shot at the tech utopia—one where computers take a responsible role in our individual and shared lives—we need to help computers think more like humans. We’ve given them our minds. We now need to lend them our bodies.
I have a few suggestions to help move us forward.
#1 Define the User Holistically
Don Norman coined the phrase “user-experience design” at Apple in the 1990s because he felt design lacked language to cover “all aspects” of a person’s experience with a system. Since then, he is the first to admit, the term has become overused and the concept underused.
Most design still defines “user” too narrowly—by the mental tasks she is trying to perform, with little or no regard for her body. The result is hours of disembodied computing time, often impairing posture, physical well-being, self-esteem, productivity, and social skills.
Some might argue that fixing this is the sole realm of hardware designers. I say: Hogwash.
Software creators design moment-to-moment experiences for billions of users. Although these creators work within the confines of hardware, they have immense flexibility from there. Facebook, for example, chose to design an endless feed of updates that keeps people hooked for long periods of time. It could have designed a feed that ends in three minutes, then encourages users to look up from their phones, maybe do some neck rolls, or go for a walk. It sounds outlandish in today’s advertising-driven online environment, but I would venture to guess that social media users would prefer an application that fosters healthy behaviors over addictive, voyeuristic ones.
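The "feed that ends in three minutes" idea can be made concrete. Below is a minimal sketch, assuming a hypothetical `FeedSession` class and illustrative break prompts; no real platform exposes such an API, and the names and time budget are my own inventions for illustration:

```python
import time

# Illustrative break suggestions a session-capped feed might surface.
BREAK_PROMPTS = [
    "Look up from the screen for a minute.",
    "Try a few neck rolls.",
    "Go for a short walk.",
]

class FeedSession:
    """Hypothetical feed that stops serving posts after a time budget."""

    def __init__(self, items, budget_seconds=180):
        self.items = iter(items)
        self.budget = budget_seconds
        self.started = time.monotonic()

    def next_item(self):
        """Return the next post, or a break prompt once the budget is spent."""
        elapsed = time.monotonic() - self.started
        if elapsed >= self.budget:
            prompt = BREAK_PROMPTS[int(elapsed) % len(BREAK_PROMPTS)]
            return {"type": "break", "message": prompt}
        try:
            return {"type": "post", "content": next(self.items)}
        except StopIteration:
            # Feed exhausted: encourage a break rather than refilling forever.
            return {"type": "break", "message": BREAK_PROMPTS[0]}
```

The design choice worth noticing is that the session ends by construction: the budget is enforced in the delivery loop itself, not bolted on as an optional "screen time" setting.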
Today we reconstruct our faces to fit two-dimensional cameras, distort our spines to stay connected, and choose screens over sleep. Such side effects must be front and center for hardware and software creators as they design tomorrow’s digital experiences. Pretending technology isn’t deeply connected to physical, mental, and social well-being is not only out of touch, it’s dangerous.
Shifting to seeing users as whole persons will take work. How does embodiment affect users’ needs and abilities? How do hearing, seeing, speaking, feeling, tasting, and smelling affect their desire and ability to perform a task? How does the task move with the user beyond the immediate computing session? What does a user experience look like when you think holistically throughout the user’s 24-hour day? How do designers identify and accommodate seemingly infinite variety among individuals?
#2 Question “Intelligence”
The terms “artificial intelligence” and “machine learning” are sprinkled on everything from marketing to manufacturing, but few people have more than a cursory understanding of their meanings and how they work.
The very definitions are problematic. Machine “intelligence” and “learning” are often linked to the human variety and lumped into a concept of “general intelligence” that ranges from low to high. In fact, in computing, distinct “intelligences” are increasingly specialized—for chess playing or fruit sorting, to name just two.
Conflating computer and human intelligence causes confusion inside and outside of tech circles, and limits both understanding and innovation. We can remedy this by improving the language we use to distinguish machine from human. Practitioners, academics, and thinkers should consider a new framework for discussing the field. Neuroscience and pedagogy—particularly Howard Gardner’s theory of multiple intelligences—are helpful starting places.
Language is part of the issue. Also critical are fundamental differences in how humans and machines learn. To understand the problem, look at embodied cognition.
Embodied cognition is the theory that our bodies inform our minds. It goes beyond mind-body connection to posit that the mind “arises from the nature of our brains, bodies, and bodily experiences.” Computers, in contrast, learn through opaque artificial neurons trained with pre-processed data that lacks physical grounding.
The idea that machine intelligence would benefit greatly from human-like bodily awareness is not new. Alan Turing discussed it as early as the 1950s, embodied cognition entered the AI/ML discussion in the 1980s, and recently researchers at IBM, Facebook, and elsewhere have dipped their toes in this realm. But application of the concept is as difficult as the body is complex, and meaningful progress will take time.
We should use this time to expand our knowledge. Yes, AI/ML researchers and academics will continue to explore embodied learning. People working in fields affected by this research and those who report on it have responsibility too. This is a nascent field with long-term implications for every one of us. Let’s not pretend to understand what is unknown. Rather, let’s get involved: Read, ask questions, understand the technology’s powers and limitations, then question what we’ve learned.
We need to push the limits of discovery now, before the silicon hardens.
Of course, many artificial intelligences (like the aforementioned chess player) don’t need the embodied nuances that humans carry into life. But intelligences taking on social roles do. Today we invite AI/ML to get involved in the criminal justice system, moderate our political discussions, and reform our economy. While it may not be possible or desirable to create machines that fully mimic human embodiment, getting to the heart of the body’s importance in intelligence and learning would give computers a finer understanding of the complex questions we increasingly outsource to them.
#3 Take an Embodied View of Data Collection
There’s a disconnect between what computers were originally built for — solving analytical problems — and what they are now being used for — interacting with the world at large. Machine learning indeed shows promise in helping the computer transition to social actor. But it brings to light a central difficulty: data.
Machines learn from data collected mostly online through intermediaries, like mouse and keyboard. The result is worlds away from the rich embodied data that people absorb in daily life.
If we want machines to think more like humans, we need to narrow the gap.
In computing’s early days, inputs were limited. The leap forward in mobile and wearable technology offers a plethora. We can now detect motion, location, position, depth, vital signs, voice, and so on. We also have increasing connectivity, storage, and computing power. We can begin to change the ways people interface with their computers, and ultimately how computers understand people and their environments.
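To make the idea concrete, here is a minimal sketch of how a few of the inputs named above (heart rate and motion) might be fused into a single snapshot of a user's bodily state. The class name, fields, and thresholds are all illustrative assumptions, not a real sensor API:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class EmbodiedContext:
    """Hypothetical accumulator of bodily signals from wearable sensors."""
    heart_rates: list = field(default_factory=list)   # beats per minute
    step_counts: list = field(default_factory=list)   # steps per minute

    def add_sample(self, heart_rate, steps_per_minute):
        self.heart_rates.append(heart_rate)
        self.step_counts.append(steps_per_minute)

    def snapshot(self):
        """Summarize recent samples into a coarse activity label."""
        hr = mean(self.heart_rates)
        steps = mean(self.step_counts)
        # Thresholds below are placeholders, not clinical values.
        if steps > 60:
            state = "moving"
        elif hr > 100:
            state = "elevated"
        else:
            state = "at rest"
        return {"avg_heart_rate": hr, "avg_steps": steps, "state": state}
```

Software that consulted a snapshot like this before deciding what to display would, in a small way, be responding to the user's body rather than only to her clicks.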
But an embodied view of data goes beyond collection. Just as we have certain unalienable rights in our real bodies, we must extend those rights to our increasingly extended selves.
Advising people to relinquish more personal data pushes against my comfort zone. I’m wary of data collection, as many of you probably are. Still, I agree with Kevin Kelly’s assertion that “tracking” is a trend on its way in, not out. Rather than fight it, I say we make it better.
If we act with humanity and responsibility, there is, I would argue, much to be gained. The data collected today will lay the groundwork for the programs and environments of the future. The more we know, the more we can make informed decisions and informed software and hardware. Go back to my observations about defining the user holistically. What can we learn outside of keyboard and mouse input? How do environment, mood, touch, movement, temperature, sound, smell, or taste affect the user?
We don’t completely fathom how these factors affect us without the computer as intermediary. How might data obtained with more intelligent collection aid us in better knowing ourselves? To belabor the point: If, as Kelly puts it, “we are not really beings, but ‘becomings,’” how could reimagined information help us on our journey?
#4 Realize We Are in a Paradigm Shift
Personal computers and the internet have been changing the world for three decades, while in just five years, accelerated adoption of smartphones and social media has brought computers to ubiquity. Concurrently, the combination of big data, faster processors, and deep learning has spawned a wave of advancement in AI/ML. These factors converge to create a paradigm shift.
Paradigm shifts require new ways of thinking. But the new can be stymied by the old. Take web design: Mobile accounts for more than 50 percent (and rising) of global web traffic, yet most designers and engineers aim first at desktop, keyboard, and mouse, and second at mobile and touch.
New thinking takes time to develop. It requires a lot of mental capital. The process is messy, ugly, and sometimes results in terrible ideas. It’s hard to force ourselves to go to these depths, especially when what we know has worked for us this far.
But wading through the swamp of possibility is the only way to arrive at true innovation. And only true innovation will ensure that when we look at our technology 20 or 50 years from now, we see systems that reflect our embodied humanity, not just the dark corners of our disembodied minds.
#5 Go for Walks. Lots of Them.
We don’t have a fully logical explanation of the mind-body connection. Maybe that’s because there’s no way to “logically” understand what we intuit through embodiment. What we do know is that our embodiment informs who we are, and how we relate to ourselves, others, and the environments we share.
Our current connected technology takes us out of our embodiment. It translates our world into a place defined by the mind’s output without the underpinnings of the body’s input. To innovate in ways that evolve this relationship, we need to continue to explore who we are outside of our devices.
Preparing to put my initial thoughts to paper several weeks ago, I went for one of the many walks that have shaped this article. That first afternoon, I returned wondering whether I’d always have to get away from my technology to feel present and grounded in the world around me. The answer from tonight’s walk: It will depend on how well that technology is grounded in the world around me.
So: Go for walks, leaving the smartphone and tablet behind. Use the time to understand how your physicality informs you. Bring this understanding back into your work with technology. Think of Virginia Woolf, who used her walks through England’s South Downs to gain space to spread out her mind. We are building the hive-mind of the future. Will it know that space?