Issue 7: Feeling
Please Leave Me Your Robot Cat in the Will
Late one evening I found myself near tears looking at the Amazon reviews of a robot cat. Customer after customer had posted pictures of elderly loved ones peacefully cradling a sort of flat-faced version of a stuffed cat. The photos were funny; the seniors’ beatific expressions were encouraging, but a little heartbreaking. The “Ageless Innovation | Joy For All Companion Pet” is $99 and comes in four different fur patterns. It can meow, purr, and even roll over in a limited semblance of cat-like behavior. While a few reviewers mentioned giving the robot cat to children, the majority had purchased it for older family members: some in nursing homes, some wanting a replacement for previous pets, others as a wild guess at how to provide for the senile. I felt torn: was this manipulative tech-solutionism for kids without time for their parents, or technology at its purest, human-assistive root? Not a robot takeover, but maybe a more intimate hacking of our “anthropomorphic sentimentalization.”(1)
The relationship we have with our actual pets was already suspect. My cat would definitely not survive in the wild, but this doesn’t absolve me of living in some morally murky zone of denying another creature their full range of being. My imagining of her personality is central to our interaction—I do not know what she thinks, so I necessarily fill in the gaps with what she might “want” or “feel”—the black box of my black cat, but unpredictable in a way a machine rarely is. This idiosyncrasy is part of what makes having a pet rewarding. Part of their ineffable presence that makes them feel like genuine companions is that they are ultimately their own being. Like humans, pets can just as easily comfort or infuriate us.
But our pets will not go on strike for fear of being mechanically replaced. If anything, the end of being selectively bred for looks, ability to breathe be damned, might be good for them. As a barista, I might say the same of the mechanical arms people are occasionally enamored with. If you’re going to treat service workers like trained grubs and pay them a pittance, then fine, have your robot. It will be infinitely patient with your order, but entirely inflexible with your idiosyncratic off-menu preferences. Such an outlook is incredibly isolating, hurtling us further down our destructive neoliberal path that prioritizes “individuals,” and only a select subset of them at that. Devaluing “work that a robot can do” primarily because a robot could, theoretically, do it has helped lead to horrible working conditions for humans, who are in this vicious configuration much cheaper and more replaceable than robots. If we’re incapable of making kin with each other, we certainly don’t deserve samoyeds. That we find ourselves with feelings towards both real and robot cats is a sort of wrench in the machinery that insists humans “know better.” What could society look like if we acknowledged that the robot barista brings us a comic delight all its own, but so do our interactions with human baristas? What would society look like if we didn’t set up survival as a zero-sum war between imagined factions of humans, nature, and machines? The world cannot be understood without feeling; we can’t shy away from this but should take it as a crowbar to pry open possible reinterpretations of life.
Do Androids Dream of Electric Butterflies
It is typically elided that famed roboticist Masahiro Mori, author of the original “Uncanny Valley” theory, is also a very serious Buddhist. In his book The Buddha in the Robot, Mori explicitly states that robots, like humans, have “buddha-nature,” and so, like humans, have the ability to achieve buddhahood. This is of course very complicated depending on your particular Buddhist tradition, but in Chan Buddhism, this could mean something like “the absence of an independent and substantial ‘self’.”(2) In discussing his students’ robot-making, Mori felt that their absorption in the activity helped them blur the relationship between animate and inanimate things: “to forget oneself is to perceive oneself as all things.”(3)
Schools of Zen Buddhism (Chan’s Japanese iteration) often promote this unity of everything “beyond individuality or personality–the matrix of all phenomena,”(4) thereby tautologically including robots. Shintoism, another of Japan’s significant cultural influences, with its notion of the spirit-in-everything (naohinomitama), extending potentially even to tools (tsukumogami), similarly encourages a conscientious approach to both nature and objects. Being cautious of techno-orientalist oversimplification, having this ego-leveling as ideological background, as opposed to, say, Puritanical human exceptionalism, is surely relevant to the wealth of digital friends originating in Japan—Tamagotchi, Pokémon, PARO, Aibo, Neko Atsume, to name a few. Meanwhile, industrialization and changing discourses of science in nineteenth-century Europe created a more acute anxiety to emphasize that human behavior is not predetermined, that humans are not part of nature, nor are they machine—setting up what has continued to evolve as a somewhat antagonistic relationship wherein we (read: corporations) try to prove this point with willful exploitation and imagine it won’t bite us (read: more precarious people) in the ass.
Of course, Japan has also played its part in environmental destruction despite Mencius’ warning to chop firewood sustainably. Cultural myth-making only extends so far. Myths have drawbacks too; the non-self of Zen that is central to the post-modern dance Butoh is also likely a part of overworked salaryman culture. Both the U.S. and Japan channel significant funding for robot innovation from the military, also hardly enlightened. As long as we have to decide what myths we want—and we do—there is no vacuum of non-ideology to inhabit. Maybe we could all try harder to recognize we’re in a closed system. If we are less absorbed in trying to differentiate ourselves from robots (automaton foils to our free-will wildness), we are left with imagining what we can do better with each other. If machines and the work they do can be valued as necessary, we can strive to value people and whatever they do on a fundamental level as well.
While people typically do not adopt their robots as pets, engaging with them in the way we do a cat involves a similar identification and subsuming of self that Mori hopes to cultivate. I delight in scratching my cat’s chin and for a moment, that is my whole world. We do not do this when fighting with Alexa to play what we want. If anything, this interaction separates us even further from this dumb dot that still cannot recognize that I am asking for the same news podcast I ask for every day. Yet Mori would understand our relationship with both pet and bot not as one of “master-slave” (in the simplest, most debased understanding) but of “interlocking identities.” As an end-user, once I recognize that Alexa and its relatives “don’t do what you want them to do unless you do what they force you to do,” the roles in that relationship are immediately complicated. Though humans designed the interface, it has become part of an apparatus larger than any individual, with only a calcified original core of humanness.
Understanding natural “intelligence” as biased, trained through life, etc., blurs its distinction from “artificial intelligence.” I know what kind of movies I like to watch based on trial and error and on the movies I’ve already been exposed to. So it’s unsurprising that by trailing my digital history, Netflix’s algorithm can also make an accurate guess. My cat has learned that I’m the one who feeds her, and she meows accordingly. A robot cat with facial-recognition AI could behave the same way and make me feel similarly appreciated. When Netflix makes a bad suggestion, it’s no worse than a comparably enthusiastic friend who has slightly misjudged my interests, but the stakes are much higher in other contexts. Racist sentencing AI is working perfectly, built as it is on institutional racism in policing and politics. If you don’t see the problem there, you won’t recognize it in the algorithm. Automating something doesn’t make it “neutral,” whatever that means, but that is the veneer used to dismiss those who do see a problem. Computers can crunch more numbers than us, but they will not reach any conclusions or insight we have not already asked them to measure. Basing digital intelligence off our intelligence makes for a fun Turing test and some introspection, but maybe it’s climbing a tree to look for fish. If we were to reorient ourselves and cut through the fantasy of omniscient algorithms, perhaps we would decide that some things should not be automated. We have yet to imbue AI with spontaneity, imagination, or mercy. Recognizing that we are as much of this world as robots are means conceding that both humans and robots are fallible.
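To make the “computers only measure what we ask them to” point concrete, here is a deliberately tiny sketch, nothing like Netflix’s actual system, of a recommender that can only surface titles already co-occurring with my viewing history; the user names and titles below are invented for illustration.

```python
# A minimal, hypothetical recommender: score unseen titles by how often
# they co-occur with what I've already watched. It can only echo the
# history it is fed; any skew in that history is reproduced, not questioned.
from collections import Counter

# Invented viewing histories, purely illustrative.
histories = {
    "me":    ["Paprika", "Tampopo", "After Life"],
    "user2": ["Tampopo", "After Life", "Shoplifters"],
    "user3": ["Paprika", "Perfect Blue", "Shoplifters"],
}

def recommend(user, histories, k=3):
    seen = set(histories[user])
    scores = Counter()
    for other, watched in histories.items():
        if other == user:
            continue
        overlap = len(seen & set(watched))   # how similar their taste looks to mine
        for title in watched:
            if title not in seen:
                scores[title] += overlap     # weight unseen titles by that similarity
    return [title for title, _ in scores.most_common(k)]

print(recommend("me", histories))  # -> ['Shoplifters', 'Perfect Blue']
```

If the histories handed to it skew in some direction, the recommendations skew the same way; nothing in the code can notice that, let alone object.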
Material Bot
Even if we are all star stuff, robot pets are still a unique configuration. The robot cat and Aibo, the robot dog, are taxonomically Socially Assistive Robots (SARs): they “assist” humans individually in a more general sense (rehabilitation, convalescence, education). An airport kiosk robot, by contrast, is a Socially Interactive Robot (SIR), in that it must interact on a more human level through speech or text. The “social” or sometimes “domestic” specification is in contrast to “industrial” robots, used in a specific and controlled setting. The difference between “robot” and “machine” seems to rest mainly on being bio-inspired, and maybe humanoid or performing a human-like task, only more precisely.(5) This could explain why a washing machine, which does a human task, is not referred to as a washing robot. Robots are physical, and while people certainly can become fully invested in entirely digital pets, the particular combination of computational ability with physical presence has particular powers. Because animal robots are not expected to demonstrate human abilities, we approach them with suspended disbelief and more curiosity. PARO was specifically designed as a harp seal rather than a more domestic animal, so that people would have fewer preconceptions about how to interact with it. These creatures are always only possible companions, doubly distant as animal and artifact, and thus incapable of threatening any imagined human exceptionalism. This allows them to cuddle up close and surprise us with how relatable they are.
The physical presence of robot pets is crucial to their ability to stage therapeutic settings, which is what PARO was primarily developed for (with the added benefit of being significantly more predictable than traditional service animals). PARO is certified by the FDA as a neurological therapeutic device, shown to reduce stress and to help therapists make patients more receptive. Studies indicate a plain (non-robotic) stuffed animal can also help, though not as much. In terms of reducing agitation or reliance on medication, however, visits with PARO or a stuffed animal were not shown to be significantly helpful. Other research also indicates a decrease in interest in the pet over time. The crucial limiting factor here is that the researchers treat the robot pet as a tool and not like the companion pets from which it borrows its effectiveness. Because the device is only intermittently available, the elderly in these studies cannot build the same personal relationship they would with a permanent fixture. You may love the convenience of watching other people’s dogs in the dog park, but in your heart you know it’s not the same.
Physicality allows for touch and gesture, a hypersensitive layer of human language, but it also comes with bodily complications. There are difficult elements of care, both with the elderly and with our pets, for whom decisions must often be made without their input. For pets, I would argue we owe them the best life possible in relation to their animal needs. But where does a $250 teeth cleaning fall on the scale from “reasonable” to “luxurious pet castle”? My cat has a little plaque, but she also is, bluntly, not likely to live more than ten years from now. Am I a benevolent dictator when I let her have only a little salami? Unlike children, who will grow up, deal (or not) with the trauma of growing up, and go on, when my cat dies I will only have myself to reflect on. I am “lucky” in that my childhood cat was indoor-outdoor and met his fate, we suspect, via coyote. Natural, if gruesome. My current cat is indoor only. As we increasingly live longer and move our elderly into assisted living—either by choice or necessity—there are similar situations without answers. We can aim to always minimize pain and fulfill corporeal needs, but what does that look like in practice? What to do with a dead cellphone is, in contrast, much simpler. But be honest, have you ever “accidentally” thrown a battery away?
If the robot cat is a replacement for a real cat, is it also, on some level, a human replacement? I don’t mean as in “stealing someone’s job,” but in perhaps a more sinister sense, as in permitting us to get away with not caring for each other—a replacement for visiting family members or seeking out new friends. A similar case study: I have recently been subjected to a round of targeted advertising selling text-based therapy, some with humans, some not. Apps like Woebot are built on more fixed CBT structures, whereas ones like Tess use “artificial intelligence” (though they don’t elaborate) to chat with you in a supposedly personalized way. Given how easily Joseph Weizenbaum’s secretary (whose name never appears in accounts of this story) was able to confide in the very rudimentary chatbot ELIZA in 1966, the appeal of these options is no surprise. They’re private, non-judgmental, and significantly easier to access than an actual therapist. In this sense, they’re a sort of glorified and more responsive journal, not in itself a bad thing, but when a surge of people requiring competent mental health services is instead met with this particular treatment model, the news is not exactly worth celebrating. Similarly, the loneliness of seniors being attended to by stuffed cats is not necessarily unkind, but used as a total solution it could be.
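It is worth remembering just how little machinery that 1966 confidant had. Below is a toy sketch in the general spirit of ELIZA, keyword patterns plus pronoun reflection, not Weizenbaum’s actual script; the rules and the sample exchange are invented for illustration.

```python
# A toy, ELIZA-style responder: match a keyword pattern, reflect the
# speaker's pronouns back at them, and slot the result into a template.
import re

# Reflect first-person phrasing back at the speaker.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# (pattern, response template) pairs, entirely illustrative; the last
# catch-all keeps the "conversation" going no matter what is said.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(line):
    for pattern, template in RULES:
        match = re.match(pattern, line.lower())
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(respond("I feel nobody listens to my ideas"))
# -> "Why do you feel nobody listens to your ideas?"
```

A handful of regular expressions and a pronoun table are enough to feel, for a moment, like being listened to, which is both the appeal and the problem.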
The millennial version Instagram is also trying to pitch me is called Qoobo, a “Tailed Cushion That Heals Your Heart” with the “healing power of animals.” The $150 robot made an appearance at CES, and is cute reduced to its simplest elements: a smushed orb with a tail. A promo ad shows a woman just moving to New York and being repeatedly given a Qoobo, first by her mother who worries she’s lonely, then by her landlord who doesn’t allow pets, then by her boss who is overworking her, and finally by a doctor who emphasizes its therapeutic qualities. This approach to robot companionship is more in line with, or at least compatible with, self-help consumerism; you don’t need a total overhaul of the healthcare system, just a really good skincare routine. This doesn’t do us, or any sort of pet, any favors. My cat is more than a temporary stop-gap measure for my mental health, she is an added enrichment to my life, and I think a robot one has the potential to be the same.
In her Companion Species Manifesto, feminist technoscience theorist Donna Haraway describes dogs as “A species in obligatory, constitutive, historical, protean relationship with human beings. The relationship is not especially nice; it is full of waste, cruelty, indifference, ignorance, and loss, as well as joy, invention, labor, intelligence, and play.”(6) I’m not sure Haraway would agree, but I would also extend this description to robots/machines. The difference is that some of the waste and cruelty is inflicted on other humans, in the making of robots and in the uses of computation. To acknowledge the scarcity and value of the minerals in my phone, to consider them near-irreplaceable in the same manner as my kidney, does not imbue the machine with a soul, but maybe renders us more united in the expansive Buddhist sense. Taking the “significant otherness” of animals and machines seriously means valuing the other because it is other; it means refusing both to assimilate the other and to rank it in a hierarchy, while still recognizing we are all in the same soup. It is incomparable because it is not just different, it is other.(7) Haraway also notes “dogs are not surrogates for theory… They are here to live with.” (103) And I would emphasize this as being true for robots as well. I am asking seriously: what care do we owe our robot cats? Could they be heirloom quality, like my grandmother’s beautiful weird-green stand mixer? The modern fans of records, cassette tapes, and Polaroids are strong evidence that we hold on to technology, if given the chance to form that bond. Even though I often must hand-feed my printer one sheet of paper at a time so it doesn’t jam, I refuse to replace it; I reimagine its malfunction as an endearing demand for attention.
Animals can also befriend robot animals. Headlines like “Rats Can Make Friends With Robot Rats and Will Rescue Them When Stuck” wring out an even stronger feeling in me than the robot cat reviews. I don’t think the question of whether or not the rats think it’s a real rat is relevant or interesting, but the fact that they are so willing to extend their comparatively tiny abilities for such an obviously different entity is inspiring. Humans have also built robots to look after pets: in the case of the study “Intelligent Autonomous Robot Supporting Small Pets in Domestic Environment,” the focus was on a robot that helped smaller pets like hamsters not get stepped on. It is of course impossible to say how the hamsters felt about this, though given the number of videos of pets riding Roombas, I suspect the more tailored floor companion is not entirely unwelcome. Though what we owe robots has not received much thought, our conscience does pipe up more frequently for animals, sometimes in questionable ways. We’re grappling with new technological questions for them too, such as, “is it ethical to livestream animals?” If we dismiss the human ego and consider the autonomy of animals, the immediate answer is “No”; if we are willing to invest the time in seeing the nuance of their otherness, the answer becomes “Maybe.”
I am dreaming up some new myths because the current ones do not reflect presents or bring about futures that I want to inhabit. “They say the new clean war will be carried out by drones. We want to make love with drones.”(8) The peace I feel when my cat is asleep on my lap, but also my complete inability to move her, is nothing short of a love indistinguishable from magic. Neither a robot nor a human could replace her, or each other, nor she either of them, and we ought to stop playing this fantastical game of musical chairs. If we can scrape together some humility, maybe some curiosity, and embrace the otherness in each other and all entities, there’s no guarantee of any answer, but I’d like to give it a try. Tibetan Buddhist teacher Pema Chödrön prods, “When you wake up in the morning and out of nowhere comes the heartache of alienation and loneliness, could you use that as a golden opportunity?”(9)
Footnotes
1. Sone, Yuji. Japanese Robot Culture: Performance, Imagination, and Modernity. Palgrave Macmillan, 2017.
2. Lai, Whalen. Buddhism in China: A Historical Survey.
3. Sone, Japanese Robot Culture, p. 175.
4. Kapleau, Philip. The Three Pillars of Zen. Anchor Books, Doubleday. p. 79.
5. The word robot was originally coined to refer to forced laborers; in a way, the term sprang fully formed into disdain.
6. Haraway, Donna. Manifestly Haraway. University of Minnesota Press, 2016. p. 103
7. Han, Byung-Chul, et al. The Agony of Eros. MIT Press, 2017.
8. Preciado, Paul B., An Apartment on Uranus: Chronicles of the Crossing. Semiotext(e), 2020. p. 44
9. Chödrön, Pema, and Eden Steinberg. The Pocket Pema Chödrön. Shambhala, 2017.