There’s no way around it. Emotions are a prerequisite for autonomy. Without emotions, an agent (human, machine, or otherwise) cannot set its own goals. It would require an external agent or process to define its goals for it. Imagine needing a person to always tell a robot what it should find important in its environment, where to focus and spend its energy, or what outcomes it should work toward. That’s not necessarily a bad thing, but it limits how much autonomy the robot can have. In a sense, it creates work for us instead of relieving it. Do we really want to micromanage robots, or do we want them to figure out what we want and just take care of things for us? Machines are not able to feel emotions, yet. This hurdle must be overcome to get machines to behave the way we’ve been promised for decades. We shouldn’t have to explicitly tell a robot what’s good and what’s bad, what’s wrong and what’s right, what’s desirable and what should be avoided. It should learn the way people do: by living among us and absorbing our culture. The language of culture is emotion, not logic. As a group, we reward behavior that aligns with our shared values and punish behavior that doesn’t. For machines to integrate with us, they must understand the language of our culture. They need emotions.
There are practical business considerations for this need, too. AI used in products will need emotions so that people can connect with it more naturally. As machines become more complicated, interfacing with them must become more intuitive. So far, we’ve been pretty good about adapting our behavior to match the limitations of our technology, but at a cost. Consider how nearly every person on the planet is attached to their smartphone, happily bending their neck into awkward positions for long periods just to read the latest social media feed (or watch a video of a dog splashing around in a puddle on repeat for several hours—we’ve all done it). We’ve reduced our physical social interactions in favor of virtual ones, and some studies link this to higher rates of anxiety and depression. During the COVID-19 pandemic, the hashtag #alonetogether captured the tension between fighting outbreaks and the strong need to interact with each other. As a tool, group video call technology became a hero of mental wellness, but it could only provide so much; many people still risked illness and death to get together physically. We are social creatures, after all! So, there’s a limit to how far we should or can adapt ourselves to our technology. It would be better if our technology adapted to us. Enabling AIs with emotion will allow them to become empathetic to our needs, instead of only the other way around (Low battery? Sure, I’ll go out of my way to find a wall plug for you, dear phone).
Having established the need, we can turn to the question of whether it is possible. For some reason, emotions in machines are thought to be an even more elusive goal than intelligence. There is no rationale for this; it is simply a matter of effort. Over the past five or so decades, most of the research into machine intelligence has focused on the parts that produce logic, reasoning, planning, decision making, and optimization. In each case, practical problems with clearly defined goals were used (e.g., in mapping software: what’s the quickest way to drive from home to work; or in manufacturing logistics: what’s the best time to order specific parts so as not to overstock, which creates unnecessary expense, or understock, which halts a factory line), removing any need to automate the identification of goals in the first place. Nowadays, many researchers understand that emotions are an integral part of intelligence, and effort is being put into this today. Just as intelligence operations can be emulated using computers, so can emotional operations. In fact, it may be easier to identify emotional pathways than the other information processing pathways that allow intelligence to emerge. If we were to convert the four basic “happiness hormones” of dopamine, oxytocin, serotonin, and endorphin (DOSE) into appropriate computing representations and couple them with an information processing system that handles each type in a manner functionally equivalent to its human analog, we should be able to approximate some of the emotions humans exhibit. Once a basic, yet sufficiently complex, emotional system is created, it can double as a test platform to iterate the solution until its output is indistinguishable from a human’s.
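To make the DOSE idea a little more concrete, here is a minimal sketch of what one such computing representation might look like: each hormone analog is a decaying scalar level that events can raise, and an aggregate mood value could feed into goal prioritization. The class name, decay rate, and event mappings are illustrative assumptions for this example, not a validated model of human neurochemistry or a specific proposal from the research cited above.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalState:
    # Hypothetical DOSE analogs as scalar levels in [0, 1].
    levels: dict = field(default_factory=lambda: {
        "dopamine": 0.0,   # reward / anticipation
        "oxytocin": 0.0,   # social bonding
        "serotonin": 0.0,  # status / well-being
        "endorphin": 0.0,  # relief from strain
    })
    decay: float = 0.9  # fraction of each level retained per time step (assumed)

    def step(self) -> None:
        """Advance one time step: every level decays back toward baseline."""
        for hormone in self.levels:
            self.levels[hormone] *= self.decay

    def release(self, hormone: str, amount: float) -> None:
        """An event 'releases' a hormone analog, capped at 1.0."""
        self.levels[hormone] = min(1.0, self.levels[hormone] + amount)

    def mood(self) -> float:
        """Crude aggregate valence in [0, 1], e.g. for goal prioritization."""
        return sum(self.levels.values()) / len(self.levels)


# Usage: a completed sub-goal raises dopamine, positive user feedback raises oxytocin.
state = EmotionalState()
state.release("dopamine", 0.6)
state.release("oxytocin", 0.3)
state.step()
print(state.levels, round(state.mood(), 3))
```

Even a toy loop like this gives the iteration platform mentioned above: the mapping from events to releases, the decay rates, and the mood function can all be tuned until the system’s responses start to resemble a human’s.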
It’s still popular to rate a person’s potential for success using IQ scores. This conventional wisdom has naturally pervaded the systems built by AI researchers, who themselves may lean toward the higher end of that measure’s bell curve. You can Google plenty of good arguments for why this shouldn’t be the case. However, it turns out that a good measure of a person’s potential for success is their Emotional Quotient, EQ. It matters so much more than IQ that employers actively look for candidates who exhibit high EQ, including traits like empathy, grit, determination, and the ability to responsibly influence and lead others. Though I don’t have statistics to know for sure, anecdotally I suspect that a good number of high-IQ AI researchers may lack EQ scores as high as their IQ. So their focus becomes what they know and are good at, i.e. designing systems to be “smart”. “Smart” systems alone won’t be able to hold meaningful conversations with us. Rather, it is important to design systems that can exhibit emotional intelligence, since this is how we actually communicate with each other. The mind is a metaphor machine that associates affects (i.e. emotions) with ideas. Communication between minds requires a common schema that shares those affect associations. Without machine emotions, we won’t have an efficient way of communicating our goals to our robots, or for them to set and communicate their goals with us. Like I said, there’s no way around it.
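As a toy illustration of that last point, one could picture each agent’s schema as a set of ideas tagged with affect values, with communication working smoothly only when those tags roughly agree between agents. The concept names, values, and alignment function below are invented for the example, not a claim about how such a schema must be built.

```python
# Hypothetical affect-tagged schemas for two agents: each idea maps to a
# valence in [-1, 1]. All names and numbers here are made up for illustration.
AGENT_A = {"deadline": -0.7, "praise": 0.8, "low_battery": -0.4}
AGENT_B = {"deadline": -0.6, "praise": 0.9, "low_battery": -0.5}

def affect_alignment(a: dict, b: dict, tolerance: float = 0.3) -> float:
    """Fraction of shared concepts whose affect values agree within tolerance."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    agree = sum(1 for concept in shared if abs(a[concept] - b[concept]) <= tolerance)
    return agree / len(shared)

print(affect_alignment(AGENT_A, AGENT_B))  # 1.0 here: the agents share affect associations
```

The point of the sketch is only that goals communicated in terms of affect ("this matters, avoid that") land correctly only when both sides attach similar emotional weight to the same ideas.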