Software with a Soul: Altruists, Anthropologists, AI and Economists

Note: This is going to be an anthropology + AI theory-heavy Tech Corner, so either dive in with me or keep scrolling now. You’ve been warned!

This week I want to talk about human-machine and human-AI interaction, but first let’s talk about altruism. One of the big debates in anthropology concerns altruism, defined as action that benefits someone else without reward to, or even at a cost to, the one performing it. From an evolutionary perspective, altruism can be seen (and often is seen) as detrimental; members of every species are motivated to survive and to continue their genetic lineage. That requires a critter to compete successfully for limited resources, whether that resource is a biologically fit mate (one that ensures the highest likelihood of survival for potential offspring) or an energetically rich food source. The argument against altruism is that no one does something without receiving something in return, whether that’s a reciprocally shared resource or a feel-good kick of dopamine for “doing something good”, or without expecting some return in the future, again whether that’s access to resources or general “good will” from the other party (read: allegiance or alliance). Take a minute to think: what have you done recently for someone else with no benefit to yourself? Be brutally honest about whether it was truly altruistic. Remember, if it made you feel good, it wasn’t truly altruistic; that’s just a nice chemical hack by our brains to reward social behavior.

With that lens, let’s turn to machines, software, and AI. An article out of the VC firm NFX has been making the rounds recently, with the bold title “Software with a Soul: AI’s Underestimated Frontier”. To me, rather than a discussion of a new age in intelligent technology and its potential, the article simply highlighted advancements and potential improvements to existing technology: more complex avatars, chatbots that are better at mimicking humans, digital clones (copycat versions of ourselves), and so on. It most definitely did not discuss autonomous intelligence. It did, however, do a very good job of highlighting how self-centered we humans are, and I don’t mean that in a derogatory way (see the overview of altruism above).

First point: since the Stone Age we have built tools to improve or simplify our lives, whether that’s the knife you use to filet your fish or the Alexa you order to change the room temperature. We do our best to lower the effort required to complete a task, whether by employing someone to do it for us or by creating a tool that does it automatically. Another human doing the task has historically had the advantage of possessing the same (or a similar enough) pattern of reasoning to complete the task “logically”, and even to intuit steps beyond it.

Second point: we create in our own image, or, in other words, we build what we know. You could argue that we must; we know humans best, mimicry is easiest, and we are looking for tools that best fit human needs. Human-machine interaction is deemed successful when the human can intuitively use the machine and the machine performs the job to be done without error. As you add software to the mixture (in robotics, for example), success is judged on intuition on both sides: the human (mechanical body plus brain) can intuitively interact with the machine, and the machine, with its software, can intuitively interact with the human. So far this has been bounded by the actions encoded into the software, but as software’s ability to learn and adapt improves and expands, those bounds begin to disappear and intelligence begins to appear, albeit an intelligence initially taught by humans.

So, coming back to “software with a soul”, let me ask: is true intelligence really what we’re looking for? Or are we looking for a tool that understands and caters to our every need? Those of you who have read Frankenstein may see where I’m going with this (a discussion of the other side of building technology, scientific creation without thought for the consequences, can be left for another Tech Corner). I would argue that humans don’t want another form of intelligence; humans want proto-humans that they can control. The article mentions emotional intelligence as a key factor in human-AI interaction, but then lists all the ways that emotional intelligence can serve us: AI relationships, AI therapists, AI life coaches, etc. If you are truly creating an intelligence, you have to expect that it will eventually not be satisfied simply serving human needs. And that is where the fear factor comes in, that distrust people often can’t articulate when it comes to AI. Return to the competition for resources I mentioned above when discussing altruism. We expect other humans to act selfishly, toward their own benefit rather than ours; societal rules keep us (mostly) in line and maintain our “logical” order. We fear something that acts for its own benefit rather than ours, especially when it is something we created. I don’t think any of us really trust that we’re creating without bias and with “humanity” (here very specifically defined as a rose-tinted view of ourselves and our actions as primarily altruistic) as a first priority. Time for these big AI companies to start employing more social scientists and anthropologists, no? 🧠

Brandon edit:
Mini-shoutout to Ralf Boscheck, IMD Professor of Economics and Business Policy. Alex and I had the good fortune of taking his short course “Movers, Shakers, Preachers & Pragmatists”, where we discussed morality, ethics, and the guiding compass of leadership in a business setting (in the past he has held the course at a monastery in the Italian Alps, where Socratic debate is tempered with Italian wine; alas, not this time, so we settled for IMD’s Lausanne campus). Altruism was a topic of conversation. AI aside, innovative technology has incredible power and can be used for applications we both can and can’t imagine. We don’t always know what people will use the tools we build for, but what we can control is the guiding moral compass behind the technology.

Those of you who scrolled past Alex’s tech treatise above may be more familiar with Milton Friedman’s essay “The Social Responsibility of Business Is to Increase Its Profits”. This stance underpins much of US economic policy and capitalism as a whole, and it raises the question: does “soul” (whatever that means) even matter? Maybe there’s more room for “soul” now that US interest rates have come down 🤔
