What It Feels Like for an Algorithm
Inspiration for this blog post:
The New Yorker’s The A.I. Issue (no really, the whole issue)
An algorithm cannot feel - at least not in the way we currently understand feeling, since the concept has been created and articulated through a human lens. To feel is a human trait, defined by the beings experiencing the feeling. And we have SO MANY FEELINGS. At this very moment of writing, I am feeling frustration and annoyance and impatience because I cannot log into ChatGPT. Oh wait - I just tried again so I could grab a screenshot to share, and now I am feeling slightly embarrassed and confused, because this time I was able to log in. I really wanted that screenshot, so I even took the risk of not getting back into ChatGPT by signing out and trying to trigger the error page again. No luck - everything worked as expected and I got logged in. So now I am annoyed again because the thing I wanted to fail actually worked, and confused again because I should be glad it worked the way it was supposed to.
It can be fun to ask Siri how she is feeling - she usually has something snarky to say about it. ChatGPT just responded to my question with “I'm just a computer program, so I don't have feelings or emotions. But I'm here and ready to help you…” What does it mean, “I’m here”? What does it mean, “I am”? Is the “I” in that sentence a thing separate from the people who make it? Is it separate from the person asking the question?
James Somers, in his article “Begin End,” describes the way computer programmers may have felt (may still feel) when writing code for a computer to do something: “Imagine explaining to a simpleton how to assemble furniture over the phone, with no pictures, in a language you barely speak.” Somers follows this by describing what the programming language’s response might feel like - a suggestion to the human that they have “suggested an absurdity and the whole thing has gone awry.” It isn’t hard to imagine a sentient computer pinching the virtual bridge of its nose with its virtual fingers, sighing audibly, and shaking its head as it tells you, “OK, let’s go back to square one.”
It can help us to personify our technology. It can help us deal with the flood of emotions that we point toward it, because there is almost nothing more infuriating than getting no emotions back from a thing you are emotionally invested in. This is one of the keys to technology learning for mission-driven people. So much of our work has to do with our love and emotions for people (or for animals, for the environment, for the arts - whatever the mission of the organization is). Our emotions and feelings push us past the lower salaries, the red tape, the systemic roadblocks, the uncertainty, the setbacks, the burnout, the despair. We have deep within us such a well of human feeling that we can work to change what oftentimes feels impossible to change. What we suggest with our missions might sound absurd - that we are currently working to make change in the world that has been needed for so long and that we will still likely be working on for generations to come. And we can have what race-car driver Jackie Stewart called “mechanical sympathy” - understanding technology in such a way that you can make it work for you.
Have sympathy for the algorithm that is ChatGPT. It is going to take a while to build the relationship and learn how to talk to each other. Many of my conversations with ChatGPT include a response from me that looks something like “No, that wasn’t quite it, let’s try again” or “I tried that but it didn’t actually work how I wanted it to” - and then there are other times when I respond with something like “OK, I hadn’t thought of that before - what if we go down that route a little more?” I like to be conversational with my algorithms. This mechanical sympathy is actually sympathy for ourselves - giving ourselves space to not know everything, to mess it up, to have to try again, and to teach ourselves based on this new experience. That is essentially what learning algorithms are trying to replicate - the very act of human learning.