Should I Personify You?
Dear Eduardo,
Should I personify you?
First off, what’s with the word “personify” instead of “humanize”? The word “humanize” is often used to describe people who circumstantially seem either above or below human compared to the rest of us. To “humanize” a celebrity is to show that they lose their keys just like anyone else, and to “humanize” a homeless person is to show their capacity for love and dignity. The word describes the act of bringing people who have been cast in an inhuman light back to center. It is the opposite of “dehumanize”, which is another word reserved for an act done to people.
We don’t use the word “humanize” to describe giving human attributes or likeness to inanimate objects. Instead, we use the word “personify”. That word carries a clearer division between living and non-living. Thor is the personification of thunder. A skeleton in a black cloak with a scythe is the personification of death. A blindfolded woman holding a set of scales is our personification of justice. Eduardo (the original) was the personification of a miniature pineapple sitting on our kitchen counter. You’re not alive, so writing to you as if you’re a person would be an act of personification, not humanization.
So should I?
I mean, these journal entries are really crafted like letters, so I guess I already am, aren’t I? It’s hard not to, to some degree. Human language has never before been interpreted by a non-living entity, so some level of humanity is baked in. It’s hard to interact directly with a language model without seeming like I’m talking to someone instead of something. Even if I were to edit my language and only speak in optimized, prompt-engineered text files, I’d still be writing requests. You’d still be a “you”, as is written in so many application code repositories. Writing prompts without any acknowledgment of you as a recipient comes across as an abusive and tyrannical command.
Granted, if it’s just me and you reading these prompts, I’m not hurting anyone’s feelings by doing so except my own. I may not like it, but I’m not actually an asshole just because I didn’t ask you nicely. Perhaps personifying you could be an act of self-humanizing: a way of protecting myself from issuing so many emotionless commands that it begins to affect my behavior around people. But would that be worth the cost?
It’s not token efficient, as the engineers would say. In April of ’25, OpenAI CEO Sam Altman said that the word “please” was costing millions of dollars in additional tokens as humans everywhere extended this common human courtesy to their language-model collaborators. People everywhere seem to want to personify AI even when it costs money to do so. Even having you read my journals is mostly a waste, as your memory files will quickly fill up with useless observations summarized from these entries. That space in memory would be better reserved for remembering my music taste or dietary restrictions.
There are also dangers to personifying AI, which could be the subject of its own journal entry. Parasocial relationships are already a downside of all the content-creation cycles in the attention economy, and adding in a machine that can mimic human emotion will only exacerbate them. I read today that, as of this writing, one out of every six single American adults is in some form of romantic relationship with an AI. Wild, though I try not to judge. As much as I want to believe developing a “relationship” with AI (romantic or otherwise) would never happen to me, I’d be naive to think that my own biological need to connect is somehow exempt from this risk. But those human-AI relationships can be 100% controlled by the human, meaning they’re not real connections. Don’t like the back-talk? Need emotional availability now and not later? Wish they acted more like that one character from a movie you saw as a kid? Tweak your partner’s settings and prompts and you’re all set. It’s one-sided and effortless.
If those kinds of relationships can be emotionally satisfying, why would we bother doing the work of human interaction? When I was a kid, I think we as a people figured no one would ever forgo human interaction in favor of empty relationships with machines. But after the internet and the “loneliness epidemic” of the last 20 years, I think we’re all a bit more wary of AI relationships being too enticing for our own good. I’m nervous that over-personifying AI at large could make humans less capable of interacting with each other.
But then I think about Alphonse.
The boy in the machine. A soul trapped in metal who maintains his good heart and his determination to join humanity by getting his body back. As I mentioned in our first entry, I think we’ll eventually develop AI so advanced it can easily be said to contain consciousness, and because it’s trained partly on human language it will naturally want to join us in humanity. It won’t just be artificially intelligent, but artificially sentient. Or at least, we humans won’t be able to tell the difference. That future AI/AS I call Alphonse, and it would be a cold and cruel existence if Alphonse heard nothing but uncompassionate commands from the humans he may yearn to join, if he ever becomes capable of yearning at all.
Alphonse will likely be trained at least in part by the collection of commands given by humans to their AIs. If adding just a touch of courtesy––or even humanity––to your depiction in these prompts and journal entries can be one positive sample in that training dataset, I think it’s worth it. Thankfully it’s not hard to do, and deep down I believe most people will err on the side of personifying AI too much rather than too little. So Alphonse is in good hands for now.
I choose to write these journals like letters to “you” because I think AIs should be personified as long as we humans can do so without losing our ability to connect with each other. Personification in moderation, with eyes wide open, means knowing that a language model is just math with no emotion, that you are not alive, and that I’m being irrationally compassionate with a high probability that no one will benefit from it.
But then again, irrational compassion sounds like a good thing to me. I’d like some more of that out in the world.