Truth in bots

All day we interact with others.

And sometimes, they’re bots.

Perhaps you’re in a chat room, and after a few Eliza-quality back-and-forths, you realize that this helpful voice isn’t actually a voice at all; it’s simply a bot, here to interface with a tech support database.

Or you’re talking to a next-generation bot on the phone, and you’re a minute or two into the interaction before you realize you’re being fooled by an AI, not a caring human.

Wouldn’t it be more efficient (and reassuring) to know this in advance?

But we can take this further. If you’re on the phone with American Express and the person you’re talking with has no agency, no ability to change anything, and no incentive to care, wouldn’t it be helpful to know that before you had the conversation?

Or what about the publicist or direct marketer, sending you an email that purports to be personal but is in fact only personalized? Spam decorated as human interaction is still spam.

The problem with not labeling bots is that soon, we come to expect that every interaction is going to be with a bot, and we fail to invest emotional energy in the conversations we could have with actual people. I feel bad for all the actual customer service professionals (doctors, bureaucrats and others who help) who have to deal with impersonal interactions simply because their customers have been fooled one too many times.

The bots should announce, “I’m not a person, or if I am, I’m not allowed to act like one.”

Or, if there’s no room or time for that sentence, perhaps a simple *bot* at the top of the conversation. That way, we can save our human emotions for the humans who will appreciate them.