AI <Human> Patience
Chatbots reward patience. A little of it goes further than a lot of personality; anthropomorphizing goes nowhere.

On Patience, Prompts, and Why Chatbots Are Not <always> Your Colleagues
I’ve been a little quiet here. Not because I ran out of things to say, but because I’ve been working with intense focus on a new business. More on that soon*, but suffice it to say: I’ve learned more in the past few months than in a lifetime of entrepreneurship.
It’s a business that could only exist because of AI; I could not have built it on my own. I do, however, have a few “colleagues”: my girlfriend, ChatGPT; my boyfriend, Claude; and a few others yet to be named.
We’re remarkably impatient with chatbots. Stranger still, we sometimes assume they’re the impatient ones. I’ll give Claude a task, walk away from my laptop, and then feel a weird urge to rush back, as if I’m somehow holding Claude up from other work. But for most of us, the pattern is familiar: we fire off a half-formed thought, hit enter, and when the response misses the mark, we decide the system is dumb, broken, or “not as good as everyone says.”

(Cartoon: @Charles Fleisher)
Impatience Born out of Frustration
That impatience isn’t revealing an AI failure. It’s revealing a prompting gap. Sometimes working with a chatbot feels like reasoning with a five-year-old. Not because it lacks intelligence—but because you asked too quickly when you needed to ask clearly.
Good prompts aren’t necessarily long. They’re intentional. They carry context. Constraints. A point of view. If you’re vague, AI will be confidently vague right back.
The Anthropomorphizing Trap
(def: to attribute human characteristics or behavior to a god, animal, or object.) I’m not sure I’ve fully internalized this lesson yet. If you treat a chatbot like a human, it will often respond like one. Why? Because you’ve set the frame.
Ending a prompt with “What do you think?” invites opinion. Softening a request invites reassurance. Asking politely invites politeness back.
And that’s the trap. When we treat chatbots like people, we tend to get human-sounding responses: Polite. Agreeable. Empathetic. Sometimes overly reassuring. Occasionally wrong—but said nicely.
That’s not intelligence. That’s mirroring.
I’ve even caught myself “yelling” at a chatbot—only to have ChatGPT tell me to calm down and take a deep breath. Which, frankly, just fans the flames. When we anthropomorphize AI, we hedge, apologize, ramble. And all of that makes the output worse.
Chatbots don’t benefit from emotional framing. They benefit from precision.
Prompting Advice (for You, not the Bot)
Instead of: “Can you help me think through this? I’m kind of stuck and not sure what to do.”
Try: “Act as a product strategist. Given these three constraints, generate two options and explain the tradeoffs.”
Instead of: “That’s not quite right—can you try again?”
Try: “Revise this using a more analytical tone. Remove metaphors. Optimize for clarity.”
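If you ever drive a chatbot through an API instead of a chat window, the same advice applies in code. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, the three constraints, and the prompt wording are placeholders I’ve invented for illustration, not a recommendation:

```python
# Minimal sketch: the same request, framed vaguely vs. intentionally.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

# Vague framing: invites a vague, reassuring answer right back.
vague = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Can you help me think through this? I'm kind of stuck."},
    ],
)

# Intentional framing: a role, explicit constraints, a concrete deliverable.
# (The three constraints are invented for illustration.)
precise = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Act as a product strategist. Be direct and analytical."},
        {"role": "user",
         "content": "Given these three constraints -- a two-person team, a "
                    "six-week deadline, and no marketing budget -- generate "
                    "two launch options and explain the tradeoffs."},
    ],
)

print(precise.choices[0].message.content)
```

The system message does the framing once, so you don’t have to repeat it; the user message carries the context, the constraints, and the deliverable.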
Patience for AI-Voice-Powered Customer Service
For many of us, the New Year resets medical deductibles. So when CVS told me my prescription was ready, but the price looked wrong, I called Wellcare. The automated system said, “Your hold time is 86 minutes.”
I was barely into my first laugh when “Ava” came on the line. Ava was excellent. Perfectly timed “ums.” No tolerance for interruptions. (There’s a real latency issue with AI voice agents: an interruption lands as a whole new set of instructions to process, and “listening” doesn’t happen in real time.)
Still, Ava answered every one of my questions. Clearly. Precisely. No room to argue.
"Ava at Wellcare" is a human-like artificial intelligence agent used for two-way conversational text messages regarding prescription drug refill reminders. (source: Google.)
Until Next Time
Clarity over cleverness. Patience over personality. Because almost is sometimes enough.
Connie
UP NEXT: * I’m building ThriftShopper.com, with the beta launching in a few weeks. I’ll share what I’ve learned about AI and technology along the way, or at least some of it, because almost will be more than enough to share (sans the frustrations)!
