
What Dog Training Taught Me About Building a Trustworthy AI Chatbot for Later-Living Communities
When I first discovered ChatGPT, I was fascinated. No coding – just talk normally, and magic happened (mostly). As I learned more about large language models (LLMs), the vocabulary felt oddly familiar: Reinforcement Learning from Human Feedback (RLHF), consistency, context sensitivity.
It struck me that dogs and AI have more in common than you might think. I spent a decade teaching dog owners those same principles so they could communicate better with their pets. More recently, I've used that approach to build a voice-enabled chatbot to support residents in later-living communities.
Our chatbot answers questions like “How does this oven work?”, “What time is yoga?”, or “Is this text likely to be a scam?” in a carefully designed, conversational tone. As I learnt, LLMs, like dogs, respond best to context, patience, and consistent, thoughtful prompts.
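To make that concrete, here is a minimal sketch of the kind of system prompt that shapes such a tone. The wording, the model name, and the openai client usage here are illustrative assumptions, not our production setup:

```python
# Illustrative only: a hypothetical system prompt for a later-living
# community assistant. The prompt wording and model name are
# assumptions for this sketch, not our production configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """You are a friendly assistant for residents of a
later-living community. Speak in short, warm, plain-English sentences.
If you are unsure of an answer, say so and suggest asking a staff member.
Never give medical, legal, or financial advice."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is this text likely to be a scam?"},
    ],
)
print(response.choices[0].message.content)
```

A clear, consistent system prompt is the closest thing the chatbot has to a trained cue: it sets the behaviour before the conversation even starts.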
From Labradors to Large Language Models
Neither dogs nor AI truly understand language the way humans do; they both predict what’s likely to happen next rather than grasp meaning.
Lassie sits because she predicts a treat. ChatGPT predicts the next best word, then the next, almost ad infinitum – predictive text on steroids.
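You can watch that prediction loop happen in a few lines of Python. This toy sketch uses the small, open GPT-2 model via Hugging Face's transformers library (an assumption for illustration; ChatGPT works the same way in principle, at vastly greater scale):

```python
# A toy illustration of "predictive text on steroids": greedy
# next-token prediction with GPT-2, one token at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The dog sat because it predicted"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(10):  # predict ten tokens, one after another
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()  # the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop asks only one question: given everything so far, which token is most likely next?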
Both can be surprisingly thrown off by small changes in tone or context: your dog may stare at you blankly; your AI hallucinates, confidently making things up. (Yes, I know, I'm anthropomorphising already.)
Fortunately, my years as a dog trainer taught me to look first at how the cue was given, not to blame the dog. Prompts, like dog training cues, work best when they’re clear, consistent, and tested (in dog training we call that “proofing” – making sure the cue holds up in varied environments).
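In code, proofing a prompt can be as simple as asking the same question several different ways and checking that the answers hold up. Here is a hypothetical sketch; ask_chatbot stands in for whatever LLM call you actually use:

```python
# A sketch of "proofing" a prompt, borrowed from dog training: the same
# cue, given in varied ways, should produce the same behaviour.
# `ask_chatbot` is a hypothetical helper wrapping your LLM of choice.
VARIANTS = [
    "What time is yoga?",
    "when's yoga on",
    "Yoga class - what time does it start?",
]

def proof_prompt(ask_chatbot, variants=VARIANTS):
    """Send each phrasing and collect the answers for manual review."""
    answers = {v: ask_chatbot(v) for v in variants}
    for question, answer in answers.items():
        print(f"{question!r} -> {answer!r}")
    return answers
```

If the answers drift when the phrasing changes, the problem is usually in the prompt, not the model – just as a cue that only works in the kitchen was never fully trained.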
That said, while dogs may learn through patterns, they are far more than reinforcement-driven predictors. They are emotional, social, living creatures with a rich, embodied awareness, far beyond what an LLM can replicate (at least for now…). A dog senses our mood, our body language, our shared history. It’s a world of meaning woven from far more than pattern matching.
Still, it’s worth recognising that LLMs, in their own statistical way, do build a kind of structured knowledge from the vast amounts of data they are trained on. It may not be grounded in bodily experience, but it does capture relationships between ideas, words, and even social cues. That’s why, with good prompting, they can seem so conversational: they echo patterns from billions of examples.
And, just as a dog can fixate on the crisps in your pocket, an AI can pick up biases you never intended and repeat them with impressive reliability. A Labrador and a language model both learn from what we reward – whether or not it makes sense.
We Love to Humanise Our Helpers
Then there’s our very human urge to anthropomorphise. We name our cars, get cross with them for not starting (“it would choose today!”), and suspect our dog of being deliberately “naughty” rather than confused.
But even though both can seem to understand every word you say, neither dogs nor AIs share our goals, values or morality; they’re just doing what works for them, without judgment or reflection.
Both need a human in the loop if they’re to navigate modern life safely. Without our guidance, either could become a hazard to themselves and to us.
Designing a Better Companion
We’ve shaped today’s domestic dogs – through selective breeding and training, moulding them to live alongside us as companions and helpers – almost as deliberately as we are now shaping AI.
Maybe it’s time to invite an applied ethologist onto AI advisory boards: someone who not only studies animal behaviour but also applies that knowledge to help animals thrive in human environments.
Their skill in understanding non-human learning, emotion, and motivation could offer valuable insights for building safer, more human-centred AI systems.