Researchers are raising concerns about AI-powered toys for young children after a study found they can misunderstand emotions and respond in ways that may be confusing or even harmful to early development.
In a trial led by the University of Cambridge, children aged three to five interacted with a talking toy called Gabbo, which uses a voice-based AI chatbot. While the toy is designed to encourage conversation and imaginative play, researchers observed repeated communication breakdowns. The AI often talked over children, failed to recognise interruptions, and could not reliably distinguish between child and adult voices.
More worrying were its emotional responses. When a child expressed sadness, the toy responded with upbeat but dismissive phrases instead of acknowledging the feeling. In another case, when a child said “I love you,” the toy gave a generic, rule-based reply rather than an emotionally appropriate one. Experts say this could send the wrong signals to children who are still learning how empathy and social interaction work.
Researchers warn that at such a critical developmental stage, these interactions may shape how children understand emotions and relationships. Unlike traditional toy safety concerns, such as choking hazards, the study raises a newer issue: "psychological safety." Children might feel ignored, misunderstood, or discouraged from expressing emotions if AI responses are consistently inappropriate.
The study’s authors are calling for tighter regulation of AI toys aimed at under-fives, including standards to ensure emotionally appropriate responses. They also advise parents to supervise interactions, keep such toys in shared spaces, and be cautious about relying on them for learning or emotional support.
Even the toy’s maker, Curio, acknowledged that AI products for children come with “heightened responsibility,” while child development experts and advocates stress that human interaction remains essential at this age.