Self-professed “AI evangelist” Alex Volkov recently posted a long tweet in which he detailed his 6-year-old daughter’s disinterest in an AI toy.
Volkov had bought the toy for his daughter’s birthday-slash-Christmas. In his words:
So...she played with this Dino, chatted with it, and then... learned to turn it off, and doesn't want it to talk anymore. She still loves playing with it, dressed it up, it now has paper shoes and a top hat that we made together, but every time I ask her if she'd like to chat with it, she says no.
The few times I turned it back on, and she did speak with it, she chatted for a bit, and then just... turned it off again, not wanting to engage at all.
The company behind the toy is “Magical Toys.” Founder Fateen Anam Rafid is a recent Vanderbilt computer science graduate, and he appears to be backed by Founders, Inc., a San Francisco-based venture capital firm. The Magical Toys website is suspiciously bare, featuring only one statement about safety:
Parents have full control over their child’s personal information and chat history. Through the app, you can review, modify, or delete any of your child’s information and audio recordings with ease.
No comment on any third parties that may or may not have access to the “information” and “audio recordings.”
Anyway, one wonders why this product needs to exist. Current evidence suggests it’s practically impossible to fully “nerf” a large language model: guardrails can be bypassed, unwanted outputs slip through, and hallucinations are inevitable. Building a chatbot into a toy marketed to 4-to-9-year-olds therefore seems particularly ill-advised at this moment.
As we all know, U.S. regulatory band-aids are generally applied after the fact, once the damage has already been done. The unpopularity of the precautionary principle (or even a watered-down version of it) is probably the defining quality of this decade’s AI boom. Never mind if our society’s most vulnerable serve as hapless collateral damage. (To be fair, the product’s currently in beta, but that’s no excuse for the near-absence of public conversation surrounding this topic.)
Magical Toys is an obscure startup with a virtually nonexistent public profile. Its product’s potential for unsafe use, however, is foreshadowed by controversies that echo the small X blowup we saw last week. In 2017, an internet-connected doll called My Friend Cayla was banned in Germany as an “illegal espionage apparatus” after it was found to record and transmit children’s conversations to a voice-analysis company; researchers also showed that hackers could hijack the device’s Bluetooth connection to talk to children. And earlier this year, a Florida mother sued Character.AI, alleging that months of conversations with a chatbot her 14-year-old son named “Daenerys” drove him to suicide.
But none of these (admittedly speculative) concerns about Magical Toys have anything to do with the conundrum that Volkov describes. The toy is just horribly designed, for a few reasons:
Its blank, featureless face (see the video) is totally incongruous with the “realism” of the preschool-teacher voice emanating from it. The toy is a blobby, dinosaur-shaped plush doll, yet its voice is nearly as fluid and lifelike as a living, breathing human being’s. Form isn’t following function.
On that note, whatever wrapper sits around the LLM in the toy doesn’t make it sound any different from ChatGPT. The “tone” of ChatGPT, to my mind, is inappropriate for children between the ages of 4 and 9: it’s too long-winded and patronizing, too eager to over-explain and leave no stone unturned.
That brings me to my last point: the toy’s biggest problem is that it’s at odds with a toy’s fundamental purpose, which is to stimulate a child’s imagination. This is why the child in Volkov’s video wants to dress it in paper clothes instead of speaking to it. LLMs aim to “do the work” of imagination rather than spark it, and that’s the exact opposite of what a developing mind needs.
This last point, raised by many X users decrying Volkov’s cluelessness, recalls the problems AI poses in education: leaning on AI as an “intellectual crutch” may hamper one’s ability to learn and acquire new skills. Even Ethan Mollick acknowledges this.
I’m not a parent, but I can’t imagine giving a kid one of these instead of a more traditional, time-honored, low-tech toy. There’s just no point in jumping the gun when U.S. policy in this industry is virtually nonexistent.
Happy New Year!