Google’s Gemini AI Access for Children Raises Concerns Among Parents

Google’s recent announcement that children under 13 will be able to access its Gemini AI apps has prompted questions from parents about the safety and appropriateness of introducing such advanced AI tools to younger users.
The tech giant revealed that, via the Family Link app, children will soon be able to access Gemini AI apps on their Android devices. This marks a shift in how AI tools are being made available to young users, with the news generating a considerable amount of negative feedback from concerned parents.
Gemini, Google’s AI chatbot, will be available for activities such as reading stories and helping with homework. However, the company has issued cautionary guidelines alongside the feature. In emails sent to parents, Google acknowledged that “Gemini can make mistakes” and that children might “encounter content you wouldn’t want them to see.”
Despite Google’s assurance that children’s data will not be used to train AI models, consistent with the policy for its educational Workspace accounts, concerns persist over whether it is safe or appropriate to expose children to generative AI at such an early age.
Google’s guidance to families included an important recommendation: parents are encouraged to talk to their children and explain that “AI is not human” and to advise them not to share sensitive information with the chatbot.
According to reports, children managing their devices through Family Link will be able to access Gemini independently, while parents can monitor screen time, set app limits, and block specific content. Google spokesperson Karl Ryan confirmed to *The Verge* that parents can fully disable access to Gemini through Family Link settings, and added that parents will receive a notification when their child first accesses Gemini.
This shift in AI accessibility is raising important questions about the responsibility of tech companies in ensuring the safety and well-being of young users online.