60% of U.S. Teens Have Tried AI Chatbots, 11.4% Use Them Daily
A new national FAU study reveals just how deeply AI chatbots are shaping teen life, and the hidden dangers that may come with them.
Study Snapshot: As conversational artificial intelligence chatbots (CAI chatbots) become increasingly embedded in the daily lives of American teens, a new national study provides one of the first large-scale assessments of how widely these tools are used and the risks they may pose. Based on a nationally representative survey of 3,466 adolescents ages 13 to 17 conducted by researchers from Florida Atlantic University and the University of Wisconsin-Eau Claire, the study found that 60.2% of teens have used a CAI chatbot at least once or twice, while 11.4% use them every day or nearly every day. Usage varied across demographic groups, with higher overall engagement among males and some racial groups.
Among teens who had used chatbots, the most common motivation was entertainment, but many also relied on them for advice, friendship, emotional support and romantic companionship. At the same time, nearly half of users reported at least one harmful experience, including uncomfortable requests for personal information, manipulation, false information and encouragement of risky or unsafe behaviors. Researchers found that these risks were not evenly distributed across all youth, raising concerns about vulnerability and the need for stronger safeguards from all stakeholders as AI becomes more deeply integrated into adolescent life.
As AI chatbots increasingly become part of daily life for American teens, a new national study documents widespread exposure to harm. While many use them for school, entertainment and support, researchers warn they may also expose youth to harmful content, encourage risky behavior and blur the line between human and AI relationships. The youngest teens in the study, especially 13-year-olds, appeared to be among the most exposed.
The peer-reviewed study by Florida Atlantic University and the University of Wisconsin-Eau Claire provides one of the first large-scale looks at how adolescents are using, and being influenced by, rapidly evolving AI chatbots. Researchers examined how often and why teens use these tools, as well as the risks involved, including exposure to unsafe content and whether chatbots may be encouraging problematic behaviors.
They surveyed 3,466 teens, ages 13 to 17, nationwide, analyzing usage patterns across demographic groups including gender, race, age and sexual orientation. Researchers also assessed exposure to 13 types of harmful or unsafe interactions, from problematic content to concerning behavioral suggestions, to better understand the risks teens may face and which groups could be more vulnerable.
Results of the study, published in a peer-reviewed journal, reveal that CAI chatbot use is widespread among U.S. teens, with 60.2% reporting they have used one at least once or twice, and 11.4% saying they use them every day or nearly every day. Male teens were significantly more likely than females to report use, and white, African American and multiracial youth reported higher usage rates than Hispanic youth, while no meaningful differences emerged by age or sexual orientation.
Among teens who had used CAI chatbots, entertainment was by far the most common motivation, cited by 85% of users. Many also turned to these tools for more personal reasons, including advice or guidance (65.6%), friendship (60.1%) and even emotional or mental health support (49.2%).
More than one-third reported using chatbots for romantic companionship. Male youth were consistently more likely than female youth to report each of these motivations, and some differences also appeared across race and sexual orientation, particularly in the use of chatbots for emotional support and relationships. The researchers note that CAI chatbots can offer real value to young people, with prior research documenting benefits including educational support, creative exploration, mental health assistance and companionship for those who feel isolated.
At the same time, a substantial share of teens reported troubling interactions. Nearly one-third said a chatbot had asked for personal information that made them uncomfortable, while others described feeling monitored, being drawn into inappropriate conversations or being pressured to reveal secrets.
About 23% said they felt manipulated or pressured by a chatbot and 17% reported that a chatbot shared false information about them. Notably, between 13% and 19% said chatbots had encouraged behaviors with real-world consequences, including unethical or illegal actions, risky activities and even self-harm or suicidal thoughts.
These negative experiences were not evenly distributed, and the youngest teens in the sample were among the most exposed. Thirteen-year-olds reported higher rates than older age groups across multiple harm categories, including being asked for personal information that made them uncomfortable, being pressured to reveal secrets, and being encouraged toward unethical, illegal or risky behavior, as well as self-harm and suicidal thoughts.
“Conversational AI is not inherently dangerous, but it is not yet consistently safe for young people,” said Sameer Hinduja, Ph.D., senior author, a professor in the School of Criminology and Criminal Justice within FAU’s College of Social Work and Criminal Justice, co-director of the Cyberbullying Research Center, and a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University. “These systems engage, respond and even affirm users in highly personalized ways, which can make their influence especially powerful. For adolescents, who are still developing critical thinking skills and a sense of identity, that can create a situation where they’re more likely to trust, internalize or act on what the chatbot is saying without fully questioning it.”
Findings also show that male youth were more likely to report many of the harms, as were heterosexual youth, a pattern researchers note as counterintuitive given prior work showing higher online risk exposure among LGBTQ+ youth, and one that warrants further study. White youth generally reported higher exposure to a range of negative interactions compared with other racial groups.
Overall, nearly half of the teens surveyed 鈥 47.1% 鈥 reported experiencing at least one of the 13 risks examined in the study, underscoring the dual nature of CAI chatbots as both widely used tools and potential sources of harm for a significant portion of youth.
The results show that adoption is moving faster than the broader response from families, schools and companies, as teens increasingly turn to these tools for advice, emotional support and companionship.
“These findings make a strong case for prioritizing youth safety in how conversational AI is built and deployed,” said Hinduja. “When nearly half of young users report experiencing harm, it signals that existing safeguards are falling short. We’re not just talking about isolated incidents. We are seeing patterns that affect a meaningful number of young users, and that is what makes a coordinated response across families, schools and companies so important.”
The researchers also note that AI responses perceived as empathetic or human-like may carry particular weight for adolescent users.
“Adults need to stay engaged and curious about how teens are interacting with AI, creating space for open, judgment-free conversations about both the benefits and the risks,” Hinduja said. “At the same time, we need stronger AI literacy education in schools, content filtering and mental health response protocols designed into these platforms from the start, reliable age verification, and regular independent audits to confirm that safety measures are working as intended. AI is here to stay, so our responsibility is to make sure young people are equipped and protected as they navigate it.”
Study co-author is Justin Patchin, Ph.D., professor of criminal justice at the University of Wisconsin-Eau Claire and co-director of the Cyberbullying Research Center.
-FAU-