SYDNEY, April 2 - A New Zealand-based crisis routing startup is expanding its intervention services to address violent extremism detected in conversations with AI chatbots. The company, ThroughLine, which already connects users flagged by platforms for self-harm or other crises to human-run helplines, is exploring a hybrid tool for individuals displaying extremist tendencies that would combine a chatbot response with referrals to real-world support, its founder said.
ThroughLine's founder, Elliot Taylor, a former youth worker, said the initiative is intended to provide a path to deradicalisation support when AI platforms detect signs of violent extremism. The project involves discussions with The Christchurch Call - an anti-extremism initiative formed after the 2019 Christchurch mosque attacks, New Zealand's deadliest terrorist attack - with the group advising on content and approach while ThroughLine builds the intervention chatbot.
"It's something that we'd like to move toward and to do a better job of covering and then to be able to better support platforms," Taylor said in an interview, while noting that no timeframe has been set for launch.
ThroughLine has emerged as a go-to partner for major AI companies seeking a safety layer that routes at-risk users to help. The company, operated from Taylor's home in rural New Zealand, maintains a continuously checked network of 1,600 helplines across 180 countries. When an AI detects indicators of a potential mental health crisis, the system matches the user with an available human-run service nearby.
In recent years ThroughLine has been engaged by several large AI platform operators to provide this redirection service, and Taylor confirmed the firm has a relationship with ChatGPT owner OpenAI. He said Anthropic and Google have also worked with ThroughLine; those companies did not immediately respond to requests for comment.
Taylor said the range of issues users disclose to AI chatbots has broadened as the systems have grown in popularity, and now includes explorations of extremism alongside the mental health struggles the company already covers. To address this, the proposed anti-extremism tool would likely pair an expert-informed chatbot trained to respond to signs of extremist thinking with referrals to local mental health and deradicalisation services.
"We're not using the training data of a base LLM," Taylor said, referring to the generic datasets that large language models use to generate text. "We're working with the correct experts." The technology is under testing, he added, but no launch date has been determined.
Representatives of The Christchurch Call have signalled interest in making the tool available to moderators of online gaming forums as well as parents and caregivers wishing to address extremism in youth online communities. Galen Lamphere-Englund, a counterterrorism adviser representing The Christchurch Call, said he hoped the product could be rolled out for those user groups.
Outside experts emphasise that the problem is not only content but also the relationships and dynamics that surround it. Henry Fraser, an AI researcher at Queensland University of Technology, described a chatbot rerouting tool as "a good and necessary idea because it recognises that it's not just content that is the problem, but relationship dynamics." He added that the product's effectiveness will hinge on "how good are follow-up mechanisms and how good are the structures and relationships that they direct people into at addressing the problem."
Taylor acknowledged that follow-up features are a key open question for the design of the anti-extremism tool. Options such as alerts to authorities about dangerous users are still to be determined and would be weighed against the risk of provoking escalated behaviour, he said. Taylor also argued that people in acute distress often disclose things to AI that they might feel too ashamed to say to another person, and that cutting off users during sensitive conversations could leave them without support.
The proposed tool comes amid growing scrutiny of AI platforms' safety practices. Lawsuits alleging that AI companies have failed to stop or even enabled violent acts have increased, and regulators have pressed firms to strengthen safety measures. In one high-profile regulatory episode earlier this year, the Canadian government threatened action after it was revealed that a person who carried out a deadly school shooting had been banned by a platform without being reported to authorities.
Heightened moderation of militant content under pressure from law enforcement and regulators has in some cases driven sympathisers toward less regulated channels such as Telegram, according to a 2025 study by New York University's Stern Center for Business and Human Rights.
Taylor warned that when an AI shuts down a conversation in which someone discloses a crisis, the platform loses visibility into the user's needs and can leave them without support. The new tool, he said, aims to reduce that risk while addressing online extremism through expert-guided interventions and links to human services.
Summary
ThroughLine, a New Zealand startup that connects users flagged for crisis by AI platforms to human-run helplines, is developing a hybrid chatbot and referral system to intervene when users show violent extremist tendencies. The company is consulting with The Christchurch Call while testing the technology; timing for rollout has not been set.
Key points
- ThroughLine already operates a network of 1,600 helplines in 180 countries and routes users flagged by AI for crises to nearby human services.
- The startup is testing an anti-extremism extension combining a chatbot trained with expert guidance and referrals to real-world support, in discussions with The Christchurch Call.
- The initiative responds to growing concern about AI platforms' handling of violent extremism and follows pressure on firms from lawsuits and regulators.
Risks and uncertainties
- The product's effectiveness hinges on the quality of follow-up mechanisms and on whether the referred mental health and deradicalisation services can actually address extremist thinking.
- Decisions about whether to alert authorities to dangerous users carry the risk of provoking escalated behaviour, creating a tension between public safety and user privacy that platforms and service providers must navigate.
- Heightened platform moderation may continue to push militant sympathisers toward less regulated channels such as Telegram, complicating detection and intervention efforts.