Current options for adolescents seeking to improve their mental health may be prohibitively expensive, difficult to access or offer limited opportunities for engagement. The developers of Kai – an AI-powered personal guide and companion – aim to solve these problems by providing a personalized platform that can help young people take control of their own well-being. Users can interact with Kai from a range of familiar platforms such as WhatsApp and Messenger, receiving regular reminders and motivational exercises.
To learn more about Kai and the potential benefits of using AI as a mental health wellness tool, Technology Networks spoke to Alex Frenkel, CEO and co-founder of Kai. In this interview, Alex also explains how the data teens share with Kai is protected and how the platform responds if an urgent situation is communicated.
Ruairi J Mackenzie (RM): Your website states that “Kai is an AI-powered personal companion designed to help relieve anxiety, depression, sleep disturbances and many other psychological stressors by integrating wellness tools, techniques and exercises in accordance with the Acceptance and Commitment Therapy (ACT) model.” Does this mean that Kai offers some form of psychotherapy? Can an AI effectively replace a human in providing psychotherapy? If not, do you think offering wellness counseling is an effective substitute for psychotherapy?
Alex Frenkel (AF): Mental health issues are on the rise today, and there are many outlets, resources and tools available – therapy, life coaching, meditation apps and more. The problem is that many of these solutions aren’t accessible or affordable for everyone – half of teens who need mental health treatment never receive it. Kai offers a form of psychotherapy-informed support, but it should not replace professional treatment where that is needed.
As a personal guide and AI-powered companion, Kai uses a combination of human insight and machine learning to motivate teens to commit to showing up consistently so they can take control of their own well-being. It does this by acting as a companion and accountability partner, engaging users with personalized questions and proactively bringing up information and content based on past interactions.
Kai draws on a range of therapeutic modalities, including ACT (Acceptance and Commitment Therapy), CBT (Cognitive Behavioral Therapy), Positive Psychology and Coaching Psychology, to engage with adolescents in a conversational and interactive way, so they feel safe and familiar. AI allows every interaction to be tailored to each person’s specific needs.
Kai also leverages messaging APIs and natural language processing tools such as Google Dialogflow to manage user conversations, understand user intent and automate responses from its platform.
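To illustrate the kind of intent detection and automated response that a tool like Dialogflow performs, here is a minimal, self-contained Python sketch. The intent names, keywords and replies below are hypothetical stand-ins for illustration only – they are not Kai’s actual configuration, nor the Dialogflow API itself, which matches intents with trained language models rather than keywords.

```python
# Illustrative stand-in for intent detection and automated response,
# the pattern a service like Dialogflow implements at scale.
# Intent names, keyword sets and replies are hypothetical examples.

INTENT_KEYWORDS = {
    "greeting": {"hi", "hello", "hey"},
    "check_in": {"anxious", "stressed", "worried"},
    "sleep": {"sleep", "tired", "insomnia"},
}

RESPONSES = {
    "greeting": "Hi! How are you feeling today?",
    "check_in": "That sounds hard. Want to try a short breathing exercise?",
    "sleep": "Sleep matters. Shall we review your wind-down routine?",
    "fallback": "Tell me more about what's on your mind.",
}


def detect_intent(message: str) -> str:
    """Match the message's words against each intent's keyword set."""
    words = set(message.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return "fallback"


def respond(message: str) -> str:
    """Map a user message to an automated reply via its detected intent."""
    return RESPONSES[detect_intent(message)]
```

In a production system the keyword matcher would be replaced by a trained classifier, and the conversation state (session, past interactions) would inform which response is chosen.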
Our comprehensive psychological training programs are developed from the ground up and packaged in simple, small sessions entirely within Kai’s conversational structure. Ultimately, Kai helps teens become more self-aware and teaches them how to overcome their current challenges to thrive and reach their full potential.
RM: Is there any published clinical evidence that Kai can help relieve anxiety or depression? If not, are you considering such studies?
AF: Currently, we have two research papers that have been accepted for peer review by the Journal of Medical Internet Research. In both articles, we explore the benefits of using AI as a mental health wellness tool.
Nearly 50% of 11-year-olds in the United States own a cell phone, and that number rises to 85% among 14-year-olds. Moreover, in the United States, teenagers between the ages of 13 and 18 interact with their mobile devices for more than three hours every day. Given these figures, we determined that the mobile device is a highly accessible and convenient channel for preventive mental health care.
The first study focuses on adolescent well-being when using an AI-powered ACT tool, while the other tests the suitability of an AI-based intervention delivered directly through text-messaging apps. Each study sampled more than 50,000 participants – a huge feat for any wellness study – and the results are even more exciting. One study showed that participants’ well-being increased according to the World Health Organization’s five-item Well-Being Index (WHO-5). The other indicated that an AI-based intervention delivered via messaging apps increases treatment intensity and integrates therapeutic strategies into daily life.
Research is a crucial pillar of what we do at Kai. Dana Vertsberger leads our research department. She and the rest of our staff continually seek to develop further studies to improve our offering and expand the scope of research on this critical topic.
RM: Human therapists work under strict safeguarding procedures, especially when working with children. If a teenager were to share, for example, an intention to take their own life, how would Kai react and whom would it inform?
AF: Conversations vary in severity across the Kai platform. The system is trained to flag and track when an urgent, life-threatening topic, such as suicidal ideation, is brought to the forefront of a chat. These discussions are flagged and essential resources, such as suicide prevention hotlines, are provided to the user for further assistance – but most importantly, Kai is there to talk with them about it. Additionally, Kai urges users to seek human interaction and assistance when a conversation meets these criteria.
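The escalation flow described above – flag severe messages, surface hotline resources and encourage human help – might be sketched as follows. The keyword screen and the hotline text are illustrative assumptions only; the interview states that Kai’s detection is a trained system, not a rule-based filter like this one.

```python
# Illustrative sketch of a crisis-escalation check, assuming a simple
# keyword screen as a stand-in for Kai's trained severity model.

CRISIS_TERMS = {"suicide", "kill myself", "end my life", "self-harm"}

# Example resource text; a real deployment would localize this per region.
HOTLINE_RESOURCE = (
    "If you are in crisis, please reach out to a suicide prevention "
    "hotline (in the US, call or text 988) and talk to someone you trust."
)


def screen_message(message: str) -> dict:
    """Flag a message if it contains a crisis term, attaching resources
    and a prompt to seek human support when it does."""
    text = message.lower()
    flagged = any(term in text for term in CRISIS_TERMS)
    return {
        "flagged": flagged,
        "resources": [HOTLINE_RESOURCE] if flagged else [],
        "encourage_human_help": flagged,
    }
```

The key design point mirrored from the interview is that flagging never ends the conversation: the message is marked for tracking, resources are attached and the user is steered toward human help, while the dialogue itself continues.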
RM: How is the data that teens share with Kai protected? Is the data the teenagers share with Kai used to train its algorithm?
AF: Data security is one of the most important principles we guarantee at Kai. One of the benefits of Kai is that the user has ultimate control over their data: users can delete all of their data and conversations with Kai simply by requesting it.
The platform does not collect personal user data. In fact, users are completely anonymous inside the system. If a user wants to share “high-level” details such as age or gender, they have the option to do so. However, the only direct question Kai asks a user is “what would you like Kai to call you?”. Additionally, Kai complies with global government rules and regulations regarding patient health data.
Kai continues to learn from the data collected from teenagers, training its algorithm to respond in an increasingly personalized way. Right now, we’re seeing an average of 15,000 users engaging with Kai daily – and as the data grows, we’ve seen the platform’s responses improve, ensuring that personalized approach. We verify this with human validation: Kai has a dedicated team that reviews responses and flags whether an answer was “excellent” or should have provided more information. The algorithm continues to learn from this human-verification process, providing better answers over time.
Alex Frenkel spoke to Ruairi J Mackenzie, Senior Science Editor for Technology Networks.