Are AI apps the future of mental health support? Can these applications truly understand and address the complexities of our emotional well-being? What can a user expect when they get on such an application? And perhaps the most provocative question of all—can AI truly replace human therapists?
Join us as we unravel the captivating potential of AI apps in mental health, guided by the expertise of Dr. Samir Parikh, a renowned psychiatrist at Fortis Hospital, and Dr. Megha Gupta, the head of AI at Wysa, a pioneering mental healthcare app.
We discuss the technology, its potential, ethical and privacy concerns, and offer a peek into the future of AI in mental health support.
[00:00:00] Hello and welcome to The Big Story. This is Anjali, and today we are joined by... Hi guys, I am Garima Sadhwani
[00:00:09] I am a health correspondent with The Quint, and Anjali and Pratik have forced me to say this, but I am very, very excited to be on this podcast
[00:00:19] I would like to say it is only Pratik, because she has been trained under Pratik, who sadly is not a part of this podcast
[00:00:26] So we will miss him and I will have to take the burden of using metaphors and cracking jokes which is usually his forte
[00:00:36] But today in his absence, I am going to have to do that. Or Garima, if you want to take up the challenge of cracking Pratik-level jokes, you can
[00:00:44] I am very sorry, I cannot do that. That level is absolutely unmatchable. But I am assuming that he will listen to this podcast.
[00:00:52] Like when the podcast goes live, he will listen, and he can remember his interventions here.
[00:01:14] Because as we know, it has a lot of stigma associated with it, but thankfully, because of social media, part of it
[00:01:22] because Gen Z is trying to "normalize" it, and I am holding up air quotes, talking about mental health is being normalized
[00:01:35] But we are also going to be talking about AI, artificial intelligence because as you and I were discussing it before
[00:01:44] There is not a lot of talk about the intricacies of AI. You will get fanboys, or people who dismiss it, or people who are very scared that AI will take their jobs.
[00:01:58] But there are the nuances of how AI relates to things that are part of our everyday life.
[00:02:06] I think to do that you need to get into the details of how it works. We have tried to do that with the first episode of this series
[00:02:15] where we got an AI policy researcher, his name is Anupam Gwahi, who really got into the intricacies of how AI was developed,
[00:02:23] what it is meant for, how it is currently being used and what are the policy concerns around it.
[00:02:29] So Garima, this is a question to you, slash to our listeners. Have you had any personal experience with mental health?
[00:02:40] Have you ever sought counselling or gone for therapy?
[00:02:44] No I haven't. Like in my personal capacity I haven't spoken to any counsellors,
[00:02:52] psychologists, psychiatrists, none of those people. But because I am a health reporter, a lot of my stories do concern mental health
[00:03:01] and I have extensively spoken to a lot of these people but with personal experience and with the insights from our experts today
[00:03:10] I think there would be a curve of learning about mental health, about AI and how the two of them intersect.
[00:03:17] And I am one step behind you. I haven't even spoken to a mental health care professional,
[00:03:25] no psychologist I have ever spoken to. So a lot of my information comes from my conversation with friends who have sought mental health care
[00:03:36] and the other source being the infamous social media. Although I don't take them at face value, that is a lot of my source of information
[00:03:47] which is not the best. And I think that today's listeners might be as curious as we are,
[00:03:53] Garima and I, as hosts for this episode, because we will probably get to learn a lot about
[00:04:01] what these two topics entail and how they come together.
[00:04:05] So let us talk a little bit about who are the guests that we have on this podcast.
[00:04:09] So we are joined by Dr. Samir Parikh, who is a very eminent psychiatrist who works at Fortis Healthcare,
[00:04:17] and the other guest that we have is Dr. Megha Gupta, who is the head of AI at a mental healthcare app called Wysa,
[00:04:26] which uses an AI-based chatbot to guide its users through the very initial stages of mental health care
[00:04:38] and we are going to talk to them generally about how AI has entered the mental health care space
[00:04:45] and what is it that it can do, cannot do.
[00:04:48] Alright, so without wasting any more time, let us jump right into the conversation.
[00:04:52] Hello Dr. Parikh, hello Dr. Gupta, thank you for taking out the time to speak with us
[00:04:57] and welcome to The Big Story 2.0.
[00:04:59] So just to start off, I would like to ask Dr. Parikh:
[00:05:04] so you have been working in mental health for quite a while now.
[00:05:08] When did you first see the use of technology in general and then you can answer specifically about the use of AI?
[00:05:17] No, I think the right answer to this would be that it is not merely about when I saw it; a lot of this depends upon what is evidence-based,
[00:05:28] and it also depends on the law of the land.
[00:05:34] So a very basic example would be, let's say pre-COVID, even if I had wanted to do, let's say, a teleconsultation,
[00:05:42] I couldn't have done it, because as a psychiatrist I was in no position to make a prescription of a medication
[00:05:49] and then when COVID happened, we all realized the importance and the power and the connect of telemental health.
[00:05:55] Today, even a few years after that big two-year phase that all of us went through,
[00:06:02] I'm still doing more teleconsultations than physical consults.
[00:06:06] So if you look at it from my vantage of almost 25 years of being in psychiatry, and how much it has changed just between pre- and post-COVID:
[00:06:22] I now do more teleconsultations.
[00:06:25] So I think one needs to also understand this: when we talk about digital, when we talk about AI, it's all exciting.
[00:06:34] Yes, it's innovative, it's game-changing, especially for a branch like mental health which has for decades
[00:06:44] been under the burden of stigma, discrimination and lack of access.
[00:06:51] All of this is there even today: three in four people who would need to go to a mental health professional
[00:06:59] do not go.
[00:07:01] At the same time there is a significant disproportionate spread of mental health professionals.
[00:07:09] So for example in high income countries it may be as high as 70 per lakh and in low income countries it may be as low as 1 per lakh.
[00:07:18] And if you look at our country also, the percentage, let's say in the metros and urban areas, as compared to as we go away from the urban landscape:
[00:07:29] the access to professionals becomes lesser and lesser.
[00:07:34] So what's our option? Our option is the digital connect, the digital transformation, digital India, which is a solution.
[00:07:41] And of course, with AI also coming in and gradually increasing, these are new times, I mean, we should be very clear about it.
[00:07:50] If we allow ourselves to believe that we have arrived when it comes to AI, we have not; but then that's what AI is, it's evolving, it's learning
[00:07:57] but I see this in a continuum: let's say way back in the 1950s when the first medication came, till the 70s when better medications came, the 1990s, which was largely a decade of
[00:08:12] psychiatric meds; and then you move forward, and today you're also talking about telehealth, and you talk about AI as an interface
[00:08:23] which could be the first interface, and who knows what happens in years to come.
[00:08:28] So look at it in this way, and then you'll be able to have a more evolutionary perspective on how the branch and the science are growing.
[00:08:38] So Dr. Parikh, we're going to talk about all of these things, you know, the lack of mental health professionals,
[00:08:44] how this move to teletherapy, like you just spoke about, is very, very recent, especially after COVID.
[00:08:53] But before we do that, I want to know from Dr. Gupta: since, like he mentioned, before COVID the internet as a medium
[00:09:03] to be used for therapy did not really exist, when did you start your research and development and come up with an AI-based application?
[00:09:14] So I was not part of the founding team of Wysa, but the Wysa opportunity happened to fall in my lap, very luckily.
[00:09:25] I was in a transition phase, I worked in education earlier, and I was looking for something new, an exciting opportunity to work on,
[00:09:36] and somebody just contacted me. I was really very driven by the vision of Wysa, and I was wowed that, okay, AI can be used for mental health as well.
[00:09:51] I didn't know much about the mental health space before this you know you hear all these buzz words about mental health but there is still such little awareness right.
[00:10:02] So I was really excited I thought it would be a great learning experience for me and I jumped at the opportunity so this was around two and a half years ago when I came on board
[00:10:17] and Wysa had just got its first round of funding, and it was starting to be seen as, you know, a solution to mental health.
[00:10:32] The COVID years kind of propelled that recognition, right, as Dr. Parikh mentioned, that people started to look out for such options
[00:10:44] and also during COVID a lot of people were going through everyday mental health issues they were feeling lonely, depressed, anxious about what would happen.
[00:10:53] So I think it was the right time for Wysa to come into the hands of users and start being used.
[00:11:07] So actually, what I'm really interested in knowing is: since Wysa is based on an AI chatbot, it would have been trained on a model, right, and AI, like we know, is usually data-based, it is processing a lot of data.
[00:11:24] And so I just want to know what were the initial data sets that you used, or that you came across, that you thought would be useful in creating this chatbot.
[00:11:35] So we didn't have any public data sets available for this particular domain, as you might find for other AI models, right. There are a lot of platforms like Kaggle where you'll find a lot of public data sets for different use cases, but for mental healthcare, especially since it's private, sensitive information, such data sets were not readily available.
[00:12:02] So in the beginning, you know, whatever models we created, the training data came from either internal use,
[00:12:12] you know, our team themselves being the users, and us testing on what a user is likely to say, or synthetic data creation. And as we started onboarding users, we started getting actual user data,
[00:12:28] and we started incorporating that as well in our training sets to improve our models. This model improvement is a never-ending process; we continuously refine our models, so even now, you know, almost seven years down the line, we continue to do that. But yeah, that's how it started.
[00:12:51] So I was actually going through your website, and you mentioned there that you don't actually use the data from anyone who uses the app. So how come you're constantly updating your app through this very data? Could you... We don't use, yeah, we don't collect or use any PII, which is personally identifiable information: any information that can be used to uniquely identify a user, like a name.
[00:13:21] So name, age, sex, country, language: a user may share this information while talking to the bot, but all of this is redacted or removed from their messages before the messages are stored in our database.
[00:13:44] Nobody gets to actually see those messages. Even as data scientists and machine learning engineers, when we work on our models, we have to fetch this data from the database, but it is already PII-redacted, so we don't get access to any sensitive info. So that's what we mean when we say we don't collect any information
[00:14:04] and also, Wysa doesn't require the users to sign on when they start using the app. They don't have to create an account, they don't have to provide a name or an email ID, right, so they can just install the app and start using it right away. They use a nickname, which they can choose to be whatever they want,
[00:14:27] and that's how Wysa refers to them; they don't have to provide their actual name.
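To make the redaction idea concrete, here is a minimal sketch of what such a pipeline could look like. Wysa's actual implementation isn't public, so the patterns, placeholders and function name here are hypothetical, and a production system would rely on a trained named-entity model rather than a few regexes:

```python
import re

# Hypothetical PII patterns; a real system would also use a trained
# named-entity model to catch names, places, workplaces, etc.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "AGE": re.compile(r"\b\d{1,3}\s*years?\s*old\b", re.IGNORECASE),
}

def redact(message: str) -> str:
    """Replace identifying spans with placeholders before storage,
    so analysts only ever see the redacted copy."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"<{label}>", message)
    return message

stored = redact("I'm 24 years old, you can email me at ana@example.com")
# The emotional content of the message survives; the identifiers do not.
```

The point Dr. Gupta makes, that even the data scientists only ever fetch the redacted copy, corresponds to running something like `redact` before the write to the database, not after.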
[00:14:35] So this is a general question to you, Dr. Parikh. We're talking about data, but for the longest time psychology as a subject has been believed to be very subjective, so how quantitative can such data be?
[00:14:50] No, you cannot say that, that's not true. Science is science; we don't make science an art here. Psychology is a science, psychiatry is a science; it's not a form of art, it's not subjective, right? The moment we start using these words, we're diluting the fact that we're talking about a science. So if you go to a psychologist anywhere across the globe and you are having depressive features, then wherever they are, they are on the same page.
[00:15:20] If they diagnose you with depression, they're diagnosing you with depression based on evidence and standardized guidelines; whether you follow the ICD or the DSM or any other school of thought, it's going to apply across the board, across the world, right? Similarly, treatment is again evidence-based. So again, taking the example of depression: wherever you are
[00:15:49] in the world, for depression the guidelines are very clear. You'll start with therapy, the most common therapy being CBT, and which of the various domains you will be using, or in some cases, again well defined, you may use some of the other forms of therapy; moderate onwards you will use medications; if you're using medications, what is the starting medication, if that doesn't work what is the next, how do you define resistant depression, what
[00:16:17] techniques you can use, so on and so forth. So to say that psychology is a subjective science is not true. But then, what never started out as being subjective? I mean, let's be very clear: if you're going into history, then everything could have been subjective before there was evidence.
[00:16:34] But when we talk about modern mental health, modern psychology and psychiatry, let's be very clear about it: it's a science. And when it comes to psychiatry, it's medical science; I'm a medical professional. You go to your endocrinologist for diabetes, they work around your insulin for your, you know, glucose control; you come to me, I work around your serotonin for your depressive illness. I mean, that's how it is.
[00:17:03] So maybe subjective is the wrong word to use, but if we try to look at it as quantifiable, there is something that can be converted into data to be used in AI.
[00:17:16] So have we started doing that recently, or has data been collected for a while, to be able to program an AI model?
[00:17:25] And you know, with what you are saying, I mean, let's say a few decades back, when the idea of AI would not even have occurred, or the idea of AI in healthcare wouldn't have occurred, how would one have done this, right?
[00:17:38] You need something to work with, and it's not that you're randomly collecting some papers and forms and feeding them into something; that's not how it works.
[00:17:49] The AI that one is talking about right now is a dynamic interface; it's not like a form where you do tick and cross and a subtotal comes and you say, okay, this is it.
[00:18:02] You have to understand the various domains and aspects, right? So when you talk about data, of course, like I said at the start also, science evolves;
[00:18:14] anything that has to grow goes through its own evolution. Where we are today, we will not be here 10 years down the line; but where we are today, we would never have imagined 10 years back.
[00:18:26] So we need to be also open, careful, responsible and, frankly, ethical.
[00:18:35] So when you say that we need to be open to discussing mental health and its evolution as a science, along with the technology, I want to ask you: five years ago, how receptive were people
[00:18:52] to having AI tools to consult when it came to psychology, and how are people reacting to it now? Are patients more open?
[00:19:02] When it comes to mental health helplines or things like that: if you were to ask, pre-COVID, had you thought telemental health could work? And here, when I say telemental health, I'm talking psychiatry, because obviously there's a prescription aspect; I'm not talking about counselling or psychotherapy.
[00:19:19] [transcription garbled] I had never thought that telemental health would take off, but COVID normalized it. AI, though, is something
[00:20:14] which is slightly different. So please understand the difference between the two as well.
[00:20:19] Don't club them. Telemental health and AI are not the same. They may both be happening in
[00:20:25] the digital space, but everything that happens in the digital space is not AI, it's not telemental
[00:20:31] health, and thankfully it's not social media. Yeah, so I think we really need to create
[00:20:38] that distinction between interacting with an actual professional doctor over the internet,
[00:20:47] which is what telemental health would be about, as opposed to what Dr. Gupta works on, which is
[00:20:55] an AI-based chatbot of sorts that also helps you through your mental health journey.
[00:21:05] So Dr. Gupta, I want to ask you: when creating this chatbot, do you have in mind a certain part
[00:21:15] of the mental health journey of a person who is, let's say, seeking therapy, that you want the chatbot
[00:21:21] to be able to guide them through, after which point it moves to an actual professional? Or
[00:21:29] do you think a chatbot can actually help a person completely? Is it supposed to be a support,
[00:21:35] or is it supposed to be the solution? Sure. So what Wysa is catering to is a large spectrum of people
[00:21:44] who sit in the middle of the mental health spectrum and what I mean by that is that we have users
[00:21:51] who are, I mean there are people out there in the world who are looking just for wellness solutions.
[00:21:56] You know they don't have any mental health issues but they want things like meditation apps
[00:22:03] right or sleep stories, something to just keep them mentally healthy but they are not going through
[00:22:13] any particular issues as such. Then there is the other at the other end of the spectrum is the
[00:22:19] diagnostic space, where you have clinical users, which Dr. Parikh must be dealing with, right, so
[00:22:28] you know people who are diagnosed with mental health issues like schizophrenia, bipolar disorder,
[00:22:35] clinical depression and so on. We are not in either of these spaces; we are in the middle, where
[00:22:43] people are dealing with day-to-day mental health issues like loneliness, negative thoughts about
[00:22:50] themselves, anxiety or you know not being able to sleep. So these are some problems which are
[00:23:01] which everyone deals with, and they don't need medication per se, right? Many of them don't, and
[00:23:13] many of them can be addressed by just knowing how to deal with them yourself. There are exercises,
[00:23:23] as Dr. Parikh mentioned; there are different types of therapies, you know, CBT is just one
[00:23:29] of them but so there are different exercises, different tools that you can use to work on your
[00:23:34] mental health yourself. So, Wysa is a tool, an AI-enabled tool, to cater to this group of users,
[00:23:44] and it doesn't mean that, you know, there is gatekeeping being done. When a user comes and uses
[00:23:53] our app, we don't know which end of the spectrum they are on, right? They might very well
[00:23:59] be suffering from clinical depression, but we don't do any of those checks beforehand.
[00:24:06] So, anyone in the world can use Wysa, but we have safeguards, you know, built into our app
[00:24:15] so that the AI is capable of detecting if the user is at risk if the user may actually be on the
[00:24:24] clinical end of this spectrum in which case the AI bot itself suggests that they seek professional
[00:24:33] help. We also have human therapists on our app, right, in-house; they are not psychiatrists, they are
[00:24:43] therapists. But if a user needs additional help, if the AI is not working out so well for them,
[00:24:50] they always have the option of taking therapy from a human as well.
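The safety net described here, screening messages for risk signals and scoring fixed-option self-assessments, can be sketched roughly as follows. This is a simplified illustration: the keyword list, cutoffs and function names are invented for this example, and a system like Wysa's would use trained classifiers rather than keyword matching:

```python
# Illustrative only: real systems use trained models, but the
# escalation logic has this general shape.
RISK_TERMS = {"suicide", "self harm", "end my life", "hurt myself", "abuse"}

def message_flags_risk(message: str) -> bool:
    """Crude screen for risk signals in a single user message."""
    text = message.lower()
    return any(term in text for term in RISK_TERMS)

def assessment_band(answers: list[int], cutoffs=(5, 10)) -> str:
    """Total a fixed-option self-assessment (e.g. 0-3 per question)
    and map the score into low / medium / high bands."""
    total = sum(answers)
    if total < cutoffs[0]:
        return "low"
    return "medium" if total < cutoffs[1] else "high"

def next_step(message: str, answers: list[int]) -> str:
    """Escalate toward a human if either signal fires; otherwise continue."""
    if message_flags_risk(message) or assessment_band(answers) != "low":
        return "encourage professional help"
    return "continue self-guided exercises"
```

As in the conversation, neither function diagnoses anything; a flagged message or a medium/high band only triggers a nudge toward a human professional.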
[00:24:57] So, ma'am, you made this very interesting point where you said that your AI tool is enabled to
[00:25:03] recognize when a patient might be going through something severe, they might be clinically depressed or
[00:25:09] have a severe diagnosis. So how is it that your AI tool actually recognizes that this person is in
[00:25:16] need of human intervention, actual professional intervention? Okay, so just a disclaimer: I'm not
[00:25:21] saying that we diagnose we do not diagnose but yes we do look for any indications that this user
[00:25:31] might need more help than what we can offer. So, for example our AI bot is looking out for
[00:25:40] any indications of suicidality or you know tendency to self harm or if the user is talking about abuse
[00:25:51] similarly, we have some standardized assessments built into our app, as Dr. Parikh also mentioned,
[00:25:57] that are well established. You know, they consist of a finite set of questions with
[00:26:05] fixed options, so it's a self-assessment; the user can answer those questions, and if
[00:26:13] they are found to be in the medium or high end of the range on that assessment, then
[00:26:23] that is an indication that the user might need more help. So the bot will direct them to seek
[00:26:31] that help, will encourage them to seek that help. So that's how. So, Dr. Parikh, what would you say are
[00:26:38] some very clear limitations of where AI can be used in mental health, and are there any no-go areas?
[00:26:46] No, there are no absolutes in life, right? Like I said, we are in a state where more evolution is
[00:26:53] going to happen. Right now, where we are, which means: given the deficit of experts,
[00:27:03] given the disproportionate availability of experts, given the sheer volume of people struggling,
[00:27:15] it's huge. When you put that in context, you need to be able to reach the maximum number of people
[00:27:28] and provide them with whatever best we can do at a given moment in time. That's where we start, with
[00:27:38] the digital interface. So, if let's say you are living in a part of our country where the nearest
[00:27:46] expert to take medications for something as simple as, let's say, panic disorder or a depressive illness
[00:27:53] or bipolar you may have to travel a day or two you may need to put in a couple of days of your time
[00:28:03] your caregivers time so on and so forth. And if you had to do more follow ups
[00:28:09] you may not be able to do them because you will not be able to make this
[00:28:16] commute all over again in a frequency that clinically the doctor might be
[00:28:22] needing for let's say the best possible help that they can be given that's where telehealth comes
[00:28:28] in the picture, right. Of course, if I see 100 people on telehealth, there might be a few whom
[00:28:34] I might still need to see physically, various factors where I would feel, no, this person
[00:28:40] needs to be seen physically. But there's also the case that after seeing 100 people in the physical
[00:28:48] domain, I may tell at least 60 or 70 of them, you don't need to come to my clinic every time, we can do
[00:28:55] an online consult and that will also work. All of this increases access, makes it affordable,
[00:29:02] improves compliance and the goal of treatment is achieved and dropouts in the middle don't happen
[00:29:08] because of external factors right this is one part of it now what is AI doing or is trying to do
[00:29:17] AI is trying to reach more people, right? Let's keep it very simple: right now, what's our first
[00:29:25] goal? Reaching more people. The second goal would be providing the best of what we can do at this moment
[00:29:34] in time, until a lot more science grows and evolution happens in this aspect. Till then, what is
[00:29:44] the best that we can do? Understanding that what we are giving is not the solution. Nobody who's
[00:29:54] working in AI, or working as a clinician like me, is going to tell you that rather than going to a
[00:30:03] clinical psychologist or a psychiatrist, go to, let's say, any AI (and I'm not specifically
[00:30:11] making a case for Wysa here, but any AI). Nobody's going to say that; the AIs themselves are not going to say
[00:30:18] that to you, right? But we also have to understand that hundreds of thousands of people can't do that,
[00:30:25] because they're hesitant to do that, or reach is an issue. So how about giving them
[00:30:30] something now? And through that, if you can give basic guidance, basic counselling, you can do a basic
[00:30:39] level of screening, almost like a semi-triage of sorts, where you can tell an individual, hey, we feel
[00:30:47] you may be having some features which are suggestive of, let's say, anxiety, which may not be
[00:30:55] something that we might be able to help you with; whereas if you do these four, five things it'll be good
[00:30:59] for you, but we recommend, why don't you see a psychologist or see a psychiatrist, whichever way it
[00:31:08] goes, right? The idea needs to be very clear: you do what you can do, and don't claim to be doing
[00:31:16] what you can't do; don't try to do what you cannot possibly do. And that information needs to be
[00:31:26] undiluted, like a statutory warning; it needs to be there. So if there is no false pretext,
[00:31:34] if there is a clear understanding of what one is doing, and there is a mechanism, at a significant
[00:31:40] success rate (I mean, nobody will be at 100 on 100), at a significant rate, where if there is any marker
[00:31:48] suggestive of immediate intervention, or an intervention required biologically or therapeutically,
[00:31:54] you go ahead and do that, I think that's where we are right now. If you try and stretch this:
[00:32:01] you know, in the past you would not have thought, when I was in school, that there'd be mobile phones,
[00:32:11] right? And all three of you were not born then. So try to understand this: how can we say what will
[00:32:19] happen 10 years, 20 years, 30 years down the line? Look at now, look at where we've come from over
[00:32:25] the last three years: it's promising, it looks like maybe we'll be able to help more people, and if
[00:32:32] more people can be helped, to me that's what we have achieved. If we are able to do that, and, like I
[00:32:39] said, maintain dignity, ethics, values, that's very important. And that's the only thing:
[00:32:48] if at all you were to ask me if I have a red flag, I have a worry, my only worry is that humankind
[00:32:55] continues to lose its connect with values every now and then when it comes to adding a zero on the
[00:33:00] cheque, right? As long as humankind maintains its value system, we'll evolve into better people.
[00:33:10] What I actually wanted to ask Dr. Parikh is: as a user of, let's say, an AI-based chatbot today,
[00:33:17] what is it that I should know is the limitation of the current technology, that I should be aware of?
[00:33:24] You know, I'll give you a slightly different answer; let me also answer this question, though.
[00:33:29] But try and understand this: I don't think it's only about the user. I think it's more the responsibility
[00:33:35] of the platform, and let's not confuse between the two. So for example, you go on a certain social media
[00:33:42] platform. Now if you are deciding to use a platform which is meant for sharing pictures,
[00:33:50] but you want to use that platform to gather information on medical knowledge, and you now decide
[00:33:55] to take medical knowledge from an individual who calls themselves, let's say, an influencer, without
[00:34:01] having any medical knowledge, we have to figure out who's more at fault, right? Is it the platform's problem,
[00:34:09] is it your problem, is it the self-proclaimed influencer's problem? Also, the advice
[00:34:15] can't be generic for every platform. Why? Because there'll be different
[00:34:19] platforms doing different things today we may have a few platforms which are doing some significant
[00:34:26] work but clearly in a couple of years you'll have lots many more platforms one needs to understand
[00:34:32] what one is getting into, yes, but that happens through consistent information. And I strongly believe,
[00:34:39] where we are today, a lot of the onus lies on the platforms, more than the user. It's the job of
[00:34:46] the platform to inform the user that this is what we do this is what we don't do this is the kind
[00:34:54] of help we can do and this is the limitation of our advice and this is the window within which
[00:35:01] we are working, right? So when Dr. Megha says that they're working in, let's say, if the spectrum was
[00:35:07] from 0 to 10, they're working from, let's say, 3 to 6, but users assume they're working
[00:35:12] from 0 to 10, then somebody needs to tell them: hey, we are only for 3 to 6; if you're 0 to 3, don't come
[00:35:18] here; if you're 6-plus, we are going to make sure you go to someone else, or you first go to someone else,
[00:35:23] if you happen to come to us. So that onus lies on the platform. I don't think we
[00:35:29] are in a position to generalize this, because the different platforms, and the way
[00:35:35] they're doing AI, are at very different points of this evolution, so we can't have a general
[00:35:40] rule right now. Dr. Gupta, I'd love for you to add on this, but I also have a question to ask:
[00:35:48] when Dr. Padi speaks about the responsibility of the platform and the user so of course it's
[00:35:55] the owners of the platform to tell the user what it can and cannot do but um
[00:36:02] when we talk about a platform like this that you know has access to data about health which is
[00:36:11] obviously sensitive, is the onus also on the platform to secure that data? Because just yesterday
[00:36:19] we had this massive breach of, you know, the COVID vaccination portal, that is CoWIN, so is that
[00:36:27] also something the user will have to be concerned about or are these platforms something that users
[00:36:33] can trust absolutely I mean users should definitely be concerned about their privacy
[00:36:41] the privacy of their health data and platforms have to be ethical and responsible to take care
[00:36:50] of that data, secure that data. And from the user's side, you know, sure, every platform will have
[00:36:59] a privacy policy and terms of service but it's such a long big document like to be honest right how
[00:37:06] many of us read those right we just click on I agree and we move on right but uh so it's not
[00:37:15] just enough to say things in your privacy policy you also have to make sure that while the user is
[00:37:23] using your platform um you are using their data in a way which does not breach their privacy
[00:37:31] which does what you have promised them: that you are not collecting any private information, any
[00:37:37] personal information, from what they share on this platform. And, I mean, there should
[00:37:45] be appropriate links within the app so that the user can go and check what kind of data is being
[00:37:53] collected, what the policies of the platform are on what they share, who can read it, who
[00:38:01] can access it. So for us, for Wysa as an AI bot, apart from having all these policies in our
[00:38:11] privacy policy or under the FAQ section on our website, right, even during the
[00:38:17] conversation a user might ask such questions that you know who is reading my conversations who
[00:38:23] is reading my messages what do you do with my data and we have our models trained on detecting
[00:38:29] such questions and uh responding appropriately to them um and uh in today's world especially in
[00:38:38] the healthcare domain, if you don't comply with these policies, right, there's thankfully a
[00:38:46] lot of compliance and a lot of regulations coming up; people are becoming aware of how important
[00:38:51] it is to secure such data. If you don't comply with those regulations, you know, you would soon be
[00:38:58] out of business you won't be able to continue and users are becoming aware that their data needs
[00:39:05] to be protected, right. So whenever they go to a platform,
[00:39:14] they look for confirmation of the fact that their data is not being used, you know, without
[00:39:22] their permission so yeah it goes both ways but uh uh yes i would say that it is definitely the
[00:39:31] platform's responsibility, because you cannot assume that the user would know about this or would
[00:39:35] check this, so the platform should definitely take care of that as well. So, obviously, data privacy is
[00:39:42] one, but Dr. Parikh, what would you say are some other ethical concerns with the use of AI?
[00:39:50] No i think overuse would be an ethical concern to me right oversell would be an ethical concern i think
[00:39:56] um staying within the domain of what your particular AI platform is able to do and standing by it
[00:40:05] and within that framework i think is very important because the risk always is promising more
[00:40:11] trying to do more than what is possible to do and that's where a concern is i do feel that
[00:40:19] basic sensitivity towards how we react. Mental health has largely been stigmatized,
[00:40:26] discriminated against for decades; this is not a new phenomenon,
[00:40:32] and the kind of responsibility that anyone has whether you are a clinician working in a physical
[00:40:41] space, or a clinician or mental health professional in the digital space, or an AI platform, we all have a
[00:40:50] responsibility that the basic respect, dignity, privacy, confidentiality, hope needs to be well taken care
[00:40:59] of, because if you make a mistake at one level with one individual, that's a Jenga effect:
[00:41:08] this individual may never come back people around this individual may also feel
[00:41:15] this is not working, and again the whole cycle of discrimination and stigma starts
[00:41:20] turning. That's what is important. And I completely agree that there will be more
[00:41:27] and more guidelines there will be more and more uh aspects that people will have to
[00:41:33] comply with and that's how it should be i just feel um it's good to know that we are trying
[00:41:43] to reach for solutions i feel that the humbling aspect of in spite of all that we've been doing
[00:41:53] for such a long time we've probably made an impact of a droplet in an ocean and the ocean is
[00:42:02] where the sheer problem of mental health lies so until we continue to do what we are doing
[00:42:09] it won't work and that's why we'll have setbacks we'll have problems if our purpose is on track
[00:42:18] i always believed if your values are on track and if you work with basic sincerity
[00:42:25] um economic aspects will follow i mean as long as you don't follow the numbers and let values be
[00:42:34] compromised it's all fine you let your values guide you you get your numbers that's the way
[00:42:40] i look at it and that's the only way uh to ensure that the science of mental health which can make
[00:42:47] absolutely life-altering experiences for people, and help out hundreds and thousands of people across
[00:42:54] the globe who've been silent sufferers, themselves and their families. It's a collective
[00:43:01] responsibility to ensure that we do something and uh equity and equality are not just words
[00:43:07] the disparity is reality right we need to beat the disparity we need to beat the stigma
[00:43:13] and let people get help to the best that they can wherever they are in the quickest possible
[00:43:21] manner the most affordable manner where access is also an integral part of it and that's the way
[00:43:27] to look at it. That's very true, sir; you've made a lot of important points, which give me a lot of
[00:43:34] questions to ask you actually so one is that you're uh you've mentioned this twice throughout that um
[00:43:41] these platforms should not oversell or not over promise what they cannot offer
[00:43:47] so have you personally seen any such examples where apps or, you know,
[00:43:53] AI tools have been promising more than they can deliver? I mean, why just talk about AI,
[00:43:59] right? Look around yourselves. Look at an advertisement of a cosmetic also; lines are
[00:44:06] being crossed day in and day out all around us. And that's why I said, if you are doing it for the
[00:44:13] right reason in the right manner then it's not an issue human errors are human but negligence deliberately
[00:44:23] misrepresenting misleading that's unacceptable and that's why there are guidelines that i'm happy to
[00:44:30] see that there are evolving aspects, norms to comply with, and there'll be only more and more norms, which
[00:44:39] should be the case i mean everything should have a peer review mechanism everything needs to have a
[00:44:45] self-regulatory body, and there needs to be an external regulatory body, because if there is a problem
[00:44:50] then it needs to be addressed; that's how you have it for medical practitioners as well. Rather than
[00:44:55] looking at who did what wrong, I always believe that doing one's own work right has to be the
[00:45:02] approach in life. That's actually very true, sir, and I'd like to go back to the previous answer where
[00:45:10] you had said that, um, you talked about medication, and I wanted to ask Dr. Gupta, can these AI
[00:45:19] tools actually prescribe medication to any clients or customers, or any users, actually, as you
[00:45:25] call them for your apps, um, because a lot of these... That's an unfair question to ask, Megha. No,
[00:45:32] they can't, because there are guidelines on who prescribes and who doesn't, and the only people who can
[00:45:37] prescribe medications are registered medical practitioners; non-medical practitioners cannot prescribe
[00:45:44] medications. Yeah, but Garima, the way AI could help there: AI cannot prescribe medication, but,
[00:45:54] you know in many use cases AI is being used to just collect a patient's uh stats physiological data
[00:46:02] or whatever symptom uh symptomatic data and uh being able to triage the user right being able
[00:46:10] to identify patterns and predict that okay this user might be uh might need to be seen by the doctor
[00:46:18] right so it can take away some of the burden of the clinician but no it cannot uh cannot
[00:46:26] diagnose cannot prescribe medication ultimately there has to be a professional in the loop human
[00:46:33] professional in the loop AI can say okay this patient probably needs attention doctor can you please
[00:46:39] look at this file right look at this patient they might need more attention than others and
[00:46:46] then the doctor could probably prioritize their treatment but ultimately it has to be the human
[00:46:54] professional who has to do that and uh what about therapy can AI provide full-fledged therapy uh
[00:47:01] to this moment in time this is again a question which is a difficult one to answer
[00:47:06] uh because therapy is not a generic umbrella right there are various kinds of therapies
[00:47:15] so to say that can AI provide for therapy right now again is probably not the fairest of the questions
[00:47:21] because we are now trying to enter a domain where "we don't know" is the answer right now.
[00:47:29] Right now we have some evidence that AI is making some contribution, and we need to build on it,
[00:47:37] see where it takes us and rather than trying to over achieve we shouldn't lose out on what we are
[00:47:45] in a position to achieve please understand the importance of that and that would be to increase access
[00:47:51] increase access give basic intervention which may be significant for a lot of people struggling with
[00:48:01] issues and to be able to guide them to the earliest intervention to the next level even if that much
[00:48:10] we are able to achieve consistently for a period of time, I think we would have done a remarkable job.
[00:48:16] We don't need to jump 20-30 steps here; this particular step needs to be consolidated before
[00:48:23] we want to do the next big thing right both of you also mentioned that while right now AI cannot
[00:48:32] provide therapy to people um this is something that we can build on in the future so i actually wanted
[00:48:39] to ask you whether AI applications can be modeled with some sense of empathy or you know sensitivity
[00:48:49] when it comes to marginalized people for instance if i am a religious minority or if i am a gender
[00:48:55] or sexual minority i would go to a psychiatrist or the psychologist who would understand that
[00:49:01] i might also have struggles because of my identity is this something that AI can also you know
[00:49:08] help us with you know before mega answers on AI let me put a few clarifications here
[00:49:15] right. If your statement is that I would like to go to a psychiatrist or psychologist who would
[00:49:22] understand that I am from, let's say, a marginalized or struggling background,
[00:49:27] that "if" does not exist. If that "if" exists, you're talking about an incompetent professional,
[00:49:35] simple as that. I am not going to say that I am affirmative towards this community or
[00:49:43] affirmative towards that community; I'm affirmative per se. I think, I think where the question is also
[00:49:49] coming from is because it's a it's an issue that has been flagged with AI before is that
[00:49:55] it is based on pre-existing data that in itself might not be very equitable, right?
[00:50:04] there is a sort of inequality in how this data is collected who it is collected from
[00:50:09] and in in this process there are certain communities that are marginalized and therefore
[00:50:15] we might not even have the data to be sensitive towards them because this is an issue that has
[00:50:20] been flagged with AI, is why we're asking, Dr. Gupta, how do we plan on making our AI models more robust
[00:50:29] in the absence of such data sure so uh to answer part of the question about can AI be empathetic
[00:50:37] absolutely. Wysa is very, very empathetic; it has been built into the design of Wysa that it is a
[00:50:43] friendly empathetic non-judgmental chatbot right so irrespective of what issue you're coming in with
[00:50:52] what is your background uh you'll get the same empathetic responses for for your problems right so
[00:51:00] and uh and i'm i'll be talking uh mainly about mental health sticking to mental health care here
[00:51:09] that a lot of therapy you know doesn't depend on uh what exactly happened what situation you are in
[00:51:17] it is more more to do at least for cognitive behavioral therapy or CBT uh it has to do a lot with
[00:51:24] your thoughts and feelings and actions right those are like the that's the triad of CBT so um
[00:51:34] i might be feeling depressed there could be a million reasons for different users right for feeling
[00:51:39] depressed, but ultimately what the bot needs to do is offer you tools and exercises from therapies to know
[00:51:47] how you are feeling uh what made you feel that way and can can you act on it to change that thought
[00:51:56] so that you feel better right so so that takes care of one part of the question that uh often
[00:52:05] this kind of detailed background information may not be needed for therapy right empathy
[00:52:12] has to be offered to everyone irrespective of their background however i do agree that
[00:52:18] it's possible that the AI model is trained on certain kinds of situations, so it's not
[00:52:25] trained to detect some of the challenges that might be cultural that might uh you know
[00:52:32] be very specific to the individual, in which case, even though it's being empathetic, it's not
[00:52:39] able to address them, and the user may feel unheard or not understood well enough, right. So again there are two separate
[00:52:49] problems again the the kind of thing garima you are asking about it would require you to collect
[00:52:57] personal information about the user um which frankly speaking uh i don't think is needed in fact
[00:53:06] you know, a lot of our users tell us that they feel very uninhibited when they talk
[00:53:15] to Wysa, precisely because Wysa doesn't know who they are, right, and they don't feel judged;
[00:53:23] there is no human sitting behind that screen and their chats are completely anonymous and
[00:53:30] they are able to express even negative thoughts dark thoughts right whatever they want to say
[00:53:36] they are free to express those, and Wysa will still be empathetic. So yes, and
[00:53:43] actually i think the issue that you are referring to is the issue of bias in training data and
[00:53:50] uh AI models which is um i agree i mean it is it is definitely an issue it is a matter of concern
[00:54:01] particularly. For Wysa, though, since we don't go into, I mean, our responses, or
[00:54:10] our way to approach CBT for a user, doesn't change according to their background, maybe it
[00:54:17] doesn't affect us so much. But definitely bias is an issue, and an AI model is only as knowledgeable
[00:54:25] or as good as its training data right so yes if the training data has been collected only from say
[00:54:33] users from the us right um then uh uh it is possible that the kind of problems they are talking about
[00:54:42] are very very different from the kind of problems say a marginalized user from India would be
[00:54:47] talking about. However, the base assumption remains, at least in mental health therapy,
[00:54:54] that the emotions that a human experiences still remain the same across the globe, right. So,
[00:55:03] and since cbt is all about the user taking action themselves to improve their mental health
[00:55:13] even when they talk about their thoughts which might be about a very specific situation
[00:55:18] the bot leaves it up to them; the bot guides them to reframe their negative thoughts,
[00:55:23] right, to positive ones. The bot does not need to know what exactly happened.
[00:55:29] uh it is definitely possible that you know people from different parts of the world or different
[00:55:35] strata of society express their thoughts about the same thing very differently,
[00:55:42] even across age groups, right. So a young person might express their thoughts very differently,
[00:55:49] use a very different language right it all boils down to the natural language here for a chat bot
[00:55:55] so the language used can be very different. So yes, that is one aspect that the AI models
[00:56:01] have to be cognizant of and take care of: if they know their target user base,
[00:56:10] then the models should take care of addressing
[00:56:18] the language of their target users uniformly. So I'm thinking that the more
[00:56:25] people interact with this AI chatbot, it's constantly improving itself, and
[00:56:31] the more interactions it has, it is bound to get better? Yes, yes, but
[00:56:36] then again since it's a healthcare space you don't want to start off on a very weak base right so
[00:56:44] you have to be prepared well you have to expect what the user might say and build your AI models robust
[00:56:51] enough for what you may get, but yes, it is of course a continuous process of improvement
[00:56:59] based on user data but you have to start with a strong foundation as well uh yeah so
[00:57:06] uh you know when it comes to mental health a lot of times it's said that a doctor can only help
[00:57:13] as much as a patient allows the doctor to help them. So I was just wondering,
[00:57:19] if, say, person A goes to a psychiatrist or a psychologist in person and expresses what is
[00:57:26] stressing them out or what their troubles are the doctor might convince them to you know join
[00:57:33] another session come back um and you know talk about their problems again is there a mechanism
[00:57:40] for the chatbot to do that because what if i log in on the chat bot i try to talk about my problems
[00:57:48] and i see that you know the first two messages are not helping me and i just log off
[00:57:54] so is there a mechanism that the chatbot can deploy to help someone who actually needs help
[00:58:01] One thing here I must add: this is not specific to mental health; this
[00:58:07] applies to healthcare per se. You may decide to go to an endocrinologist who prescribes you
[00:58:14] some lifestyle correction and medication for diabetes. If you decide not to follow them,
[00:58:18] and if you are told to go off, let's say, sugary aerated drinks but you continue to have them,
[00:58:23] then if you visit the doctor, the doctor may keep reminding you, but it comes
[00:58:27] down to you. But to say that you can be helped only as much as you allow, as if that were specific to mental health,
[00:58:32] is actually not true. Mental health, on the contrary, has illnesses where the individual may not have
[00:58:37] insight, and it's a part of the treatment to help the individual develop insight as well. So just
[00:58:42] look at it in that light. The other aspect is, this is not completely correct: if you were to go
[00:58:49] on to a let's say a physical space and you've gone to a psychiatrist or a psychologist and you
[00:58:54] decided not to go back to them then they are supposed to respect your confidentiality
[00:58:59] and privacy needs because what if you decided to go to somebody else and if they were to give
[00:59:05] you a call and say, hey, you were supposed to come for a review, you must come, and this individual might
[00:59:09] have started going to someone else or doesn't want to go right now and want to do it later on
[00:59:13] you are crossing a line here right so if somebody decides to leave a chat and if you want the
[00:59:21] chatbot to now start chasing the person, hey, complete the chat, complete the chat,
[00:59:25] we're again crossing lines here
[00:59:33] so
[00:59:54] yeah, so I would not speak to that aspect of the question, but
[01:00:00] I'll mainly speak about the chatbot aspect that a lot of users do come to us they are skeptical
[01:00:07] about, you know, an AI being able to help them. So often they would search on the Play Store,
[01:00:14] they'll find this mental health app they'll download install it out of curiosity and they will
[01:00:20] first test the AI right they will ask questions about general knowledge right because an AI is
[01:00:29] supposed to be smart right so AI is supposed to know everything so they will test the waters first
[01:00:36] and slowly the barriers come down and they realize oh actually this AI is understanding me it's
[01:00:45] giving me pretty natural human-like answers and then they then they start opening up in reality
[01:00:52] they don't open up from like message one right they also want to understand who they are talking
[01:00:59] to better before they open up. And I think there is no distinction between AI versus a human
[01:01:08] professional in that respect that even if a user doesn't feel comfortable with the human
[01:01:15] professional right they will not open up so it's about establishing that trust establishing
[01:01:22] that level of comfort, which is needed with both AI as well as a human professional.
[01:01:28] yeah, I think that would be a good note to end this podcast on, and Dr. Parikh,
[01:01:35] Dr. Gupta, if you have any closing comments, please feel free to take the mic.
[01:01:43] I'd just like to say that I think it's very important to build awareness around mental health
[01:01:48] which is thankfully happening and anyone who is working on mental health or anyone who is
[01:01:56] struggling with mental health we should together try to remove the stigma attached to it
[01:02:03] and make it more accessible and affordable for everyone. That's it. I think the key is to accept
[01:02:13] that we evolve, science evolves; what we have right now, we should maximize to the best that we can,
[01:02:23] learn from it, grow with it, not be in a rush, not be in a hurry, don't cross a line; promise what you can,
[01:02:30] don't promise what you can't. Access to mental health needs to be seen as a fundamental right,
[01:02:38] like other fundamental rights of our life because when you struggle with your mental health
[01:02:44] everything else becomes secondary not just for you but for your loved ones
[01:02:49] in spite of what we perceive as a lot of awareness especially all the conversations on social media
[01:02:56] truth is that globally fewer people seek help than those who don't, because of stigma.
[01:03:10] that's the truth then there is access which is a concern disparity of availability of experts is a
[01:03:18] concern cost is a concern there are so many aspects if we can bridge some of these whether it's
[01:03:27] telehealth, whether it's evolving AI, it will all help us create the impact that future
[01:03:36] generations can benefit from because mental health concerns are real mental illness is real
[01:03:44] silent struggles not getting help is real and it needs to be something that should be a priority for all of us
[01:03:54] thank you so much for your time, Dr. Parikh and Dr. Gupta, it was lovely speaking to you.
[01:03:59] I'm pretty sure a lot of my very very basic doubts have been cleared in this conversation
[01:04:05] and I have realized a lot of points where I had made some assumptions which were not true and which
[01:04:12] I'm guessing a lot of our listeners might also have. Garima, over to you. Yeah, I totally agree with
[01:04:19] Anjali a lot of my doubts have also been cleared where I had been unintentionally clubbing AI
[01:04:27] with telemental health and all of these things. These are minute details, but very basic things that
[01:04:32] I had previously missed. But thank you so much for taking out the time, Dr. Parikh and Dr. Gupta,
[01:04:39] and thank you so much for being on this podcast and helping us out today thank you thanks for having me
[01:04:46] okay so Garima how do you feel about this conversation and what has changed pre and post
[01:04:52] this one hour um so for me I think a lot of my doubts have been cleared because
[01:04:58] until now I had been thinking of AI and digital tools as one umbrella term, and a lot of the logistics that come
[01:05:08] with combining AI with mental health, about all of that quite a few of my doubts got cleared, so that's
[01:05:15] a good thing, I think. But have your concerns related to AI and its use in mental health, have they
[01:05:24] stayed the same? Um, a little bit, because there is still a lack of guidelines, and yet again, the
[01:05:32] data sets that exist might come from, like, a biased place. So there are concerns about the
[01:05:39] privacy of our data, about how much access it can actually grant; those are some of the concerns.
[01:05:48] but what do you think about it?
[01:05:51] Until a strict policy comes into place, there aren't very strict guidelines that
[01:05:59] these apps have to abide by. Until that happens, obviously it's not like the apps will stop
[01:06:05] doing what they do, just that the consequences aren't as big as they should be. But I feel like, as a
[01:06:14] primary exposure of people struggling with mental health AI can prove to be a very very
[01:06:23] accessible like we spoke about a very very accessible and affordable medium to know just the
[01:06:29] very basics but that is what users should know that it is as basic as that and it is not an
[01:06:38] alternative for a human psychologist at least with the technology that we have currently
[01:06:43] of course as Dr. Parikh and Dr. Gupta also said, right now what we have is the opportunity to
[01:06:51] build on existing infrastructure with the help of AI and not suddenly transform the world of
[01:06:58] mental health through AI even this opportunity is a big deal in itself because
[01:07:02] it can help many resources open up for people, but with this also the concern is that
[01:07:10] these AI tools will be available, there will be access, only to those who already have
[01:07:17] a phone or laptop or some amount of privilege. To take this thing, to take it to everyone,
[01:07:25] that is an area where there is a gap, but with the way AI is gradually progressing,
[01:07:33] that is an area that should be covered, hopefully. Like we've already discussed, it still
[01:07:40] has a lot of stigma attached to it, so even if we do put these apps in the hands of these people,
[01:07:47] we still need to take very concrete efforts in convincing them that this is something that
[01:07:53] is helpful for you, that you should be seeking mental health care if you need it.
[01:07:59] AI can play a big role in that also, like if somebody is unaware of what words like anxiety
[01:08:05] and depression even mean, and if these AI chatbots can explain it to them like a human psychologist would,
[01:08:12] then it can help in a very different space.
[01:08:17] you know the AI chat bots can actually work across different languages different dialects and help
[01:08:24] users in the way that they understand. A very basic breakdown of jargonized terms,
[01:08:32] so that would also be a huge step, because you can't just tell someone to go to their nearest
[01:08:40] psychologist or psychiatrist as a trusted professional. Until then, what you can tell them,
[01:08:47] that is a big thing in today's time. But would you go to an AI chatbot to seek mental health care?
[01:08:55] maybe out of curiosity but if I would really need help at this point of time I think maybe human
[01:09:03] intervention would be better so here's the thing where I think that I would be using
[01:09:09] an AI chat bot is when I feel like going to a psychologist is a bit too much
[01:09:15] sometimes because that is still what a lot of people do feel and even I sometimes feel that
[01:09:20] do I really need to go to a psychologist for this, is it even that big, is a feeling that I get
[01:09:25] at times, and in such cases, if I can go to a chatbot which can maybe tell me that, listen, you need
[01:09:32] to go and see a psychologist, or, okay, this is what you're feeling, maybe it is not as severe
[01:09:38] as you think it is, and these are the things that you can do, then maybe I think
[01:09:43] that is one place where I would use an AI chatbot. That makes sense, but that also sounds like
[01:09:55] exactly what I do, actually, with my personal problems. But if I'm telling the chatbot
[01:10:00] that, and it says, listen, you're just gaslighting yourself, you should seek mental health advice, then that would be
[01:10:06] a step in the right direction, I would say, for me personally.
[01:10:15] Okay, so I think that was an interesting chat; I think we can conclude it here. Thank you so
[01:10:20] much for coming on this podcast armed with research and questions and taking Pratik's place
[01:10:27] for this podcast. Yes I am sure it was not an easy task. Sorry Pratik I hope I lived up to
[01:10:34] your expectations but this really was an informative chat and it was fun too so yeah I'm glad I joined
[01:10:42] thank you for letting me. Yes you are always welcome on the big story all right guys thank you
[01:10:48] for listening everyone who tuned in I hope you learned what we hoped you would please check out
[01:10:54] all the episodes of the big story and especially the first one of the series where we really
[01:10:59] get deep into AI policy and what should come before the development of more and more AI apps and
[01:11:07] what are the concerns that we were really talking about, and how bad it can be. So I really hope
[01:11:17] that you tune into that podcast and keep listening to us keep listening to our other shows and keep
[01:11:23] giving us love and support. Thank you so much, bye bye. Can I add something? Please do. If you learned
[01:11:31] nothing else from this podcast, I hope you learned this: that we can solve a lot of questions
[01:11:38] through AI, or be very close to it, but never stop asking questions about what you would like to see.
[01:11:47] The Big Story is a Quint original podcast, executive produced by Shari Valya and Zatukapod. This
[01:11:55] episode was produced and edited by Anjali Palaur and Pratik Lidu hosted by Anjali Palaur and
[01:12:01] Garima Sadwani the background music is from BMG production and a special thanks to our guests
[01:12:07] Dr. Samir Parikh and Dr. Megha Gupta.
[01:12:14] You were listening to the Quint's podcast.


