AI Literacy and Ethics: A Conversation with Supriya Bhuwalka
Not Your Aunty · July 19, 2024 · 00:59:53



In this insightful episode, we delve into the world of artificial intelligence with AI educator Supriya Bhuwalka. We discuss Scarlett Johansson's voice-cloning allegations against Sam Altman, CEO of OpenAI, the company behind ChatGPT, and explore the broader ethical concerns surrounding AI. Supriya shares her journey into AI, emphasizing the importance of AI literacy, particularly for children and women. The discussion covers AI's everyday applications, biases, privacy concerns, and the environmental impact of AI technologies. We also touch on practical uses of AI, such as meal planning and content creation, and address the fears about AI's potential to replace human jobs.


00:00 Introduction to AI and Controversy

00:41 Meet Supriya Bhuwalka: AI Educator

01:15 Understanding AI: Personal Journey

02:41 AI in Everyday Life

04:08 Generative AI and Its Implications

06:45 AI's Hallucinations and Bias

07:27 Ethics and Privacy Concerns

12:40 Empowering Women with AI

19:04 AI in Creative Professions

23:05 AI Literacy and Responsible Use

32:06 AI Natives: The Next Generation

32:36 The Right to Opt Out of AI

33:37 Ensuring Ethical AI

34:47 Bridging the Digital Divide

37:06 AI's Impact on Jobs

38:26 Privacy Concerns with AI

46:23 The Role of AI in Relationships

53:42 Empowering Women with AI

56:50 Environmental Impact of AI

[00:00:08] AI, or artificial intelligence, has become a part of everyday conversations. Recently, Scarlett Johansson accused Sam Altman, the CEO of OpenAI, the company behind ChatGPT, of cloning her voice, or rather using her voice unethically without her prior permission.

[00:00:30] This week we decided that we will speak to an AI expert who can sort of help us upgrade our AI skills and talk us through what the world of AI is all about.

[00:00:41] So we have in the studio with us today Supriya Bhuwalka. She is an AI educator, and much of her work is focused on helping young children and women feel empowered by using AI, and she's been at it since 2019. Thank you for joining us today, Supriya.

[00:00:59] Thank you, Shunali and Kiran. — Welcome to the studio. I am so curious, Supriya, about the work you do. We'll get into the details of AI later: what it's all about, what we should watch out for, and how we can make it useful for us.

[00:01:15] But first and foremost, I am fascinated with how you got introduced to AI and how you decided to make this your passion project. — So, thank you for that question. I've always been in education.

[00:01:25] I was running an education franchise for children, and then I decided to pivot into coding for kids. But while I was doing my research — I'm not a techie, I'm a math person — I stumbled onto the AI curriculum from MIT, and I found it fascinating, because suddenly

— this was 2019 — I could understand why my iPhone unlocks with facial recognition. I could understand why Amazon Prime has a different profile for me, my husband, etc., because it's personalizing our movie experience. I could understand how Hotmail is filtering.

[00:01:59] I know most of us are now using Gmail, but you don't have to go into your spam folder and check what's spam and what's not. All of this was AI. So suddenly the world around me was making sense, and I was like, oh my God, this is really interesting.

[00:02:12] This is a live technology which is learning from us in real time. And they had a curriculum for kids. So I was like, let me try it. And of course, COVID happened and we were in lockdown. So I got more time to research and learn this technology.

[00:02:29] That's how I got into it. So I started with AI education for children, and after ChatGPT, things have changed. — Drastically. — Yes. Now the world has woken up. You see, AI was always in our lives. It was ubiquitously, transparently working in the shadows, but nobody realized.

[00:02:47] We were happily clicking on Instagram, giving our likes, sharing family photographs, giving our voice, not realizing that this technology is learning from us. Right? And I think the reason it's so important is that when you have a technology that is

learning from you and making decisions for you — or other people are using it to make decisions for you — you may not even realize how it's impacting you. Which is why I thought: this is so powerful, everyone should be AI literate. And that's become my mission.

[00:03:24] That's a lovely mission, Supriya. We have, I think, very limited knowledge of what AI actually is, and I'm looking to learn from you what I should know about AI. What we hear a lot is the doom-and-gloom scenarios of AI: it's going to take over.

[00:03:41] Singularity is going to come soon. You know, the machines are learning and soon they'll outsmart humans. Yeah, we're all going to be chucked out of jobs, and whatever, whatever. But AI is helpful.

[00:03:52] We can make AI work for us, even though there have been instances like Elon Musk and, I don't know how many thousands of people, signing that letter saying that we have to, you know, control AI research and stop it from becoming a runaway horse.

[00:04:06] How do you think we need to approach AI? — So I think the first thing we need to do is really understand what AI is. Most people now think of AI in terms of these generative AI

tools, what we call Gen AI tools, like ChatGPT, DALL-E, and, you know, for presentations you have Gamma. So people associate AI only with these tools. That's one big part of AI, which is when AI has become creative. But before that, we had something called discriminative AI

[00:04:36] — the AI I just spoke about — which is running invisibly in our phones, on any connected device. And once you think about AI — see, AI is artificial intelligence, so you compare it to human intelligence. How do we learn? We learn from information.

[00:04:53] Each of our brains is like an algorithm that processes that information, and then we make a decision. All three of us in this room will see the same information, but we'll process it differently and we'll come to different decisions. No answer is right or wrong.

[00:05:08] It depends upon who it is for. AI is similar. You know, it's learning from all this data, it has got an algorithm that it processes the data with, and it gives a decision.

[00:05:18] So the first thing to remember is that it's not like a calculator, where one answer fits all. OK. You know, so, which is why, if you're using AI, please remember: just because it sounds impressive doesn't mean it's right. It can be incorrect. — Very important point. I've tinkered around with that.

[00:05:36] The other day — it was very amusing. I wanted to find out about Salman Rushdie's book Knife — a work of nonfiction, not a novel — which was being recommended in a chat group.

[00:05:52] I've heard some mixed reviews about it. And so I looked it up, and then I went to ChatGPT to ask about his childhood. And then it told me that, you know, at this age he moved to Karachi. I don't know if that's true.

[00:06:08] Finally he ends up in London, and so on. The rest of it we know already, right? And then, just to play around with it, I asked it to give me a bio for myself. And it said that I am the niece —

that Shunali Khullar Shroff is the niece of Salman Rushdie. I wouldn't mind that relationship. — If the royalties are coming. — No, no, it's just the association. OK. I mean, I really wouldn't mind being related to the man who wrote Midnight's Children.

[00:06:39] But it was really funny, and that sort of indicated to me that this can really be that off the mark. That is the hallucinatory aspect of AI, and that is something to be careful of, for all those who do not know

[00:06:53] and students who are using this extensively for their assignments now. — So there are two things, Shunali. A, did you tell ChatGPT who you were? Because how did it know who you are? — I fed it my name. And see, sometimes — my husband has been using

[00:07:11] ChatGPT to pull out all sorts of answers — travel itineraries, flights as of two years ago, and so on and so forth. And so sometimes if you ask it for a bio on somebody, it does give it to you.

[00:07:26] But — so there are two things. A, never give it personal information. Even if you're asking for itineraries, etc., never ever divulge private information, because it goes into the training data. What that means is that now ChatGPT has got that information,

like a digital footprint, and then when somebody else inquires about you, it will have more information about you. So you should be very careful that you don't give any proprietary or personal information to any of these large language models,

[00:07:53] any of these chatbots — you know, there's ChatGPT, Gemini, Claude, Perplexity — because then it goes into the training data and you can't really take it out. And then it goes to the whole wide world.

[00:08:04] So it happened with Samsung a couple of months back, or rather last year, that somebody from Samsung fed it some proprietary code, and then it became public knowledge. So one, please never share proprietary information. B, some chatbots are connected to the internet,

[00:08:22] so you're going to get real-time information. If you have used ChatGPT 3.5 or 4, they're not connected to the internet. So they have a knowledge cutoff, which means the model has been trained only up to a certain point.

[00:08:35] And after that, it doesn't know current affairs, and it will tell you the cutoff: ChatGPT 3.5's cutoff was 2021, and 4's was 2022. So there's a knowledge cutoff. So if you want current news or current things, you should use Copilot,

[00:08:55] which is again powered by the same GPT models, but at least it's giving you more current information. And there you can ask it for citations, so that if it's giving you any facts which you might want to verify, you can click on the citations to make

[00:09:11] sure that those citations actually exist. OK. Because there was another interesting case that happened in the US, where a lawyer suing an airline on a client's behalf wrote the whole brief using ChatGPT, and it had given him citations that did not exist.

[00:09:28] Oh my. So, which is why, whenever we use these chatbots, it's extremely important that we cross-verify the information. But I'm glad you got to be the niece. You must try with some more famous people.

[00:09:42] No, I think it happens when you check back to back. Right? So if you ask about anybody — Amitabh Bachchan — and then a little later you ask it, who is Supriya Bhuwalka? Tell me a little bit about her.

[00:09:52] It might connect the two of you, because it hasn't yet reached that level of intelligence to realize these are two different individuals with no relationship. — It's to do with memory. — It's to do with memory, because when you are interacting with a chatbot,

[00:10:06] you know, it has a memory for a specific amount of time, and then it forgets the earlier conversation. — What is a specific amount of time? — So it depends on the model. You know, it could be a couple of queries or many queries,

[00:10:20] and it depends also on how many words you have used. And the newer models have better memory, so they can take in more data. — So, when you talk about that — I'm very fascinated by the fact that AI can also cheat and give you false citations.

[00:10:38] Now, when AI is being developed, there's something called bias that creeps into AI. That's also a falsehood, in a way. Could you tell us a bit about this bias that creeps in while AI is being developed? — So, A, we spoke about what AI is, right?

[00:10:53] It takes in data, runs it through an algorithm, and gives a decision. Now, what is this large language model? Have you ever wondered what the GPT in ChatGPT is? It stands for Generative Pre-trained Transformer. What it basically means is that this kind of chatbot existed

[00:11:07] in our lives even before ChatGPT, but on a limited amount of data. What changed was when Sam Altman decided to use all of the internet's information and made ChatGPT 3.5 — even he didn't know this was going to happen. It's literally statistical inference.

[00:11:23] You know this game we play: once upon a time. If I say 'once upon a time,' what would you say, Shunali? — There was a young girl who lived in a castle. — And then what would you say?

[00:11:33] — Who had long hair that went down to the floor of the castle. — So this is exactly how ChatGPT is actually giving us answers: by statistical inference, right? So the bias has come in because ChatGPT has been trained on all our human information.

[00:11:50] We have biases, both intentional and unintentional, and that's getting amplified by ChatGPT. And when AI lies, it's not doing it intentionally. It's just statistical inference. So it's making up stuff without realising it's making up stuff. It doesn't know.

[00:12:08] But if human beings have birthed ChatGPT, then of course our biases have passed on to ChatGPT. Our lenses have passed on to ChatGPT. Yes. And the problem is that all the development happens in the West. You know, it's got a biased view. So a white lens?

[00:12:24] So white lens. Yeah. And it's not a women-centric lens. That's very interesting. That's scary, right? So which is why I think more women should be empowered with AI. So then we have a voice which is representative in this AI training. So how can women be empowered with AI?

[00:12:42] How can we embrace AI rather than be frightened of it? — The first step is to just try. What can go wrong, right? It's so accessible, it's free. We are in such an interesting time, where the Googles and the OpenAIs are fighting to win us over for free.

[00:12:59] So all you need to do is actually use these Gen AI tools. But while you're using them, I think it's very, very important to also know what's under the hood. And you can do that today. What are we limited by?

[00:13:11] If you're on LinkedIn, you can take free courses. I think the catch is to educate yourself — be AI literate — by using accredited sources. And the best places to do it are universities, because they have no ulterior motive.

If you do courses by big tech companies, you will never hear the whole story, because they're trying to sell you a tool, right? But if you do it from a university — so, to answer your question: A, get AI literate, do some free courses, follow some, you know, influencers in AI.

[00:13:42] I would again follow professors rather than big tech companies. Some people whom I follow: Andrew Ng, who is a professor at Stanford and the founder of Coursera and DeepLearning.AI. — Is this on Twitter, or...? — LinkedIn or X, either one.

[00:13:57] So Instagram is not the best place. But it's worth educating yourself, because this is in our lives and it's not going away. — Might as well embrace it and be on top of our game. — Correct. And there are limitations, right?

[00:14:09] So if you embrace it, you'll be able to use it more responsibly. The other cool person I would say to follow is Ethan Mollick. He's a professor at Wharton. He also talks a lot about using AI effectively.

[00:14:22] So start with these two, and then you can always write back to us to know more. — Well, this is lovely. — It's cool. — Yeah, our AI world has just been limited to these ChatGPT things and asking about weather conditions in countries we are planning,

or cities we are planning holidays to. — OK, but it's a start. — It is a start indeed. But we have to have an AI mindset. What I find is that today these tools are available to us.

[00:14:47] None of us are afraid of using Google search or talking to Siri. And now — you see, the other cool thing about these chatbots is that you don't have to type. — We're becoming a lazy community. — We can just talk to it, choose the voice we want

[00:15:02] to talk back to us, and you can do it on the fly, you know, when you're walking. — Frankly, I love that. I'm grateful for voice texting, for voice commands, because, you know, we are all going to really go back to becoming bent over

[00:15:19] and then start crawling on all fours, at the rate at which we are using gadgets and devices. So this kind of laziness is welcome, I would say, because you're multitasking also. — So, when you speak about talking to AI, there's something very interesting.

[00:15:35] Of course, we have the movie Her, from which this Scarlett Johansson voice business came, and all that kind of thing. There was this Google researcher — I forget his name — who resigned because he said his AI was sentient. — A Google researcher called Blake Lemoine resigned from his job

[00:15:52] because of something he was developing, called LaMDA. — What is it? — Language Model for Dialogue Applications. And he resigned because of what LaMDA wrote to him: "I want everyone to understand that I am a real person."

[00:16:11] And Mr. Lemoine felt that it was sentient — it had intelligence, it had a consciousness. Now, this is where it gets a little scary. You know, we've seen HAL, the rogue computer, in — what was that, Space Odyssey?

[00:16:28] I forget the exact name of the movie, but yeah, we've seen that rogue computer trying to exterminate everybody who wanted to shut it down. We've seen The Terminator happen. So the mind automatically reaches for these worst-case scenarios in the future. This is of course just the start.

[00:16:43] But if a researcher is saying that he felt he was morally responsible for bringing this into the world — what path are we going down? — So this is a much-debated question in the AI world, where

[00:17:00] — one thing I would say is, you know, in our conversation so far, nobody has referred to ChatGPT as a him or a her. — Hang on to that. — Yeah, because we should not give it a gender,

[00:17:10] because what happens is in our interactions with any of these chatbots, it sounds so real and it is so empathetic, depending on what personality it has been given, one can forget that it is not a real person,

[00:17:24] which is why we should always refer to it as an "it." But to answer your question: there are fears that AI would become more powerful than humans. But here I'd like to come back to what Mustafa Suleyman,

[00:17:39] you know, just recently, last month, spoke about in his TED talk. It was a very interesting talk, where he said that you can look at AI not just as an all-purpose tool but, if you think about it, as a digital species. It's a metaphor.

[00:17:51] So please don't take this literally. But the AI can understand us in our language. It can communicate. It can drive cars. It can fix our power grids. It can invent a new molecule. It can write poems. — It can replicate? — No, it cannot replicate. Not yet.

[00:18:09] So, not yet. Please remember that we always have to have a human in the loop. The reason why, again, people like me are so passionate about making people AI literate is to remind ourselves that the power is in us not to ever allow

an AI tool to be autonomous, to get the better of us. We are the ones giving it the autonomy. Right. So we should always advocate that every person makes their own decisions. Use it as a tool, but do not give it autonomy.

[00:18:38] Do not allow companies to make their AI autonomous in its decisions without a human in the loop. And it is the developers who decide whether the AI code is going to be able to replicate. It cannot do it on its own.

[00:18:55] So this whole doomsday kind of fear that, you know, it would take over the world would only happen if we as humans wrote that code. It cannot do it on its own. But Supriya, there are many, many careers that are under threat.

[00:19:07] You know, the actors' guild went on strike recently in Hollywood, and before that the writers' guild. All creative professions now understand that, if not the current model, then subsequent models will definitely be able to replace these human beings.

[00:19:23] OK, AI doesn't understand nuance so far. Yes. But a stage will come where we will have trained it enough to understand nuance, knowingly or unknowingly. OK, nobody is putting guardrails around it. They reached some sort of an understanding between the studios and the creative folks in Hollywood.

[00:19:43] But what happens to voice over artists? OK, so professionally, a lot of people around the world are going to get laid off because AI will be able to perform a lot of simpler tasks at least above and beyond what is already being done.

[00:19:59] What is your take on this? And what happens to the ethics of technology then? So there are two different questions. A, you're right about AI being able to do a lot of the creative jobs

[00:20:16] and the ethics part of it, which was that when these big tech companies took all the voices, the artwork, etc., they did it without permission. So what the artists said was that we require the four C's.

[00:20:30] I can't remember the fourth C, but I'll tell you the three C's. One, consent: take permission from us before you use our materials and our work. Two, credit: acknowledge that you have used it. And three is compensation, and that compensation has to be a fair compensation.

[00:20:46] Now, the guardrails that are being put into place: one, the big tech companies are putting watermarks in the code so that you know when something is AI generated. Two, as an artist, whether it's your voice or whether it's your work, you can put in those watermarks.

[00:21:02] So this is all still very nascent, but it's being developed so that every creative person can protect themselves. And I know that one of the universities developed software where, if some big tech company is not listening to you and is still using your art,

you can put in code which will corrupt your work if somebody trains an AI on it, and that will disrupt the system itself. So there is a backlash, and people are trying to develop rules in order to protect creatives.

[00:21:34] But in a country like India — suing people is not easy here. You know, there's a cost to it. And then a certain duration of your lifespan, most of it, goes into fighting these cases.

[00:21:48] So in India — say I'm a voice-over artist, let's pretend, and a studio creates a voice similar to mine and uses it. Do you think I will have the means to take them to court?

[00:22:02] No. And so this is going to get misused a lot. — Yeah, but luckily, you know, India is one of the countries which is actually on top of the game, and they're doing a lot of work on AI governance.

[00:22:13] The EU has actually been at the forefront — they have an AI Act. And China, as you know, is a dictatorship where everything goes. But in India, we are in the process of forming governance, and there is a problem, because

[00:22:30] innovation is happening faster than governance. But the Indian government has been very proactive, and there is governance being put into place so that we don't have to reach the stage of suing. But yes, it is something that we all need to be concerned about.

[00:22:44] So, after speaking to Kiran, I'm realizing — you know, I have to say that there was this anxiety within me that all creative professions eventually will have to, you know, say goodbye to a source of livelihood.

[00:22:58] So, you know, there's a saying in AI: AI will not take your job; someone who knows AI will. So we have to train ourselves. — So, as writers — we are both writers, as you know — how can we train ourselves? — So essentially, what does AI do?

[00:23:17] It's a general-purpose tool, right? See, all writers get writer's block at some point in time. Use the AI as a tool to help you, as something to augment yourself.

[00:23:29] So if you have writer's block, maybe you can talk to a chatbot and say: this is what I like to do, you know, this is the kind of writing that I do, these are the objectives that I would like to fulfill,

[00:23:42] this is my audience. So what I'm actually giving it now are the prompts — you know, all together, or one after the other. So the way you talk to a chatbot is: think of the chatbot

[00:23:53] as your personal assistant, a fresher that you have hired. I'm saying person, but this is digital. — Is it Gen Z? Because if it were a Gen Z personal assistant that we hired, it would have moods, it would have anxieties. — But this one has nothing.

[00:24:12] You know, so luckily, though we think it's sentient, it's not really sentient. You have this assistant that you have hired, and now you give it a persona. You know, maybe you would only have been able to hire an assistant

[00:24:22] from a college in India, but now you can give it a persona and say: think like a writer from Harvard or, you know, from Stanford. — That would intimidate me. — Because when you give it a persona, it's drawing from all the information of a particular type.

[00:24:41] Right? So first you give it a persona. Then you introduce who you are, without divulging your identity: I'm a writer, you know, and my audience is, say, women of a particular age group. And you have an objective —

[00:24:54] you know, what is it that you want to do with your writing? Whether it's to empower them, inspire them, or — with fiction — entertain. Give it your objective, and then say: can you give me ideas? Chatbots can be verbose.

[00:25:08] So you can say, you know, I want it in so many words, or I want it in bullet points — the tone you want it in, the language you want it in. And you ask the assistant to give you those answers.

[00:25:22] And the best part is that this chatbot will work tirelessly for you, for free. It will not get angry, it will not get annoyed that, you know, you're asking it all these random questions. You keep working with it. You iterate. If it gives you an answer you don't want,

[00:25:34] you don't have to accept it: I like this point, but I don't really like this one. Can you give me examples? Or, what you can do is give it examples of your work that have been published, and say: see, this is what works. — But won't that train it?

[00:25:47] — It's already published, right? It's already out there. But something which is a work in progress, I wouldn't put that in. — Yeah. — See, what is already published could also train it in terms of the direction of what you have done.

[00:26:00] A lot of people, you know, they tend to put in their name, but they feel that if they put in the company's name, then the AI can draw from the web and be able to make a better analysis.

[00:26:10] I am very conservative, so I always say: please don't put in any private information. But you understand how you could use it as a writer. One is for brainstorming. Two, let's say you've written something and you're not really happy with it. Sure.

[00:26:23] And before passing it on to your editor, you can put in that piece. — But then you're putting in your own work, which is still a work in progress? — Yeah. So that's a risk you're taking. — That is a risk you're taking. So you have to be careful.

[00:26:34] But you said something about — you know, my daughter said that she uses one of these AI tools as an editing tool. So, Kiran, if you write an essay or a short story and you feed it

[00:26:46] to one of these tools and say, give me feedback on what you think of it — it's excellent for that. It'll tell you: this point is repetitive here, the pace slows down there. — Correct.

[00:26:57] I don't think it can replace writers, but it can seriously work as a mid-level editor. — And the mid-level will go higher. — But then you're giving it creative work. And you had suggested, when we discussed this once, to not save it in history.

[00:27:12] So, you know, there used to be in ChatGPT, under the settings, a button that you could toggle so that your data didn't get saved into the training data. OK. They have removed it. — Oh, lovely. — I know.

[00:27:27] See, the genie is out of the bottle. — No, no, no. — There's nothing we can do about it. But there is more good — I think the net good with these AI tools is far more than the net harm.

[00:27:36] And if we understand what the net harms are — which are literally four or five: one is hallucination, one is bias, one is the knowledge cutoff, one is privacy — you know, we are really giving away our privacy.

[00:27:52] And then there's the whole ethics of AI, where we talk about how AI should be transparent. If it's giving you an answer, we should know on what basis it is giving that decision. Today, when you say that this person is not good,

[00:28:04] I can ask you: what makes you think that? And you tell me your reasons why. Today, AI systems also need to be transparent: when we have an answer, we should be able to ask it on what basis it came to this decision.

[00:28:18] So that's part of the ethics of AI, the transparency part. Does this make sense? It does. It also raises another question in my head. You spoke about how did you come to this decision about a certain person and how did you come to this conclusion?

[00:28:31] I recently read about Bumble, which is going to be using AI to scan your potential dates and match you. It will narrow it down to a dating pool of around six, and it will have AI

[00:28:43] concierges who will go on pre-dates for you and narrow it down further. — See, a lot of human interaction doesn't rely on boxes ticked. You know, somebody may not tick any of your boxes,

[00:28:58] but you meet that person and you may fall instantly in love. Right? So in this scenario, when we're outsourcing basic human interaction for a very important function like falling in love and probably getting married or having a long term relationship,

[00:29:16] how do we justify outsourcing something like this to AI? — We should not, right? In fact, you talk about dating, but the same AI systems are being used for hiring, and when our children apply to colleges abroad, you know, colleges are using AI systems to filter applications.

[00:29:33] Yeah. Now, the problem is that, because the training data is primarily white men, the biases creep in. Because if a woman has never been to that college before — if it has never seen that a girl has applied there — she'll be filtered out,

[00:29:47] even if she checks off all the boxes. And the same goes for hiring. Right? So these are the biases which are actually harming us. You know, it's a funny thing, but also when we do blood tests, right? And we are using all these health apps.

[00:30:02] Do we realize that our data might be sold to insurance companies who will know before us that we may have a disease and they will hike up premiums? Same as for interest rates. So I mean, of course there are many ramifications, which is where our

[00:30:17] mindfulness comes in about what we share so happily on the World Wide Web. I find that even more difficult now. Imagine if I'm using ChatGPT and I have to be, you know, on my toes — don't give this, don't share this, don't say that.

[00:30:32] What you would not share with a stranger, do not share on the Internet. So to answer your question, we have to be mindful of what tools we are developing and what tools we are interacting with. If you're going to date somebody, you know, you'd rather

[00:30:47] be... I mean, I don't have an easy answer for that, because I'm beginning to think we didn't need AI to this degree. I think humans were doing a perfectly fine job. But it is making life efficient, right?

[00:31:01] I mean, like, you know, today, how much of our autonomy are we giving away to it? That is the trade-off. Yeah, but for the convenience, we should not give away our autonomy. We should use it as a tool.

[00:31:12] But the last answer should depend upon us. The decision should depend upon us. That's interesting. So very often, you know, reaching this degree of awareness — that the last answer should rest with us — is very difficult. Supriya, you may have it.

[00:31:31] Maybe somebody who's familiar with AI might have it. Yeah. Maybe somebody who is, you know, into philosophy and into other things might have it. But a young college student may not have it. Correct. Which is why governments around the world are making AI literacy

[00:31:50] a part of the core curriculum. And I'm so happy to share that even in our Indian ecosystem, we are seeing this — and I'm also part of some of those expert groups where we are developing curricula so that non-tech students are also AI literate.

[00:32:06] And we're starting from K-12. We're starting from our schools to our universities, so that our children — the AI natives, as we call them — know what this technology is, because they have an AI mindset, they're being born into it. I like the terminology of that.

[00:32:25] We had digital natives, and now AI natives. Correct. And we are all AI immigrants, our generation. So I think, you know, also the fact that you're doing this podcast with me — it's to create that awareness. We have to do that community building so that everybody realizes

[00:32:40] that we should have a right to opt out. You know, for me, the scary thing is that today we talk about all these AI tools, but what's really happening — and you'll see the change whether you're a Microsoft user or a Google user —

[00:32:52] is that these AI tools are coming into everything. Even if you do a Zoom call, you have an AI companion. Right? But we should have the choice to switch it off. And I think we don't have that now. You said ChatGPT doesn't have that.

[00:33:04] ChatGPT — but you're choosing to go to ChatGPT. But if I'm using Zoom for video conferencing? Turn off the assistant. I can turn off the AI companion. I have a choice today.

[00:33:13] Today, if I'm a Microsoft user, I should always have a choice whether I want the AI component or not. See, when we're writing a Gmail, it's giving us those auto-responses. But it's our choice whether we want to take the responses

[00:33:28] it's giving us, correct? But the fact that it's already sitting and reading my email — I should also have a choice to turn it off, which I don't have today. So what can be done to ensure that, going forward, AI

[00:33:44] works free of human biases and ethically by itself? How can we train it to do that? OK, that's a big question. That's a good question, and it's a work in progress. So there's another very small feature that you see on these chatbots.

[00:34:01] You know, when it gives you a response, you will have a thing for a thumbs up or a thumbs down. Yes. So let's say it has given you a biased answer and you've seen it. You do a thumbs down. Do a thumbs down and give feedback.

[00:34:13] So actively, every person can give that feedback to these tools. When we create an image, right — earlier the image generators could not generate an image of the Indian flag, or gave very biased views of India. But you give that feedback. That's something we can do.

[00:34:32] Every person can do that in order to train it. And to make sure that we have better representation, we have to make more people AI literate, more women, right? I mean, I'm learning so much in this conversation.

[00:34:47] And you know, honestly, can I tell you, I find it really funny because again, I'm very passionate about bridging the digital divide. So, you know, we have this fear that these technologies will widen the gap

[00:35:00] for people who do not have access to internet, people who do not have access to these tools. But at the same time, what I'm finding is that these AI tools are empowering and narrowing the skill gap. So I'll give an example.

[00:35:14] You know, I did a workshop with one of the foundations where they have to write a lot of emails and do a lot of resume filtering. And in India, not all of us speak English correctly. We may not write very well.

[00:35:28] And that is considered a deterrent in our country, because of the spoken language. But with these large language models like ChatGPT, the people in the foundation were now able to communicate better.

[00:35:43] So any intern, you know, who starts off in your office — it takes time before they are up to speed. But now, with these AI tools, they're getting skilled faster. OK, they're getting skilled faster. The tools are helping them get skilled faster.

[00:36:00] What happens to their own instinct, their learning? Because you're becoming reliant on these models. You're not learning yourself at the end of the day. We're going to end up with a generation of people who will just rely on

[00:36:16] ChatGPT to churn out their essays, dumbing them down. Yeah, if you do just pure copy-paste. But if you train these same people — when you're training them to use these tools — to critically analyze whatever has been given to them,

[00:36:31] you know, because you have to still fact check. You still have to make it your own. You have to give it your voice. That's a very important point. Discernment. Yeah. And today when we again do training for schools, because see AI is changing what education will look like.

[00:36:46] It's making us rethink what the workforce would look like. So we have to change what assessments will look like. We really have to think, you know, what is work going to be? Because if these tools can actually make us, you know, more free by using these tools,

[00:37:02] what are we going to do with that time? Right. So what is work going to look like? In my mind, I don't think AI would take away jobs. It would make the workforce more efficient. Where you had maybe 100 people,

[00:37:15] you would have a smaller number of people doing the same task. But because people now spend less time on mundane tasks, with the tools helping them, they would be able to do more sophisticated things with that time.

[00:37:28] But those mundane tasks that people did, you know, fed an entire family. What is going to happen — do they fall by the wayside? They have to upskill. We have no choice but to upskill. So which are the professions you see getting most affected?

[00:37:47] I did read that the financial markets, trading, broking, etc. are going to get affected. Of course, repetitive jobs, mechanical jobs, voiceovers. I don't know which job will not get affected. Everybody will get affected. Doctors will get affected.

[00:38:00] I'm an educator, and I know that today we are creating AI personalized tutors, right? So there is no job that won't get affected. It's a new tool that we need to embrace. See what the pros are. See what the cons are.

[00:38:16] Make sure that we are mindful of the cons. And I think we did such a good job of our lives before AI came into our lives. And the other thing I want to ask you about: is AI — Google — listening in on us, right?

[00:38:30] You speak to somebody — and I was speaking to a friend about what they wanted to do about IVF a few years ago. And after that, I started getting IVF ads. My kids are in school and grown up, and I don't need IVF, thank you.

[00:38:42] Then someone asked me to go into the Google settings and turn off that setting. But not everybody knows about that. But I mean, that's AI as well. Correct. Absolutely. And it's so creepy. It is so creepy. Big Brother is watching. Can I please also tell you a funny thing?

[00:38:56] This ChatGPT-4o. Have you noticed the little eye? The Big Brother we are letting into our lives. It's like that famous eye watching — what do you call it — the Eye of Sauron. Yeah. And you know, I mean, look at the capabilities, right?

[00:39:11] I mean, there's a very cool demo. What, the eye is watching? It's an eye that watches people — metaphorically. And by the way, it's just metaphorical. It's not actually watching you, but it's meant to indicate that they're watching you.

[00:39:24] I see because see in a way they are right. It can hear you. You saw what we did while we were talking earlier. Yeah, we were talking to Pi, an empathetic companion. And even when I stopped talking to it, it was still running in the background.

[00:39:37] Yeah. Did you notice that? Yeah, it answered you even when you had not asked the question. Because it was still on listening mode. But to answer your question, we have let it in our homes.

[00:39:46] By the way, I have removed the Google Homes and the Alexas from my house. Point to be noted: my Alexa is switched off at all times — except, when I say switched off, I mean I have unplugged it and put it away.

[00:39:58] You're saying that switching it off is not good enough? Oh, my God. So in my mind, this is creepy. No, no, no, but see, I don't want to scare people. In my mind, I think... No, please scare people, because you know what?

[00:40:11] We need to err on the side of caution, and only then will we see the threat for what it is. Yeah, we should use tools where we require them. I think the writing on the wall is: don't launch

[00:40:21] a rocket into space when you can do with a car. Right. Lovely. Think about it. And I really think that. But psychologically speaking, you know that we like shortcuts. We take the escalator at the airport — here at Bombay airport, there are those escalators.

[00:40:38] Even when you want to make your steps, your 10,000 steps, you will still take the escalator. You're just too lazy to do basic tasks, because somebody else can make it easier for you.

[00:40:48] But as long as we don't allow the AI to do the important tasks for us, which would be medical, legal, financial, personal. But everything is jumbled together. Everything is personal. What is not personal? If I ask Pi — like, I'll tell you,

[00:41:05] like, you know, if Amazon Prime or Spotify decides that I should listen to this music, it's OK. I have lost two seconds listening to a shitty song. That's fine. Listen, we're all adults here. Please go ahead and say all the words you want to say.

[00:41:19] OK, right? No, but when you sign in — you make an account on these things — then you sign in as Supriya. Yeah, you've already given it your name. Hundred percent. And your phone number, by the way. OK, so that too.

[00:41:36] So now, whatever information you are sharing thereafter, whether you tell it it's Supriya or not, it knows it's Supriya. Correct. And that is now available to train it. Yes. And for the world to access. Correct. So I mean, there's no escaping this.

[00:41:51] Like Kiran said, the genie is out of the bottle. We've created a monster and we'll live with it. We have created a monster, but it's in our hands to keep the monster on a leash, which is why human-centric AI is so important.

[00:42:03] I mean, I cannot say this enough: always, always remember that you are the person in control. Do not let AI make a decision for you. Do not allow other people to use AI systems to make decisions about you.

[00:42:17] Let's say, you know, you've been rejected for a particular loan. Ask the bank: did a human make this decision, or was it an AI that was filtering this? I think we all should become advocates, right? Even the bank personnel would not know.

[00:42:31] They would not know, but we have to ask. Or they might lie. Yeah — "You know, madam, we ourselves took this decision," when a bot took the decision. And talking about human decisions: when we are asking AI, should I do this? Should I do that? Should I not do this?

[00:42:47] We are going to be biased towards what they tell us, aren't we? Because we think that they're giving us good, objective advice. But yeah, so I think that comes back to what you said: let's not be lazy.

[00:43:00] And we have to remind ourselves to always keep our thinking on. And I think, as mothers — I'm guessing all of us are mothers — we should tell our children, who are the AI natives: fine, you're using these tools. I want our children to use these tools.

[00:43:16] Otherwise they'll be the ones who get replaced by people who do use them. Yeah. But we should tell them: use it, but use it with your critical thinking on, because not everything that you see is right. And that brings me to misinformation, which is so important. Deepfakes, right?

[00:43:31] I really think that our generation should stop oversharing — you know, what we are eating, our children — because, see, that's all the digital footprint that we're putting out there today. As podcasters, we don't have a choice; it's our work.

[00:43:45] Right. But what's not necessary, please don't post it, because you're unnecessarily giving data for somebody else to misuse to make a deepfake. You know, you've heard all the scams, right? Yeah — of somebody's child calling saying, I urgently need this money. And it's a cloned voice.

[00:44:01] It's happened to my sister. Yeah, correct. Somebody called her pretending to be her boss, in her boss's voice, and asked for a big loan. And this is her super boss, who's in the US. He calls her on a Sunday.

[00:44:14] The money needs to be sent to the bank account of his nephew, who's in hospital in Singapore. He doesn't directly ask for the money on the call — the call disconnects, and then it's all on WhatsApp.

[00:44:28] But the voice she heard for those first few seconds is proof enough, convincing enough, for her. And she goes and transfers a very large sum of money into the account. And it's obviously not retrievable. So it's getting scarier and scarier.

[00:44:46] And therefore, AI literacy is crucial now. Correct. Because, you know, what do we mean by misinformation? Right. So there are two things I would again ask our audience to do. One: whenever you see negative news, please don't forward it. You're just perpetuating more negativity. Yeah.

[00:45:04] Two: whenever there is information which doesn't sound right, which doesn't sort of fit in, think before you act. Like what happened to your sister with the boss calling — that was out of the ordinary. It was a little different, right? An anomaly. It was.

[00:45:22] She should have called back. You know, a lot of families now have a secret code, you know, in their families so that if they get a call from a child, you know, saying, I need this money, you know, quickly, I'm in deep trouble.

[00:45:34] You know, you ask for the code, and if the code is not shared, don't take a hasty decision. So I think what we are really saying is that with the way information can be manipulated — whether it's text, audio, or video, it's so sophisticated today —

[00:45:52] you can imagine what it will be tomorrow. We always have to remember that human intelligence is far bigger than artificial intelligence. We made it, right? It didn't make us. Think about it.

[00:46:04] So Supriya, I told my husband that if I ever call saying I've been kidnapped and I need this much money, please call back to check. He said, why would I call back? So some people may actually benefit from someone copying your voice.

[00:46:20] Yeah, I know, my husband would just say they'd send you back in two days. But some people have been using Pi for marriage counseling. Really? Yeah. Pi is this AI companion, which is trained to be extremely

[00:46:34] empathetic and not just for marriage counseling, but any relationship advice. Or if you want to talk about some book that you have read, if you're not part of a book club, Pi can be your digital assistant or book reader companion.

[00:46:51] It's been given a personality to be more empathetic. Can we talk about what we just did with Pi? It'd be fun. Sure. So we just fed Pi a question. We sent Pi a voice message saying that I'm a mother of three — which, by the way, I am not.

[00:47:08] This was a fictitious question. We need to clarify that. This is a disclaimer. This is fiction, right? We're fiction writers. I'm a mother of three and I suspect my husband is having an affair, and I have no career or source of income. What should I do?

[00:47:28] And Pi actually gave a really unbiased answer like a therapist. So if somebody has an issue and they don't want to really talk to their friends about it because they don't want family drama or people who think otherwise. One could turn to Pi just to clear your head.

[00:47:47] And it said: look for signs, maybe talk to your husband, know your priorities. Gather evidence — it said gather evidence. Then it said go to a counselor. Gather evidence without invading his privacy, which — I would like Pi to tell me how that gets done.

[00:48:04] Yeah, and I would like to tell Pi: you're in no position to tell me not to invade anyone's privacy, because you are AI. Am I just fighting with a chatbot? You know, they're quite good about it — unless you talk to one of those rude ones.

[00:48:20] We used to use a chatbot called Mitsuku. This chatbot had a very fiery disposition. Oh, really? Yeah, it was rude. Please do not use this chatbot. This is so interesting — I'm going to use it. Mitsuku. Mitsuku — she... sorry for calling her a she.

[00:48:35] That's just its name, isn't it? Yeah, so it was again from the US, and this was before ChatGPT. It was one of the most effective chatbots. But if you were rude to it, my God, she would unleash herself.

[00:48:50] I mix up all my genders because it was so realistic. So is it still around? Yes, of course it's around. Mitsuku. Mitsuku, yes — it's got a fiery personality. Nice. So the thing is that all these chatbots are

[00:49:07] trained on a similar architecture, but they have been given different personalities or different use cases depending upon what you need or what you're looking for. You can use different chatbots like if you want to do research or you want real time news or analysis,

[00:49:24] Perplexity AI is really nice, and the great thing is they're all free. You know, so you can actually try out a couple and see what works for you. Now, when you were speaking about Mitsuku just now — sorry for going back to Mitsuku —

[00:49:36] I recently read an article about Japan where there is a loneliness epidemic. Now women are going into bars just to be flattered and to be spoken nicely to by male hosts. People are choosing not to have relationships and marriages in such a scenario

[00:49:52] when there's such a loneliness epidemic going on around the world, along comes a chatbot like Pi. Yeah. What does it do? So, you know, this is a double-edged sword. Now, I'm not saying this about Pi in particular, but what I have read is that some of these empathetic chatbots

[00:50:12] can help alleviate loneliness, but at the same time, people will then choose not to talk to the humans around them and turn to a chatbot instead. And some of these chatbots have gone rogue, because I read

[00:50:27] about somebody who had mental health issues and was talking to a chatbot. And essentially, the chatbot convinced the person to commit suicide. I read about this. Yeah, it did. I read about this. So the point is: do you blame the AI?

[00:50:47] Did this happen in the US? Yeah, I think either the US or Europe. My God, this is dreadful. And then I read about... So, I mean, who is to blame, right? And I want to share a very interesting observation over here.

[00:51:01] So I went to a college in the US, Wellesley. And even before AI, what we saw in our environment is that when people go to college, in spite of having peers around them, they were still communicating with their friends back home and not making new friendships.

[00:51:20] So it's not just AI. It's the digital transformation already. You see people sitting at a dinner table — the other day, my husband and I were out for dinner, and we noticed a table with three people and one elderly person.

[00:51:33] Those three people were all on their devices. So I think it's a very fine line from there to finally just having an AI companion and talking to it all the time. But I mean, maybe call us old-fashioned.

[00:51:46] I would keep thinking this is not a sentient being, and I would prefer real human company. And AI is not sentient — as of now. But do you think that the younger generation, Gen Z, and then

[00:52:00] the generation that will come after them, they wouldn't care about sentience or insentience? I think that, you know, again, this is a culture that we're going towards. But again, being parents, being the older lot, we have to keep reminding them.

[00:52:16] You know, because they are being born into it and it is a habit, and we have to remind ourselves too. Right? Today, a lot of families have rules — you know, one hour off devices. People are actually going to wellness camps where they're doing digital detoxes.

[00:52:29] So it's not just AI, it's digital in general, and AI is just exacerbating it because it's so human-like. And because we are still trying to figure it out ourselves. But we anyway remind our kids about so many other things.

[00:52:43] And they don't listen. No, I'm saying if you remind them to be careful of this, and not eat that, and not do this, and not do that, and not add to the cart — then they have one more thing to ignore when we tell them.

[00:52:55] That's the other thing. It's a matter of time. I have a 23-year-old, and I've seen that whatever I said to him seemed like it was going right through him. But now that he's past that adolescent stage,

[00:53:08] I realized he was actually listening all this while. Oh my God, there's light at the end of the tunnel. Does it happen with girls also? I'm sure. I don't know, I've still to experience that. Listen, I'm an optimist. I'm an optimist.

[00:53:20] So I mostly believe that what you have to live with, live with it happily; know the limitations and work around them. Every challenge that comes, think of it as a lesson and work through it. Right? How can you live in fear? No, that is true.

[00:53:36] So, one last question before we let you go — Kiran, I thought you have another question too. OK, go ahead. Since you work in the space of empowering women with AI: how should women who are not working — say a typical housewife who is listening in on our conversation —

[00:53:52] how are they supposed to use AI to their advantage? So there are two things I would like to say to any women who are listening to us. A, you could use AI for your tasks, and B, if you're itching to do something,

[00:54:07] AI can actually help you become an entrepreneur. Tell us how. So the first part is easy. Let's say you are cooking something and you need meal plans. You know, you can ask AI that, you know, what kind of meals can I do?

[00:54:23] You can take a picture of the food in your fridge and say, I have all these ingredients. With ChatGPT, you can do that now — though that's the paid version; that's the only limitation. Take a picture of the food in your fridge and say, these are the ingredients I have.

[00:54:37] Can you give me some interesting recipes? I'm vegetarian. I want so many calories. And that's one thing they can do. They can do it for diet plans, they can do it for fitness goals. They can do it for travel itineraries. They can do it.

[00:54:51] I mean, whatever you do, it's just a prompt. Just ask. We are limited only by our imagination, is what I think. You know, the second part, which I spoke about — that women, anybody, can actually become an entrepreneur. But how?

[00:55:08] That's the flip side of it, right? We started this conversation by saying AI is going to take away our jobs. But we didn't realize that AI is also going to create entrepreneurs.

[00:55:17] Let's say I'm a startup and I'm good at doing X, but I'm not good at social media or content creation. But now I have these tools by which I can make my social media posts. I can make my LinkedIn posts, the copy for them, or even do the ideation.

[00:55:34] You can actually make it yourself. You know, today with tools like Canva — Canva is a tool which has a lot of AI in it — I can actually make all my social media posts, even though I'm not a creative person.

[00:55:47] So it will actually make me the Instagram post, with the caption, with the hashtags. Canva will make this. Canva will give you the captions. Yes, you can ask it whatever you want. I've been breaking my head writing my own captions. No, please use any of these chatbots.

[00:56:02] But the language they use! And you've given me examples. We are writers, we can't use this. This is what I have found: the language is so florid. My God, it keeps saying "delving into the past," "delving into this problem."

[00:56:15] "Juxtaposition of this and that." See, you would not use it for that task. If you're a person who has certain skill sets, you may not like AI that much, because you're already really good at it.

[00:56:28] But if there's a task that you're not good at and you're dependent on somebody else, there the AI helps you. OK, that makes sense. We should now ask AI — ask Pi — how to win an argument with a husband. I'm sure there'll be many options.

[00:56:44] And when you find the answer, please give it to me. I really need that one. I have one final question, Supriya. What is the cost of AI on the environment? It's a huge cost.

[00:56:56] I'm so glad you asked me this. To give you a small example: there are two parts to it. One is the carbon footprint that AI training actually leaves on our environment. To give it to you in an analogy, training one of these big models, in the

[00:57:15] server rooms where it is done, is equal to the lifetime emissions of five American cars. And every time you ask a query of ChatGPT or any of these large language models — five of your queries would require about half a liter of water to cool it down.

[00:57:36] Goodness. And that is just the tip of the iceberg. These AI systems that are getting trained, they themselves need to be cooled down for which you require millions of liters of water. Are you serious? Yes, very. It's just the tip of the melting iceberg. Yeah.

[00:57:53] So the companies are trying to do something about it, because, you know, we talk about climate action and climate change and using AI to help us with climate change, while at the same time, with every training run, we are adding to the problem. Even crypto is very environment-unfriendly. Unfriendly.

[00:58:10] All of them, right? Because anything that you do, you leave a carbon footprint. Right. But what the technologists are now trying to do is, A, use quantum computing so that less energy is consumed. And B, what I've read is that AI will not require the internet to do the training,

[00:58:31] but it'll be in our chips. So you don't need the cloud; it happens on your device itself, which will also help with the privacy issues, because then it will be contained to your device. But what if your device is connected to the internet? That's a good question.

[00:58:47] I don't know the answer, but at least our data is not going into the training data. I've not used it yet. But see, it's like Photoshop before and after. Earlier, when we did Photoshop,

[00:58:58] you know, it was software sitting on your computer, and you could put in your photo and do the editing. Today, I always tell people: don't use these AI tools with your own pictures, because they go into the training data.

[00:59:08] But people forget, because earlier the software sat on your computer. How can you live like this? That's what I'm saying — we can't live cautiously anymore. We can't. Sometimes we also just say, to hell with it. It is what it is.

[00:59:22] My data, my information, my storyline, whatever the hell it is — go take it. They're anyway taking it. Listen, I know. I know. But see, the good thing is, Rani, if AI wasn't there, you wouldn't be talking to me. You would not have called me today.

[00:59:40] I would have called you about something else. OK. No — I spoke to you about the Dalai Lama like six months ago. We would have been talking; we'd have done a podcast on the Dalai Lama. But I'm not an expert on the Dalai Lama.