00:00:02
Speaker 1: Hello and welcome to the big Story. Two point oh hello, this is the first episode of the second season and we are your hosts. I am Anjali and I am Prateek. So Anjali, what's new in the season? What is so two point about it. So you know how our news culture is quite noisy and there are so many headlines that come every day, this is happening, that is happening. So exactly what is happening,
00:00:29
Speaker 1: that is where the story comes in. So just like season one, we will get experts on the podcast to answer all of our burning questions. But in this season since we wanted to spend more time on longer discussions, we are moving to a fortnightly series instead of a daily show and the episodes are longer as well. We have very free wheeling, you know long, long discussion
00:00:54
Speaker 1: with our experts so that we get the more nuanced and well rounded up that is all about the show. But let's talk about this episode before we go further. I just want to make an appeal to our amazing listeners. Please listen to and check out all the future episodes of big story, all the previous episodes of the big story on your favorite audio streaming platforms like apple podcast, Spotify gone adios
00:01:20
Speaker 1: everywhere. We are on the Queen's website, we're on the Queen's Youtube channel. We have taken over the world and also check out our other podcasts. Yes, we have a movie and tv review show. We have and we have a show on indian politics called C assets. So to check them out as well. Right? So for our first episode we sort of looked around and
00:01:45
Speaker 1: we try to find out what is the one thing that is creating headlines and you might have guessed it. It is, it's artificial intelligence, machine learning. So there is a lot of curiosity about how it works, what it can do, is it good? Is it bad? There is a group of people who believe that it is going to change the world for the better.
00:02:08
Speaker 1: And then there is another group of people saying that all is not take our jobs to replace humans, robots will kill us, we will come back. So there's a lot of talk and so we wanted to answer a lot of noise, like genuine, genuine noise probably generated by
00:02:29
Speaker 1: But yeah, so we thought, let us get to experts from different walks of life but connected to Ai and ask them some questions that I would say we have and a lot of our listeners would have as well, like we start from very basic and then we want to go into some, you know, deep
00:02:49
Speaker 1: questions about how it will impact the society at large. So Anjali, what were some of the questions that you have for our guests? So I think I would like to start by understanding how does that work? Like in very simple language, how does it work? What is the thing behind there also this emergence of these new tools like gPT dolly and lens are in, you know, very common
00:03:17
Speaker 1: use cases like art and essay writing and all of that. So it's actually a really interesting, you know, I think to deep dive upon key AI and Art co relation right? And for that, like we have
00:03:37
Speaker 1: our first guest who is Jemima claims he is a Youtuber and he's also a graphic designer of working real human graphic designer, not an a a graphic designer. He works at weapons. And we want to talk to him about his journey with art in general and how he sees this encroachment of AI in the space of art. Interestingly enough, he actually made a Youtube video about this recently where
00:04:05
Speaker 1: where he sent an AI generated design to a client. So that would be interesting to explore what was that? Like what was the whole vibe of the clients? Did they know that they find out? So that was really nice. So we just want to get a peek into how does the creative world look at ai what are the kind of problems that AI is solving for them? Do they like it? Do they not like it?
00:04:34
Speaker 1: And at the same time we really want to zoom out and talk about a more macro view at AI for which we have Doctor who is a professor at iit Bombay and who works extensively in aI policy.
00:04:48
Speaker 1: He has spoken about this in a lot of places. So, I think he'd be the perfect fit for us to understand what a policy around here should look like, what is really doing to the world. And should we be worried about it's going to be so interesting to have an artist and a turkey talk about. All right. So let's jump right into the corners.
00:05:08
Speaker 1: Let's jump right into it. Welcome. Sammy. Welcome And hello, I want to start today's discussion, first of all by talking about ai ai artificial intelligence Dalil bad. Let us you know, hold back a bit
00:05:31
Speaker 1: and discuss what artificial intelligence is and how it works in a sort of simple language. Right? But I was thinking instead of Anupam telling us what it is because he has been working closely with the eye and he knows the internal, you know, workings of it. How about all of us
00:05:50
Speaker 1: tell him what we think Ai is and how it works. And then he takes our class and we're like, okay, we will do our version of it. And he'll give us marks who wants to start, I am at the bottom of the of this pyramid. So I but no, no, we'll do we'll do we'll do it alphabetically. So Anjali, like I said, I'm at the bottom of this pyramid. So my definition is going to be the most basic. So how I think
00:06:19
Speaker 1: social intelligence works is like a told a while back, it is just a very sophisticated coding where you feel into a program, all the information that that is on the internet. And you ask it to identify patterns. And then when given a prompt with the identification of those patterns create a new result. Now this can be done for
00:06:48
Speaker 1: Yeah, so helping off of what she is saying. I think artificial artificial intelligence, how I understand is is a way to, as the name suggests, artificially create the pattern of thinking of her human brain I would say because Hamas that becomes sort of our data observed and then we recognize some patterns
00:07:20
Speaker 1: process information Intelligence generate over the years. It gets more sophisticated
00:07:30
Speaker 1: and that's how I understand it in a more most basic sense. I mean what do you think? Yeah, obviously you guys went a little too technical I think I feel will correct me if I'm wrong but Ai is basically pattern recognition. Like it's very good in recognizing patterns. That is what Ai is for me in my profession because I'll only talk about my profession here but
00:07:54
Speaker 1: more or less like when we talk about jobs and all everything and if you look at it from a bird's eye view, then I think Ai is a tool that will make us as a species more productive in the future and will make our lives easier. So that's what Ai is according to me. Okay, so
00:08:14
Speaker 2: so first of all, first of all, thank you for having me here
00:08:20
Speaker 2: and uh thank you for coming. So it's very interesting to talk to you
00:08:26
Speaker 2: a number uh definition Angelica
00:08:36
Speaker 1: actually
00:08:37
Speaker 2: actually because let me try to explain, first of all
00:08:50
Speaker 2: instead of explaining ai Kahuta. I want to start with a I can write because there is a lot of cultural opinions on what it is and I think we need to cut a little bit there.
00:09:05
Speaker 2: So the first thing you have to understand that artificial intelligence is a misnomer or rather a market term, it has pretty much nothing to do with intelligence which is where protic, your definition flies out of the window. There is no
00:09:23
Speaker 2: neuroscientists on earth right now. Who knows why are we conscious survey? Are we really intelligent
00:09:32
Speaker 2: and hence there is no technology which can artificially replicate intelligence outward patterns exist, record
00:09:52
Speaker 2: any machine which can autonomously make decisions is called a nail. And generally the technology which is being used in 95 96% of ai is machine learning. Machine learning how to, I would say angelica, definition more or less character. Machine learning data. Machine
00:10:19
Speaker 2: decision
00:10:24
Speaker 2: decision. Later classification. Machine orange classification machine learning
00:10:50
Speaker 2: or who are quite fruit drawing. Bannock is um generative viable daily charge gpt a generative machine learning types but fundamentally all machine learning algorithms, they eat a lot of data which is called training data.
00:11:09
Speaker 1: So sorry first
00:11:13
Speaker 1: are they new like things like dolly and charge. Gpt.
00:11:16
Speaker 2: No they are not, they're not.
00:11:20
Speaker 1: I think 2nd 3rd versions of like dolly actually
00:11:25
Speaker 2: more more than more than 2nd 3rd versions of that technology particular brand names. But
00:11:38
Speaker 2: so it is basically what is called in the technical field a language model exiled language english or patterns or patterns is a language model. Language models, basic technology nanotechnology. Earlier two thousand's major technologies exist
00:12:06
Speaker 2: but joe hardware software but Gpus market manager of hardware Nous software checked earlier research or Joey I just ca laboratories methodology which was something we were working on in our fields. They suddenly started to stop remaining as scientific projects
00:12:34
Speaker 2: and became market commodities which started to be converted into products. I would say that was a man. I mean we were trying to solve some specific problems but as a I became commercialized people also started to realize instead of trying to solve specific but hard problems, it is much easier to throw lots of data models, quantity has a quality of its own or
00:13:03
Speaker 2: it's a research career or what problem are they trying to solve?
00:13:17
Speaker 2: I
00:13:28
Speaker 1: want to I want to pause you for a second because you bring up an interesting point now this and since we were talking about things like dolly and mid journey and stable diffusion.
00:13:40
Speaker 1: Shamim Ur a working professional right. You work with graphics on a day to day basis. So how do you think have these tools
00:13:50
Speaker 1: sort of infiltrated in your industry and casually and probably problem discussed, correct problem exists. Tools solve career in your industry specifically in my industry at this point it's making us more productive. Like I still remember in my initial days, 2007, 8 at the time of seven was I learned in like the very like vintage version
00:14:17
Speaker 1: there. If you have to remove people from a background, it was a very big task like it was like a full day task. Now it's just one click. If I have to make a brochure then if I'm using five images as such, then I can create a brochure in one hour, two or maximum three years at the time that those three years were three days. So as a designer I'm being more productive, my team is being more productive and
00:14:44
Speaker 1: if you talk about all these designing software, they are becoming like intelligence is not what they are but for us they are becoming intelligent in a way that they give us solutions easier and faster. Do do you think they seem intelligent to us looking at it as a layman coffee, truly intelligent Like they they might not be intelligent in the way we are intelligent but they seem pretty into
00:15:14
Speaker 1: background removal. I think also because for so long people have been removing background that is something that the software has also been learning that when somebody wants to remove a background this is the distinction that they are seeing between foreground background and after we just discussed after all of this data has been collected now they've developed a thing where one click and the
00:15:38
Speaker 1: software knows what do you think is the foreground and the background if you see a new update will come and that software will add a new to background removal. Background extension is just one option. Suppose it's very difficult to tell you but you get multiple options in a very short amount of time. But I I don't I don't feel it's it's a threat because see jobs will change the perception of jobs will change.
00:16:05
Speaker 1: I cannot be rigid key. No no I will I will do the best work. A I cannot do the way I will do and I'll only do it myself then I will be kicked out because other design will come he'll use AI and he'll do multiple things in the same amount of time. Is something like that happening. He has. No no I haven't seen because it's all about productivity. Right. Who wants to work till four in
00:16:32
Speaker 1: everybody is a. I being used very prominently. Yeah I'll tell you one live example. I'll tell you suppose you have a holiday client. Okay suppose you are working with a client who sells holidays. It's not possible for us to go and click a hut in the mountain picture all the time. So what we do will create an image like that in a I suppose and then if necessary we'll try to regenerate it because we
00:16:57
Speaker 1: we don't have inspiration before right? But the Ai has as said no the Ai has been fed so many images with a mountain in the heart. He'll give us options because Ai work is still very patchy. Like I I don't see it very like fine fine. Okay so we'll watch this image and we'll get inspiration of the lighting and set up and how the mountains and our this thing. But we'll create it manually so it will look better for it will look better
00:17:26
Speaker 1: and it will be in our control. So that's what Ai is being used now so we can create a safer image. Like it won't get cooperated because we are not stealing the image we are constructing saying you are currently using Ai as a starting bouncing board of
00:17:44
Speaker 1: the first idea mildew. And then we can because you also had made a video about this where you said that you gave an Ai generated design to an actual client. So talk to us about that key battle. What kind of brief did you get
00:18:02
Speaker 1: and how was that whole experience of you know actually going through it and and what was that like see how it works like and first of all, why did you do did you just do it as an experiment? Yes. Yes, yes. It was an experiment because we know okay. We know when we are teaching something. No we are throwing things like in the dark we don't know what will stick. So you show what you can do for them
00:18:30
Speaker 1: And they'll say yeah we can do this also. So you do things to get rejected sometimes if you're lucky then something from that only will get selected okay. But like 99% of time though
00:18:42
Speaker 1: it's like okay, okay these are very good. But we had this in our mind and this is the hook point in the business, right? If they if you get to that point then you have the job and those two logo options. No, that was my way of like reducing the what you call beating because if I tell the initial exactly if I did, you know, I don't know quite possible, likely reject.
00:19:09
Speaker 1: But at least the client actually yes, exactly. Because the best thing about Ai in our businesses like you're not stealing anything. It's not cooperated yet because I will talk about today. What is the idea behind stealing it because the air generated options also know I did some tweaking there also it was not properly. I
00:19:32
Speaker 1: we showed it to them and if they like you. Okay this is the go one and we need 10 poses of this one. At least we have one basic post. So we'll do more changes and you're saying Ai helps in you know, cracking that first step. Yes. There will be one graphic designer who's better in air and there will be one graphic designer who is really who wants to do things the old way.
00:19:55
Speaker 1: Okay, okay. It's a simple digital camera and the camera, the transition was very, very crooked. Okay. The old people who were like real camera users, they were very, they didn't want it to go to digital camera, but who did, who did go to the digital camera? They survived. Right. So this is what it is. The technologists will be evolving and you need to write the way basically you need to be updated. That's that's what I feel
00:20:24
Speaker 1: I was listening. I have one question for Shamim in your video. Also you mentioned that sometimes what an Ai lacks is context. So and I think this is very cultural also that when you have an indian client you have the entire cultural background that this client is coming from.
00:20:46
Speaker 1: And so you know you might not get it in one go what the client wants but you have certain context which a lot of times because the air doesn't know who this person feeding in the prompt is might not be able to give a more accurate image. So how does that work? If you can explain more about context in graphic communication,
00:21:14
Speaker 1: see your question. No, I I want to give it to because your question answer is hidden is Jarvis possible
00:21:22
Speaker 1: and if Jarvis is possible then is Skynet possible, Jarvis and Skynet as possible before he answers this, I just want to interrupt and explain to our listeners what Skynet actually is. So a terminator movie, the Kyonggi Skynet is like the villain of that film and it is like a I would say which gains self awareness and which becomes human
00:21:46
Speaker 1: and at the end it decides that humans are the biggest enemy in the world and they mission is to you know kill humans. And essentially Skynet is the example that is used to illustrate ghetto hockey a human, this is like the first time in popular culture that we
00:22:13
Speaker 1: at the scale. So all of this obviously irobot or go India go robot shankar androgyny, that movie's premier Ai basically humans. So um yes is Jarvis and Skynet possible.
00:22:32
Speaker 1: Yeah then then you have your
00:22:34
Speaker 2: answer again, I used the I I used the word when I started to describe what Ai is and isn't right, I use the word and I used it very specifically
00:22:46
Speaker 2: I said think of ai as a parrot, Jarvis is not a parrot right? A parrot doesn't know what it is saying, A parrot just says things, it has heard patches from things it has heard but I want to answer this in a bigger detail, you know like because you guys are having this conversation and I was listening very closely to try to sort of so there are again things which we have to be very clear about
00:23:15
Speaker 2: Shamim, you used the word or rather the phrase that
00:23:20
Speaker 2: Ai is coming right and you would like to make an analogy. Technology never comes. People choose to use technology and the people who have power and money get to dictate which technology is used in waterway technological part three nautical technologies coming another phrase we keep hearing is genie's out of the bottle.
00:23:46
Speaker 2: Yeah, it's not a genie, it can't be out of the bottle. So bad. There is a technology, it is used in particular ways
00:23:55
Speaker 2: which way's will become popular and which ways will not become popular depend a lot on ai industry or control what use cases they fund, What use cases the research, what use cases they develop an individual level. Sure Arabic professional to achieve
00:24:16
Speaker 2: skills survived but at a macro level and as a citizen facial recognition tech
00:24:36
Speaker 2: are identified privacy technology. So remember at the end of the day her technology wider context apart, policy, political, economics,
00:24:54
Speaker 2: politics or economics. So Tianna who control the technology and the second thing you guys were talking about a plot bad, correct and Anjali said something which was actually correctly segmentation job, bad background removal. Computer vision that problem is called
00:25:12
Speaker 2: segmentation, segmentation, possible segmentation millions of times important point.
00:25:33
Speaker 2: Whatever value any eye generates that value is coming out of training of data which was at some point made by human beings.
00:25:44
Speaker 1: So doesn't this then bring the question of ownership
00:25:48
Speaker 2: exactly here the question comes at no AI can exist without data. A lot of data and
00:25:59
Speaker 2: it is a shame that right now that data is kind of being stolen Internet Italia. It's that right. I'll give you one example. There is a very famous facial recognition tech company in America evaluation. Billions Clearview Clearview dataset.
00:26:21
Speaker 2: They just went on the internet on America digital law enforcement website under trials, photos, Internet, social media, facebook, photo media natalia right. Company company.
00:26:46
Speaker 2: So yeah, ke hame jo bad career gives the job maker surprise society. It's easy. You can't leave them to
00:27:00
Speaker 2: people who are going to make money from it. You have to have that discussion at a societal or at a policy level data protection, europe had a law, it is called G. D. P. R. You might have heard of it. It's a very strong law is locked in europe.
00:27:23
Speaker 2: Your data cannot be even if it is on the internet, some company can't come and randomly.
00:27:28
Speaker 1: So I'm just going to pause you here. So in some way would you think that what Shamim did with his experiment that using somebody else's like using ai generated images or even I have my profile picture as many lands and generate Corinthian. I use a lot of a yard. So would you consider that
00:27:51
Speaker 1: unethical at some levels.
00:27:54
Speaker 2: As a policy academic. I try to stay away from questions of individual ethics, ethics are not something an individual can decide. Si Shamim when he did that Shameka demand met Tonia anarchy. I am using an Ai ai millions of images. Million images does
00:28:25
Speaker 1: in the past
00:28:29
Speaker 2: but as an individual but company
00:29:06
Speaker 1: I actually had another question to ask.
00:29:09
Speaker 1: The Ai tools are using the databases that are available on the internet virtually for free of cost. Right. So then
00:29:19
Speaker 1: is there a problem with the entire money making strategy of
00:29:28
Speaker 2: they have collected the data from the internet without giving any money to the people to collect or fruits. Data model train a Prada
00:29:59
Speaker 2: problem. You had that
00:30:01
Speaker 2: in our culture and when I say our culture I just wrote me in indian culture. I also mean american culture. There is a false narrative or a false narrative here. Regulation innovation or year problem here in America or India and our country's key. We don't have any
00:30:23
Speaker 2: regulation on data or
00:30:26
Speaker 1: even data protection
00:30:29
Speaker 2: data protection bill. Hell on him. But even an imperfect law would have been progress but I'm sorry to survive now. We don't have any law to protect your on my
00:30:41
Speaker 1: data. Progress and what will happen if a law like this comes home ari daily life
00:30:48
Speaker 2: depends on what
00:30:49
Speaker 2: depends on what is written in the law. Like right now last joe draft data protection card criticism law, citizens sorry, responsibilities but state sorry responsibility, Jessica
00:31:08
Speaker 1: again, Shamim should be
00:31:11
Speaker 1: king of
00:31:12
Speaker 2: like professor science or humanities. Would
00:31:35
Speaker 1: you be right
00:31:35
Speaker 1: now If you wouldn't have pivoted to policy. Would you be working in one of these companies now? Yeah
00:31:41
Speaker 2: I would have been working in Silicon Valley in 2019.
00:31:54
Speaker 1: I wanted to go back to that thing like because after listening to my,
00:32:00
Speaker 1: I'm completely with and we are on the same page where we talk about laws and policies because whenever I meet somebody like who wants to be a graphics and who sees me, who meets me somebody I know I'll tell them our professional, our profession is not just a job, it's a superpower because all the fake news, all the hysteria and all everything that is done by people like us. Okay
00:32:23
Speaker 1: and now there's all this like what you call Deepfakes and now the fake news is exactly defects and everything. This is done by Ai. You don't put a law in breathing right? You put a law in cutting a tree because that's something is dangerous, breathing is not dangerous. I don't have problem with Ai. Okay, I have a problem with the hysteria that is being created against a you will eat your job is dangerous this and that.
00:32:50
Speaker 1: So I'm going back to the same how dangerous is let's okay, let's talk about key danger in its worst form, what is the harm they can do? And since we are recording it on a friday the 13th let us go,
00:33:03
Speaker 2: Okay, let's let's actually talk about the danger of the answer is a very contradictory answer to the patients,
00:33:17
Speaker 2: yes Ai is extremely dangerous. No. Skynet is not possible because it is not dangerous in the way people think of its danger then a common person thinks of AI is dangerous.
00:33:49
Speaker 2: You
00:33:50
Speaker 1: make eyes also stupid act,
00:33:53
Speaker 1: you've taken all the power away from
00:33:55
Speaker 2: it. Yes because but I would also say this very stupid thing is extremely dangerous in very stupid ways. A problem here was stupid, basic, basic, basic, basic examples how dangerous he is America
00:34:15
Speaker 2: joe building America program technically what is the chance? Criminal
00:34:37
Speaker 2: judges or juries? Common sense, shoulder case, black community
00:34:59
Speaker 2: but but problem or
00:35:27
Speaker 1: was this, what year was this
00:35:29
Speaker 2: has been going on for decades. This is a huge problem in America example, amazon
00:35:37
Speaker 2: warehouses to amazon has moved over from purely online presence and going to brick and mortar industry of a warehouse. Banaras jamming carried warehouses, amazon, they tried to replicate their whole hiring process by converting it to machine learning. Machine learning
00:36:03
Speaker 2: problem bias people, machine learning, machine learning women, Black,
00:36:45
Speaker 2: okay, similarly,
00:36:59
Speaker 2: but generally in the army police departments,
00:37:14
Speaker 2: Delhi punjab assam, Hyderabad Chennai police departments, facial recognition tech used to start machine learning machine learning probabilistic
00:37:36
Speaker 2: up a I say inaccuracy nickel, it is not a technical fault, inaccurate, Neykova inaccurate, inaccurate right about decision the idea. Okay, inaccurate recently, ex Chief justice sheriff bob,
00:38:05
Speaker 2: he announced that he has decided and the Supreme Court judges think indian courts make resources, procedure,
00:38:27
Speaker 2: procedure simplify justice be a content justice be a product of justice, A product now machine learning, Machine learning, sorry, jobs, jobs spear member vodka,
00:38:54
Speaker 2: intelligent creative jobs. Economic 50 60% jobs, manual laborers, especially especially
00:39:11
Speaker 2: like India these jobs cannot be replaced because the worth of a human being in India is very low. Let's be very clear about jobs, jobs, jobs are but never color job
00:39:35
Speaker 2: correct companies, companies mining or jobs, jobs, jobs employee to employee. You have an entire economic in this country where
00:39:54
Speaker 2: increasing automation and increasing machine learning has enabled platforms, the platforms in a machine learning and because of the popularity and profitability of these platforms as a profitable dot com company will be lost because of that. And because of no policy on gig work whatsoever.
00:40:16
Speaker 2: Our entire workforce is slowly changing into gig work and this is extremely dangerous, short long
00:40:37
Speaker 2: overall jobs.
00:40:44
Speaker 1: But the biggest problem also I would want to get Shamim into this key.
00:40:51
Speaker 1: Even us when we were talking about it, we were only considering all of these higher level jobs like creatives and all of that. But the actual jobs, we aren't talking about those kind of jobs, right? And when we also don't associate them with Ai Ai S C jobs could replace Karenga especially
00:41:17
Speaker 1: after listening to after listening to my whole point of view has changed this problem that has said has never come came to my mind. Exactly yeah. The question coming to my mind is that key, what we should do because mom said key individual contribution is very less so that's why we need laws and policies but how we can
00:41:39
Speaker 2: help like at an individual level
00:41:42
Speaker 2: do what is best for you at an individual level encourage famine because he has a platform. You should encourage people to learn these skills so that at an individual level. But as a, as a policy professor as somebody who looks at the entire society
00:42:11
Speaker 2: masses of individuals individuals hama english, advanced degree police. Exactly, sarkar policies, laws
00:42:38
Speaker 2: at the end of the day, they also come from the popular mass popular topic local insurance, cancel job jobs are
00:43:08
Speaker 2: jobs they call her cell jobs but online content creator
00:43:23
Speaker 2: Hamara come bath, low quality content, high quality content. So the point I'm trying to make especially with Sammy because Sammy has a large audience on youtube right? Loco encouraged debates,
00:43:45
Speaker 2: just scholarly debates. Academy academy a measure debates, policymakers, conference fairness, accountability, transparency, fact, conference ai or policies or actual danger technologies related
00:44:13
Speaker 2: her actual danger, boring sensation
00:44:22
Speaker 1: gradually. Exactly I was just saying because you may be
00:44:43
Speaker 1: in the same the same way medical field maybe like er careful because medical it is said it will be the most useful
00:44:58
Speaker 2: right now.
00:45:00
Speaker 2: Yes, you should be So simple reason. India publishes a lot of data for free. Some open government data whose website, 52% downloads I think health data, health data, jokey health data, machine learning Alana hottest field commercially
00:45:24
Speaker 2: there are big companies wings, telemedicine telemedicine remotely bad chat vodka through doctor. So Covid fella basically doctors training as far as technology and science methodology. Techno solution is um
00:45:52
Speaker 2: techno solution is um is that bad idea that any social problem can be solved with technology Dr Valerie
00:46:09
Speaker 2: problem, chat board media, stable internet connection, smartphone in english.
00:46:27
Speaker 2: Problem solved problem DR problem solved colleges. Education don't call dr medical problem already.
00:46:56
Speaker 2: America makes research american medical data, hospitals machine learning use cases patients patients have to be very patient
00:47:17
Speaker 2: a malnourished, poor or black local data time machine decision to medical decision. There are a lot of time they were medically incorrect decision a
00:47:39
Speaker 1: devil's advocate wala. Point comes
00:47:41
Speaker 1: where people say that the exclusive technology
00:48:01
Speaker 2: main difference healthcare healthcare.
00:48:11
Speaker 2: So then
00:48:12
Speaker 1: would you would you advocate for removal of things like AI in matters of health care and education or more?
00:48:20
Speaker 2: I would advocate for I would advocate for
00:48:24
Speaker 2: no I would advocate for rational decision making on who is owning that rate of O. A. I. Ca. Use seer of profit making kelly. Individuals and private companies decide career yet as a society. Hm Sabbatical decided correct. Use examples of where
00:48:48
Speaker 2: popular
00:49:06
Speaker 2: election useful now but marlo
00:49:31
Speaker 2: you can't as a customer ask for the company to change its policy but as a citizen you can ask the government to change its policy services and
00:49:55
Speaker 2: citizenship
00:50:01
Speaker 1: to sort of conclude as a policy or where government comes in ai is to regulate for what and for what not AI should be used. That is where the government should come in and it should clearly dictate the use of AI
00:50:19
Speaker 1: is okay. So that would also cover all of our creative fields created exactly what in your opinion would be okay use for a
00:50:30
Speaker 2: more than okay as I would say that first of all this proceed this process to decide what the government should do. You should not just hand it to the government like that.
00:50:45
Speaker 2: It has to be a democratic process right down to the public is response dolly
00:51:04
Speaker 2: is document I think
00:51:10
Speaker 1: with this chat also it was our objective. Yes we want to talk about dolly and chad Gpt and all of these are you know certainly nice popular issues but
00:51:23
Speaker 1: as I got Amerco views the car and even from your understanding of it that these are not the biggest issues that we should be worried about. I think it's episode titled We are worried about the wrong. Oh yeah, I would like to add one thing is that yes,
00:51:44
Speaker 1: as I mentioned in my video also. Okay. Somehow I feel key we are being targeted like the creative people are being targeted. Everybody saying your job will go, your job will go today. I understood what AI is and how AI is dangerous. Exactly. So I want to thank you guys thank you for doing this with me because it was such a privilege because um you just opened my mind truly like
00:52:09
Speaker 1: I'm feeling good. Also that the way I use AI is not the most dangerous one. Don't you think this is also like a self romanticization? A lot of even creative people are thinking that homeless victims
00:52:26
Speaker 1: it I don't think that is the case. We are the most privileged people. Um it's as a toy used we have the option to choose it to use it or not use the term. This is a playground for us right
00:52:40
Speaker 2: now. Exactly. But year options taxi driver because Uber has out competed the taxes from the market. So that is very important. This point of
00:52:54
Speaker 2: our ability to confuse the actual reality with our reality are very small group of elite people who consumed
00:53:07
Speaker 2: but I'm very happy protect and Angela that you invited me
00:53:12
Speaker 2: my subcommittee criticized Sada ham academics problem but very few of us talk to the common masses. Some paper papers have papers, there are very few platforms, they have issues seriously but okay so first of all I would really like to thank, went for in
00:53:38
Speaker 2: take me to give me a small chance to translate some of these issues to the common language. I hope I was a little successful at that.
00:53:48
Speaker 1: Thank you. So in closing I would just like to ask um what is the first thing because obviously the recommendation and I would be endless. What is the first thing that people who have just heard this chat should go and read up
00:54:04
Speaker 2: about. So there is a
00:54:07
Speaker 2: paper I would like people to read. The paper is called on the dangers of stochastic parrots, stochastic simply means probabilistic.
00:54:17
Speaker 2: This paper was written by a bunch of researchers in the U. S. It is a very nice introductory paper to understand the dangers of language model, language model charge deputy, anything that generates language. Also, the paper is written I think in a very easy to understand english which anybody can read. You don't need to be a machine learning professional. This would be an interesting paper to start off with very realized that these artifacts are not
00:54:48
Speaker 2: operating in a vacuum that they have a social impact and what those social impacts can be. So this could be an interesting paper to begin. I wouldn't say this is you know the end of the conversation. But the beginning of the conversation for somebody, I am not the first person who calls Ai parrots
00:55:08
Speaker 1: is conversation with one thing that I will carry for the next my life is a I. Tota. The starting question we started with was a I. And art and creative job van logic hold Kotecki creative and where the point where we ended
00:55:35
Speaker 1: actually justice system, labor market. So I would like to close with you as well and going back to our starting questions.
00:55:57
Speaker 1: What do you think after you've heard all of this key your art club exist exist, what I personally feel after this long conversation and after learning so much from them is that key.
00:56:11
Speaker 1: We are being targeted because we can be targeted because some people want a i in everything without any regulation
00:56:19
Speaker 1: and we are the easiest people to talk because they'll get people like me who will say is making a life easier because even this actually is so interesting that you say, because I consider myself a very, you know, tech savvy person every time some google assistant comes. I really, I'm excited, Kara, that's why as soon as ai dolly all of this came I was very excited
00:56:48
Speaker 1: and obviously I am not alarmist about it. I had a very favorable view of it. So I think advertisers
00:57:08
Speaker 2: or any newspaper Mitnick alarmist article article article because I was talking about some actual issues. But Mark, I had
00:57:22
Speaker 2: reproduce the results which had read in some McKenzie report somewhere academic organization that it's a company or a company.
00:57:38
Speaker 2: Yes, sensationalism, but or boring imagination. I wouldn't say
00:57:59
Speaker 2: sensationalism
00:58:16
Speaker 1: trying to do with this podcast. The actual issue
00:58:21
Speaker 1: is like much deeper and honestly speaking, boring but love Yeah, interesting. Which is sad. But I think we tried our level best to make it an interesting conversation and thanks a lot for that. Very
00:58:38
Speaker 2: happy to talk to you guys. Also very happy to meet you. You look at the channel, I find it very nice. So I'm glad you are doing what you're doing.
00:58:46
Speaker 1: Thank you. Thank you so much. Thank you.
00:58:48
Speaker 1: Okay, so Anjali thoughts. First thoughts more than more than exciting. Jasmine. Yeah.
00:58:56
Speaker 1: What what were your expectations before we started? I really was expecting just you know, tell us about how he's not threatened by. I and I'm talking about you should be it might it might get so powerful that will take over your job. But
00:59:12
Speaker 1: I think it went into a whole other direction. Was I was expecting like a modification of a. Because usually when people talk about AI in popular culture entity but now my opinion on AI has changed so much after obviously catchphrase of this episode
00:59:40
Speaker 1: and every time, you know, I feel like oh it's going to take over the world, it's going to do this and that like the image of a cute little at the same time, I feel that we've discovered it's not all that good and that it solely depends on what it is used for. And I think we've drawn a very clear distinction between using our Ai for art and using ai to replace
01:00:06
Speaker 1: human consciousness. I would say. It was interesting the case study that tom talked about the 100 trials in the U. S. Where that is scary but it's way more scarier than all your irobot type stuff and it is such a babe.
01:00:28
Speaker 1: But okay, this can have real world implications. It's definitely something that these giant corporations are looking at and we as citizens have to be aware and we have to be worried about the right a
01:00:42
Speaker 1: and listen to more academicians. Yes. And that just goes beyond I think everything that is popularly discussed, we like we should not be listening to influencers talk about it, but listen to academy, just listen to the big story. Yes, that's because we will get you access to these crazy, crazy intelligent admissions. So I think that's it for this episode.
01:01:10
Speaker 1: It was a great chat chat, let us know in the comments, what you thought about it, what you think about, why do you think under or what animal do you think it is and share this with your friends, share this with your parents, share this with anyone, anyone, anyone who talks about,
01:01:35
Speaker 1: What are we going to talk about in the next episode of the big story. So after the grand opening of Ai and art and
01:01:45
Speaker 1: Policy, let's talk about media and we want to talk about a very interesting aspect of media which is media trials. It's like this unique combination of media and the judiciary, what role they play in each other's functioning and we're going to break down the entire concept of popular media trials that you've seen in the past decade.
01:02:12
Speaker 1: Okay, so, tune into the next episode. We have two amazing guests with us. So this was Prateek and this and this was the big story. Thank you guys, Bye bye.
01:02:26
Speaker 1: The Big Story is a quint original podcast executive produced by Shelley Value. And this episode was hosted, produced and edited by practically do and Anjali payload and it uses theme music from BMG production music, A special thanks to our guests, Dr Anupam Gupta and Sammy Markelis.
01:02:44
Speaker 1: Yeah,
01:02:47
Speaker 1: you were listening to the Quinns podcast.