[00:00:00] Katherine Druckman: Hey everyone. Welcome back to Reality 2.0. I'm Katherine Druckman. Doc Searls and I are talking to Ezequiel Lanza, who, like me, is an Open Source Evangelist at Intel. But we'll get to that in a little bit. We're gonna talk a little bit about AI and conversational interfaces. But before we get started, I wanted to remind everyone to check out our website at reality2cast.com — that's the number two — and sign up for our newsletter. Doc has written some really great stuff. Before we started recording, we were just talking about his post on open wallets, and that's interesting. You might wanna check that out. Also, before we get into it, I wanted to mention something else that I've been working on elsewhere. You may know that Doc and I — Doc every week and me occasionally — can be found at FLOSS Weekly, on twit.tv slash floss weekly, I believe. And I hope you're listening to that, but yet another podcast I hope you will add to your podcast player is a brand new one that I'm working on at Intel, and that is called Open at Intel. You can find it now in most of your podcast players. You can also find it at openintel.podbean.com, and you can probably find links to it at open.intel.com as well. We hope you'll read all about that, because Ezequiel and I occasionally contribute there, and we would love it if you read our stuff. So, please listen to that. We're starting with some conversations about open source security — really good stuff from really good people. I am really biased, but I'm pretty happy with it. So, on that note, thank you, Ezequiel, for joining us again. I will link to the episode where Ezequiel and Tony joined us previously, but we wanted to pick up some of those conversations that came up and maybe left some open questions. So yeah, thank you. Thanks for coming back. [00:01:48] Ezequiel Lanza: No, thank you for having me.
Thank you. [00:01:50] Katherine Druckman: Cuz you know, Ezequiel doesn't get enough of me in our meetings at work, so... [00:01:54] Doc Searls: Hmm. [00:01:57] Katherine Druckman: But yeah, so how this came about: Doc and I were talking offline, off of this call, about the nature of search. Doc, why don't you tell us the initial problem that you faced in searching, in the way that you used to versus today? [00:02:12] Doc Searls: So we have kind of a concurrent problem and solution that seem to be completely unrelated: the problem is not well recognized or talked about, and the solution is talked about all the time. The solution, first of all, is AI. AI is moving very, very quickly right now. But the problem is search. For those of us who have been publishing on the web for a very long time, search is deprecated. If what you wrote is not that popular — I don't even know what the parameters are — but there are things that I wrote 5, 10, 15, 20, 25 years ago that are on the web. There are many links to them, and they are not found by either Google or Bing, or by DuckDuckGo and others that rely on Google and/or Bing. That's one thing, and it seems to me that's a worsening problem. That is a sample of one on my part. On the other hand, suddenly we're in a new era of AI in which ChatGPT is suddenly very popular. I went on it several times and couldn't get on — it's too busy, it is not letting me go on there. So there's an enormous demand for it. It has ramped up in popularity faster than almost anything else out there. And there's just one thing: there are many different approaches to this, and Microsoft and Google have made news, Microsoft for its part saying they're gonna base search
on ChatGPT or something like it, leveraging their investment in OpenAI, which is a company, and their new user interface is: ask me a question. And what you get are basically the same search results, it looks like to me, but maybe not. In any case, it's a whole new experience that they're trying to provide. The best one that I've found, actually, is not ChatGPT — for just answering a question, it's Perplexity AI, and I highly recommend checking them out. It is useful, really useful. In the middle of the night last night, my wife found two sources of custom cushions for a bench that she could not find at all on either Bing or Google, and it came up with two within seconds, and it provides sources. Now, this is in some ways a better search. Perplexity AI right now, for me, is a better search. The curiosity that I have, Ezequiel, is: what's going on here? You may not be able to speak to the search issue, but certainly to the AI side, because it's a moving target. It's a juggernaut of stuff; there's lots of yak about it, but not a whole lot of understanding. [00:04:57] Ezequiel Lanza: Yeah,
absolutely. I mean, AI is changing everything, as you said. But I think the main problem we face today with the browser, with the search engines, is that you need to figure out how to write the phrase for what you're looking for. And Google or Bing, when you need to find something, have it indexed — they have their own algorithms to index web pages or whatever, and they decide what to show you, right? But now with ChatGPT, or even with GPT, you have these algorithms that already know everything that's on the internet. So if you ask those models, they know who you are, or what it is you're looking for. When you mix that with internet access, you can really empower a search engine, right? So I totally agree that it's changing — it's completely a new way to look for something. This conversational stuff: now, instead of writing a phrase, you can start talking to the browser, right? So — [00:06:20] Katherine Druckman: Like natural human language, you mean?
[00:06:22] Ezequiel Lanza: Yes, like natural language. Now, when you find something in Google and you don't like what you see, you try other phrases, you try something different. But with a conversational site, you can say, okay, could you please give me something similar? And you have an algorithm, you have a model, that can understand what you are saying, and it can look for other things, different from how Google or Bing would look for it. I think it's a really good combination: AI-empowered engines and web search. The models are static — I mean, the models are there to understand what you are saying — but then you give them that information and you give them the freedom: okay, go and Google my information, or try to look for other stuff. Because Perplexity, and also You.com, which is another site, are doing something similar to ChatGPT, where you can talk to the web search. It's amazing for me, because they are using Google, they're using Bing, they're using all the websites — they are searching in Wikipedia and other sites, right? So I — [00:07:42] Katherine Druckman: So, okay, so these AI searches, they have the benefit of the existing search algorithms like Google and Bing, but they also know the entire internet. So that's the difference between the data model and existing search capability. Because the thing that I'm really curious about is the why of the fact that Doc is able to find what he's looking for using one and not the other. And I'm also thinking about accuracy here. Like in Doc's case, in his examples — or Doc's wife's example — she found something that she was flat unable to find on the search engines, which is, you know, significant. But assuming you get an answer of some sort on either You.com, Perplexity AI, or Google or Bing — assuming you get an answer in the first place — how does accuracy compare? Because I know with things like ChatGPT, accuracy can be an issue.
You can ask it to write you an answer, a story, a report or whatever, on something that is completely nonsensical and has zero basis in reality or fact. So I wonder how that translates to an AI-based conversational search. If I say, hey Perplexity AI, who is Doc Searls? And it gives me this really eloquent and seemingly helpful answer: well, Doc Searls got his PhD in physics at MIT, and he is currently a practicing physician and congressman in the state of Iowa. I mean, that instead — [00:09:13] Doc Searls: A guy who flunked outta sixth grade. Yeah. [00:09:15] Katherine Druckman: Yeah, but it's not gonna be true. It could be very well written. I just wonder how those things interact there. [00:09:23] Ezequiel Lanza: Yeah, but they learn. I mean, Google, for instance, knows — when you do a web search in Google, they measure the accuracy by whether you hit the first or the second link that they recommend to you. This is how they know that they did well when recommending something. And with Perplexity and so on — I can't be completely sure — they have another model that can measure this accuracy. So if you keep doing the same question — I mean, who is Doc? and it says something, and you ask again, okay, but tell me who is Doc, and you do it and you do it — the model can understand that maybe it's doing it wrong. And there is a kind of algorithm, a part of AI called reinforcement learning, where you train an algorithm not with classification data, with data that you already have; instead these kinds of models learn with rewards. For instance, when it does something and you say that was right, you give a reward to the model, so the model understands that this was a really good guess, right? And when you're working with
conversational AI, you tend to use a lot of reinforcement learning, because you need to evaluate in real time whether what was said is right or wrong, whether you need to re-accommodate or retrain the initial model or whatever. Most of the time it's related to reinforcement learning: they are sensing in real time what the accuracy of the questions and the answers is. [00:11:25] Katherine Druckman: Which comes back to crowdsourcing, which is something we as users of the internet already understand at a basic level. You know, for things like Wikipedia, for example: if somebody puts something inaccurate on Wikipedia, you have the whole internet to just scrutinize it and change it and correct it. But I guess, you know, it goes back to something that we say a lot here, or Doc does, which is: it's early. So those of us who know that it's giving false answers, we need to correct it, I suppose. But yeah, it is, I guess, a concern. Anyway, Doc, you look like you have a — [00:11:55] Doc Searls: I actually have an interest, yeah. Well, yeah, it has to be early. The conversation I was involved with an hour or two ago at Indiana University — it doesn't even matter what it was about, but one of the things that came out of the meeting was that we are at the mainframe age of AI. For the first several decades of the computing industry, or computing period, there were only mainframes and smaller minicomputers that were like mainframes, but there was no personal computing. Personal computing was an oxymoron. Even after Radio Shack and Apple and Osborne and a bunch of other companies made PCs, personal computers, they were considered hobby toys — kind of like ham radio next to real radio, you know, saying nobody serious would be doing it that way. But then with the IBM PC and the Macintosh, suddenly computing did become personal.
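Ezequiel's description of reinforcement learning a moment earlier — give the model a reward when its answer is judged good, so it learns to prefer that answer — can be sketched in a few lines. This is a deliberately tiny, hypothetical bandit-style example; the candidate answers, the reward function, and the epsilon-greedy settings are all invented for illustration and are not how any of the products discussed here actually work.

```python
import random

# Hypothetical illustration of reward-based learning:
# the agent keeps a running value estimate for each candidate
# answer and learns to prefer whichever one gets rewarded.
answers = ["answer A", "answer B", "answer C"]
values = {a: 0.0 for a in answers}   # estimated reward per answer
counts = {a: 0 for a in answers}     # times each answer was tried

def user_feedback(answer):
    # Stand-in for a real reward signal (e.g. the user stops
    # re-asking the question). Here "answer B" is the good one.
    return 1.0 if answer == "answer B" else 0.0

random.seed(0)
for step in range(500):
    # Epsilon-greedy: mostly exploit the best-known answer,
    # sometimes explore a random one.
    if random.random() < 0.1:
        choice = random.choice(answers)
    else:
        choice = max(answers, key=values.get)
    reward = user_feedback(choice)
    counts[choice] += 1
    # Incremental average update of the value estimate.
    values[choice] += (reward - values[choice]) / counts[choice]

best = max(answers, key=values.get)
print(best)
```

The real systems Ezequiel mentions operate at vastly larger scale (reinforcement learning from human feedback over a language model's outputs), but the core loop — act, receive a reward, update toward rewarded behavior — is the same idea.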
We don't — and this is a bit of a subject change, but I think it's important — we don't have personal AI yet. It would be really nice to have that. I would love to be able to put all my data in an AI. I'd like to be able to put, you know, 20 years of taxes in there. I'd like to be able to put my own writings in there. All my photography, all my insurance policies, the record of what cars I've owned — all of these are interesting things to me. My car, my calendar, my correspondence. And I should be able to do AI on that. I would love to be able to do that. That's hardly even being thought about now. And I bring that up because in the mainframe age, we do need these things to be worked out in big machines. I don't have a problem with that. I don't have a problem with services that are trying to do this for me, but I do have a problem with not knowing exactly how they work, and with not having my own way of doing the same damn thing. I'm envious, because half of, you know, big companies are buying this stuff — like, how can we look at our data lakes and our warehouses and stuff like that and make sense of them, and how can we make sense of our customers, which is one of the things that's been annoying us for a long time. So I'm bringing that up as kind of a different topic, but I just asked a couple of questions of Perplexity AI about stuff that I know is arcane, that is on the web but is not easy — I mean, you have to dig for it. It didn't find it. It made shit up. I mean, that's what it did. It made up wrong information, you know? In particular, I was looking for where the transmitter of this radio station was in 1938. I knew where it was, and it said it's where it is now, which it isn't. That was a wrong answer. But at least it gives you sources — and the sources are wrong.
You know, so that's that. But we're such a long way from whatever this will be when we fully — I don't think we'll ever fully make it work. But, you know, not to get back to search, but to use it as an example: I think we've regressed with search. Search is worse. And I think it's partly because those companies care less about it. They care about something else. [00:15:11] Katherine Druckman: Oh yeah, this is when ad revenue competes with usability. That's a — [00:15:14] Ezequiel Lanza: Yeah. [00:15:16] Doc Searls: Cory Doctorow calls this enshittification: you know, it starts out doing something really well, and then they want to advertise, and then they optimize for that, and for what he calls self-preferencing. Google's been doing that for a long time. He used Amazon as an example — Amazon is self-preferencing enormously, and it's not as good as it used to be in many ways. The experience of using it sucks. [00:15:43] Katherine Druckman: Um, so again, not to get us off on that same — [00:15:47] Doc Searls: Yeah, I'm full of — [00:15:48] Katherine Druckman: Well, this is relevant, and I have a feeling that Ezequiel knows more about this than me, but I'm gonna throw this out. So in terms of personal AI: the idea of having a personal database that is queryable using this sort of natural language. To ask it, oh, you know, where did I keep my X, Y, and Z? Where can I find the history of this financial data so that I can put it on my tax return? Or, for your descendants: tell me about the time that my great-grandfather wrote an article in this thing called Linux Journal, you know what I mean? And it reminded me — there was a scientist, and I cannot remember the name to save my life, who created a robotic bust of her wife, and it was AI-powered.
And the idea was, it was trained on the wife's knowledge and identity. And this is years ago. And it occurred to me, as you're talking, Doc — I'm imagining the future where we all have little robotic busts of ourselves, so that people in the future — those of us who are worth asking questions of, I guess — might be able to query our knowledge someday. And I mean, I think that's there. I can't remember the name of the scientist; I bet Ezequiel might at least know what I'm talking about. But, um, yeah. Anyway. [00:17:04] Ezequiel Lanza: It's something we can definitely do. But the thing is that what we have in our hands — this chat and all the things that we see there — answers for things that are googleable. If I would like to get some information that's private, something that only I know, or a company, something that's private, they won't be able to answer that. The good thing is that these models, ChatGPT, are really powerful at conversation, at understanding language. So what will happen for sure in the future is that you can probably fine-tune this ChatGPT with your stuff, with your data. But of course, if you need to do that, it's something similar to what you did in the past. There are some algorithms — like GPT — where, if you'd like to train a model to write as someone, and you have the data, you can train it, and the model will know how to speak as that person. So you could do that with your data. But the thing is that you need to feed the algorithm with your data — I mean, with your balances, with your taxes — so this model will be fine-tuned just for you. And this is the challenge that I see with those models: they can be very good with the Google stuff, with replacing search engines — they could replace that in the future. But when you would like to use them for other stuff, for merchandising, for companies, or in another particular environment, you will need to be able to easily retrain these
algorithms to be adaptable. And I believe that will happen in the next years or months. Because, as you see now, for instance, we have some companies that are offering to create your resume, to create your LinkedIn post, or whatever. You just need to say, okay, this is what I would like to say, or this is my topic — you are, in a way, fine-tuning it, giving a hint, giving something so the model can say, okay, this is the path we should go. And something, as you said, Doc, is about the black box. We probably won't understand — we have no idea how it works underneath. It's a black box. We can understand how it works at a high level, but it'll be very difficult if you would like to go deep. At least we can understand: okay, the data that we have comes from the internet, from all the sites, from the Wikipedia databases and so on, and I can pick what I could use for my environment. This is a question for the future, but I believe it will be available soon, for sure. [00:20:25] Katherine Druckman: I posted a link for y'all to see, and I'll link to it in the show notes, but the scientist — who is actually, apparently, the founder of XM Satellite Radio, which I didn't realize — is the person I'm talking about: Martine Rothblatt, who created this robot version of her wife. And I mean, if you've seen pictures of it, it's kind of terrifying. But the science and the archival-ness of it is fascinating, and relevant to our conversation, certainly. I can see the appeal. I guess where I'm going with it — I can see the appeal of having this archive of myself. Oh, that's arrogant, isn't it? Anyway, this archive. How about, I'll put it this way:
I wish I had an archive of my parents, of my grandparents. I don't so much care about the one about myself, but I wish I had something like this for people I am related to who are no longer with me. [00:21:23] Ezequiel Lanza: Or it could even be images or whatever — but you need to have it. Yeah. [00:21:28] Doc Searls: Yeah, it's the grandfather I never met. I only know one thing he said in his entire life. I assume it was in a Swedish accent, because Swedish was his first language. I don't know much else about him. His wife, my grandmother — she died when I was a few months old. I know a few stories about her. She had a very big life. I mean, she had five kids and lived on the prairies in North Dakota. Probably a lot of stories there that aren't there. This is a little bit of a subject change, but maybe not. We were at an antique store a few days ago in Kentucky, and there was a tabletop with nothing but piles of old black and white photos, archival photos. They came from people's photo albums that were just dumped there. Some, you turn over and there were names. I actually took a picture of one and I looked it up, and it was a picture of somebody when he was 10 years old; he lived in Wyoming. I didn't follow that any farther, but it occurs to me this would be a good use of facial recognition, right? If somebody could identify these faces as these people, the photos could be routed in some way, or discovered in some way, by relatives or other people who cared about them. And I don't think at this stage, with all of these, that there's a violation of privacy necessarily involved — unless you think everybody who's already dead, unless they said otherwise, is entitled to privacy. But there's — [00:23:02] Katherine Druckman: That actually — I feel like I need to drop a link in here to a previous episode. Remember the — [00:23:08] Doc Searls: Which one?
[00:23:09] Katherine Druckman: Oh, there was one where we discussed exactly that. It was the unpacking of a photographic archive, and — [00:23:15] Doc Searls: Oh, right. [00:23:16] Katherine Druckman: Do people have privacy after they're dead? I think that's also the one where I mentioned one of my favorite quotes from an art professor I had once, which was that all photography is about death, basically. No matter what it is — [00:23:27] Doc Searls: Yeah, no. [00:23:28] Katherine Druckman: — it is about death, because it is capturing this moment, and eventually whatever's in it will be dead. Same professor has also since died. Um — [00:23:37] Doc Searls: Well, so it occurs to me: okay, I would love for my archival work to persist. I've been worried — I have 77,000 photos on Flickr that have almost disappeared from the web two or three times. But SmugMug saved them. Will they save them forever? I don't know. I hope the Internet Archive will. Will the Internet Archive last forever? I don't know. And there's something in here. [00:24:03] Katherine Druckman: Training all the AI data models, I suppose. [00:24:05] Doc Searls: Absolutely. And I was told that whatever that awful company was that sells facial recognition to police departments did train on a lot of my photos, actually, of identified people. But, you know, we're a long way from learning whatever the manners ought to be in the digital world. We don't know yet. And I think we proceed by a process of discovery, and a lot of the discovery is around things that are not comfortable retrospectively — you know, like maybe we shouldn't have done that. But I don't have a big problem with a lot of that, at least, with AI research at this point, because we've gotta work out a lot of different things.
But the black box of it — you're right, Ezequiel, nobody knows what's going on inside of there, by design almost. It's too complex. You almost need another AI to tell you, and it might lie, right? [00:24:59] Ezequiel Lanza: Yes, I know. This is why we have things like responsible AI. I mean, you use responsible AI — you use other algorithms — to fix those boundaries, or to limit those boundaries. But you need it if you want to use these things. [00:25:23] Katherine Druckman: Um, when you say responsible AI, could you kind of unpack that a little bit? [00:25:28] Ezequiel Lanza: Yeah, sure. It is just a way to define the boundaries of an algorithm. Let's suppose that you have these GPT algorithms. They were trained with all the internet data, right? In this internet data you have racist comments — I mean, a lot of things related to racism — and the algorithm learns, right? So if you ask for something, the answer could probably be not so good, not what we want to show, for instance. So this is why you need to put in an extra layer, to verify and to remove those biases. Because if you train a model — suppose, as an example, that the only data you have is from a particular town in a country where you have one kind of people. The only data the algorithm will know is these people, this group of people. So when you ask for something, it will answer biased toward this group of people. So if — [00:26:46] Katherine Druckman: Right. Okay. So you're — yeah.
[00:26:48] Ezequiel Lanza: If you'd like to use this model with another group of people or whatever, you need to say, okay, this is not the answer for everybody. You need to put in some boundaries. To avoid, say, talking about death — I mean, I don't want this algorithm to talk about death — so this is one limit. It's basic policies, rules, all the things that you say to the algorithm so it will be more fair. The same applies when you are working in a bank and you are applying for a loan, for instance. The algorithm could take a decision based on the zone, on the geographical zone. You as a bank can say, okay, it doesn't matter where a person lives — it's just one more feature, it's not something so important — so okay, try to extract this feature. Responsible AI is playing there. Another role it has is to try to make AI fair instead of biased, basically. [00:28:09] Katherine Druckman: Okay. So responsible AI is really just fairness, an issue of representation in the data, basically. [00:28:17] Ezequiel Lanza: Yes, the — [00:28:18] Katherine Druckman: So — [00:28:18] Ezequiel Lanza: The data that you have is normally imbalanced. You don't have the classes all equally distributed. [00:28:27] Katherine Druckman: Right. Yeah, that's — yeah. [00:28:28] Ezequiel Lanza: Fair. [00:28:29] Katherine Druckman: For example, I think a good example of that is camera tech. You know — what do you call it — blink detection in cameras. That's been a controversial subject for a long time, because if your face is not the type of face that the AI was trained on, it can't figure out your face. It can't figure out if your eyes are open or closed. It can't figure out the shape of your face, and all these various things.
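The loan example Ezequiel gives above — telling the algorithm that where an applicant lives doesn't matter, so "extract this feature" — is, in the simplest case, a matter of stripping the sensitive attribute before the model ever scores on it. The sketch below is a deliberately toy illustration: the feature names, weights, and the stand-in "model" (a weighted sum) are all invented for this example, and this is not how any real bank's system works.

```python
# Toy illustration of excluding a sensitive feature ("zone")
# so a loan-scoring model cannot base its decision on it.
# All feature names and weights here are invented.

SENSITIVE_FEATURES = {"zone"}

# A stand-in "model": a weighted sum over whatever features it sees.
WEIGHTS = {"income": 0.5, "debt": -0.3, "years_employed": 0.2, "zone": -0.8}

def strip_sensitive(applicant):
    """Remove sensitive attributes before scoring."""
    return {k: v for k, v in applicant.items() if k not in SENSITIVE_FEATURES}

def score(applicant, fair=True):
    # With fair=True the sensitive feature never reaches the model.
    features = strip_sensitive(applicant) if fair else applicant
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in features.items())

applicant = {"income": 4.0, "debt": 1.0, "years_employed": 5.0, "zone": 3.0}

biased_score = score(applicant, fair=False)  # zone drags the score down
fair_score = score(applicant, fair=True)     # zone is ignored entirely
print(biased_score, fair_score)
```

In real systems, dropping the column is only a first step: proxy variables (a postal code correlating with demographics, for instance) can leak the same bias back in, which is why responsible-AI tooling goes well beyond simply hiding a feature.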
[00:28:54] Ezequiel Lanza: Yeah. [00:28:54] Katherine Druckman: So I guess that's — [00:28:55] Ezequiel Lanza: The topic. [00:28:56] Katherine Druckman: Yeah. [00:28:57] Ezequiel Lanza: If you are developing something like this, you as a person can see it, but if you ask the model, the model gives an answer that is probably not fair. [00:29:10] Katherine Druckman: Right, right. You're imposing your own human bias on a machine, and then the machine just spits it right back out at you, exposing your bias very plainly to your face. [00:29:20] Ezequiel Lanza: Yes. [00:29:21] Doc Searls: Yeah, it's interesting. I mean, I think all animals do this, but humans do it: we're profiling at all times. We have to. And an interesting thing about humans is that we all look different and sound different on purpose — this is part of the design. But we do fall into classes, you know: there's tall, short, wide, narrow. I know from looking at it that facial recognition is really looking at the geometry of the face — the angles, you know, there are 15 dots or 20 dots or something like that that correspond with different features, which is how it works regardless of lighting. Like my iPhone — I'm kind of amazed that it can see my face in near darkness, but it does know what the geometry of my face is, even though I'm wearing glasses and I have a beard. If I shave this off or take these off, it won't make any difference. That's pretty cool. But it's profiling me. It's got my profile, whatever that is. And we're judging everybody all the time. So we need to expect machines to do these things for us as well. What do we do with that? I mean, you want it to not be biased about certain things, or at least to treat certain variables equally, or without prejudice — but prejudice is in fact how we operate, you know, to a large degree.
You know. [00:30:49] Katherine Druckman: Only heightened and highlighted, I suppose — concentrated and highlighted — [00:30:53] Doc Searls: Yeah, and — [00:30:54] Katherine Druckman: in the data models we've created. [00:30:56] Doc Searls: One of my favorite lines is from a guy named Ed McCabe, who was — still is around, I think — a famous copywriter back in the sixties and seventies. He said in an interview: I have no use for rules; they only rule out the possibility of brilliant exceptions. And that's what makes us human too — the brilliant exceptions. And one of the great teachers, a guy named John Taylor Gatto, said everybody has an inherent genius of some kind or another, whether it's realized or not, just like we all have a different face. And I don't think that could be replicated. I think that is where our soul resides. I was thinking about it when we were talking about archives: I'd love it if my archives were saved, but one of the things people value about me is that I'm funny a lot of the time and I make jokes. And how do you make that up? I mean, I can't emulate that. That requires irony, right? And contradictory meanings that evoke some other sense. And that can't be replicated, you know? I don't think you get that — you can't bring that back from the dead. [00:32:12] Ezequiel Lanza: Well — [00:32:13] Katherine Druckman: So — oh, go ahead. [00:32:14] Ezequiel Lanza: From the neurophysics side: they study how we learn with our neurons and so on, and once they understood our brain, at least that part, they could replicate it in an algorithm. This is why deep learning works and so on — it learns in the same way the brain learns, right? But as you said, with irony or love or sentiments, we don't even know
how it works in our brains. So how can we empower machines, how can we make algorithms emulate that, if we don't know how it works in our own brain? I see AI like a kid now — a three-year-old, probably a five-year-old kid, that couldn't speak and now can. But it's pretty innocent, because they tell you — as you said — what kids say when you ask them something. [00:33:23] Katherine Druckman: Oh, it is like a kid. Yeah. [00:33:25] Ezequiel Lanza: Um — [00:33:26] Katherine Druckman: Yeah. They will call you fat to your face. [00:33:28] Ezequiel Lanza: Exactly. [00:33:29] Doc Searls: I know. Yeah, yeah. [00:33:32] Ezequiel Lanza: They need to learn this extra layer, when to avoid saying things. [00:33:36] Doc Searls: Yeah. I remember one of my sons in a store — there was a little person in the store, and he yells out — he's four years old — hey papa, there's a little person! Look, there's a little person over there! A little person! And of course, the person came up and said, "You're right. I am a little person." And he was so blown away by that; he'd not seen an adult head on a child-sized body that would talk to him. In a similar way, with my aunt: he asked her how old she was, and she said, "Same age as your grandma." And he said, "But you look older." And I was terribly embarrassed by that, and I told him, don't say that — but he was like five years old. You know, they're honest. I suppose in a certain way AIs are like children too, cuz they're still naive about a lot of things. But one of the reasons I sort of bring up these areas — like humor and sentiment, which can't be replicated, much less understood, on the human side — is that we kind of need to know, almost at the start, what can't be done. What is it that can't be done? What is it that will not be like us, or will not be modeled on us, and will be something else? And at the same time: how do all these things change us? It's interesting looking at movies.
A movie that's four years old, and they're using a completely different phone, you know? Or ten years old, and there's a flip phone or something like that. Things are moving along pretty fast. [00:35:21] Ezequiel Lanza: Yeah, but I think that with kids, or how the brain works at the beginning of our existence, if you notice, the first thing that we learn is to identify objects. Okay, this is a table, this is something else. This is what you do with computer vision at the beginning: you start recognizing stuff. When you get older, in the next years, you start to speak, you start to read. And I believe the algorithms are at that stage now. [00:36:03] Doc Searls: Mm-hmm. [00:36:04] Ezequiel Lanza: I mean, as you said, three or four years ago, having a conversation with a chatbot wasn't so good. [00:36:17] Doc Searls: Mm. [00:36:19] Ezequiel Lanza: Now it can be pretty decent, right? You know that there is a machine on the other side, but you can have a conversation. [00:36:33] Katherine Druckman: Yeah. You can resolve your customer service issues with Amazon, for example, pretty well using a chatbot. [00:36:38] Doc Searls: It's surprising. [00:36:40] Katherine Druckman: And speaking of this sort of usability question, going back to the original topic, which was search, AI-powered search, I'm wondering, and I think Doc and I had this conversation, if it wasn't with you it was with somebody, about how there is almost a generational difference, depending on, let's say, what year you went to college or high school, in how you approach online search. Because the older you are, the more likely you are to have incorporated a completely different way.
You know, even if it was computerized at the time, a completely different way of interacting with search: a card catalog, any kind of library search, anything like that. Whereas if you were in college when Google was already well established, the way you approach these things might be different. And I just wonder: is a conversational format more usable? Is it really, or is it just sort of a weird, comforting illusion? Or can you get better data using other methods? These are things that I think about, and I wondered what y'all thought about that. [00:37:47] Ezequiel Lanza: If you ask me, I mean, I have experience with chatbots today, of course, but my previous experience was bad. I didn't want to talk to a chatbot in the past, because the experience was pretty bad, and it was pretty bad for years. So you now need to change that conception, to accept that a bot can really help you instead of being something, I don't know. I agree with you; I think it depends. For some things it could be really useful to have a conversational interface, but for other things I prefer writing the search. [00:38:36] Katherine Druckman: A keyword search, yeah. [00:38:39] Ezequiel Lanza: But of course it would depend. Five years ago I didn't talk to my Google Assistant; now I talk to my Google Assistant, with shame. [00:38:52] Katherine Druckman: Do you tell it your secrets? It's your confidant. [00:38:56] Ezequiel Lanza: Yes. But I think that if there are applications where the conversational way is really useful, it could be a good fit. [00:39:37] Katherine Druckman: But we haven't published the experience yet. [00:39:42] Ezequiel Lanza: At least [00:39:42] Katherine Druckman: Yeah.
[00:39:46] Ezequiel Lanza: Um [00:39:48] Doc Searls: There's also a kind of inconvenience illuminated by cognitive science, and I'm familiar with the linguistic side of that. Linguistic cognitive scientists will tell you that we understand everything through metaphor. So, for example, I just asked Perplexity AI, "What is time?" It gave me, I can't find it now because I have too many tabs open, ah yes: "a continued sequence of existence and events that occurs in an apparently irreversible succession from the past through the present to the future." And then it points me to Time magazine. But if we look at how we understand time, we understand it as something we value: we save it, we waste it, we spend it, we put it aside, we invest it, we lose it. We understand it in terms of money. And the irony of metaphor, another thing we value, is that it's always wrong. Time is not money. Life is not a journey, but we have birth as arrival and death as departure, choices as crossroads, careers as paths. It's almost impossible to talk about life without talking about travel. And we have a sense of traveling through time, which is money. And I wonder whether an AI can be asked: what are the metaphorical framings that are used for this concept or that concept? I'm guessing it can, but I'm not certain yet. I'm not sure anybody's put the effort into making that work. Probably somebody has, but there are these second-order characteristics that need to be pulled apart and explained before that happens. But I dunno, is that kind of work going on, Ezequiel, do you know?
[00:41:56] Ezequiel Lanza: I think that ironies could be part of it. It's about how we think AI can be implemented, like a conversation and stuff. You don't think of AI as a person, for instance, in some particular cases, right? So you think, okay, I would like to find out, tell me what time is, in a way that I can understand. [00:42:25] Doc Searls: Mm-hmm. [00:42:25] Ezequiel Lanza: I know that there are some research papers about ironies, metaphors, and so on. So I think that in some scenarios it could be useful to detect ironies, but I don't think that's the main point for any AI in a production environment, right? If you would like to put it in to help you with something, you try to think of it as a tool, right? [00:43:14] Doc Searls: Mm-hmm. [00:43:14] Ezequiel Lanza: And I think it goes hand in hand with, I mean, it goes together with our brains: once we figure out how this stuff behaves in our brains, we can have some algorithms that can mimic that. [00:43:34] Katherine Druckman: Can I really quickly, it's relevant to the line of questioning here, but I kind of wanted to plug the thing that you wrote about using AI for fraud detection, which I think is interesting because, relevant to what we're saying right now, we tend to think of AI as creating fraudulent things, right? If you ask a lot of people today about AI, they think of deepfakes, or creating inaccurate information. There's an inherent skepticism in this conversation. And at the same time, that reminded me of the thing that you wrote, because it's also quite useful.
Uncovering truth, detecting fraud, detecting inaccuracy, detecting patterns that would indicate something fraudulent or incorrect. And I'm thinking in terms of: can that same idea be applied to detecting irony, detecting lies, detecting fakes, detecting inauthentic information? [00:44:43] Ezequiel Lanza: Now OpenAI has a verification tool: you can give it a text and it can say whether it was generated by AI. [00:45:00] Katherine Druckman: Oh, yes, yes, yes. [00:45:04] Ezequiel Lanza: I don't know, I've tried it with some stuff that I wrote. [00:45:09] Katherine Druckman: Mm-hmm. [00:45:10] Ezequiel Lanza: And it says that it was probably generated by AI, so [00:45:16] Katherine Druckman: Yeah. [00:45:16] Ezequiel Lanza: maybe I am [00:45:17] Katherine Druckman: But. [00:45:18] Ezequiel Lanza: a robot or whatever. But yes, the fraud part, how to detect fraud, how to tell whether something was written by AI or seems to be fraudulent, it's going to be a huge topic in the coming years for sure, because most people will start using these tools. [00:45:39] Katherine Druckman: Right. Yeah. So the thing that you wrote was focusing on financial fraud, which I suppose would be easier to detect than the other types that I mentioned. But I just wonder if the same general approach might apply. [00:45:56] Ezequiel Lanza: I mean, at the end, underneath, the algorithms look [00:46:02] Katherine Druckman: Mm-hmm. [00:46:03] Ezequiel Lanza: for patterns that they know were fraudulent in the past. It's the same case for irony: if you need to teach an algorithm to detect irony, you need to feed it with examples of irony. [00:46:19] Katherine Druckman: Right. [00:46:21] Ezequiel Lanza: So it's the same with fraud detection, with all the parameters you can imagine it's taking in. The reality is that you need to feed those algorithms with examples you already know. [00:46:38] Katherine Druckman: Yeah.
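The supervised pattern Ezequiel describes, flagging new cases because they resemble labeled past examples, might be sketched like this. All feature names and numbers here are invented for illustration; real fraud systems use far richer features and models, but the feed-it-labeled-examples idea is the same.

```python
# Toy fraud detector: learn what past fraud looked like, then flag new
# transactions that sit closer to the known-fraud examples than to the
# known-legitimate ones (a nearest-centroid rule).

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Features (hypothetical): [amount_in_dollars, hour_of_day, is_foreign]
labeled_fraud = [[900.0, 3, 1], [1200.0, 2, 1], [750.0, 4, 1]]
labeled_legit = [[40.0, 13, 0], [60.0, 18, 0], [25.0, 11, 0]]

fraud_center = centroid(labeled_fraud)
legit_center = centroid(labeled_legit)

def looks_fraudulent(tx):
    """True if the transaction resembles past fraud more than past legit."""
    return distance(tx, fraud_center) < distance(tx, legit_center)

print(looks_fraudulent([1000.0, 3, 1]))  # → True  (resembles the fraud examples)
print(looks_fraudulent([35.0, 12, 0]))   # → False (resembles the legitimate ones)
```

The same skeleton applies to irony or fake-text detection in principle: swap the transaction features for text features and supply labeled examples of irony, which is exactly the hard part Ezequiel points out.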
[00:46:38] Ezequiel Lanza: be uh most [00:46:40] Katherine Druckman: Right. So, [00:46:41] Ezequiel Lanza: know it [00:46:42] Katherine Druckman: Yeah. So for AI to be useful, you have to seek an answer that exists. AI creates things that are new, but it's always based on something that already exists, right? So that's always the question. I think with code especially, that's an observation I see a lot: it's great at answering questions for simple problems that already have a lot of solutions, but if you're looking for something new, it's maybe not going to fit the bill. So I think we've covered this from a lot of angles: AI, generative AI, conversational AI, that sort of stuff. But as usual, we've probably left a lot of questions unanswered, and I think we're going to want to revisit this in the future. But in the meantime, thank you everyone for listening. Thank you so much, Ezequiel, for joining us to talk through some of these ideas that we had. Until next time.