Happy Hour #94.mp3

Harpreet: [00:00:00] Let's go. What's up, everybody? Welcome to the Artists of Data Science Happy Hour. It is Friday, August 26. This is happy hour number 94; that means only six more to go to hit that number 100. I think we might have to make it a big, big party. March would be a good time. Thank you all for being here, I appreciate y'all hanging out. No new episodes of the podcast released this week. Remember, I'll be coming back with interviews released on the podcast at the beginning of the year. Starting January, we're going to be releasing more episodes that are interviews, but in the meantime I will be doing interviews live on LinkedIn, streamed on YouTube and all that, so do keep an eye out for those; they will be released as episodes later. Thank you all so much for being here. Shout out to everybody in the room at the moment: we got Keith McCormick. Keith, good to see you. Serge in the building. Jennifer, what's going on? Eric, Russell and Kristen, good to see you all. So Jennifer sent some awesome questions to kind of kickstart the discussion with. As you guys know, sometimes I'm fumbling to find questions to start the discussion with because, you know, I just kind of jump right into it. And Keith has got another question to kick off the discussion with as well. I like this one from Jennifer: what's your superpower? Your superpower is the one thing that you do better than others around you, that you do better than anything else you do, that you do easily with little effort, that you do freely without being asked or paid to do. Your superpower. That's a great one. Is being curious a superpower? Could that be considered a superpower, Jennifer? Because if there's anything that has led to any amount of success that I've had, it's just being curious about things and then also just getting really obsessed about things. So I guess that's the same thing. Would curious and obsessed count, in your thoughts, Jennifer?

Speaker2: [00:01:55] Absolutely. That's a superpower. You betcha.

Harpreet: [00:01:58] So what about you? What is your superpower? [00:02:00] And then I'd love to hear from everyone here. In the meantime, if you guys have questions on LinkedIn, let me know; I'll keep an eye out on the chat for any questions. If you're watching on YouTube and you've got questions, let me know, I'm keeping an eye out on YouTube as well. We'll do a round robin on Jennifer's question, then we'll get to Keith's question. But Jennifer, yeah, what is your superpower?

Speaker2: [00:02:18] I would say mine is organizing things. Like, my spice rack has to be alphabetized, has to be in a certain order. The pantry: I had a real estate agent walk into my pantry and say, oh, well, you must never use this. No, I'm in there multiple times a day, and it has to be this neat and organized. I have been known to organize other people's closets and pantries as well. So I would say organization.

Harpreet: [00:02:48] That is one super weakness of mine, being organized; it's just very, very difficult for me to organize stuff. Eric Sims, let's hear from you. What's your superpower? By the way, if you're listening on YouTube or watching the livestream on LinkedIn, let me know if you've got questions. We'll do a round robin on this: we'll go to Eric, then we'll go to Keith and hear about Keith's superpower. And then if anybody wants to share their superpower in the Zoom room, feel free to just use the raise-your-hand icon and I will call on you. Go for it, Sims.
Speaker3: [00:03:18] Yeah, I would say my superpower is being obsessively original. Like, I just get so bored so easily, which is why I have a Millennium Falcon background, and why I do silly side projects like fortune cookie movies and, you know, things like that. Ever since I was a little kid, it's been that sort of thing, off the wall. And the reason that I think it's a superpower is because it allows me to approach old things in new ways and keep things interesting.

Harpreet: [00:03:54] Thank you very much, Eric. Let's go to Keith, then Russell, then [00:04:00] Kristen, and then we'll see if anybody else has anything to add. Keith, go for it, then Russell and Kristen.

Speaker5: [00:04:07] Sure. But you mentioned the definition that it can't be work related or paid, which makes it a little bit tough for me, because I'm with you on the curiosity thing. My favorite part of a project is the beginning, when I ask all kinds of questions like, are you sure that's what you want to do with the project? Or, let's say we could wave our magic wand and build this wonderful model: what are you going to use it for? I love asking questions like that, and that's when you really get at the heart of the issue. But I'm kind of thinking of it through a work lens. I'm sure I'm like that in my personal life, too, but I mostly think of it the work way.

Speaker2: [00:04:45] You can have a work lens on it. That's fine. It's the thing you'd do even if you didn't get paid to do it, which probably for you is asking questions.

Speaker5: [00:04:54] Yeah. Don't tell clients that, necessarily, but I can't help myself at that stage. I really want to know; I want to kind of reverse engineer the problem and all that kind of thing. So yes, it is work related, but it's also just a compulsion.

Harpreet: [00:05:12] Let's go to Russell, and then we'll go Kristen, Serge, Kosta. Go for it.

Speaker5: [00:05:20] Okay, thanks. So I'd say my superpower is paying attention, or observance. I'm just interested in everything around me, and I like to look at the details. Most of the people who know me, especially my wife, think I'm an alien, because I just notice these odd things and they just can't believe it: how did you even notice that? But I find interest in the smallest detail. So yeah, I'd say that's my superpower: paying attention and observing.

Harpreet: [00:05:50] That's also a super weakness of mine; I'm very inattentive at times. It takes a lot of effort and energy for me to pay attention. But Russell, [00:06:00] thank you so much for sharing that. Let's go to Kristen.

Speaker5: [00:06:03] Thanks, Harpreet.

Speaker3: [00:06:04] Yeah, I would say being enthusiastic and animated when I talk. I've always been like that. And honestly, even when I'm just with my family and friends, I will randomly blast out a movie quote, you know, in a social setting, because that's just who I am and I like to be funny. So it's definitely helped in interviews as well, having that.

Harpreet: [00:06:28] Thank you very much, Kristen.
Let's go to Serge, then Kosta. And then after Serge and Kosta, we'll circle back around to Keith's question.

Speaker4: [00:06:39] Okay. So I think mine is, although I'm curious as well, mine is creativity. I think I can't avoid being creative, even when it's not pertinent, even when it's not needed. You know, I'm trying to think outside of the box when everything's well within the box and there's no need for that. But yeah, I just want to see what else could be missing, because it really bothers me when the whole picture isn't there and there's something else, and I kind of feel I have to color outside of the lines to get to the core of something. Also, it's helped me with, I guess, promotion, marketing, things like that. And it's always been second nature for me to play with graphics, things like that. I started with Photoshop, like the first version, many years ago, and I've never stopped using it.

Harpreet: [00:07:44] And you did something just the other day that I saw. Was it Stable Diffusion that you did something with?

Speaker4: [00:07:52] No, it's not Stable Diffusion. It's a different diffusion. I forget [00:08:00] the name of the diffusion, but yeah, something diffusion. A diffusion model.

Harpreet: [00:08:03] Talk to us real quick. Give us a quick overview of what it was that you did, because I thought it was super interesting. And how can people find out more about that? Because I think you did post a link to a really cool notebook as well.

Speaker4: [00:08:15] Disco Diffusion.

Harpreet: [00:08:16] Yeah, Disco Diffusion.

Speaker4: [00:08:18] Yeah. So what I did was, I've been playing around with these generative models for a while, but I kind of felt like they worked well if I was doing a single image; if I was trying to generate something continuous, like an animation, there were always quirks to it. So I figured, how do I get an input that is smooth throughout, you know, and that has enough character? Because there's another way of doing it, which is basically you're generating each frame by zooming in on each frame, using the previous one as reference, and to me that's not really interesting. So I figured there has to be something else. And I remembered that the first computer-generated images I created, and the first computer-generated animations I created, were actually fractals. And I figured, fractals are just so nice. They're so natural, they're also so smooth; why don't I use those as inputs? And so I generated these animations based on fractal animations. One is of, like, a Japanese zen pond, you know, with the lotus blossom and all that. Another one is a Star Wars themed one, so you see all the Star Wars elements spiraling around. And I figured, well, there's just so much creativity that can be drawn from this. So I encourage people to use the Mandelbrot set. It could be any kind of fractal, but the Mandelbrot is probably the best one for this, and zoom animations, or any kind of animations based on that, to generate [00:10:00] their AI-generated art.

Harpreet: [00:10:04] Yeah, the Mandelbrot set stuff is super interesting. I'm a huge fan of the great Benoit Mandelbrot. I think I've got one of his books, The Misbehavior of Markets, which is really interesting; he applied his fractal theory to how markets move. But how does that work?
How does one generate... okay, so with the Disco Diffusion model you pass the input, as you said, something from a fractal, and then it goes through some architecture, and at the end you've got a piece of AI-generated art. How does that work at a high level? And what are some keywords that we should go research if the listener is interested?

Speaker4: [00:10:47] So the way it works is, it does each frame individually. The diffusion model is equipped to take an input image, so you can feed it an input image. I think it uses something called CLIP-guided diffusion at its core, so it's like regular diffusion, but it's also using an image and the text, and it can generate something frame by frame. But so that there's some consistency in everything (because otherwise it could be very creative on the second frame, something completely different from the first frame, and you don't want that), it uses something called optical flow to make sure that whatever comes up in the second frame kind of follows the first frame, and so on. That's a basic, old-school computer vision thing. The idea is that for the pixels, even the ones we're seeing right now coming through your webcam, we could have little lines drawn from where one pixel was to where it moved [00:12:00] from one frame to another, right? And so by doing that, you can have a mapping of any video.

Speaker4: [00:12:11] Of course, that can become tricky, and that's one of the issues, because if you have a big video and it's broken up into many scenes, obviously it's going to break the flow between one scene and the next, and you're going to see all these artifacts come up, because basically one frame is completely different from the one right after it. And that's why the Mandelbrot set is so great: it's completely continuous. There's no way it's going to be broken up into scenes. I also tested continuous shots from movies. I haven't posted it, but I took a scene from a Bond movie where it's just a single shot of Bond walking out of a crowd, and I then used as a prompt some coral reefs and aquatic life or something like that. And it actually looks like reefs of aquatic life, and then you can make it out, like a puzzle: oh, is that a person walking through? But you're not quite sure, because it looks like a piece of fish and a piece of something from under the sea.

Speaker4: [00:13:24] So it's interesting what you can do with this sort of stuff as well. It doesn't have to be a Mandelbrot, but I thought that was great; any continuous shot is well suited for this. And also, if you have a larger piece of video, you could break it up into scenes and do each one individually. That's another kind of trick you could do.
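[A rough sketch of the idea Serge is describing, for readers who want to tinker. This is not his Disco Diffusion notebook; it is a minimal, hypothetical Python example (assuming NumPy and OpenCV are installed) that renders two frames of a Mandelbrot zoom as a smooth input sequence, computes dense Farneback optical flow between them, and warps the first frame along that flow, the kind of temporal-consistency seed a frame-by-frame CLIP-guided generation could reuse. Function names and parameter values here are illustrative only.]

import numpy as np
import cv2  # opencv-python

def mandelbrot_frame(center=(-0.743643887, 0.131825904), zoom=1.0,
                     size=512, max_iter=200):
    """Render one grayscale Mandelbrot frame; increasing `zoom` zooms smoothly in."""
    cx, cy = center
    half_width = 1.5 / zoom
    xs = np.linspace(cx - half_width, cx + half_width, size)
    ys = np.linspace(cy - half_width, cy + half_width, size)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=np.float32)
    for i in range(max_iter):
        alive = np.abs(z) <= 2.0            # points that have not escaped yet
        z[alive] = z[alive] ** 2 + c[alive]
        escape[alive] = i                   # last iteration each point survived
    return (255 * escape / max_iter).astype(np.uint8)

def warp_along_flow(prev_frame, flow):
    """Warp the previous frame along the flow so it lines up with the next input."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the previous frame "upstream" of the motion (a rough backward-warp approximation).
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_frame, map_x, map_y, cv2.INTER_LINEAR)

prev_in = mandelbrot_frame(zoom=1.00)
next_in = mandelbrot_frame(zoom=1.05)  # slightly deeper zoom: smooth motion, no scene cuts
flow = cv2.calcOpticalFlowFarneback(prev_in, next_in, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
consistency_seed = warp_along_flow(prev_in, flow)  # would seed the next generated frame

[Because the fractal zoom never cuts between scenes, the flow field stays smooth, which is exactly the property being exploited here. In the actual animation notebooks it is the previous stylized output, not the raw fractal, that gets warped and blended into the next frame's init image; the sketch above only isolates the flow-and-warp step.]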
Speaker4: Another thing I've tested, which is really cool, is that you can take a video of yourself and apply these filters to it. So you could say, okay, you know, Marvel superhero (and I know you love Marvel), so you'd say, what if I was a Marvel superhero? And then you can do some basic video editing where you're going like this, right? And as you go like this, you all of a sudden become a Marvel superhero. And so you could do cool things like that, where you're combining live action with the stuff you've done with the diffusion models. It's really going to put a lot of digital artists out of business, because it's just so easy to do this stuff.

Harpreet: [00:14:30] Honestly, I think it'll make movies a lot cooler, though, that is for sure. Serge, thank you so much. There's a notebook that Serge had linked to in his LinkedIn post; I'll try to dig that up and link to it here, and we'll link it on LinkedIn as well. By the way, if you're watching on LinkedIn and you've got questions, please do let me know. Also, if you're watching on LinkedIn and have not yet smashed the like on this video, please go ahead and do so right now. Speaking of computer vision and superpowers, let's go to Kosta. Go for it.

Speaker4: [00:15:00] That was fascinating. At some point... I'm actually curious, what software do you usually use to do your effects and video editing stuff? I mean, I've always dipped my toes into audio engineering and video editing, but it's mostly just been DaVinci Resolve and basic video editing, right? I'm a bit of a VFX nerd as well, and I never tried it. Where would I start with something like that? Well, there are plenty of tools online. I mean, I love After Effects; I'm an Adobe guy, I love everything Adobe does. But for some things it's just so quick to do it online, just drop something in. There's this service called Kapwing, or something like that, and I've been using that for a while just to do some very simple things, like cropping an image or a video and [00:16:00] adding some effects to it, things that, for better or worse, are slightly more complicated to do in other video editing tools. That's it. But for really professional-grade stuff, yeah, After Effects is one of the best, I would say, and some would even say the best. But it also depends on what you need, because a lot of other tools come with a lot of plug-ins out of the box that for After Effects you have to get separately. That's, I believe, the main complaint people have about After Effects, but I think it is a great tool nonetheless.

Speaker4: [00:16:39] Yeah, I've been on the edge of that cliff for quite a while now. I watch a lot of Corridor Crew and digital video stuff, and I'm always on the edge of that cliff going, oh, but I've got a weekend, I want to dive into some visual effects and do some motion tracking stuff. That alone would be really fun, just doing a bit of motion tracking and adding, literally, a superhero's outfit to me or whatever it is, right? So, super tempting. To answer your question on the whole superpowers thing, I guess I've got two superpowers, one of which I'd like to trade for something else, if anyone's got suggestions. One is creativity, right? I'm a musician and I love everything performing arts, so I like to deal with things through that lens a lot, even if it's got nothing to do with it. Everything from writing a resume to, you know, how you present findings at work, right? All of that is performing arts for me. So I do look at things through that creative lens, and I like to utilize skills from different weird and wonderful areas.
And I'd like to keep that superpower. The superpower I'd like to get rid of, actually, is... so [00:18:00] the way I get through a lot of challenges is just, hey, throw more effort at it, lose sleep, cut the sleep and just throw more effort at it. It's been a great superpower for me so far.

Speaker4: [00:18:14] It's got me to where I've gotten so far, and I'm happy about that. But I'm starting to see the other side effects of that, so I'm just wondering if it's time I retired that superhero from my own little MCU. So yeah, that's my superpower side of things. Speaking of the optical flow and computer vision stuff, a bit of a tangent: I picked up this book recently; it just released, and this is not a paid plug at all, but I actually really enjoy it. This guy has come out with a second edition. I'll put the link in the chat, in case this video is backwards; I have a feeling it might be backwards. Richard Szeliski; it's the Springer book on computer vision. The first edition is pretty chunky, right, a few hundred pages, and it didn't have too much on the deep learning side; I think it came out in 2011. Pretty decent book. I mean, I read that one cover to cover in undergrad, and in postgrad again. The second edition came out earlier this year, and it's got this really nice mix of classical computer vision as well as deep learning, all focused towards computer vision. And I think they spend a fair chunk of time on the optical flow stuff. To me, this is probably foundational reading for computer vision; you've got to get through Szeliski at least once. Forsyth and Ponce is the other computer vision classic, especially if you're a robotics guy, because Forsyth and Ponce put a lot of it into how we apply computer vision to robotics, [00:20:00] like the whole optical flow stuff. We use that for, literally, visual odometry. So instead of using a GPS or a magnetometer or an accelerometer to tell you how you've moved, you take two cameras and then you say, hey, this pixel over here moved that much. I know the camera intrinsics: I know the shape of the lens, I know the field of view, I know all of those things. And I've got some other reference points as well, potentially a depth sensor or something like that for distance. But I know that this pixel moved this much, so I must have turned x degrees, right? And there's been a lot of work done on visual odometry that way, using a bunch of feature detectors along the way. It's a bit old school, but people are always building on that and throwing a layer of deep learning on top of it as well. So that's super interesting, and I've been diving right back into that just to refresh myself a little bit. So yeah, that's what I've been doing, and if anyone's got suggestions on what to trade that other superpower for.
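[Another quick, hypothetical sketch, this time of the visual odometry idea Kosta just described: track how pixels move between two frames and, knowing the camera intrinsics, recover how the camera turned. This is a generic classical-CV illustration in Python with OpenCV, not code from Szeliski or Forsyth and Ponce; the intrinsics and frame paths below are made-up placeholders.]

import numpy as np
import cv2

# Hypothetical pinhole intrinsics: focal length in pixels and principal point.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rotation_between(frame1_gray, frame2_gray, K):
    """Estimate the camera rotation between two grayscale frames (one classic sparse-VO step)."""
    # 1. Pick trackable corners in the first frame.
    pts1 = cv2.goodFeaturesToTrack(frame1_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    # 2. Follow them into the second frame with pyramidal Lucas-Kanade optical flow.
    pts2, status, _err = cv2.calcOpticalFlowPyrLK(frame1_gray, frame2_gray, pts1, None)
    tracked = status.ravel() == 1
    p1, p2 = pts1[tracked], pts2[tracked]
    # 3. The essential matrix relates the two views, given the intrinsics K.
    E, _inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
    # 4. Decompose it into a rotation R and a unit-scale translation t.
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
    yaw_deg = np.degrees(np.arctan2(R[0, 2], R[2, 2]))  # rough "how far did I turn?" about the vertical axis
    return R, t, yaw_deg

# Placeholder paths: two consecutive images from a forward-facing camera.
f1 = cv2.imread("frames/0001.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("frames/0002.png", cv2.IMREAD_GRAYSCALE)
R, t, yaw = rotation_between(f1, f2, K)
print(f"Estimated yaw change: {yaw:.1f} degrees")

[As Kosta notes, the translation recovered this way has no absolute scale, which is why real systems combine it with something like a depth sensor for distance.]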
Harpreet: [00:21:11] I mean, definitely check out that book for sure. I've got it queued up here. Thank you very much. Shout out to everybody that just joined us. Greg Coquillo is in the building; good to see you, Greg. Matt Blasa in the house as well. Albert Bellamy making what looks like a spinach and strawberry salad. Shout out to Favio Vazquez. Favio, dude, so good to have you here. Favio is one of the original OG influencers of mine, from when I first started getting active on LinkedIn back in like 2017 or 2018, and it's nice to have you here.

Speaker3: [00:21:41] I'm so happy. I think you sent me a message some months ago, but I got here by accident. But I'm happy to see all these pretty faces here, so it's nice to talk to you guys.

Harpreet: [00:21:56] Yes, I did. I messaged you back in November. Yo, I [00:22:00] know how busy you are.

Speaker3: [00:22:02] I'm sorry.

Harpreet: [00:22:05] Good to have you here, man, and excited to get you in on some of the discussion. Speaking of discussion, Keith, go for it.

Speaker5: [00:22:13] Oh yeah, I know there's that other question that we talked about, but before we move on, at the risk of this conversation being a little bit too random (but I think that's part of the fun, right), I wanted to keep going on the computer vision thing just a little bit more, and maybe Serge can comment on this first and then everybody can jump in. So there was some buzz on LinkedIn this week. It was a thought leader that I'm embarrassed I can't think of the name of, but somebody was speculating that DALL-E 2 would affect the stock photo market. If you're thinking about marketing, apparently it's a $5 billion industry, and you just want a landscape with a sun going down or something like that. I mean, maybe there's a market for avocado chairs, too, I don't know. But if you just wanted to do fairly standard stock photo stuff, but with some specific things, that's going to look realistic pretty soon, if it doesn't already. So it just seemed like a really interesting theory, and I wondered what everybody thought about it.

Harpreet: [00:23:20] I could definitely see that happening. I think I have seen that same article you're referencing, at least clicked on the headline. But yeah, that is really interesting. A $5 billion industry for stock photos, that's crazy, man. I mean, who gets the rights to these images, and how do you work with royalties? That's interesting. Think of all the...

Speaker5: [00:23:41] Photographers: they set up little studios and they do a close-up on a lemon or, you know, whatever it is, and then somebody needs that. So the idea made sense. When you think about it that way, the whole thing is fascinating, but you wonder, what's the business [00:24:00] application of the raccoon in the spacesuit? And I guess I was content to have fun with it and not worry about the business implications, but that LinkedIn post got me thinking. So Serge, I'm curious to get your two cents on that, or anybody's.

Speaker4: [00:24:15] Well, yeah, I think there's definitely some truth to that. Especially if you want something where stock footage may only have two or three options anyway. Like if you say, well, I want a visual of an orange sliced in half, sitting on a sofa, or something like that. It might not exist anyway, or maybe a variation exists in stock footage and it might be like $80. So you think, okay, well, I'm not going to pay those $80 for that picture, or, if you need to actually post it somewhere where all of a sudden it requires a commercial license, it could be more expensive.
You might as well just, you know, pay your subscription to whatever service you have, whether DALL-E or whatever, use the credits, and you can create endless amounts of variations on that same theme. So I think it would just make sense to use that instead. So it's going to create another industry, which is okay. I think the stock footage industry is not going to cease to exist; it's just going to go back to its original purpose. Originally, stock footage was stuff that really needed to be licensed, like sports images, images of Hollywood actors at a ball or something, things that were exclusive, and you needed it for news purposes to document what happened at those events. And that was it, right?

Harpreet: [00:26:00] Let's go to Favio. Go for it.

Speaker3: [00:26:02] Yeah, of course. So I've been testing every platform out there for a while. I have access to DALL-E. Luckily, I also have access to Midjourney; I don't know if you know Midjourney, I like it even better than DALL-E. And what I see is that, going to what Serge was saying, in the future it's going to be very hard to differentiate what is real and what was created by a machine, and that will have some conflicts. I mean, I think someone asked in the chat about how good these systems are for faces and stuff like that. Right now it's bad, most of the tools are not good, but it's going to change real fast. And in the next five years we will be able to create the images we want and not have a way to differentiate what's true and what's not true. DALL-E, because it's from OpenAI, has a lot of things in there, what's the name, the policies and so on, that prevent this from happening; they don't want to create too-realistic things, even though they could create super, super realistic stuff. But from what I've seen so far, the diffusion models are going to be changing the way we do things. And right now the only thing we do is just have fun on a computer and say, I want a dog jumping on the bed. But this is going to a place we have no idea about. I mean, can you imagine having a conversation like this [00:28:00] ten years ago? It was not that easy, right, to have it on YouTube and everywhere. So I can imagine things happening. And from the guys at OpenAI, I don't know if you saw Codex, an amazing platform they're building; that's going to be changing not only the way we do images, but the way we code, the way we think about things. And I think we are on the path right now of what we call hyper-automation. This hyper-automation is going to be good for some, bad for others. I always tell my students, go to companies building products. I mean, if you don't want to be out of a job in ten years, go to companies that are building products, or go build a product. If you want to be only a user of these technologies, you're going to be lost in space in five, ten years. So I think right now there are a lot of interesting things going on, mostly for fun, but in the next years we'll see a lot of things happen in the way we do things in data science and machine learning and in everyday stuff. I mean, there are even things for videos right now; there is a tool, I don't remember the name, and I caught some of the discussion earlier, that is doing automation on the creation of videos.
I mean, can you imagine a DALL-E for videos? That would be amazing, right? Like you say, I want a video of a person typing. That can save a lot of time for people. So again, right now, for me, it's just a fun application of computer vision, but I am sure that in the next years, we have no idea where this is [00:30:00] going, and it's going to be a very interesting thing.

Harpreet: [00:30:03] It's hard to make predictions, especially about the future, for sure. Thank you so much, Favio. Let's go to Greg, then Kosta, then back to Serge. By the way, you guys watching on LinkedIn, on YouTube, you got questions? Do let me know, happy to take your questions. Greg, then Kosta, then Serge. Go for it.

Speaker3: [00:30:22] In terms of DALL-E, I'm just a little bit more interested in how marketing teams and design teams will work going forward. So, for example, if they're having issues with licensed photos, because they're generating so much content to get people to buy their products, they're going to use this tool to create images that they don't have to worry about licensing, that they will own, and change and push people's behaviors into buying their products. So I think the people creating DALL-E are probably also thinking about doing that, right, where startups or other companies will have access to DALL-E and pay a low fee to be able to push content out there. And then you have the independent content creators, right? I don't have to go out to a stock photo provider to purchase images for my articles; can I use DALL-E to create those images and then put them in my article for people to consume, without having to worry about proprietary stuff? Those are applications that I'm very interested in seeing grow from a mass adoption perspective. And also, I remember reading a post from Cassie Kozyrkov from Google, and she did mention something about DALL-E which I agree with, which is, you know, this is an AI tool that we're using. And I [00:32:00] think, Keith, you mentioned who takes responsibility for these created images. So I think there's a shared responsibility, to me. You have the creator of DALL-E; to your point, Favio, they have safeguards that prevent certain images from being created, because with the billions of things it's being trained on, there could be things that are offensive, that are biased, and stuff like that, so they're really making sure they have the safeguards around that. And then you, as the thinker with the creative mind, you're using this tool just like a pen that you use to draw an image; you're using AI to draw the image. But the idea of that orange sitting on the couch, like you said, Eric, the idea comes from you, and the tool executes it. So really, it's a human-to-machine collaboration with shared responsibility, and I think some of that responsibility needs to sit with the user using the tool. So I'm more interested in how this will grow and create mass adoption, and where we can really see something come out of this, while we continuously put safeguards around those things.

Harpreet: [00:33:28] Yes, I saw a post by Laurence Moroney, who I think is head of developer relations or something like that at Google.
And he talked about how he used Midjourney to take the description he had of a sci-fi character in a novel he was writing and create an image of that character. I thought that was super cool. I mean, I think it just augments human creativity, right? Like...

Speaker3: [00:33:54] Correct.

Harpreet: [00:33:54] There's people out there who have interesting ideas, and they might be artistic ideas, but [00:34:00] maybe they physically can't draw.

Speaker3: [00:34:02] Can't draw it out.

Harpreet: [00:34:03] Yeah, and this just facilitates getting their idea out into the world quicker, which I think is a beautiful thing. Let's go to Kosta, then back to Serge. And shout out to Tom Ives in the building; good to see you, Tom. Matt Blasa as well. And then, Vin, let's hear from you on this topic as well. So let's go Kosta, Vin, and then Tom; I'd love to hear from you if you've got anything to add.

Speaker4: [00:34:26] Sure. I think we've touched on this a few times already, but if you look at anything that became mass-available post, say, 2002, whether it's YouTube, whether it's being able to put stuff up on SoundCloud, music sharing, video sharing, image sharing, we've eventually gone through this life cycle of initial early adopters who are just excited about the tech and put random crap up. The first few videos on YouTube weren't exactly production quality, right? But over time, now you're getting highly curated content, highly well-produced content, to the point of stuff like, what is it, Crash Course history and things like that, that's really specifically designed for people to learn from. You're probably going to see a very similar life cycle with things like this as well. Right now it may not be photorealistic, but you can immediately see it's quite decent at, like, cartoons and little hand-drawn kind of things, like from picture books. How much easier is it going to be if I want to create a picture book for, you know, kids today who want to learn about something new?

Speaker4: [00:35:56] They don't have picture books with the right information in them, right, and we want to start teaching [00:36:00] about different things. We can start creating that, just because we know what we want; we don't necessarily need the drawing skills to do that. So you're going to see this accessibility to creativity, right? Because what you saw with YouTube is what you saw with things like SoundCloud, things like Instagram or Tumblr and stuff like that. Now, I'm not saying this is going to replace artistic creation, but the thing that excites me about it is that there is a degree of understanding, essentially, of semantics: things that we define as humans, right? Like, I say a bear and it comes up with a bear. So there's an understanding of content, not just, hey, make this black and white, or things like that. There's this additional communication layer now, where through a few lines of text a computer is able to understand quite abstract and quite complicated combinations of things. So to me it's one step closer to that human-computer interfacing; it's getting that additional layer of content understanding and context understanding. So that's the exciting bit to me now.
Speaker4: [00:37:19] I don't know if it'll necessarily replace creativity, though. Like, for example, look at how specific videography and filmmaking are, right? Are you going to be able to turn around and say, hey, sure, I want a video of someone typing, great, but I want a video of someone typing in this way, framed in this manner, in this particular way? Are we going to be able to get to that? Because if we can get to that, it just makes filmmaking a super accessible art form, right? But I think there's a degree of human input and creativity there that you're [00:38:00] just not going to see rivaled any time soon. So I'm just curious to see where it goes. It's one of those interesting opportunities that starts off as, hey, this is fun, but who knows, it might be the start of something, like early-day YouTube videos. Eventually people will find a way to monetize it; we find ways to monetize basically anything. That is our superpower, I think, as a species: we'll find ways to make other humans pay for the things that we do.

Harpreet: [00:38:30] I mean...

Speaker4: [00:38:31] Is that the most cynical thing I've said in any of your podcasts?

Harpreet: [00:38:35] It's pretty good, though. It's pretty true. But I mean, just how far does it go? Like, we end up with an AI rapper, FN Meka, right? Getting signed by Capitol Records and then getting dropped by Capitol Records because of his use of very inappropriate language, you know, use of the N-word. And it's like, okay, that's interesting. It's an interesting case study. But I digress. Let's go to Serge, then we'll go to Vin, and then we'll go to Tom.

Speaker3: [00:39:04] A quick thing, Harpreet. Yeah, with FN Meka, the artist is artificial, but the voice is human-based, for now.

Harpreet: [00:39:12] Oh, okay.

Speaker3: [00:39:13] So the human was using those words. Oh, interesting. Yeah. Something we can talk about at another time. Yeah, it's an interesting situation.

Harpreet: [00:39:22] Yeah. Serge, go for it.

Speaker4: [00:39:26] Yeah. I think I said in another one of these sessions, the question was, what would you have been if not a data scientist, and I said I would have been some kind of artist or modeler or animator. In fact, I was a 3D modeler for a video game as an intern once; I loved it. But in any case, that was back in the day. I don't know if you guys remember Winamp and the visualizations that you would see in Winamp; [00:40:00] they were all fractal-based, and I thought that was so cool. What if there was a live feed of sounds, including music, with fractals responding to that, and then generated art overlaid on top of it, on the fly? Of course the GPU would have to keep up, you know, at 24, 30 frames per second, and keep generating stuff based on that. That would be so cool, I think. And then if, in addition to the sound, you also had some kind of image input, and that image was of a person standing in a room, then it could feed off of that, or some other visual input.

Speaker4: [00:40:50] So yeah, I was just thinking, what are the applications of that? Like at clubs, you know, restaurants, for ambiance that didn't bore people. Even the music could be AI-generated.
There's no reason it couldn't be, you know, especially for all that, quote unquote, elevator music. Yeah, I think it will be reductive towards art, but I think it will also raise the bar for what people have to create from now on. At the same time, it also creates avenues for companies to exploit other things. Like, I was seeing this video the other day about how AI can be used to personalize digital assets in all kinds of things. So imagine you're watching a show and it has personalized all the backdrops, so all the branding is geared exactly towards you, and you don't realize it, but it's showing the products that they think you aspire to. If you like some artist, they will show a poster of that artist, and all that sort of stuff. That will be AI, maybe not generating [00:42:00] the images, but predicting what you like and, based on that, placing it in the scene in real time.

Harpreet: [00:42:09] That's crazy. That's actually pretty cool. And that's probably a doable thing that's probably not too far off in the future; the future of ads is in generative models, straight up. Speaking of AI music, real quick: there's somebody I follow on LinkedIn who's probably one of my favorite people, Dr. Tristan, I can't pronounce his last name, Behrens I believe. He's got some awesome content, and also does interesting AI music as well. Yeah, Serge, that's a lot of stuff to think about there. Thank you. Let's go to Vin. After Vin, let's go to Tom. Shout out to everybody else in the room. If you're watching on LinkedIn and you've got a question or a comment, do let me know in the comment section there. If you're in the room here and you've got a question, just let me know and I'm happy to get to it. Otherwise, we'll continue on this interesting discussion. Vin, go for it.

Speaker5: [00:43:04] There are already a lot of brands that use customized images. They're not being generated on the fly, but there's a whole lot of recommendation behind what's in an image, especially in ads: what image should I serve you along with this ad in order to trigger a buying response? And that's been running for at least seven years, probably longer than that. Remember in the second Iron Man movie, they experimented in that race track scene with what should be put in the background and where it should be put in the background, and it was actually a company in Reno that did some of the machine learning behind experimenting with that. So it's been around forever when it comes to marketing. We're just now getting to the point where real time is possible. And so when you're talking about what the implications of this are, you're all kind of hitting on it: it's this concept of real-time generation [00:44:00] and serving. If you can change things in response to your audience, content changes completely at that point, because you can't have a person do that. There's no way to have a person shoot eight different endings to a movie, or five different versions of every single scene, let alone when you start segmenting; you'd want 75, you'd want 130 if you're a big enough company.

Speaker5: [00:44:28] And so the only way you could do any of this is if you have some machine learning put in there. And really you're doing some version of deepfakes, with your actors and actresses being sort of simulated in real time, changing, reacting, really engaging and interacting with the audience.
And three years ago, if I said that, I would have sounded like I was fringe, like, you're nuts, you know what I mean? That sounded crazy two years ago, and it's not crazy today. Now we're talking about it and saying, yeah, that's not far off. And I'm talking about this in an article that I'm publishing tomorrow, but we're not as far away as we think we are from all of these next generations. It's all one to three generations away. We're changing so quickly, we're advancing so fast; two generations could be eight months, three generations could be 14 months away. No one's ready for any of this. It really is unexplored territory at this point. But a whole bunch of people are already jumping on the bandwagon, trying to figure out how to exploit it. And Kosta just nailed it: we can monetize anything. Yes. And we can also use anything for the absolute worst-case scenario.

Speaker5: [00:45:52] It's almost like, what is the greatest thing we could do with this? And immediately after asking that question, somebody else says, oh, what's the worst? And [00:46:00] they go and do that. And I think if you want to look at the implications of something like DALL-E 2 and our move towards increasingly realistic simulations... because that's what we're doing: we're building increasingly realistic simulations, and we're able to build them in real time. It's kind of scary, because social media by itself right now is immersive, addictive. We can get people to do a lot of things they shouldn't be doing, and that they wouldn't have done any other way, by putting content together and using social media well. Now take it the next step: if this is auto-generated, if we don't need people building content, if we don't need people doing some of these tasks, there's a lot of fiendish directions that that's going to end up going in. But I think, when you talk about creativity, what are we going to have to do? You look at LinkedIn and it's a whole bunch of reposts. You look at pretty much any social media and you will see repost after repost after repost. Pretty soon you're not going to be able to do that without getting flagged. It's going to be so easy to recognize.

Speaker5: [00:47:11] Repost content, and repost content is stuff that a model can very easily mimic. So even if it's not exactly the same, you can type in, give me a variation on this person's content, and it'll give it to you. And so that sense of sort-of-original, but not really original, variations on common themes: none of that's going to be valuable anymore, because you've got a machine that can do all that. So if you want to be a person, if you want to be a creator, if you want to be differentiated, you're going to have to actually be able to be creative, which not everyone knows how to do. You're going to have to have some originality, some talent involved in this. You can't just dance on video anymore and come up with a different set of steps, and now that's the new viral [00:48:00] TikTok. You can't do that anymore. You have to have something that's better, something that's differentiated, something that's impressive. We can't do stupid human tricks for the rest of our lives. And I think one of the things that models are going to force us to do is be smart, like we could be, not be dumb, like we're able to be.

Harpreet: [00:48:24] That's a very powerful, powerful statement. Be smart,
like we could be, not dumb, like we're able to be. Where does remixing ideas and remixing things fit into that? I just picked up this book, I haven't read it yet: Remix: Making Art and Commerce Thrive in the Hybrid Economy. I always felt like creativity was just taking something and colliding it with something that doesn't seem like it goes with it, and creating something new from that. I guess that's a whole other conversation on what creativity is.

Speaker5: [00:48:58] Yeah. I mean, it's interesting to think that all of that could just be offloaded. You know, the variations on a theme: you talk about it in pharmaceutical research, where with a model you can do a whole bunch of variations on a single theme faster than a person could, and make some advances that way. And truly, you're removing all of that stuff that we pretend is us achieving our potential, and then you're honing in and you're saying, look, we've automated the dumb. Sorry, you can't do dumb anymore. You have to do something else. You have to do smart now, you have to do creative now, you have to do true discovery. You've got to actually use your human brain. Sorry. And I think that's going to be the end benefit to us, but I don't think it's going to be a fun path getting there.

Harpreet: [00:49:52] Love that. Thank you, Vin. Kosta, let's go to you. If you're watching on LinkedIn, let me know if you've got any questions; happy to take good questions. If [00:50:00] you're on LinkedIn, also be sure to smash like. If you're here in the room, be sure to let me know if you've got any questions, or if you want to comment, whatever, raise your hand. Shout out to Maria. Maria, I don't think I said hi to you: hi, how's it going? Let's go to Kosta, and then after Kosta we'll go to Greg.

Speaker4: [00:50:17] So, like someone said in the comments, this week's episode of What Vin Said. Yeah, it's kind of like this. There's a phrase that I like to extend, mostly because it's a phrase that was extended in The Prestige, which is my favorite movie. If anyone hasn't watched it yet: geez, what? Go watch The Prestige. It's worth watching. Christopher Nolan, Hugh Jackman, Christian Bale, Michael Caine and Scarlett Johansson. Fantastic. So there's this phrase, I think it's the guy that plays Tesla, the Nikola Tesla, that says it: man's reach exceeds his grasp, but, what was it, his grasp exceeds his nerve, or his courage, or something like that. I can't remember the exact quote, but basically I'd like to extend that by going: the only thing that exceeds all of that is his infinite capacity for stupidity. Right? Like, we're so capable of doing amazing stuff, but the first thing we come up with is an avocado teddy bear or a koala in an astronaut suit. Right? Like, we need to think further than that. What can we actually use this for that's actually going to benefit? Can we use this for translations? Can we use this for something beyond just comical meme-generation crap? Or is that where this whole thing dies? At what point are we taking technology and maturing technology enough for it to be... yeah, that man's grasp exceeds his nerve.

Speaker4: [00:51:55] Right? But exactly. So at what point do we take technology to a maturity [00:52:00] point where we're actually able to use it for something positive and something extremely useful? And at what point are we measuring by the wrong stick?
Right. Like, there's this other idea that any metric, the moment it's used to actually measure success, the moment a metric becomes a measure of success, it fails to be a useful metric. And I can't remember where that's from; someone, please pipe up. But essentially that's what we're seeing, right? Social media turned into clickbait. Twelve years ago, BuzzFeed started throwing stuff up, and we're measuring things on clicks, we're measuring things on impressions. Is that the inherent value? It's a proxy for the inherent value that we're providing, but is it a particularly good proxy? I don't know. We've got, like Vin was saying, this really addictive social media process. But now let's take that one level further, into immersion. It's going to be really easy to fool ourselves. Like, it's really obvious: you talk to anyone, and yeah, sure, clicks aren't necessarily the greatest representation of value. At face value, you can get most people across the line on that, because you can present plenty of good counterexamples.

Speaker4: [00:53:24] But if you take that beyond the low-fidelity item that's a click and turn it into something else, take it to the next level, we're adding another dimension to this, right, which is that personalization dimension, which is that fully immersive... this is what scares the crap out of me with VR and the whole metaverse crap, right? You jump into it, you add this additional dimension of immersion, and it's going to be so difficult to tell people, hey, that thing that you think you see value in doesn't actually have any real intrinsic value. It's already a problem with current [00:54:00] social media; I never discussed this before on this show, I think. But yeah, it could very potentially be an even deeper level now. It's like the Nigerian prince problem, right? You don't need to fool everyone; you need to fool enough people to make it bad. You just need to fool a solid 60% of people, which is not very hard to do. So that's kind of where that stands, and yeah, we're infinitely capable of leveraging that for our own means. The question becomes, who steps up and finds good uses for this, uses that take us through that hard path of saying, okay, the dumb and dirty stuff, we're not doing that anymore; we've just got to do the things that humans are really inherently good at? We're already seeing that.

Speaker4: [00:54:52] You look at automation and robotics, you look at self-driving trains, you'll see autonomous public transport come into play. It's not that far away, right? It's not unimaginable. I think there's a rideshare service in California, my cousin was telling me about this, that apparently does driverless rides, driverless Ubers essentially, but it's only between like 2 a.m. and 6 a.m., because there's no one else on the roads, and it goes like 20, 25 miles an hour. But that's the start of it, right? We're going through that hard part where drivers are losing jobs, so they need to do something more. But what we're seeing is that our capacity as humans is accelerating all the time, and the capacity of technology is accelerating that much faster. We've only had cars for 100, 120 years.
And in the last 20 years, we've gone from good cars to, hey, the cars are almost self-driving, right? So what is it that makes us inherently human, that we can add positive value from? [00:56:00] And that becomes this massive existential question that we've got to ask ourselves when we're creating technologies: how do we create them responsibly? We could create them to make a bucketload of money, but should we? And what should we make? And that's what concerns me about all of this stuff.

Harpreet: [00:56:19] Thank you, Kosta. You know, I'd love to hear from Tom. Tom, if you've got anything to add, chime in here; I haven't heard from you yet. Then we can go to Greg and Keith. Tom, good to see you again.

Speaker3: [00:56:29] Hey, good to be here. And I love the basement, by the way. Wow. And Keith, I'm feeling hurt because I sent you a private message you didn't reply to. But I've since figured out that's a real background, right? That's super cool looking. While you're on mute there...

Speaker5: [00:56:47] So I'm going to go ahead and interrupt you. I wondered why, when I posted it, it went to you, and it was brought up and I just wasn't quick enough.

Speaker3: [00:56:56] No worries, no worries. Hey, good to see everyone. I want you all to know I was in a moment of crisis, because, Harpreet, I had no effing clue what DALL-E 2 was. Now I'm an expert, because I perused the Google search results, so now I know what I'm talking about. But seriously, guys, I'm wondering... I should start with a confession. When I was young, I fell in love with several movie star actresses. It's true. And I dreamed about relationships with them and everything. And then I came to find out they're not in real life like the characters they played that I fell in love with. I'm not trying to make light of what you all have been saying, but a lot of what I've heard y'all saying is, this is powerful and it will be abused. Yeah, probably. But you know what? Humans [00:58:00] have been good at fooling us: making something not real, presenting it to us, making us fall in love with it, and then later we find it's not real. So I'm not trying to make light of the concerns you're laying out, I think they're real. But in a way it's going to be helpful, because we're going to be less and less trusting of non-reality. That's just my small thought for the day. I hope that helped.

Harpreet: [00:58:35] No, I love it. Thank you. It's just great to hear your voice, Tom. Thank you so much. Let's go to Greg, and after Greg we'll go to Keith. If you're watching on LinkedIn or on YouTube and you've got a question, do let me know. All questions are welcome; they don't have to be related to the topic at hand, that's just how we roll. Greg, go for it.

Speaker3: [00:58:51] For me, it's more of maybe a question
for the group: what do we call creativity? I don't know if we've explored that deeply enough here, and if I may, I'll start with that. What do we call creativity? You know, I try to simplify this in my mind: I call it having different perspectives. I'm trying to simplify this, right? People become creative, have a different perspective, a different sort of awareness around them in terms of how to solve certain issues, maybe. And typically, when your basic needs are satisfied, you don't have the drive to become creative in certain aspects. When you want to solve a problem, you're able to put things together very quickly, versus other people who are not pushed to do so. And if that's a good approach, is there a way you can train a machine to look at perspectives [01:00:00] a different way and then come up with a solution in a different way, a more effective way, and call that creativity? And then lastly, if something is born, you know, like a technology or something like that, and nobody uses it, what are the criteria for calling something creative? Do we need at least one person to find it useful? I was talking to a few friends yesterday, and one was telling me he went to, I think, Vietnam sometime in the 2000s, and noticed that people in certain areas there still didn't understand how the wheel could be very useful to them: they could carry water with it, they could use it as a tool to do things, right? Which, way back when, was a creative tool at the time. So I just wanted to put that to you guys: what do you define as creative? Can a machine do that? And what are the criteria for calling something creative, things like that.

Harpreet: [01:01:17] Yes, interesting question. I've interviewed quite a few people about creativity on the podcast; I think I went on a creativity rabbit hole. A couple of episodes I did were with Nir Bashan, another one with Natalie Nixon, one with the author of The Serendipity Mindset, Christian Busch, and a couple of other podcast episodes. So if you're listening, definitely check those episodes out. But yeah, I'd love to hear other people's takes on this. We'll go to Keith next. But I think creativity is just...

Speaker3: [01:01:50] Like, read the book. Read the book The Creativity Code by Marcus du Sautoy; it was a good one.

Harpreet: [01:01:54] Yes, who was also on my podcast. Yeah, we talk all about [01:02:00] creativity as well. That is a good book; The Creativity Code is really, really good. And I think when we think of creativity as having to make something completely brand new, some great magical work that has never existed, I think that's too grandiose a notion of creativity. I think creativity is like, you know, for example: there's the story of the old philosopher Diogenes, the famous Cynic philosopher, and he's walking with a cup to go and fill it with water. And he sees a kid drinking water, not with a cup or anything, but with just his own hands. Right? So here the kid has the tools available to him in his environment, and he puts them to use in an unexpected way to accomplish some goal. I think that is really what creativity is.
It's taking whatever your environment makes available to you and combining those elements to achieve some end in an unexpected way. That's me trying to boil it down as simply as possible. But let's hear from Keith. Fabio, if you'd like to jump in on this, I'd love to hear from you as well. We'll go to Keith, then Fabio, then Kosta, and then anybody else who wants to take a stab at this; I love this topic, so just let me know, raise your hand and I'll add you to the queue. Keith, go for it. Speaker3: [01:03:30] You're on mute, Keith. Harpreet: [01:03:31] Yes, you're muted, Keith. You're on mute. Speaker3: [01:03:35] Keith. Harpreet: [01:03:36] Keith is on mute. There you go. Speaker5: [01:03:40] It's late in the day; I needed more caffeine. All right, so I was going to ask if it was appropriate to talk about that other topic that we had discussed, but we can totally do that later or even table it for another time. Harpreet: [01:03:56] Yeah, we'll circle back to that. We will circle back [01:04:00] to that. But let's hear your perspective on what creativity is, what it means to be creative. Speaker3: [01:04:06] Yeah, something interesting you mentioned, Harpreet: the definition you gave for creativity is very similar to ones I've seen for intelligence. So that is a very good question from Greg, because where is the line between intelligence and creativity? I think they're highly related. Something else that was interesting about what you said is that it's something unexpected. That's interesting, right? Because intelligence doesn't need to be unexpected; it can be something that simply makes sense to do. But I think it's very hard to talk about one without the other. There's a very famous quote from Einstein about knowledge and creativity, that sometimes it's more important to have a good imagination than to know a lot of stuff. And that was one of the things that drove him to build something completely unexpected. For those who don't know, my whole career in academia I was working on general relativity and Einstein's work. One of the most interesting things about his work is two big ideas. First, gravity is not instantaneous; it takes time for gravity to go from one place to another. We only really confirmed that a few years ago, with [01:06:00] the detection of gravitational waves. I mean, we knew, right? But we knew for sure only a couple of years ago. And the other big thing is that even gravity can affect light. So why am I bringing this into the conversation? Because that is completely unexpected. It's not common sense even to think that the Earth is moving. Speaker3: [01:06:31] It's not common sense, right? Why isn't the Earth just staying here, while we see everything going around us and we stay steady? So I think creativity has been an amazing driver for intelligence, and the other way around as well. One of the things I say to my students is that it's good to know how to do stuff, to keep learning whatever you do, but also try to think of creative ways to solve problems. That is what makes a change in companies. I have the luck of working at H2O right now, and we're building interesting stuff. One of the things that really matters for the whole AutoML space
is creativity: being able to solve problems fast and in creative ways. So I'm happy to hear what you guys think, but my point for everyone listening is that intelligence and creativity are maybe the same thing, in a way; they're just different ways to represent [01:08:00] one more abstract, more complicated concept that we haven't figured out yet. It's like what Einstein showed: mass and energy are just the same stuff, we just don't see it that way. So, going forward, for all of the data scientists or aspiring data scientists listening to this podcast or conversation, make sure you're not just following the regular pattern of import scikit-learn and click fit. Make sure to be creative, and make sure to really, really think about what you're doing, because intelligence and creativity may be just the same thing. Harpreet: [01:08:48] Yes, interesting point. It reminds me of this: my kid is, like, two years old and we're weaning him off his soother, you know, the pacifier. He came up with a name for it; he calls his pacifier "beeta," and we're weaning him off of it. And he's got these puzzle pieces, these disks that have a little knob on them. So in an abstract sense, each one is kind of like his pacifier: it's got a knob and it's flat. And he looks at me, starts laughing, goes "beeta," and just puts this wooden puzzle piece in his mouth and starts using it like a pacifier. I'm like, that is a very creative solution to us taking something away from you. I just thought that was super, super crazy. But Kosta, let's hear from you, and then after Kosta we'll circle back to Keith's question. Speaker4: [01:09:42] So the interesting point there was the recurring idea of doing more within the limitations of your environment, right? Not necessarily knowledge; intelligence coming more from creativity than from knowledge. I had an interesting conversation about this with another percussionist, probably [01:10:00] going back about six years. It was at a time when a much younger, much dumber me was kind of annoyed at all the rules and constructs. I'm a classical musician, particularly South Indian classical music, and we have a lot of rules and grammar around it, much like any kind of classical art form. And the funny thing to me back then was, I'm kind of sick of all these rules; I'd like to break a lot of these rules. Having that conversation with him, who was a much more mature musician at that point, was kind of eye-opening, because what he said was basically that the real creativity lies in how you can work within the confines of those rule sets. We do that in music: we set up these confines for ourselves and then we try to work within them to come up with interesting solutions that other people haven't thought of before, to resolve a melody or to resolve a rhythm pattern or something like that. So yeah, you're right, it is about working within the confines of what's available to you. Sometimes limiting yourself in terms of what you're able to use helps you come up with more creative solutions, right? It kind of lines up with the whole, what was it, there's a whole thought experiment.
Speaker4: [01:11:21] I mean, I think it's actually been shown in real life as well: if a company presents six flavors, people can make decisions more easily than if it presents 20 or 30 flavors, because they're just inundated with too many options and ways of doing things. So often, if you actually cut down your ways of doing things, you end up with more progress. And what Fabio was saying, essentially what we're talking about, is synthesis-level thinking. If you look at any kind of teaching syllabus, you've got different levels of knowledge and understanding, right? The lowest of which is just recollection: hey, here's a fact, here's an equation. [01:12:00] Synthesis-level problems you see in university exams and stuff like that; it's the last question, the one that everybody struggles with, where they just give you a problem to solve. But that's really the level of thinking that we're starting to encourage more and more with these kinds of tools. So the question becomes: tools like DALL-E and all of this pull away the value of recollection. We've already, like, destroyed the point of human recollection, right? Rewind 20 years: you'd be memorizing friends' phone numbers. I knew all of my mates' home phone numbers off the top of my head, for sure; that was like 15 years ago. Now I don't remember my best mate's mobile number, because it's all in the phone. We've killed recollection-level thinking; it's just pointless now. So now we're moving up the ladder, and the only thing that's eventually going to be left, and I suspect it's not that far away, probably 20 years away, is that synthesis-level, creative thinking. So how do we train ourselves to do more of that and bring out that inherent, intrinsic value that humans do bring, that synthesis-level thinking? But here's the trade-off, and it's exactly what we're talking about: we're capable of evil and we're capable of great good as well. Just because a tool is capable of great evil, does that mean we shouldn't use it for great good? This is something that, like, Richmond al-Awlaki was talking about on one of his podcast episodes: essentially, if we had a way of diagnosing skin cancer that was 95% effective for one particular race of people but only about 40% effective for another race, does that mean we should not save the kids from the race that benefits from it, just because it's not equal and balanced and fair? [01:14:00] It's the same with DALL-E 2. Sure, we could start creating on-the-fly imagery that influences people to think in a certain way, but at the same time, look at it the other way. Let's say we get to photorealism with something like DALL-E 2. Take surgical training in VR. I know Sydney University is trying to do dental training with VR equipment, to teach dentists to basically use their tools, and they come up with fake scenarios. The problem is they've only got three or four fake scenarios. Speaker4: [01:14:31] It's very expensive to curate them, because the people with the knowledge of what it should actually look like are completely different people who have probably never sat in the same room as the people with the technological capacity to create those images synthetically. Unless they're working on their teeth, which is exactly how I ran into this.
A dental nurse was talking to me about this after my dentist appointment, and I'm like, cool, we should actually explore this. But what if we could get to a point where I, as a surgical specialist (I'm not a surgical specialist, but say I were), could say, hey, I want to see something like a slipped disc, but I want to see it at C5-C6 instead of C3-C4, and I want to see it with these additional factors. I want to see an X-ray of that, an MRI of that, an ultrasound of that, to see what tissue damage is around it. What if we could auto-generate, based off the knowledge of extremely specialized people, a much more high-fidelity experience that makes complex surgeries a lot safer, because a surgeon gets to see many variations of a case long before they even encounter it in the real world? So if we're able to do that with something like DALL-E 2, do the benefits essentially outweigh the other side, I guess, right? [01:16:00] Harpreet: [01:16:02] Tom, let's go to you; it looks like you've got a follow-up here. Go for it. And shout out to Mark Freeman. Speaker3: [01:16:08] Yeah, I got excited about Mark showing up too. I just wanted to back up Kosta's excellent point. I kind of forgot, when we were talking earlier about DALL-E 2 and the fears around it: it's going to be something powerful, and it's also going to be able to be used for a lot of good. Quick illustrations, I could go on and on, but I think that's something really good to remember. Few things in life are powerful that can't also be used for harm. And so when people used to ask me on shows about the fear of AI, I said, I'm not afraid of AI at all; I'm actually just afraid of whose hands that power is in. It's powerful, so yeah, it can be used for bad. But I think if we keep having communities like ours and keep encouraging the use of AI for good, that's the best we can do as humans to keep the bad at bay. That's my thought. Harpreet: [01:17:26] Thank you very much. Interesting point here from Russell: sugar can be super bad for diabetics, but we do not remove it from use for those who are not affected. Interesting point. What Kosta was saying about feeling confined by the rules reminds me of something. I was talking to Matt Blasé, feels like it was just a couple of weeks ago, and he was talking about this concept of shuhari, which I thought was super, super interesting. I think I was talking about this with Ben on the ride to the Giants game, too. It's pretty much the idea that you gain knowledge in three [01:18:00] stages. Shu is the beginning stage, where students follow the teachings of a master precisely; they concentrate on how to do the task without worrying too much about the underlying theory. Ha is the point at which the student begins to branch out; with the basic practices working, they start to learn the underlying principles and theory behind the technique. And ri is when they realize they aren't learning from anyone anymore but from their own practice; they create their own approaches and adapt what they've learned to their own particular circumstances. So I thought that was a neat way of looking at it.
Let's go to Keith. Keith, you had a question queued up, and it's only taken an hour and a half to get to it. Speaker5: [01:18:39] But I kind of spontaneously asked that second question that we got so much amazing insight from. I'll make one final comment on this last topic: I'd never thought about this with DALL-E 2, but what if you could take somebody's medical records, and maybe an X-ray that's actually been taken, a real X-ray, and create a kind of virtual-reality digital twin from all that information? Then, if it was a very complex, very high-risk surgery, the surgeon could potentially try the procedure before doing it in reality. That's a super cool idea we were bouncing around, basically. So the question I wanted to ask the group was: what topics have they been seeing at conferences? I have a couple of comments along those lines as well. I don't know about all of you, but I did my first in-person conferences since the shutdown this year, and I'm up to maybe five of them or something. I mean, I travel a lot; I would imagine everybody's had a chance to do maybe that many this year. Harpreet, I know you've done at least a couple, maybe a dozen or more, I don't know. So I'm very curious about that. I don't know where we want to start, [01:20:00] whether I should mention a couple of topics first; I'm kind of curious to see what the group has to say, what they've seen. Harpreet: [01:20:06] I've been to a couple of conferences, I think two or three already this year, mostly MLOps related, but I've got at least two more on the horizon. There's one in September, September 27th and 28th in San Jose, the Intel Innovation conference; that should be interesting. And then in October I'm going to Tel Aviv for ECCV, the European Conference on Computer Vision; I'm excited to see what's going on there. But let's go to Fabio; he seems like he's got some insights into what's been happening at conferences lately. Speaker3: [01:20:41] Yeah. So, being the only one here from this region, Latin America, I'd like to shed some light on what's happening here, from Mexico to Argentina. From what I've seen so far, one of the important topics, and maybe this is old news for you guys in the US or other places, is ethics; it has been very important in this region over the last, I don't know, year and a half. A lot of people are trying to create new laws for this. We don't have any law right now, so we are free to do whatever we want, and that's not ideal sometimes. So one of the things happening in the region is a lot of discussion about model bias, ethics, explainable machine learning, how to create new kinds of explainability; all these things are super common at conferences here. Another thing I've seen a lot is what's been talked about as hyperautomation. Regular, older companies in the region are trying to transform what they do in a [01:22:00] lot of different ways: not only automation of basic stuff, but automating a lot of different processes that are crucial to the company. For example, all the banks right now are trying to create intelligent systems to speed up the process of creating an account or asking for a loan.
Right now that's a tedious process where someone has to look at the documents and decide whether to approve this person or whatever. Speaker3: [01:22:32] So I think a lot of what we're seeing here in the region is hyperautomation and the verification of these people through images and documents and all of that; a lot of NLP-related work, with document AI systems that have been very powerful. Another interesting thing I've seen is public policy going into AI and data science. For example, here in Mexico, and also in Argentina and Chile, they have opened up a lot of data. I know this is maybe old news for more advanced countries, but it's a good thing that we now have open data and can download millions of datasets on what the government does, on the police, on the actual way the government is spending our money; it's all public now by law, and there are very interesting talks about open data and all of that. In a more technical sense, three topics have been very important in the region. One is automated [01:24:00] machine learning; that's the space I work in right now. A lot of companies are trying to adopt data science platforms, and the big companies like Amazon, Microsoft, and Google are all jumping into these AutoML tools; they have a lot of offerings now, and they come up in all the conversations at all the conferences. The second thing is the use of data science by small and medium-sized companies. Speaker3: [01:24:40] That has also been very interesting. For context, around 99.8% of companies in Mexico are small or medium-sized, and they employ 72% of the people in the country. So there's a lot of impact: millions of people are getting impacted not by huge corporations but by small ones, and we're seeing a lot of startups creating different tools for these types of companies, for payments, taxes, and stuff like that. Lastly, one topic I also see a lot is explaining to the business what data science is; a lot of data literacy work is happening right now in the region, and it's very important that we keep doing it. Most companies in the region really don't understand AI; I mean, they have no idea what to do with it. So we are in this process of explaining to businesses what's going on and what they can do with it. I think that's the panorama of the whole region, again from Mexico to Argentina, which combined is something like 700 million people. So it's a lot of people thinking about what's going on, and [01:26:00] we are on the path to creating amazing stuff for our people. I'm happy to be part of this group of people trying to elevate the conversation with governments, with big companies, and also with small and medium-sized companies. Yeah, thank you for the question. Speaker5: [01:26:20] Yeah, I want to comment on that too. Your list is very, very similar to mine, so maybe I can make a couple of quick comments. First, you actually mentioned something that's on my list too: MLOps. I mean, Harpreet, you've been to some conferences that are probably specifically MLOps conferences, but we were at ODSC together and there was a major MLOps presence. So there's no question I'm seeing that. And I speak at the TDWI conference, which is really an IT conference; it's not primarily a data science conference.
There's a data science track, but it's primarily what you can tell from the name, the Data Warehouse Institute. But MLOps is becoming important there too. So something's going on with MLOps, and I think, you know, Joe and Matt, who sometimes join the happy hour, are having great success, and a lot of buzz, deservedly so, with their data engineering book. I think people are starting to realize that these are two different career paths: the people who build the models, and then the whole infrastructure and the maintaining and managing of the models. There's so much complexity that you can't do it all. So that's definitely a trend, no doubt about it. Then, with some of the ones you were sharing, Fabio: no question on the ethics, fairness, transparency, and explainability. Speaker5: [01:27:49] I noticed that, actually, one of the last conferences I went to before the shutdown was the KDD conference in Anchorage. [01:28:00] I don't go every year, but it was in Anchorage and I wanted to go to Anchorage, so I decided to go, and I noticed a ton of explainable AI stuff. My notebook was filled with it when I left, and it ended up becoming kind of a research topic for me. That's absolutely started to appear everywhere; I don't think I've been to a conference since that didn't have at least a half-dozen tracks on it, and deservedly so. Part of it is that the more black-box models you use, the more you run into trouble with that. And then you'd think you'd get some public policy folks there too. The KDD conference I was at last week was in Washington, D.C., and you would think there'd be a government or public policy track; maybe I missed it, and I'm kind of careful about this kind of thing, but I'm sure they were floating around. There's just no way you have a big conference like that in D.C. without some public policy folks. So it may have just manifested itself in discussions about explainable AI and transparency, even if the titles didn't have "government" in them. I don't know how many of you know anything about KDD; it's been around forever, I think this was the 28th one. At the beginning of the week they don't have concurrent sessions, they have workshops, but not in the sense of one guy or gal standing at a podium talking for half a day; that's not what they mean by workshop. Basically someone curates it, and every 20 minutes somebody presents a different paper, or you meet as a group for an hour and then you meet at roundtables, those kinds of things. So their notion of a workshop is a little different from other conferences, but they would have entire workshops dedicated to transparency as a topic. And, oh, the automation [01:30:00] one is interesting too, because it's absolutely the case that there's all kinds of stuff you've got to automate that doesn't necessarily have a predictive model in the middle of it. It's not a topic I've spent a lot of time exploring, but I don't know if you guys have ever come across him, there's a thought leader who has some courses on LinkedIn Learning, and his thing is intelligent automation. I know he's been talking about how in a month he's going to a conference that is dedicated just to that. Speaker5: [01:30:32] But it's automation in all forms, not just predictive models or those kinds of things.
So no question, that's a trend. And, oh, one other one. I don't know if I'm the only one this was new to, but there was a whole workshop at KDD on AutoML, and one package seemed to get a lot of attention. Just in general, these papers, you come to realize, are really super technical; they tend to be post-docs, or people who just finished their PhD, basically presenting their dissertation as a 20-minute paper presentation. So there are a lot of Greek letters flying past you, and you have to stay caffeinated. For me, you've got to make a note of the actual paper and go home and read it, because spending only 20 minutes listening is just enough for me to figure out that I need to learn more, and then I have to actually go home and read the paper. But there was also a workshop dedicated in its entirety to a tutorial on FLAML, which apparently is an AutoML library that is gaining in popularity. Speaker3: [01:31:45] It's from Microsoft, actually. So, you know, I work at H2O; we also have a very famous AutoML tool, and next week I'm giving a workshop on all of the major [01:32:00] AutoML tools. It's going to be like a two-hour workshop where I'll talk about FLAML, H2O-3, TPOT, auto-sklearn, Auto-PyTorch, and PyCaret. I think those are the ones that are changing things; I mean, there are a lot more, but those are the ones that are changing the way we do things. And it's a super interesting space to work in, because it's getting easier. I mean, if you remember when we started, this stuff was super hard. I started out working in Octave, imagine that, so going from Java and Octave to this super simple stuff is a very interesting thing. Speaker5: [01:32:47] I went to an H2O AI conference once in the Bay Area; they really put on a good show. I'll warn you, you'll probably get a direct message coming your way from me, because I've been trying to figure out which three to five AutoML options are the ones to mention in an overview of AutoML. I don't know if anybody else has been trying to figure out, for their organization or for themselves personally, which one to use, which one to consider learning, or whatever. But you've obviously had to give that some thought, so I'll be very interested. Speaker3: [01:33:24] Contact me; I'm happy to answer. Yeah. Harpreet: [01:33:28] I'd be interested in learning about Auto-PyTorch; that does sound interesting. And FLAML, F-L-A-M-L, the one tied to ML.NET out of Microsoft. I remember trying to use ML.NET a few years ago, and that's when I was like, you guys are a trip trying to have me do .NET; I'm doing it in Python. Yeah, thank you very much, Keith, great insights from the conferences. I hope to see you guys at a conference soon. I'll be at a couple: [01:34:00] Intel Innovation in September in San Jose, where I'll try to connect with my Bay Area peeps out there, and then also ECCV in Tel Aviv, and then probably one or two more before the year is over. Mark, good to see you, man. Any questions or comments or anything? Glad you came to show up and support. Speaker4: [01:34:20] I had a last-minute appointment, but I was able to rush in real quick. I was like, honey, I've got to make it. Harpreet: [01:34:24] Thanks, man. Thank you for being here. That was good of you. Speaker4: [01:34:28] Just a quick shout-out:
There's going to be a conference in Sydney called, I think, DataEngBytes, and Joe Reis is flying down to be the keynote there. So I'm keen to finally meet him in person; I think he's the first one from this whole happy hour that I'm going to meet in person. Yeah, so I'm game for that. Harpreet: [01:34:49] When is DataEngBytes? That's in... Speaker4: [01:34:52] September. Oh, yeah, right. Harpreet: [01:34:55] Well, if you're going to any computer vision conferences, or know of any computer vision conferences, let me know, because I will be going to a lot of those. I'm trying to front-load; I'm also doing a bunch of meetup groups, deep learning meetup groups. You know, I've got to front-load travel before the baby comes in January, and then I've got to sit tight for a few months. Speaker4: [01:35:16] I should find a month of computer vision conferences to do a USA tour at some point. Harpreet: [01:35:21] Yeah, absolutely, right. Oh, thank you all for being here; I appreciate you guys swinging by. Fabio, good to finally meet you in person, and I've got to get you on the podcast soon. I'm actually launching another podcast soon-ish, towards the end of the year. You guys know The Artists of Data Science is, you know, for data scientists, but it's really not always about data science: I've always got a number of authors on, some of them New York Times best-selling authors, a bunch of philosophers, and just a bunch of cool, different people exposing us to interesting ideas. The next podcast I'm launching is going to be all about deep learning, and we'll be going super technical on this, [01:36:00] and it's going to be sponsored by Deci; it's going to be one of the in-house Deci podcasts that I'm doing. But I'm super pumped to just have an outlet to talk about deep learning with industry experts and people who are super into it. So if you are into deep learning, if you're an expert in deep learning, if you have a passion for deep learning, let me know. Harpreet: [01:36:21] I'm going to have you on the podcast, so definitely reach out. You know, it's in the works; I'm still kind of strategizing what it's going to be like, but you will hear me in a couple of other contexts. Jazmia, for sure. Yeah, actually, I feel so cool being part of an exclusive club with Jazmia; the club is about, I think, 13 or 14 of us that meet once a month, and we just talk about deep learning and stuff. I'll definitely be reaching out to Jazmia to be on that new podcast, the as-yet-unnamed podcast. But it's going to be all about deep learning, specifically deep learning in production: how it's being used, the challenges people are facing deploying their deep learning models to production, how they're overcoming those challenges, and what we can do during the model training and development process to facilitate a smoother deployment. It'll be cool, man. I'm excited about it. I'm super pumped. So if you're an author, yeah, I'll have you on too, you know? Speaker3: [01:37:19] Yeah. And we have a new tool called Hydrogen Torch. It's a new deep learning tool we have, and it has automation for deployment; it's a very interesting system. So contact me on LinkedIn; I'm happy to talk about that. We're doing a lot of different stuff in the deep learning space. We haven't jumped into AutoML for deep learning yet, though; that is called NAS, neural architecture search.
Harpreet: [01:37:48] That's what my company does; that's what Deci does. We've developed a NAS algorithm, and that's our bread-and-butter thing: NAS. Speaker3: [01:37:57] Yeah. Okay, so just contact me there and [01:38:00] I'm happy to be there. Harpreet: [01:38:01] Yeah, awesome. Y'all, thank you so much for hanging around. It feels good having one of these long happy hour sessions; it's been a while since we've had one of these almost-two-hour episodes. Thank you all for being here, and stay tuned for the new podcast. Remember, I'm going to have new episodes released on The Artists of Data Science coming in January; I'm just building up a backlog of episodes. Getting Dawn on, I know that was something we were supposed to do, but then my basement flooded and that didn't work out. But I hear that Dawn will be on the podcast, and I'll livestream that one. My friends, thank you so much for being here. Remember, you've got one life on this planet, so why not try to do something big? Cheers, everyone. Speaker3: [01:38:36] And to you guys.
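A quick editorial aside on FLAML, since it was new to several people on the call: FLAML exposes a scikit-learn-style AutoML class that searches over learners and hyperparameters under a time budget. The snippet below is only a minimal sketch of that pattern, not anything presented at the KDD tutorial or in Fabio's workshop; the synthetic dataset, the 60-second budget, and the accuracy metric are arbitrary choices for illustration, and attribute names such as best_estimator reflect recent FLAML releases.

# Minimal FLAML sketch (assumptions noted above): search for a classifier
# under a small time budget and evaluate the best model it finds.
from flaml import AutoML
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="classification",  # FLAML also supports regression and other tasks
    time_budget=60,         # seconds to spend searching (arbitrary here)
    metric="accuracy",
)

print(automl.best_estimator)                       # e.g. "lgbm" or "xgboost"
print(accuracy_score(y_test, automl.predict(X_test)))

The point of the sketch is the division of labor that makes these tools attractive: you pick the task, the metric, and the budget, and the library decides which learners and hyperparameters to try.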
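And since neural architecture search came up right at the end: in its simplest form, NAS means treating the architecture itself (depth, width, activation) as the thing being searched, with each candidate scored by training and validation. The toy random-search sketch below is purely illustrative; it is not Deci's algorithm or anything from H2O's Hydrogen Torch, and the search space and ten-trial budget are arbitrary choices for the example.

# Toy random-search NAS sketch (illustrative only): sample small MLP
# architectures, score each by cross-validation, and keep the best one.
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

depths = [1, 2, 3]            # number of hidden layers
widths = [16, 32, 64, 128]    # units per hidden layer
activations = ["relu", "tanh"]

random.seed(0)
best_score, best_arch = -1.0, None
for _ in range(10):           # search budget: ten random architectures
    arch = {
        "hidden_layer_sizes": tuple(
            random.choice(widths) for _ in range(random.choice(depths))
        ),
        "activation": random.choice(activations),
    }
    model = MLPClassifier(max_iter=300, random_state=0, **arch)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_arch = score, arch

print(best_arch, round(best_score, 3))

Real NAS systems replace the random sampling with smarter search (evolutionary methods, gradient-based relaxations, hardware-aware objectives), but the loop of propose, evaluate, keep-the-best is the same basic idea.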