Jordan: [00:00:00] So what he observes is, well, the thing that makes the law of large numbers work, if you look at Bernoulli's original proof, is that those coin flips are independent from each other. You know, you flip the coin a thousand times, but whether you're flipping a thousand different coins or the same coin a thousand times, the coin flips aren't influencing each other. Without that assumption, the law of large numbers might not be true.
Harpreet: [00:00:29] What's up, everybody? Welcome to the Artists of Data Science podcast, the only self development podcast for Data scientists. You're going to learn from and be inspired by the people, ideas and conversations that'll encourage creativity and innovation in yourself so that you can do the same for others. I also host open office hours. You can register to attend by going to bitly.com/ADSOH. I look forward to seeing you all there. Let's ride this beat out into another awesome episode, and don't forget to subscribe to the show and leave a five star review. Our guest today is a writer and professor of mathematics. He earned a bachelor's in mathematics from Harvard University and a master's in fiction writing from Johns Hopkins University before returning to Harvard to complete a Ph.D. in mathematics and eventually joining the faculty at the University of Wisconsin, Madison, where he's now the John D. MacArthur Professor of Mathematics. He's been writing for the general audience about math for over 15 years and has had his work appear in publications such as The New York Times, The Wall Street Journal, Wired Magazine and Slate. You might recognize him as the author of The New York Times best [00:02:00] selling book How Not to Be Wrong: The Power of Mathematical Thinking, which also made its way onto Bill Gates' top five summer books in 2014. Today, he's here to talk to us about his latest book, Shape, which is a far ranging exploration of the power of geometry underneath some of the most important scientific, political and philosophical problems that we face. So please help me in welcoming our guest today, a man who taught himself how to read at the age of two by watching Sesame Street, Dr. Jordan Ellenberg. Jordan, thank you so much for taking time out of your schedule to be on the show today. I appreciate having you here.
Jordan: [00:02:40] Oh, thank you for having me on. This is going to be fun.
Harpreet: [00:02:42] Yeah, I'm really, really excited. I initially reached out to you, it might have been late 2020, and you were busy writing the book, and it turned out amazing. So great job on that. I'm really excited to chat with you about this book. Before we talk about the book, let's learn a little bit more about you. So talk to us about where you grew up and what it was like there.
Jordan: [00:03:03] So I'm from a town called Potomac, Maryland, which is a suburb of Washington, D.C. What can I say? It was a brand new suburb hewn out of the woods. My family was the first family to live in our house, and I think every family on our block was the first family to live in their house, because they were built in the early 70s, when I was born. So, you know, a lot of people find the suburbs stultifying because there's nothing to do. I found it was a place where there was a lot of time to read books and learn stuff. So that was good.
Harpreet: [00:03:31] So you learned to read at the age of two. Do you remember what the first book you read was?
Jordan: [00:03:35] No, that is something I have been told. But obviously I wasn't there in any conscious sense, so I can't speak to it. I mean, I read a lot when I was a kid, and pretty indiscriminately. I read all kinds of things, a lot of science fiction, a lot of science, but also a lot of just, you know, whatever I would pick up at the library. I would just sort of bring home a big shopping bag full of books and go through them.
Harpreet: [00:03:58] And so you grew up the child of two biostatisticians. [00:04:00] When you were in high school, did you think you were going to follow in the family footsteps and go into math? Was that already kind of the path you saw for yourself at that age?
Jordan: [00:04:11] I would say so, although, I mean, I was always very, very interested in math, and certainly in high school that was my primary academic interest, although I was interested in a lot of other stuff, and especially writing, too. I'm not sure I really had a clear picture of what being a mathematician looked like as a job. Obviously, I know a lot more now; when I talk to young people who are interested in math, I have some meaningful stuff to say about it. I think I always saw it as a plan A, but I always wanted to have my mind open to other stuff.
Harpreet: [00:04:39] And your interest in mathematics led you down the geometry path. So what was it about geometry that you found so interesting and so fascinating?
Jordan: [00:04:49] Well, I wouldn't say I was into geometry at first, and the kind of geometer I am, if we want to sort of pin it down, is an arithmetic geometer, which means I do work that's really sort of on the boundary. In some sense, my central interest, number theory, is a very classical part of mathematics that has to do with numbers and equations and which equations can be solved and stuff like that. Over the years, it's become more and more clear that you can't solve those problems without real geometric insights. Those problems don't seem geometric on their face, but like a lot of things in math and a lot of things in life, they are geometric under the skin. So I think that was how I was led into that part of mathematics.
Harpreet: [00:05:24] And your book has helped me rediscover my appreciation and fondness for geometry. The last time I took a proper geometry class was in tenth grade. And just for context, I spent several years as a biostatistician myself, so I'm heavily ingrained in statistical methodology, and only recently got into more machine learning, Data science type of stuff. And so your book was really cool in helping me reconnect with geometry, because it was fascinating to me in tenth grade. But I feel like throughout history people have seemed to have some type of fascination with Euclid. [00:06:00] They tend to turn to Euclid when they're bummed out. So what's up with that?
Jordan: [00:06:05] Yeah, well, first of all, actually, before I go into that, I'm curious. I always like to sort of turn it around on the interviewer a little bit. What was your experience with geometry like? What was your emotional reaction to it?
Harpreet: [00:06:16] Yeah, I remember I took it as a summer school class in between ninth grade and tenth grade, so it was compressed into just eight weeks. And what I remember the most were the proofs. That's the thing I remember the absolute most, how to do proofs. And I remember wondering, why do I need to learn this? When am I ever going to use this? And I came across some geometry, obviously, throughout school, you know, in calculating surface area and conic sections and stuff like that in calculus. But geometry itself, that was tenth grade, and I never really took anything more advanced than geometry since then.
Jordan: [00:06:58] Yeah. And, you know, you definitely experienced this feeling of: this is somehow different from all the other stuff that's going on in the math curriculum. And that's something that a lot of people say, and it was sort of part of what induced me to center the book around this, this sense that, you know, from writing How Not to Be Wrong, I talked to a lot of people about their experiences in math. And geometry kind of sticks out as something that people really remember about math. Now, when you raise this question of, when am I going to use this, am I going to need to know this fact about an isosceles triangle? You know, probably not, to be honest, probably not that particular fact. And one interesting thing is that that has always been understood about teaching geometry. There was this poll taken of American high school teachers in the nineteen fifties where they were asked, OK, what are the primary reasons we teach geometry? And, you know, some people said, oh, so that students will know geometry, they'll know facts about triangles and circles and line segments and so on. And that answer was pretty popular, but it was [00:08:00] number two, and it wasn't that close. Number one was to teach the habits of thinking, the habits of deduction and deductive reasoning.
Jordan: [00:08:10] So even then, I mean, I think it's a tradition that we're not so much teaching geometry to know geometry as teaching geometry to inculcate students with certain habits of thinking. Now, that being said, I think there's a lot more to modern geometry than facts about isosceles triangles. There are some isosceles triangles in the book; I don't want to disappoint people who come to it looking for isosceles triangles. But geometry today is so much richer than that. And when you say, oh, I haven't taken a proper geometry course since tenth grade and now I'm studying machine learning, guess what? You are taking a geometry course. You may not see it in that way, but the apparatus of modern machine learning is absolutely geometric in nature. But your question, Harp, was something else, and I'm going to answer the question you actually asked, which is this idea of geometry as solace, geometry for when you're feeling down, which is definitely not one of the top two or top ten or top one hundred answers that geometry teachers would give, I think, when asked why we teach geometry. But it is part of the history of how people have seen the subject. You know, I start the book with a story that William Wordsworth, the poet, tells. I never knew this guy cared about math, but he actually cared about it a lot.
Jordan: [00:09:20] And in one of his most famous poems, The Prelude, he tells the story of a guy who was shipwrecked on a deserted island with nothing but a copy of Euclid. And he deals with his despair by one by one tracing out the propositions of Euclid in the sand with a stick. What's crazy about this story Wordsworth tells in this poem is that he didn't make it up. It's actually basically true. He took it from the memoirs of an actual shipwrecked guy, shipwrecked not quite on a deserted island, but he is shipwrecked, and he later goes on to become a major abolitionist in Britain and writes Amazing Grace. So he's sort [00:10:00] of important in his own way. I guess he found other ways to find solace in the end than geometry. But, yeah, I think it's one aspect of geometry: people find excitement in it, and you read again and again these stories of people encountering Euclid and their mind just becoming aflame, right, becoming electrified with this new kind of thinking they haven't experienced before. But it can also be calming. It can also be a way to get yourself out of feelings of being trapped, feelings of being alone, feelings of despair.
Harpreet: [00:10:35] And I mean, reading that part of your book made me want to go out and buy some of Euclid's work. Do you have a favorite modern translation of it that you think would be good for somebody who wants to find some solace for themselves?
Jordan: [00:10:50] It's a great question. And I will be honest, Euclid is not where I find solace myself. It's a very interesting question, how we should be teaching this. You can, for instance, go to St John's College in Annapolis, and they will say, hey, the right way to study this subject is to go to the source and read Euclid's books, ideally in Greek, but if not that, in some good translation. To be honest, I don't feel that way. Euclid wrote things absolutely up to the standards of his time, as well as he could. But it's not very modern. Much of it is confusing to a modern reader, and much of it, quite frankly, is not up to modern standards of rigorous proof, even though Euclid is almost a synonym in people's minds for rigorous proof. From a modern perspective, it's somewhat lacking. Even the axioms that we talk about were kind of redone by Hilbert in the very late 19th century, and that's what we would talk about today when we talk about axiomatic foundations of geometry, not Euclid's original axioms. So to answer your question, partly it's a matter of taste, because there is no question that historically there are tons of people [00:12:00] who had this experience with Euclid's Elements itself, even though I did not. But if you ask me what I would give to a kid who asked that question, I would give them something like Geometry Revisited by Coxeter and Greitzer, a more modern book, but one that is sort of modern and classical at once, that treats the classical things in a more modern framework. And when I say modern, this book is, I think, from the nineteen sixties. So we're on a very long time scale in math: modern means it's only 60 years old, so it's quite modern in its outlook. It's not two thousand years old.
Harpreet: [00:12:32] Speaking of modern geometry, modern math. So I've got a one year old son, and I'm teaching him all about shapes. You know, here's a triangle, here's a circle, here's a square. He's only one year old, but he kind of gets it, I think. But then I read the part of your book about topology, and I feel like I'm just feeding this kid lies, because it turns out that in topology, shapes are not the same as in ordinary geometry. So talk to us about topology versus geometry, and how the heck is a circle the same as a square?
Jordan: [00:13:01] Ah, topology. Yeah. So topology is a subject that was invented by a mathematician named Henri Poincaré, who is a major figure in the book, around this moment of, as I say, the turn of the century. I'm still getting used to that; people younger than me might not know which century I'm talking about. For me, the turn of the century means going from the 19th to the 20th, and around then Poincaré is developing this new science of topology, which he called analysis situs. He was a great writer, but not a great brander. Fortunately, people came up with a pithier name for this subject than analysis situs. And he has this wonderful slogan. He's an incredibly quotable mathematician, by the way, and there are tons and tons of quotes from him in the book because he put things so well. His slogan is that mathematics is the art of giving the same name to different things, which is an amazing insight, because it's so true to what we are doing in mathematics. On a very basic level, we [00:14:00] use the fact that a triangle over here and the same triangle over there have the same properties. We don't really worry about it; we're like, yeah, it's the same triangle, you just happened to draw it in two different places. If you ask a question like, is it isosceles, is it equilateral, that question is not going to differ depending on where you draw it on the page.
Jordan: [00:14:20] So it's very natural to call those two things the same triangle, the same way that in life, if you ask me where I was from and I said Potomac, Maryland, I didn't say the Potomac, Maryland of nineteen seventy-one. You could argue that that's a different place from the Potomac, Maryland of today; after all, they're in different temporal locations. But we don't do that. We do what mathematicians would say is identifying those two things. We call them by the same name, even though one is a city from forty-nine years ago and one is a city from today. So is a circle the same thing as a square? Well, that depends on what features you care about. That again is part of the insight: which different things do we call by the same name in some context? If we cared where the triangle is, we might refer to this triangle over here and the same triangle over there as two different triangles. But if all we care about is its shape, we might call them the same. Now, in topology you care about much less than you care about in classical geometry. In classical geometry, for instance, you care how big an angle is; in topology, you don't care about that. In fact, any operation you can do that involves bending or deforming or moving a figure,
Jordan: [00:15:33] As long as you don't break it, as long as you don't rip it, we just declare that it does not change the figure. It's the same thing. So you can make a square into a circle. This is a good sort of podcast exercise to visualize if you're listening in audio: sort of smooth out the corners, and then smooth them out a little bit more, and you can imagine it smoothly transitioning from being a square to a circle in a way that nothing ever breaks. And that's a signal that in topology [00:16:00] those two things are the same. Now, in other kinds of geometry, maybe you care about things like calculus, things like: what is the tangent? OK, well, a circle has a unique tangent line at every point; a square certainly does not. So if that's the kind of geometry you're doing, you want to consider them different. So, yeah, I mean, this new kind of geometry that Poincaré and the people in his cohort thought about has this incredible insight: you decide which things you want to consider the same, and having made that decision, that tells you what kind of geometry you're doing. And there's no right or wrong answer; you make that decision based on what problem you're trying to solve.
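The smoothing exercise Jordan describes can actually be sketched in a few lines of code. This is a minimal illustration of my own, not anything from the book: it parametrizes the boundary of a square, then linearly interpolates each point toward its radial projection onto the unit circle, a continuous deformation in which nothing ever tears (the function names here are made up for the sketch).

```python
import numpy as np

def square_boundary(n=400):
    """Points on the boundary of the square [-1, 1]^2,
    parametrized by angle so they line up with circle points."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.column_stack([np.cos(theta), np.sin(theta)])
    # Push each direction out to the square's boundary:
    # scale so the larger coordinate has absolute value 1.
    scale = np.max(np.abs(pts), axis=1, keepdims=True)
    return pts / scale

def deform(points, t):
    """Homotopy H(p, t) = (1 - t) * p + t * p / ||p||.
    t = 0 gives the square, t = 1 gives the unit circle,
    and every intermediate t gives an unbroken closed curve."""
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    return (1 - t) * points + t * points / norms

square = square_boundary()
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    shape = deform(square, t)
    # Watch the corners (max radius) shrink smoothly toward radius 1.
    print(f"t={t:.2f}  max radius {np.linalg.norm(shape, axis=1).max():.3f}")
```

Plotting `deform(square, t)` for a few values of `t` shows exactly the corner-smoothing Jordan asks listeners to imagine.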
Harpreet: [00:16:38] So is topology kind of the way to square the circle, so to speak? Because you talk about squaring the circle and how that drove Abraham Lincoln nuts. Is topology the answer to that?
Jordan: [00:16:52] Yes, that's a perfect example of a very different kind of geometry where you really do care. So what is the problem? The problem is one that goes back to the classical geometers of ancient Greece, where, given a circle, you want to construct a square which has the same area as the circle. So first of all, obviously, this problem doesn't make sense unless you think of the circle and the square as different. Right. So this is a problem for people who really consider a square a square and a circle a circle. Secondly, what does construct mean? What do you get to use? Do you get to do something like construct with a bulldozer? No, it means constructing in this classical Greek way, where there are only two tools you're allowed to use: a compass and a straightedge. So one allows you to draw a straight line between two points, and the other allows you to draw circles. And then, yes, the goal is: you have a circle drawn on the page and you want to construct the square with the same area. By the way, why would this be a thing to try to do? It's because to the Greeks, really, area was defined in terms of squares. So their way of understanding what it means to compute the area of a circle, it wasn't like we would think of it today, where we want to know what it [00:18:00] is to eight decimal places or something like that. That really wasn't the Greek style. They were not thinking about real numbers, and they didn't have decimal expansions. What they meant by computing the area was: construct a square that has that area, and then they would consider it computed. So that was a natural thing for them to want to do, but they couldn't do it.
Jordan: [00:18:19] And then lots of other people tried, and they couldn't do it. And lots of other people tried over many, many centuries, and they couldn't do it. And it became known as a famously hard, possibly impossible problem. It even appears in Dante, actually, just as an example of complete frustration. He starts one of his verses: oh, like somebody who was trying to square the circle, that's how I felt. His readers understood: I'm completely frustrated. So, you know, Lincoln has a super interesting relationship with this stuff, because he's not a classically educated person. Right. He has this rather humble upbringing. And in his study of the law, he finds himself saying, boy, I'm constantly coming into court being asked to prove something, being asked to prove my case. What does that even mean? I realize I'm doing it, I say I'm doing it, but I don't know what is meant by it. So he, sort of on his own cognizance, says, I'm going to go back and study Euclid. That's where I'm going to find out what it means to prove something. So he was one of those people who found an incredible amount of insight and enthusiasm in his reading of Euclid himself. And he became quite obsessed. And there is, and I am circling back to answer your question, Harpreet Sahota, we'll get there, you know, one of his law partners tells this wonderful story of finding Lincoln with a huge sheaf of papers and all these different colored pens, all this stuff going on at his desk.
Jordan: [00:19:42] And he's trying to square the circle. And he spends two whole days ignoring all of his legal responsibilities and trying to solve this problem. And eventually he gives up, and his partner says: we could see that he was sensitive about it, so we just never brought it up again. We just went on as if it never happened. So, [00:20:00] you know, I should say, people still try to square the circle, even though it's been proven to be impossible. That was proved in the 1880s, well after Lincoln's time. And a lot of people who try to do this become quite obstreperous about it. Right. There's a certain approach where it's like: I refuse to accept that I didn't do this; you have to accept my claimed construction. Even though we know it's not right. Maybe we don't know where the mistake is, but we know it's not right, because it was proved impossible. Well, Lincoln, I think, comes off very well in this story, because I like that he's ambitious enough to try but has enough humility to understand that he didn't actually do it. That's a very nice combination. And certainly that combination of ambition and humility is one that every mathematician has to have. I kind of think it's probably what you'd want every president to have, too.
Harpreet: [00:20:47] And I liked that you put in some parts of Lincoln's speeches where the way he was writing was very proof-like. I found that to be really, really interesting. So if you guys want to see what that's all about, make sure you pick up Shape. It is an amazing book. Now let's look into the geometry of something that I think my audience, mostly data scientists, statisticians, and analytics professionals, will find fascinating, a topic from your book How Not to Be Wrong, which is the geometry of correlation, which I found to be pretty interesting. So talk to us about that.
Jordan: [00:21:28] Yeah. And it's something that actually would have been great in this book. But, you know, you're not allowed to do the same thing in two of your books. When you write a second one, you're like, oh, man, I wish I hadn't used that in the last book. Right. So, you know, this notion of correlation is so fundamental in applied statistics. We use it all the time, sometimes misuse it, to be honest. It is a simple measure, developed by Karl Pearson, who is actually a character in this new book; he does lots of stuff in this new book. It's one simple way of measuring whether there's some kind of linear relationship between two variables, [00:22:00] the extent to which variable X affects variable Y in a linear fashion. Often it's misused. People sort of use correlation to mean: does X affect Y at all? And that can lead you to a big mistake, because you can certainly have variables with zero correlation where one of them strongly affects the other; it's just not affecting it in a linear way, which is what the correlation is there to detect. So it's detecting a very specific kind of causal relationship. Oh, well, not even necessarily causal, and that's a big no-no, right, to say that. But it's detecting one very particular kind of relationship between two variables. But geometrically, what's kind of cool is you can see it as an angle between two vectors in a very high dimensional space. More precisely, you can see it as the cosine of an angle.
Jordan: [00:22:49] So for the statisticians out there who work with correlation coefficients all the time, you know that that's a number that's always between negative one and one. Why is it always between negative one and one? Well, if you sort of study the formula, you can prove some inequalities and see that. But another way to say why it's between negative one and one is: because it's the cosine of an angle. And one thing I like about that is that that geometric understanding of what a correlation is makes things clear. Here's a sort of fallacy that people fall into all the time. They say: well, X is positively correlated with Y, and Y is positively correlated with Z, so it must be that X is positively correlated with Z. Well, that's just not true. It's a natural thing to hope for, but it's just not necessarily the case. Because, being positively correlated... all right, we're in for some trig. We're going to do some trig live on the air. OK, so when does an angle have a positive cosine? When it is acute. Now, we've got to really go back to eleventh grade and think about that. But if you don't remember, believe me, it's an acute angle that has a positive cosine. So what you're really saying when you say two variables are positively correlated is that the angle between the corresponding vectors is acute, [00:24:00] which makes sense, right? They're kind of pointing in the same direction.
Jordan: [00:24:03] The smaller the angle between them, the closer that cosine is to one, which is exactly when you would say the two variables are strongly correlated. So if you were to believe that whenever X is correlated with Y positively and Y is correlated with Z positively, then X is correlated with Z positively, that'd be like saying: if I draw three little rays, and this one is at an acute angle to this one, and this one is at an acute angle to that one, then the two outer rays must be at an acute angle to each other. But that's just not true. It would be great if it were, but it's not. You could have both acute angles be 60 degrees, making this little W thing here, and then the outer vectors would be at one hundred and twenty degrees to each other, which is obtuse. So it is perfectly possible for X to be positively correlated with Y, to sort of influence Y in a positive direction if you think causally, and Y to be positively correlated with Z, but for X to be negatively correlated with Z. And that maybe puts some pressure on our intuition if we don't think about the geometry. But if you do think about the geometry, it's almost obvious that that kind of thing can happen.
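For readers who want to see this on a screen, here's a quick sketch in Python, my own construction rather than an example from the book. It checks that the Pearson correlation of two variables equals the cosine of the angle between their centered data vectors, and then builds three zero-mean vectors at pairwise angles of 60, 60, and 120 degrees, exactly the "little W" configuration Jordan describes, so that X and Y are positively correlated, Y and Z are positively correlated, yet X and Z are negatively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Pearson correlation is the cosine of the angle between
#    the two *centered* data vectors.
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)
xc, yc = x - x.mean(), y - y.mean()
cosine = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))
pearson = np.corrcoef(x, y)[0, 1]
assert np.isclose(cosine, pearson)

# 2) Positive correlation is not transitive.  Build three zero-mean
#    vectors whose pairwise angles are 60, 60, and 120 degrees.
a = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)   # zero mean, unit length
b = np.array([1.0, -2.0, 1.0]) / np.sqrt(6)   # zero mean, orthogonal to a
X = a
Y = np.cos(np.pi / 3) * a + np.sin(np.pi / 3) * b          # 60 deg from X
Z = np.cos(2 * np.pi / 3) * a + np.sin(2 * np.pi / 3) * b  # 120 deg from X

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

print(corr(X, Y), corr(Y, Z), corr(X, Z))  # ~0.5, ~0.5, ~-0.5
```

So corr(X, Y) and corr(Y, Z) are both +0.5, while corr(X, Z) comes out -0.5: two acute angles, one obtuse.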
Harpreet: [00:25:10] I found that really interesting when I first discovered that correlation and cosine have this intimate relationship.
Jordan: [00:25:19] And people actually know this in Data science. People often talk about cosine similarity, like the cosine similarity of two datasets that they have. And that's exactly trying to keep track of this fact, that that's what correlation is really measuring. Mm hmm.
Harpreet: [00:25:32] Another interesting thing was this thing you talk about in the book, the law of large numbers and free will, and I found that to be really interesting. So what does the law of large numbers have to do with free will?
Jordan: [00:25:47] Well, nothing, it turns out, but it took some time to figure that out. So, you know, this is a crazy story, which, like a lot of things in this book, I did not know about until I started researching it. You think you know what you're going to [00:26:00] write about; you sort of set out, you make a plan, you go to the publisher, like: here's all the stuff I know, I can write a book about that, and I'll tell you all this stuff. And then when you start to research, of course, you find out about lots of stuff that you don't know. And that's much more interesting to write about, right? It's much more interesting to write about something that you're learning, that you're excited about. So this was a crazy story. OK, so setting the stage: the law of large numbers, famous theorem of Jakob Bernoulli, central to probability theory, which basically is the theorem that says that if there's some probabilistic process that has a fixed probability and you do a bunch of trials, then very, very likely the results of the trials are going to converge to that probability. OK, that was a little abstract. So let's put it in terms of coin flips, which is what we like to do as probabilists. You flip a coin a bunch of times; very likely the proportion of heads is going to be very close to 50 percent. Of course, it's not a certainty.
Jordan: [00:26:50] You could flip a coin a hundred times and get one hundred heads. But the odds of that happening, or even of getting 60 percent heads, or even of getting fifty point one percent heads, if you flip enough times, is pretty low. And that's something that I think people wrestled with philosophically. People asked: is there some weird force, like a law of averages, pushing that proportion towards 50 percent? No, it's not really like that. It's not actually that mysterious, and you can sit down and prove it on a sheet of paper, but it feels weird to people. And it feels especially weird because it applies not just to coins or balls pulled out of urns or any of the other natural examples that we often see in probability textbooks, but to human behavior. So you can look at people's age of death. You can look at people's age of first marriage. You can look at proportions of male to female babies born. You can look at all kinds of statistics of human life, and you can see them settling down very nicely to these nice averages, just as coin flips do. And that is disturbing to people, right? People are like: so [00:28:00] are we really just mechanical, like a coin? We may think we're doing our own thing, but actually everything about our lives is deterministically governed? So now, where does this argument about free will come from? It comes from Russia, again at the turn between the 19th and 20th centuries, this period of both political and intellectual ferment in Russia and elsewhere.
Jordan: [00:28:25] And there's this huge beef between two rival schools of Russian mathematics. There's a St Petersburg school, which is led by Pavel Nekrasov, who is an arch conservative, ultra religious Russian Orthodox guy from an ecclesiastical family. And on the other side, there's Andrei Markov, who is a very angry atheist. He's constantly writing angry letters to the paper; he demands to be excommunicated from the church because they excommunicated Tolstoy. He's like: if you excommunicated Tolstoy, excommunicate me, I'm just as big an atheist as Tolstoy. Come on. So, I mean, these are two absolutely opposite figures, and they both study probability. And Nekrasov is thinking about this law of large numbers, and he's one of the people who is very troubled by it, because this idea that we're just deterministic automatons is not consistent with his religion. Right. So he's like: therefore, it can't be right. I've got to find a way out. This is like a puzzle for him. So what he observes is, well, the thing that makes the law of large numbers work, if you look at Bernoulli's original proof, is that those coin flips are independent from each other. You know, you flip the coin a thousand times, but whether you're flipping a thousand different coins or the same coin a thousand times,
Jordan: [00:29:37] The coin flips aren't influencing each other. Without that assumption, the law of large numbers might not be true. Like, for instance, if you were to say, OK, the coin is somehow rigged so that however it falls the very first time, it's going to fall that same way forevermore — that would be a failure of independence, right? The coin flips would be strongly dependent, influencing each other. Then you don't converge to fifty percent either. [00:30:00] The first one comes up heads and then they're all heads, or the first one comes up tails and then they're all tails. In any event, you flip those thousand coins and they're all going to be either one hundred percent heads or zero percent heads. So what did Nekrasov say? He said, these statistical regularities of human behavior we see — you thought that meant we were all just deterministic automatons? No, what it means is that actually we're all independent from each other, because that's what you need for the law of large numbers to take place. So this is a proof, Nekrasov said, that we actually do have free will, just as the church demanded. And he was very excited about this and wrote many long papers about it. Well, Markov was not having it. For Markov, it was bad enough that this guy was super, super religious, which Markov objected to — but that he could live with. That he would bring math into it...
Jordan: [00:30:49] That was too much. Right. And now Markov was going to denounce him. He's like, believe what you want, but don't get it mixed up with mathematics. So what was the mistake that Nekrasov had made? He really had made a mistake. I'm not going to say he made a mistake religiously — what do I know? But he'd certainly made a mistake mathematically, which is the following. The law of large numbers tells us that if variables are independent, then they settle down into this very predictable statistical behavior in the aggregate. But one thing we drill into our students when we're teaching them logic, and certainly when we teach them geometry, is: don't mistake a statement for its converse. So let's remember what these logical terms mean. Just because independence implies statistical regularity, that doesn't mean that statistical regularity implies independence. Those are two different things. And in fact, as it turns out, the latter one is false. You certainly can have variables which are not at all independent from each other but still settle down into this nice regular statistical behavior — or at least, a priori, there could be such variables. And what Markov did, in order to win this fight, was to construct one. And that is the so-called Markov [00:32:00] chain that's absolutely fundamental to machine learning today. This thing that is named after him — I never knew that it was invented to win a fight about religion, but that is indeed the origin of the idea, from Markov.
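Jordan's rigged-coin example can be simulated in a few lines. This is a minimal Python sketch, not from the conversation (the sample size and random seed are my own choices): independent flips obey the law of large numbers, while flips that all copy the first one never converge to 50 percent.

```python
import random

random.seed(0)
N = 100_000

# Independent flips: each one ignores all the others.
independent = [random.random() < 0.5 for _ in range(N)]

# The "rigged" coin: however it falls the first time,
# it falls that way forevermore -- total dependence.
first = random.random() < 0.5
dependent = [first] * N

frac_indep = sum(independent) / N
frac_dep = sum(dependent) / N

# Independent flips settle near 50 percent; the dependent ones
# are stuck at 0 or 100 percent, so the law of large numbers fails.
print(frac_indep)  # close to 0.5
print(frac_dep)    # exactly 0.0 or 1.0
```

This is exactly the failure of independence Jordan describes: the dependent sequence is perfectly statistically irregular, no matter how many flips you take.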
Harpreet: [00:32:20] Hey, are you an aspiring Data scientist struggling to break into the field? Then check out DSG Jayco for Egressed to reserve your spot for a free informational webinar on how you can break into the field. It's going to be filled with amazing tips that are specifically designed to help you land your first job. Check it out: DSD jako for Egressed. It's really interesting — going through grad school and my master's degree program, throughout all those classes I've taken, they teach you the facts. They teach you how to do things. They don't teach you the history of how those ideas came to be. And by reading books that are in your field but aren't textbooks — such as yours, Shape and How Not to Be Wrong, and several other books — you learn the history of how these ideas came to be. Did you find that true in your career? Did you start to look at the history of the ideas by writing books, or was this something you were always interested in?
Jordan: [00:33:24] I mean, I had the same kind of training as you in my PhD training, definitely. You don't really study the history of the ideas in undergrad or the PhD. And by the way, I'm fairly sympathetic to that, because just learning the math as it is now — that's a huge endeavor. That's definitely a full-time job, just to learn how we think about things now. And to add on to that, by the way, how we used to think about things and how these things came to be — I think it's a great thing to learn, but I also recognize there's only so much time in the semester. And, you know, I just finished teaching an undergraduate course in real analysis, which was so fun. And I would say I touched [00:34:00] on where some of the ideas came from, and how people thought of them before they came to the kind of understanding that we have now — but not that much, just because a semester is not a long time. There's so much wonderful mathematics that you want to bring to people. But I would say that as a professor, I found it very, very useful to learn about the history of these ideas, and one reason is as simple as this.
Jordan: [00:34:25] If you're going to teach, you have to be empathetic. You have to be able to imagine yourself into the mind of somebody who doesn't already understand this subject and doesn't already understand the way we think about it in 2021. Because that's what your students are, right? That's why they're in your class: they've come to learn. And one of the best ways to imaginatively inhabit the world of people who don't understand this is to go back into history and look at the people who created it, because they definitely didn't understand it the way we do today — they were the ones generating the way we think about it today. And they were doing that in order to solve some problem that they had. And I think that can be very motivating. So I find it fascinating, but I would never say I'm a historian of mathematics. I mean, I learned stuff to write this book — there's a whole world of people whose profession is the history of mathematics, and I learned from what they write. There's so much more one could do. I touch on it a bit, but one could go much farther.
Harpreet: [00:35:21] Yeah. I mean, that's the beauty of reading books that aren't just textbooks — reading books that are related to the field, but not just for some classroom or some boot camp. But going back to Markov, I found that discussion in the book about Markov chains to be really interesting, especially as it pertains to language. But before we get into that, can you give us a brief overview of what a Markov chain is? And then let's talk about how it relates to language.
Jordan: [00:35:49] Yeah, absolutely. Because I sort of told this whole story, and I told you he came up with this example, but I didn't tell you what it was. And now I'm going to tell you. So a Markov chain is pretty similar to another word you [00:36:00] might have heard: a random walk. And that's the example that I give in the book — it's a kind of Markov chain. And how it works is very simple. You're in some landscape, and honestly, the landscape can be very little. It can even be a landscape with only two places you can be. In the book, I write about a mosquito that has two bogs that it likes to live in — what I call bog zero and bog one. And the rule is very simple. When the mosquito's in bog zero, there's some probability of staying in bog zero the next day and some probability of switching bogs. And when the mosquito's in bog one, there's some other probability of whether it stays or switches. And that simple rule enables you to simulate the whole life of the mosquito. The example I give in the book is one where the likelihood of switching is pretty low: most of the time, if the mosquito's in bog zero, it'll stay in bog zero, and most of the time, if it's in bog one, it'll stay in bog one. And so what you typically see, if you do a simulation, is long stretches of time hanging out in bog zero and not going anywhere.
Jordan: [00:37:01] And then every once in a while it'll jump to the other bog and stay there for a while, and then every once in a while it'll jump back. So in a situation like that, where you are today is not at all independent from where you are tomorrow — those are very likely to be the same. But it turns out that, even so, in the long run the percentage of time the mosquito spends in each bog settles down to computable averages, just like the coin flips. It's a little more subtle to compute what those averages actually are — it's sort of a problem in matrix algebra — but it happens. There's kind of a law of long walks, just like there's a law of large numbers. It's a little more complicated to prove, and there are a lot of subtleties, but that's what goes on. And that has proven to be an idea of fundamental importance. In fact — and now I'm dating myself again — the way that Google ranks your search results is essentially like this. Instead of a mosquito, there's a [00:38:00] little robot spider that's searching the Internet. And instead of choosing between bogs, it's choosing between websites. It's on a site, it looks at all the links on that page, it picks one of them at random, and it goes there. So it's doing a random walk on the entire Internet.
Jordan: [00:38:16] And the miracle is that there, too, you settle down to some fixed distribution of how much of your time you spend at each particular site. And as you might guess, if a site is somehow more important, it probably has a lot of links to it — and indeed probably has a lot of links to it from other sites that have a lot of links to them, and so on, in some recursive way. Well, that turns out to be a really good measure of how important a site is: how much of its time the random walker ends up spending at that site. So, again, for people who were on the Internet in the 90s, there was really before Google and after Google. People had tried to index the Internet and people had tried to develop search, and it just didn't work that well. And when Google came out with PageRank, their way of ordering the search results using this principle of the Markov chain, it just absolutely flattened everything else that was available as a way to find information you were looking for on the Internet. I don't think it's too strong to say that the Internet couldn't have become usable on a large scale in the way that it is without somebody having that insight.
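The PageRank idea can be sketched on a toy four-page web. This is an illustrative Python example (the link graph is invented, and real PageRank also adds a damping factor that this sketch skips): push the random walker's distribution through the chain until it settles, and the most-linked page comes out on top.

```python
import numpy as np

# A tiny four-page "web": links[j] lists the pages page j links to.
# Page 3 is linked from everywhere, so the walker should favor it.
links = {0: [3], 1: [3], 2: [0, 3], 3: [0, 1, 2]}
n = 4

# Column-stochastic transition matrix: from page j, follow each
# outgoing link with equal probability.
P = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        P[i, j] = 1.0 / len(outs)

# Power iteration: run a uniform start distribution forward until
# it stops changing -- the stationary distribution of the walk.
rank = np.full(n, 1.0 / n)
for _ in range(200):
    rank = P @ rank

print(rank)  # page 3 ends up with the largest share of the walker's time
```

On this little graph the stationary shares work out to 3/13, 2/13, 2/13, and 6/13, so the heavily linked page 3 wins — the "importance" measure Jordan describes.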
Harpreet: [00:39:28] Youngins out there who don't know the struggles we faced with Excite, Ask Jeeves, AltaVista.
Jordan: [00:39:35] Yeah, look up AltaVista.
Harpreet: [00:39:39] Yeah, Google is such an improvement. So if I can distill it down: in a Markov chain, what happens next is highly dependent on what happened previously. So there's like a state-transition kind of dependence, in a sense.
Jordan: [00:39:55] Yeah, absolutely. Although — and again, there are so many different [00:40:00] kinds of things that go under the name of Markov chain that I don't want to overgeneralize — in its purest form, it actually depends on what happened previously only to the extent that you know where you are right now. I mean, the pure Markov chain is kind of a memoryless thing: where you are right now determines the probability distribution of where you go next, but your whole history up to that point is forgotten.
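The two-bog mosquito makes a nice check on all of this. A minimal Python sketch (the switching probabilities are illustrative choices, not the ones from the book): tomorrow depends only on today, and strongly so, yet the long-run fraction of time in each bog still settles to a computable average.

```python
import random

random.seed(1)

# Switching probabilities: from bog 0, switch with prob 0.2;
# from bog 1, switch with prob 0.1 (illustrative numbers).
p_switch = {0: 0.2, 1: 0.1}

state = 0
visits = [0, 0]
steps = 200_000
for _ in range(steps):
    visits[state] += 1
    if random.random() < p_switch[state]:
        state = 1 - state  # jump to the other bog

frac_bog0 = visits[0] / steps

# Balance condition: pi0 * 0.2 = pi1 * 0.1, so pi0 = 1/3, pi1 = 2/3.
# Successive days are far from independent, but the average settles anyway.
print(frac_bog0)  # near 1/3
```

This is the "matrix algebra" answer Jordan alludes to: the stationary split (1/3, 2/3) falls out of the transition probabilities, and the simulation confirms it.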
Harpreet: [00:40:26] So what does that have to do with language? Is language a Markov chain?
Jordan: [00:40:29] So I would say no, but there is a philosophical question there. What is true is that you can learn something about language using a Markov chain. So, for instance, you can say, let's look at English text. And if you are Google — or lots of other people — you have access to literally trillions of characters of English text produced by English speakers. And you can say, OK, if I'm looking at a letter, and that letter is an A, what's the next letter going to be? And there's a probability: you can look at every A in your dataset and see what came after it. And you can see that A, despite being a very common letter overall, is not that common following an A, right? A B is more likely; a T or an S is really likely. I don't have it in front of me, so I don't know exactly what the distribution of letters following an A is. And so you can do a Markov chain where you just say: OK, if I'm at an A, look at the distribution — the one that comes from a corpus of text — of what follows an A, and put that letter next. OK, now I'm at a new letter; let's say it's an N. Then go back and look at my text, look at all the Ns, and ask: what was the probability distribution of letters following an N?
Jordan: [00:41:37] This idea was invented, by the way — like so much else in computer science — by Claude Shannon. But he did not have trillions of characters of English text, right? He literally did it as a thought experiment, and he actually carried it out. He'd say, well, just pick up a book off my shelf and flip through it until you see an A, and then write down the next letter. And with whatever letter that is, flip through books until you see that letter, and then [00:42:00] write down the letter that comes next. Now, you can improve on this — and this speaks to your question about memory and how much of your past you remember — by saying, well, maybe instead of single letters, I'm going to look at pairs of letters. So if I have an A, I look into my books until I see an A; say the next letter is N. And if the letter after that is T — because maybe it's a book about ants on my shelves, or whatever it may be — now I'm like, OK, my state is now N-T, and I look for a book and flip through the pages until I find the letters N-T and see what comes next. And the longer the chains of letters that you keep track of, the more English-like this randomly produced text becomes.
Jordan: [00:42:39] And it's a little spooky and a little awesome, because the texts that you produce this way are definitely not English, but you can very visibly see that they have something in common with English. So the Markov chain is remembering something about what English text is like. And of course, the ultimate example of this is text generation. GPT-3 is one very popular engine that's out right now, which is this heinously complicated thing generated by a neural net with one hundred and seventy-five billion parameters. And I guess it's supposed to have wasted as much carbon as a jumbo jet flying to the sun 14 times, or whatever it is. But essentially what it is is a very, very hefty Markov chain that's saying: OK, if the last five hundred words of text — not three letters anymore, but a much longer chunk — were this, what's the most likely next word to appear? Obviously you have to deal with the data in a more sophisticated way to generate a chain like that, but it's the same kind of thing. And it can produce something that is a lot more like English text written by a human. But honestly, if you play with it for a while, you can still tell that it's not.
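Shannon's flip-through-a-book experiment is easy to sketch in code. This toy Python version (the corpus is a made-up sentence standing in for a shelf of books) builds an order-1 letter model and samples from it; the output is gibberish, but gibberish with recognizably English letter habits.

```python
import random
from collections import defaultdict

random.seed(2)

# A tiny corpus stands in for Shannon's shelf of books.
corpus = ("the quick brown fox jumps over the lazy dog and then "
          "the fox and the dog nap in the sun near the barn ")

# For each character, record every character that follows it;
# sampling from this list reproduces the follow-up distribution.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate: repeatedly sample the next character from whatever
# followed the current one in the corpus (Shannon's procedure).
out = ["t"]
for _ in range(60):
    out.append(random.choice(follows[out[-1]]))
text = "".join(out)
print(text)  # not English, but English-like letter patterns
```

Tracking pairs or triples of letters instead of single letters, as Jordan describes, just means keying `follows` on longer chunks — the same idea GPT-3 takes to an extreme.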
Harpreet: [00:43:55] Yeah, yeah. It's interesting — you had an entire paragraph in your book that came straight from GPT-3. [00:44:00] It's on page ninety-five of Shape, which you guys should pick up — excellent book. You had a paragraph that was generated based on all the text previous to it, and it was spooky how it just sounded completely natural.
Jordan: [00:44:19] Well, it's even more spooky when it's your own style the thing is imitating. Yeah, it sounds like me in certain ways, but it doesn't really make sense. It doesn't know what things mean.
Harpreet: [00:44:31] You know, speaking of Claude Shannon — when I think about my dream job, it's not Facebook, not Amazon, not Netflix. It would be Bell Labs in the early nineteen hundreds — nineteen twenty, nineteen thirty. There was a lot going on there, man. It would have been amazing to be there.
Jordan: [00:44:48] Do you think we'll get back to a place where corporations are running real research labs like that — labs that aren't directed directly towards revenue-producing things for the companies, but are sort of more like academic departments housed by companies?
Harpreet: [00:45:04] I mean, companies? To be completely honest, I hope we do. Will it happen? I don't know if I know enough to say whether or not it will. But just being selfish as heck, as someone who loves research and also loves to get paid a lot of money, I hope that it does go back to that, because it's awesome, right? I mean, a lot of great innovations came out of Bell Labs and places like that, where people were doing research not purely for the sake of making a profit, but also pushing industry forward. But aren't some of the biggest companies — Google, Facebook, Netflix — doing stuff like that, right?
Jordan: [00:45:41] Yeah, I think to an extent. I mean, I write about some work that came out of Google's labs that I really like. And actually, I've talked to the folks at DeepMind, and there's definitely some stuff happening inside DeepMind that is not directly business-relevant and exists just because there are interesting people who want to work on interesting things and are given the capacity to do so. But I still wouldn't [00:46:00] quite say there's a 2021 analog of that mid-century Bell Labs. Maybe Microsoft Research, to an extent — but I think that's shrinking a little bit.
Harpreet: [00:46:11] Speaking of Google, you talk a bit about word2vec in your book and how this thing sounds like a magic meaning machine. So talk to us a bit about that.
Jordan: [00:46:20] Yeah, word2vec is a very cool project, and it's actually related to what we talked about with cosines, because it models a word as a point in three-hundred-dimensional space, and then it models similarity between words as the cosine of the angle between those two points, considered as vectors. So that's a very geometric way of thinking about what words are. Problem one is, it's very hard to visualize three-hundred-dimensional space. A point in two-dimensional space you can write as an X coordinate and a Y coordinate; the actual representation here, if you pull this stuff up in Python, is three hundred numbers. A point in two-dimensional space is a list of two numbers, while a point in three-hundred-dimensional space is a list of three hundred numbers, plain and simple. But if you looked at those three hundred numbers, you wouldn't really get any insight from them, right? That's not how humans think of things geometrically. So what's cool about that project — Tomas Mikolov is the lead author of the paper — is what they discovered: that you can, in some sense, carry out vector operations on the words. OK, what does that mean? It means, for instance, that you can think of the word "he" as a point and the word "she" as another point.
Jordan: [00:47:36] You can ask: what direction would you have to move, and how far, to go from "he" to "she"? And you can compute that, right — you have two lists of numbers, you subtract them. That's what I mean by a vector operation. But you don't even have to think of the numbers — just: what direction do you have to go, and how far? OK, now what if you do that motion, but instead of starting with "he," you start with the word "king"? Well, you don't [00:48:00] wind up exactly on the dot at the location of another word. But if you look at what's the closest word to the location you do end up at, you get "queen." All right, so that's pretty interesting. It's saying that moving in this direction is somehow like the operation of replacing a masculine word with its feminine equivalent. And indeed, if you do it to "waiter," you get "waitress." If you do it to "actor," you get "actress." So that made a big splash when they released this, and people talk about it as a kind of analogy machine. But if you spend a while playing around with it, it becomes clear pretty quickly what this is not doing — it's not learning something about the actual meaning of words.
Jordan: [00:48:39] I mean, what it's doing is what any machine learning system trained on data does: it's learning something about the way native speakers use words. You can think of it like this: in the kind of context where people are talking about a king, if they were using the word "he" a lot, they might be referring to a king; if they were using the word "she" a lot in a similar context, they might be referring to a queen, because they're saying something about monarchy. So, for instance, if you apply this operation to the word "stunning," you get "gorgeous." OK, does that mean that "gorgeous" is the female version of "stunning"? No — it means that people, because of our habits, have a habit of using the word "gorgeous" to refer to women but not to men. But that doesn't really speak to the meaning of the word, right? So you shouldn't think of it as a machine that is learning the meaning of words. You should think of it as a machine that is learning how English speakers habitually use words — which makes sense, because that's what it has access to. The data it has access to is how people use words.
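The he-to-she arithmetic can be mimicked with toy vectors. The five hand-made 3-dimensional vectors below are pure invention (real word2vec vectors are about three hundred numbers learned from text), but they show the mechanics Jordan describes: subtract, add, then pick the nearest word by cosine similarity.

```python
import numpy as np

# Invented toy vectors -- NOT real word2vec output.
vec = {
    "he":    np.array([1.0, 0.0, 0.5]),
    "she":   np.array([0.0, 1.0, 0.5]),
    "king":  np.array([1.0, 0.1, 2.0]),
    "queen": np.array([0.1, 1.0, 2.0]),
    "apple": np.array([0.2, 0.2, -1.0]),
}

def cosine(u, v):
    # Similarity as the cosine of the angle between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The analogy operation: king + (she - he), then find the nearest
# other word by cosine similarity.
target = vec["king"] + (vec["she"] - vec["he"])
best = max((w for w in vec if w != "king"),
           key=lambda w: cosine(vec[w], target))
print(best)  # "queen" on these toy vectors
```

Nothing here knows what a king or a queen is — it is all arithmetic on coordinates, which is exactly the point about usage versus meaning.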
Harpreet: [00:49:42] Yeah, it's just — I mean, the computer doesn't know the actual meaning of the words. They're just numbers: a number here, a number there, right? They're just numbers.
Jordan: [00:49:51] But to me, you know, there is an interesting philosophical question. Of course, there are people who would say, you know what it would mean for a computer to know the meanings of the words [00:50:00] is if it could solve the problems and treat them the way a native speaker would this kind of Turing test philosophy. But it doesn't even pass. That is the point I'm trying to make.
Harpreet: [00:50:09] It's definitely got a ways to go. So, speaking of trying to visualize three-hundred-dimensional space — I mean, that's hard to do. One way we could do it is using PCA to do some dimensionality reduction. And a core component of PCA is eigenvalues and eigenvectors. And I loved seeing that come up in your book. You talk about eigenvalues and eigenvectors, but in a really interesting way that I never even knew existed till I read your book: the relationship between the golden ratio and eigenvalues. So talk to us about this.
Jordan: [00:50:49] Yeah, this was something I really wanted to do in this book — talk about what an eigenvalue is, because it's such a fundamental part of every branch of mathematics. And, you know, is it hard? Well, we learn it pretty early, right? An undergraduate math major would probably learn it in their first or second year, when they take linear algebra. But it is kind of deep, and it is not the kind of thing you would ordinarily cover in a popular book. It's also not the kind of thing that you can explain in a single sentence, like some other mathematical concepts. So I really wanted to challenge myself a little bit and take some space to tell a story that would culminate in: what is an eigenvalue, and what is an eigenvector? So how did I find my way into that? Well, part of it was that, as I said, the book never comes out quite the way I planned it and always has lots of stuff I didn't plan to write about. Like a lot of people, I got much more interested in the mathematics of pandemic spread about a year ago. That was not part of the original plan — to have a lot of stuff about that in the book — but, you know, guess what? Suddenly I was, like, really interested in it.
Jordan: [00:51:54] So were a lot of other people. And I found it ended up being a really interesting [00:52:00] way in, because of the way pandemics spread: if it's at all heterogeneous — if there's any kind of separation of different populations with different transmission rates or different habits or different whatever — you have kind of a multi-part system where all the parts are interacting with each other. And I set up various toy models, like: you have two states, and they have different habits of how much they're transmitting some disease. It doesn't have to be Covid, by the way — it could be anything. And what you find about the long-term behavior of any system with multiple parts — that's what makes you have to do linear algebra, that's what makes you have to do eigenvalues — is that you're going to get exponential growth, just like you do in any uncontrolled epidemic, and the rate of exponential growth is going to be governed by an eigenvalue of the system. Which, by the way, I'm still not telling you what an eigenvalue is on the podcast, because in some sense the whole challenge is that it's not the kind of thing I can say in two sentences; it's the kind of thing where you have to tell the whole story. But in one particular simple model of an epidemic, with just two different parts of the population, the number of cases at each stage is governed by the so-called Fibonacci sequence, which in the book I often call the Virahanka-Fibonacci sequence, because it turns out it was known in India like 500 years before Fibonacci.
Jordan: [00:53:15] In a completely different context, which is something I learned from my colleague Manjul Bhargava, a wonderful number theorist. And that sequence is a very simple one — some of your listeners may know it: each term is the sum of the two previous ones, about as simple a rule as you can imagine. So you start with one and one, and then the next one is one plus one is two, and then you get one plus two is three, and then two plus three is five, three plus five is eight, and so on. This sequence tends towards exponential growth, but where the base of the exponent — how much you're multiplying each term by each time — is this crazy number, the golden ratio: one plus the square root of five, divided by two. What the hell is that doing there? There's no square root of five in the rule that I told you, but it just comes out — and it comes out as an eigenvalue. So [00:54:00] that's sort of how I introduce the idea: as a somewhat mysterious global constant that governs the long-term behavior of a multi-part system.
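Where that square root of five comes from can be checked directly. A short Python sketch: write the Virahanka-Fibonacci step as a 2-by-2 matrix, and its dominant eigenvalue is exactly the golden ratio, the sequence's long-run growth factor.

```python
import numpy as np

# One step of the sequence as a matrix acting on (F(n), F(n-1)):
# (F(n+1), F(n)) = M @ (F(n), F(n-1)).
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigenvalues = np.linalg.eigvals(M)
golden = (1 + 5 ** 0.5) / 2   # (1 + sqrt 5) / 2 = 1.618...

# The dominant eigenvalue is the golden ratio.
print(max(eigenvalues))

# And ratios of consecutive terms converge to it.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a)
```

The "multi-part system" here is just the pair of consecutive terms; the eigenvalue of the matrix that moves the pair forward is the mysterious constant governing the growth.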
Harpreet: [00:54:11] And there's a story that you had to leave out of the book — the eigenvalues, quantum mechanics, and geometry story. Maybe you'd be able to share that here with us.
Jordan: [00:54:21] Yeah, I mean, I did say something about it, and I'll just pitch a wonderful book by a physicist named Sean Carroll called Something Deeply Hidden, which — at least for me, as somebody who knows a lot of math and physics — really helped me grasp quantum mechanics much, much better than I had before. I guess what I would say is this: if you're studying a pandemic and you break the population into, let's say, five buckets — treating each bucket as a group where all the members behave the same, so you're a little bit more complex and have this sort of fivefold way — you'll be studying what you might call five-by-five matrices, if your listeners have used matrices before. Well, in quantum mechanics, it turns out that all the simple things that you think you would be doing, like measuring the position of a particle, are — and of course I'm drastically oversimplifying — like an infinity-by-infinity matrix. In this book, I talk a lot about high-dimensional spaces, and I'm like, don't be scared of a three-hundred-dimensional space; it's kind of like a two-dimensional space. As Geoffrey Hinton, one of the gurus of neural nets, says: if you need to visualize a 14-dimensional space, just visualize a three-dimensional space and say "fourteen" as loud as you can. Infinite-dimensional spaces are another beast altogether. I barely touch on them in this book, but they're vitally important to quantum mechanics, and there really are some differences — you actually can go wrong if you just think of them as two-dimensional spaces.
Harpreet: [00:55:52] And you talk a lot about neural nets in the book — well, not a lot, I mean, but there's an entire section devoted to them that really helped me get a better intuition for neural [00:56:00] nets. So, yes, check it out — it's great. But you also talk about the two ways to predict the future: there's curve fitting and reverse engineering. Talk to us about what these two methods are all about.
Jordan: [00:56:12] Yeah, and this was another part of the book that came out of the stuff I was thinking about as we were all thinking about the pandemic and how things were going to go. I realized that one reason people were having so much trouble in their discussions about what the model is doing — like, what can we predict, what can we say with some degree of confidence about what the future holds — is that there are two fundamentally different ways of thinking about that problem of prediction. One of them I would call reverse engineering, which is where you really try to learn something about the mechanisms by which the disease spreads, and maybe there are some unknown parameters. Like, at first we were like, we don't really know how contagious this disease is. Well, you can try to figure that out from the data that you can see — that's the reverse engineering. But it deeply involves having some theoretical understanding of what is actually going on under the hood: how does the disease spread from person to person? You know something about that, and then you try to figure out the parameters based on the data that you've observed. The other method, which is called curve fitting, is saying: I'm going to be completely agnostic about how the disease spreads.
Jordan: [00:57:14] Like, I don't know — I'm just going to look at the data I have of how many people we know had it at certain times and just try to draw a best-fit curve through that. I'm not going to worry about trying to model the actual phenomenon. That sounds to most scientists like a terrible idea — why would you make predictions without trying to understand how the thing that you're predicting actually works? But in many contexts it works very well. A classical example from physics is that Galileo certainly knew that a thrown object moved in a parabola, but he didn't have access to force equals mass times acceleration, right? I mean, Newton's laws, once you know them, just give you that: you know the process by which something [00:58:00] that you throw changes its velocity, and then you can just derive that it moves in a parabola. But you can also just figure it out by looking at the data and seeing that the curve that best fits what you actually observe is a parabola.
Jordan: [00:58:11] And of course, another example — and this sort of feeds into what you were just talking about — is that contemporary machine translation and contemporary automated language production, as we were saying, don't know what words mean. It doesn't know how a sentence is structured; it doesn't know what a noun is or a verb. It's basically just curve fitting on the data it's already seen, without trying to understand the mechanism of language production. And it actually crushes all previous attempts to do machine translation. So there is some reason to think that this kind of blind curve fitting, without thinking about mechanisms, is a good idea. On the other hand, for Covid, it crashed and burned. I mean, there's a sort of famous example in the US of the so-called cubic fit that was put out by the White House last May, just a little more than a year ago, where they were like, this thing is going to be one hundred percent gone in two weeks — it's going to be zero in two weeks. And that's what you get if you fit the curve and don't think about whether what you're doing makes any sense.
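The Galileo case is the happy ending for curve fitting, and it fits in a few lines. In this Python sketch, the launch speed and gravity values are my own fake "measurement" setup, not anything from the conversation: generate trajectory data from the physics, fit a parabola that knows nothing about Newton, and the fitted coefficients recover the mechanism anyway.

```python
import numpy as np

# Fake "measurements" of a thrown object's height over time,
# generated from physics the curve fitter never gets to see.
g, v0 = 9.8, 20.0               # assumed gravity and launch speed
t = np.linspace(0, 3, 30)
height = v0 * t - 0.5 * g * t ** 2

# Pure curve fitting: find the parabola that best fits the data,
# knowing nothing about forces or Newton's laws.
coeffs = np.polyfit(t, height, 2)   # fits y = a*t^2 + b*t + c

# The fit rediscovers the mechanism: a = -g/2 and b = v0.
print(coeffs[0])  # about -4.9
print(coeffs[1])  # about 20.0
```

The cubic-fit fiasco is the same `polyfit` move applied where no polynomial describes the mechanism — the fit looks fine on the data you have and says nothing trustworthy beyond it.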
Harpreet: [00:59:12] We'll have seven million cases of covid, or seventy-seven trillion cases.
Jordan: [00:59:17] Right, that was another example in the book, an even worse curve fit, where someone said if smallpox were released in the US, there would be seventy-seven trillion cases within a year. So that's a perfect example of naive curve fitting. In the end, I think both approaches have their place, and there's kind of a useful tension between the two. But making that distinction sort of helped me understand why people in some sense were talking past each other as they argued about how to think about the future.
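The two regimes Jordan contrasts can be sketched with invented numbers (the thrown-object data, the logistic epidemic curve, and all constants below are hypothetical, chosen purely for illustration): a parabola fit to projectile data quietly recovers the physics, while a cubic blindly extrapolated past an epidemic-style curve predicts negative case counts, roughly the failure mode of the White House cubic fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Galileo-style curve fitting: noisy heights of a thrown object.
# True law (unknown to the fitter): h = 10*t - 4.9*t^2.
t = np.linspace(0.0, 2.0, 21)
h = 10 * t - 4.9 * t**2 + rng.normal(0.0, 0.05, t.size)

# Fit a parabola with no mechanistic model at all; the fitted
# coefficients land close to the true -4.9 and 10, so within the
# observed range this works beautifully.
a2, a1, a0 = np.polyfit(t, h, deg=2)

# Epidemic-style data: hypothetical cumulative cases on a logistic curve.
days = np.arange(0, 61)
cases = 1000.0 / (1.0 + np.exp(-(days - 30) / 5.0))

# Naive cubic fit, then extrapolated a month past the data. The fit is
# fine inside the window, but because the S-curve flattens, the fitted
# cubic's leading coefficient is negative and the forecast plunges
# below zero cases: "gone in two weeks" territory.
cubic = np.polyfit(days, cases, deg=3)
forecast = np.polyval(cubic, 90)

print(a2, a1, forecast)
```

The point is not that fitting is bad: the parabola fit interpolates well and even recovers the physical constants. The danger is extrapolating a mechanism-free fit outside the data it saw.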
Harpreet: [00:59:47] And on that note, we'll go to the final formal question before our random round. It's one hundred years in the future. What do you want to be remembered for?
Jordan: [00:59:57] That's a long time in the future. But [01:00:00] I will say, you know, math is on this long timescale. In some sense, we don't remember the names of that many mathematicians from 100 years ago, although there are some, like Poincaré, who we do. But mathematics is a communal enterprise, and all the stuff of not just the very famous people like Poincaré or Noether, but hundreds or thousands of others, it all gets folded into the mass of mathematical progress. So I guess in some sense I kind of hope to be remembered without being remembered. Does that make sense? Like, I sort of put my bricks into the building that we've been building for the last two thousand years. I guess at this moment they have my name on them because I published a paper, but probably people will forget whose name is on what brick. It's just part of the giant edifice that people see when they see mathematics. And that's sufficient for me.
Harpreet: [01:00:53] You're just looking to be one of the giants whose shoulders some kid in the future stands on, part of the collective giant.
Jordan: [01:01:03] Yes. I mean, one thing I say in the book: of course, there's this famous saying that we stand on the shoulders of giants. But I say it's a little more accurate, although a little longer, to say that we're walking up a staircase built out of the frozen thoughts of thousands of other people whose names we don't know. We drop our thoughts on the staircase, they freeze to it, and that makes the staircase a little bigger. And that's what we're all doing.
Harpreet: [01:01:28] I like that. So let's go to the random round here. First question I have for the random round: when do you think the first video to hit one trillion views on YouTube will happen, and what will it be about?
Jordan: [01:01:42] OK, so I have to say that some of these answers were provided to me, because my kids know a thousand times more about YouTube than I do. I consulted with them, and my daughter says it would definitely be a music video, because those are the only things that are watched repeatedly enough. She was skeptical that there would ever [01:02:00] be a video with a trillion views. But I think what we computed was, the one with the most views is Baby Shark, which has about seven billion views. So I think our answer is one hundred and forty-three Baby Sharks.
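As a quick sanity check, the "143 Baby Sharks" figure does follow from the round numbers in the conversation:

```python
# One trillion views divided by Baby Shark's roughly seven billion views.
baby_sharks = 1_000_000_000_000 / 7_000_000_000
print(round(baby_sharks))  # prints 143
```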
Harpreet: [01:02:13] What are you currently reading?
Jordan: [01:02:15] I am reading a book called Big Trouble, which is this kind of crazy book about American history, set in a turn-of-the-century period that in American schools we don't learn that much about. Actually, Caleb, my son, is reading it for ninth grade history, so we're reading it together and learning about a lot of messed up stuff that we did not know about.
Harpreet: [01:02:37] Yeah, that's interesting. Let's check that one out.
Jordan: [01:02:39] Big Trouble is the name of the book.
Harpreet: [01:02:40] Big Trouble. What song do you have on repeat?
Jordan: [01:02:45] Oh, there's this crazy song called Social Cues by Teezo Touchdown, who is this young hip hop dude from Houston. I'm very attracted to it because he uses these sounds that are very much the new wave sounds from the early 80s that I grew up with. To me they carry a lot of cultural context; to him, they're just another awesome sound in the hopper from history. And it's cool to see those sounds redirected and put into service in a totally different way than the way they sounded in the songs I grew up with.
Harpreet: [01:03:16] Yes, I've noticed that happening a lot because, I mean, I was born in nineteen eighty-three, so we're close in age, probably like most of my audience, and I'm a huge fan of 80s music. I see it resurface in little samples in these new songs that come up, and it's always interesting to hear. I'll definitely check that one out. So now I'm going to open up the actual random question generator here. OK, first question: what is your theme song?
Jordan: [01:03:40] Oh my gosh, what is my theme song? I have to say, there's a song called Moscow Olympics by Orange Juice, which doesn't even really have words. But somehow the way it sounds is totally what I would like to walk out to if I were walking into an arena to do something in front of ten thousand people. I think that's what I would [01:04:00] want.
Harpreet: [01:04:01] The Moscow Olympics. I like that. Yeah. What is something you can never seem to finish?
Jordan: [01:04:07] I guess I would say, though I'm not sure I should, this paper that I'm writing with my collaborators, which we've been writing for like five years. Or maybe I should say reading Proust. I've started that a lot of times and never finished; I've never even gotten very far. But someday I will. I mean, I did eventually read Moby-Dick, which I once would have said was the answer too, and now that I've read it, it's awesome.
Harpreet: [01:04:27] What's one place you have traveled to that you never want to go back to?
Jordan: [01:04:31] Oh, that's so mean. I feel like I'm going to be mean to some place. Where would I never go back? No, I think I would never say never. Honestly, I think every place has its charms, in my opinion.
Harpreet: [01:04:45] To the last one from here. What's the worst movie you've ever seen?
Jordan: [01:04:50] Oh, another good one. See, I feel like you're asking me to be very negative, and it doesn't come naturally.
Harpreet: [01:04:56] I can skip this one to see what comes up next, if you like.
Jordan: [01:05:00] Sure. Let's back out while I think about it.
Harpreet: [01:05:02] What's your favorite candy.
Jordan: [01:05:03] Huh? Oh, I can do that. I like Krackel. I'm always telling my kids, you guys with your new-school candy bars; a classic Krackel is a great candy bar, with the rice.
Harpreet: [01:05:14] Yeah, I'm into the majors myself. How can people connect with you and where can we find you online?
Jordan: [01:05:21] Yeah, these days, I mean, I blog and I have a website, but I'm probably on Twitter more than I am anywhere else. I'm JSEllenberg on Twitter, and I tweet most days, especially now that I'm selling a book, but honestly all the time. So that's probably the easiest; Twitter is my main online presence nowadays.
Harpreet: [01:05:39] I'll be sure to link to Twitter as well as your website right there in the show notes. If you guys want to connect with Jordan, you can look right there in the show notes and find out how to do that. Jordan, thank you so much for taking time out of your schedule to come on to the show today. I really appreciate having you here.
Jordan: [01:05:54] Thank you. It was super fun for me.