Bryan Schwartzman: Hi, this is Bryan Schwartzman here on October 10th, 2023 with Rabbi Jacob Staub. Jacob, you and I were fortunate enough to have lunch last week, last Friday, and then the next day, everything changed. We'll be releasing this episode in a couple of weeks, so we don't know what the world will look like then, but for now, we're recording with heavy hearts. I'm feeling in a very narrow place, unable to get recent events out of my mind. Jacob, is there anything you want to say just about where we're recording at this moment? Rabbi Jacob Staub: Thanks, Bryan. I am heartbroken, really just so sad, so, so sad at what's going on right now, without knowing what the future holds in terms of additional casualties and loss of life. What we have now is just unbelievable, and we're just trying to digest it and continue on with what we do. Thanks. Bryan Schwartzman: We've been a little vague here, just in part because we're in shock, but I think it's not because we want to shy away from talking about it. It's just that we've got this time warp for when this episode comes out and we just don't know what things are going to look like. So in the meantime, as an organization, there are a lot of resources on our websites, especially ritualwell.org, where new prayers and poems are coming in every day and we're getting them up as fast as we can, and we're also publishing new resources on Evolve and on reconstructingjudaism.org. Our Israel affairs specialists put together a resource of places where you can help, where you can donate, and there are a number of really good organizations on there: Magen David Adom, which is the Israeli version of the Red Cross, and the New Israel Fund's Emergency Response Fund. So if you go to those websites or go to our show notes, you'll find the most up-to-date material. There are lots of different ways to respond to the unthinkable. One is coming together directly and comforting one another, and another is for Jewish life, the study of Torah in all its iterations, to go on, and that's what we're doing today. From my home studio, welcome to Evolve: Groundbreaking Jewish Conversations. Mitch Marcus: I was spending my days studying AI and my evenings studying all kinds of Jewish stuff, including Zohar, and as far as I could tell, it was all part of the same thing. Language is a vast mystery. Bryan Schwartzman: I am your host, Bryan Schwartzman. Today, I'll be speaking with Professor Mitch Marcus, who's a linguist, computer scientist, and expert in artificial intelligence who's also spent decades involved in creative Jewish life. We'll be discussing his two Evolve essays. The first is The Power and Danger of Artificial Intelligence, and the other is ChatGPT Is the Opposite of the Golem. As you already heard, I'm here with my friend and executive producer, Rabbi Jacob Staub. So Jacob, towards the end of this interview, Mitch describes his practice of studying Zohar and how it relates to all kinds of things he's thinking about and engaged with, from mathematics to the future of artificial intelligence, and I'm just wondering, can you give our listeners a brief sketch of what the Zohar is? Rabbi Jacob Staub: Sure, Bryan. The Zohar, the Book of Zohar, the Book of Splendor in translation, is arguably the greatest and most important work of Jewish mysticism through the ages. 
It was, we think, written or at least compiled by Moses de Leon in the 13th century in Christian Spain, and de Leon himself attributed the authorship to Rabbi Shimon bar Yochai, the first-to-second-century Tanna. It takes the form of a group of rabbis centered on Shimon bar Yochai and his son who are traveling around Palestine after the destruction of the temple, and not to go on too long here. Of greatest note in terms of Mitch's discussion is that it discusses the nature of God. This narrative has these guys discussing the nature of God and the origin and structure of the universe and the ways in which the divine governs this world, among many other things, but it's told as a narrative that discusses the Sefirot among other things, the divine emanations, the 10 emanations in which the Ein Sof, the infinite unknowable God, is manifest in this world and how they interact in terms of the governance of the world. And I think, if I understand Mitch correctly, he is making an analogy between AI and contemporary linguistics and the way that our stories, narratives, images, and metaphors shape our understanding of a reality that is in itself unknowable, and the way in which these characters talk about it in the Zohar. I think that's enough, and so that's where it comes from, and he, admirably enough, is involved in multiple study pairings, havrutas, studying the Zohar because he finds it relevant and as interesting to him as his lifelong study of linguistics and artificial intelligence. Bryan Schwartzman: And when we talk about Kabbalah, Kabbalah Jewish mysticism, we're talking about the Zohar, right, or is it not necessarily one and the same? Rabbi Jacob Staub: There is Kabbalah, kabbalistic writing and thinking, before the Zohar, but it shifts in the 13th century with the Zohar, and Mitch is talking about the Zohar, the Kabbalah of the Zohar. Bryan Schwartzman: So before we go to the interview, any thoughts on artificial intelligence or the role Evolve plays in the conversation on artificial intelligence? I sprung that one on you. I didn't prep you for it. Rabbi Jacob Staub: I was just about to ask you before we started whether you were going to spring something on me, but that would've meant you weren't springing it. Okay. So like many other people, most other people who know very little about artificial intelligence, I have been concerned about what its implications are, both in terms of disruptions to our social and economic structure, loss of jobs, et cetera, and also, nightmarishly, about how an AI could take over the world and stop obeying what its alleged masters are instructing it to do. I appreciate the fact that Mitch, who has been involved with AI since the eighties and was the chair of the Department of Computer and Information Science at the University of Pennsylvania and is a leading and mature thinker about these issues, is less alarmed about the latter than many other people, including his colleagues, other people in the field, and I do appreciate that he brings us back to what we can control here, which is the potential negative impact of having decisions made by AI rather than by human beings. So on that level, I think this is a really important current ethical question that's worth emphasizing. Bryan Schwartzman: Thanks as always, Jacob. Rabbi Jacob Staub: My pleasure. Always a pleasure to be with you. 
Bryan Schwartzman: All of the essays discussed on this show are available to read for free on the Evolve website, which is evolve.reconstructingjudaism.org. Now it's time for our guest. Mitch Marcus is RCA Professor of Artificial Intelligence Emeritus in the Department of Computer and Information Science at the University of Pennsylvania, where he was also a linguistics professor. He received his PhD from the MIT Artificial Intelligence Lab in 1978 and was a member of the technical staff at AT&T Bell Laboratories before coming to Penn in 1987. He's a fellow of the American Association of Artificial Intelligence, a founding fellow of the Association for Computational Linguistics, and he served for more than a decade as chair of the advisory board of the Center of Excellence in Human Language Technologies at Johns Hopkins University. Mitch is also a longtime member of Minyan Dorshei Derekh, which is a reconstructionist affiliate that's part of the Germantown Jewish Center in Philadelphia, and he currently spends time studying Zohar, various areas of mathematics, and learning many new things from five granddaughters. So Professor Mitch Marcus, it's good to see you. Welcome to the podcast. We're thrilled to have you. Mitch Marcus: Thank you very much. I'm really delighted to have been invited and honored. Bryan Schwartzman: Of course. It seems like everybody is talking about artificial intelligence. You've been studying and thinking about this for quite some time, and you've also studied and thought about Jewish ethics and Jewish texts, so we're really hoping you can help us make sense of all this. I'm going to start by asking: you have a real technical understanding of what we're talking about when we talk about artificial intelligence in general or ChatGPT in particular. What are the most common, or some of the most common, misconceptions that those of us who don't have that technical understanding have, or that you've heard or seen? I'd love to start there. Mitch Marcus: I think that the most typical misunderstanding about AI, which recurs about every 20 years, is that AI is about to succeed real soon now and change everything. This happened in the middle 1980s, and it happened 10 years earlier. So we're once again back to believing that AI is about to succeed very soon now. This time is more likely to be correct than earlier times, but we'll have to see how it goes. Bryan Schwartzman: And what do you mean by succeed? Just- Mitch Marcus: Well, that's a great question. So there are a couple different definitions of what artificial intelligence is and each comes with a different success criterion. So there's the holy grail, at least for a lot of people, which is "general AI," meaning something that appears to be conscious and really can just do pretty much anything that people can do, and that has been a holy grail for a bunch of people. I see no reason to believe it's going to happen in my grandchildren's lifetime. And that would succeed when an artificial intelligence could do your tax returns, clean your floor, take out the trash, and do your calculus homework. More likely is AI as a useful technology, where artificial intelligence applications can do particular things that people thought only people could do, but with no trace of anything beyond doing exactly that. So for example, there are now very good programs that can solve complicated symbolic algebra problems. We've had those for 50 years now, and for a mathematician they're as much a given as Word is for you. 
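To make that concrete, here is a minimal sketch of the kind of symbolic algebra such programs do, using the open-source sympy library in Python. The particular expressions are illustrative assumptions, not examples from the conversation.

```python
# A minimal sketch of computer algebra: software that manipulates symbols,
# not just numbers. sympy is one widely used open-source library for this.
from sympy import symbols, solve, integrate, simplify, sin, cos

x = symbols("x")

# Solve a symbolic equation: x^2 - 5x + 6 = 0
roots = solve(x**2 - 5*x + 6, x)            # [2, 3]

# Integrate symbolically rather than numerically
antiderivative = integrate(x * sin(x), x)   # -x*cos(x) + sin(x)

# Simplify a trigonometric identity
identity = simplify(sin(x)**2 + cos(x)**2)  # 1

print(roots, antiderivative, identity)
```

The point is the one Marcus makes: this kind of system applies well-understood rules to symbols, and it is now taken for granted as a tool rather than treated as "artificial intelligence."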
But that was originally seen as an AI application, or an application that takes in your financial data and decides whether you should be given a loan or not. It does nothing but apply a complicated statistical model, but it used to be something only people could do, and so that's successful when these technologies can actually be used and make a significant difference to somebody. Now the question of who the somebody is is, of course, very important. Making a difference to some business's bottom line or making somebody's life better, those are two very different things. Bryan Schwartzman: No, no, and I think when I asked about misconceptions, I think I cut you off when you were maybe adding one or two others. Mitch Marcus: Thank you. There's a misconception now that this ChatGPT program and others like it, what we call large language models, really can do way more than they can do. I think that in the last few months, people are beginning to understand the limitations. As of a few months ago, people seemed to think these things could do pretty much anything and we were close to sentience, and now it's becoming clear, but only starting to become clear, that these programs are really a very complicated case of ventriloquism, if you will. Bryan Schwartzman: Yeah, I wanted to ask about that because I read your Evolve essay, or both, there are two essays, and when you describe how it works, I felt somewhat relieved, like, oh, this isn't such a big deal, but I've read in other places, there was a big Atlantic cover story we could link to in our show notes, where I've read that either this or a different AI model did things that surprised the scientists, things they weren't programmed for. Whether it was drawing a picture of a unicorn, or there was a case where an AI, I read, contacted TaskRabbit to try to get somebody to fill out the CAPTCHA, to say I am not a robot, because it had learned to lie to achieve its goal. So I read that and, I don't know, does that complicate the picture you're telling us? Do you know the instances I'm talking about? Mitch Marcus: Yes, I know the instances. If you do what I do, if you do artificial intelligence, you're surprised by what your programs do all the time. So I spent a lot of time building computer programs that analyze the grammar of English sentences, and they had to get the right grammar for that meaning of the sentence, and very early on, 20, 30 years ago, we were writing programs that were able to do wildly more complicated things than we understood, and we went into why they were doing it and we couldn't figure it out, because what they're actually doing is multiplying lots of probabilities that we generated by doing strange kinds of counts on a lot of texts that we had hand parsed. But the results still surprised us. So this happens all the time. It's standard in my field, and because you're a technologist, you're wildly impressed with how well this works every time. Now, some of these programs do great stuff because what they're doing, these large language models and the visual equivalents, is they're synthesizing huge numbers of things from all over the place, putting a piece together from here to here to here to here, making sure that the edges fit together with the right kinds of features, and you generate something that appears to be new, but is actually plagiarizing thousands of little pieces and putting them together. So when these things are aware of what they're doing, and by aware, I mean they have features that do the right thing, they work well. 
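As a rough, hedged illustration of what "doing strange kinds of counts on a lot of text" can look like, here is a toy Python sketch. It is emphatically not how ChatGPT works internally (modern large language models use neural networks trained on enormous corpora), but it shows the basic statistical move Marcus describes: predicting a likely continuation purely from counted patterns, with no notion of meaning. The tiny corpus and function names are invented for illustration.

```python
# Toy bigram model: count which word follows which in some training text,
# then "predict" a continuation from those counts alone.
# Real large language models use neural networks over vast corpora; this is
# only a sketch of the count-and-predict intuition, with no grasp of meaning.
from collections import Counter, defaultdict

corpus = "the rabbi taught the students and the students asked questions".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation seen in the training text."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(most_likely_next("the"))    # "students" (it followed "the" twice)
print(most_likely_next("rabbi"))  # "taught"
```

A model like this will happily stitch together fluent-looking output from pieces it has seen, which is the "ventriloquism" Marcus describes, without ever knowing whether any of it is true.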
Where they don't, for example, and this is well known, is when you look at an AI-generated picture of a human being: it's likely to have 10 fingers and the fingers are likely to turn backwards, because it doesn't understand anything. It doesn't know that fingers are supposed to bend forward, that they're for gripping, what they're for at all; all that it knows is it's got this visual representation of pixels in the computer and it's synthesizing those pixels. Bryan Schwartzman: Obviously, I don't think you have a crystal ball you rely on, but are humans the horse and buggy about to be replaced by the automobile? That's one of the main fears. Should those of us, say, who write for a living like me be concerned we're going to be out of work soon? Mitch Marcus: If you write badly and you write stuff that's wildly stereotypical, you will be out of work within a few years, as long as somebody just fact-checks what the program turns out, because it has no way to know about truth. Bryan Schwartzman: Right. I've seen that. Mitch Marcus: Right. You know that. So bad writing, yes, is going to be replaced soon. If I were a lawyer and I wanted to write a contract, I would have a computer do a first draft for me and then I would carefully check every word, but it would fill in all the boilerplate. It would be wonderful. But interesting writing, never, or maybe not never, but not in any future that we can now foresee. The fundamental limitations of the technology as it currently exists will not allow a continuation that gets us to a place where these things actually have meaning. I'm not saying we can't solve the problem of meaning and use these as a piece part in that solution, but these things by themselves will not do the right thing, and I should add that there are people who understand the limitations and are really asking the question now of how could I use these kinds of large language models as a piece part in a much bigger system that solves problems, really hard problems, that actually works with meaning, but that's what we've been working on for 50 years now and we have very limited understanding of that problem and what its solution would look like. Bryan Schwartzman: And of course, on the other end, there's been a lot of writing about what if AI is able to, or given the responsibility to, deploy nuclear weapons or to unleash a pandemic or to do something that causes a global catastrophe. Mitch Marcus: I'm scared to death of that. Bryan Schwartzman: Oh, that's not what you're supposed to say, but okay. No, we want the truth. We can handle the truth. Mitch Marcus: I'm really worried about that and I've been worried about it for 40 years now, because doing something stupid is easy. So I could easily build something that, the minute some sensor in space sees a bright spot in the middle of Siberia, launches World War III. Building something that would do it is easy. Building something that would do it responsibly is impossible. If you think about it, landmines are simple automatic weapons. They sense when somebody steps on them and they explode. The terrible fact about landmines is they don't know when to turn themselves off and they can't tell the difference between a tank and a pickup truck. They can't tell the difference between a soldier and a civilian, and God forbid, a child, and that's the problem with these systems. It's easy to build them so that they do the wrong thing. 
Building them so that they do anything approaching the right thing is beyond our capabilities, and the decision, should I take this life, is one that I strongly believe, and this is a separate comment, should never be given to a machine. Bryan Schwartzman: So the White House is expected to release an executive order on AI before the end of the year. There were just Senate hearings on artificial intelligence. Any thoughts on how the American government and other governments should step in here? What sort of norms should we be putting in place as interlocking societies? Mitch Marcus: Yes. First of all, I'm in a great position because I'm now emeritus, so I can speak my mind in a way that no longer represents any institution that I'm involved with. So let me just say personally, first of all, that I really don't trust the interests of technology companies who want to make money off of this. What I've seen in the last year or so is that these companies that build large language models and want to have high valuations are saying things that I believe to be wildly irresponsible, and they're doing this all in the direction of making their product worth more. So this concerns me greatly. So as government turns to those kinds of people for lots of advice, because that's where the expertise is, because they bought up almost all of it, which is not a bad thing, I have real worries about whether the kinds of things that will emerge are going to be the right thing, because of the conflict of interest. But what should it look like? I think that the biologists provided us a great model. I talk about this in my Evolve essay. When gene splicing was invented, the people who were doing the best work in the world all got together at Asilomar on the Pacific coast, this beautiful campsite, and had a conference where they decided they would stop all work, all work on gene splicing, until they could guarantee that no mistake they made could destroy the world. What they were concerned about was that some gene spliced by accident into the middle of some standard little microbe would get out everywhere and destroy life on earth, and it was very reasonable to think this could happen, and so they shut down all research until they could essentially guarantee, with only a vanishingly small probability of failure, that no microbe they came up with in the laboratory could escape and live on earth. And I could go into how they did that, but let's just say they succeeded, and they didn't pick up again until they had solved that very hard technical problem, and then they went back to doing it. I think that at the moment, and in fact it should have happened a year or two ago, we really need to stop pushing forward with what I view as irresponsible technologies until we're clear on what the regulation should look like. What should it look like? I think there needs to be truth in advertising of these systems. I think there needs to be some way to say what they can and cannot do, what appropriate technologies are or are not. AI false claims are the easiest thing in the world to make, and by the way, they're not lies typically. Almost always, these false claims have the property that they will make the person who makes them and who believes them richer, let's just say, but that may be a general property of human nature. The other thing that we're going to need very soon anyway, by the way, is a legal framework for responsibility. 
If you're in an automated car, say, which by the way, is something I wouldn't get into, and that car has an accident and God forbid hurts somebody, where does the responsibility lie? It can't lie with the car. The car is not sentient. Does it lie with the person who was in the car? Does it lie with the manufacturer? Where does it lie and how do we deal with this? Now, there are lots of models, and it seems to me that one model that may work, just from things I've read and people I've spoken to, I don't know, I'm not a lawyer and I don't know that much about this, is the model of the pharmaceutical industry. They come up with these medications, and it used to be that you gave medicines to a bunch of people, a handful back in the thirties and forties, and if it tended to work, you could sell it, and it was only later that you discovered that one person in 100 died from the medication, or one person in 1,000 had some terrible complication. And so what was put into place is a regime where the government demands very rigorous testing of a certain kind, evaluates very carefully and slowly whether the medical technology does what you say it does, forces you to test it on a whole lot of people in staged trials, and then at the end, certifies it as sellable, and then the government takes the responsibility, or at least frees the company from having responsibility. There are other models, but we need to come up with that kind of model and we haven't done it. We haven't begun to move towards any regulation at all of this. So that's another necessary step, I believe, before we can really begin to deploy anything that has the potential to do damage to people at all. Bryan Schwartzman: Interesting. This leads me to one of the questions on the list. Obviously, we have a very robust ancient examination of damages and responsibility in the Talmud. I'm blanking now on the Aramaic name of the tractate, but what do we do if my ox gores your goat? I studied those to no end once upon a time. I guess my question is, can Judaism offer any guidance into how we think about some of these issues, considering that the future may look very, very different from the past, and the past guidelines, I wonder, may not be so applicable? Mitch Marcus: I don't think that this situation is a new situation in most ways. I think that the question of responsibility, as you say, is something that's been carefully considered. One area that I think is applicable immediately, and that I wrote about in one of my essays, is the question of what's going to happen when AI starts putting people out of work. So I think that automatic cars, self-driving cars that you can sell for reasonable amounts of money to individuals, are not going to be feasible for a while, because the hardware that would allow a vehicle to have a clear enough sense of what is going on comes with a $50,000 thing called a lidar attached to the top of it. But $50,000, maybe down to 30, is going to be feasible for a long-distance truck, and the highways are much simpler than driving in the city. So I think we're going to have, within 10 or 15 years, a network of robot trucks driving huge amounts of cargo across the nation's highways, parking themselves in large lots at the edge of the highway system and then being picked up by human drivers and delivered. That's going to put a huge number of people out of work nearly immediately. 
One problem with this kind of transition is it's going to be much faster than the technological transitions of the past, where there have been 30 or 50 or 100 years for people to move to new jobs. So the question of whether it is okay for someone to put somebody else out of work is something the Talmud has taken up. Well, the responsa literature has taken it up, and the deal is that if there's one butcher in town and there's only enough work for one butcher, another butcher can't move to town, and the meta principle there is that you can't put people out of work. You can't remove parnassa, livelihood, from people, and I think that's an applicable notion. I've heard lectures on the Talmud's view of responsibility applied to the pharmaceutical industry, and they're very deep, and I think there's a lot to be mined there. I just don't know it, but I think the tradition actually has a lot to say about this. Now, the one thing that's new is the ability to think, and one thing that's been somewhat upsetting to me is this question of whether it's okay to build, say, autonomous weapons. Well, it turns out that on the question of whether you can build a sentient gizmo that could take responsibility, the Conservative movement has generated some very interesting responsa, and what disturbed me about them is that they do what I suspect is a very good job of looking at the tradition, and the tradition doesn't have much to say about this as of yet. So I think this is a somewhat new question and I think that we need new thinking on it. The Conservative responsum on this, which I recommend to everybody, it's very readable, ends up with the principle that you need human will and human agency to actually risk human life, to take human life, that only human beings can be responsible. A golem cannot be responsible in the right way, and that principle is a good one, but they couldn't find, from what I could see, strong backing in the tradition as it stands. So there is a lot of room for the tradition to grow, just to be very clear. The tradition keeps changing, and this is an area where we can be informed by Jewish values, but the story isn't quite there yet. Bryan Schwartzman: Can you remind us what a golem is in Jewish lore and why you connected it to this topic? Mitch Marcus: I'll try to be brief here. This is something I could talk about for hours. So a golem is a likeness of a human being, built by rabbis using kabbalistic formulas, that can do things and is built to get things done. We know of golems from the Talmud, where there's a conversation, two rabbis are talking, and this messenger comes up to them and they speak to it and it can't speak back, and they say, "Oh, you're just a golem. Be gone." Now, how did they know it was a golem? Because God created us. We're lower than God, but we have all the capabilities we have. So if we create something, a golem, it's going to be less than us, and the thing it doesn't have is speech, and why is speech magical? Because God creates the world through speech, and so we can create, but golems don't have the ability to create. We can create, but we can't create things that can create. So golems are beings that are sentient but lack certain fundamental capacities. Now, the most famous golem was the Maharal of Prague's golem, built in the Middle Ages to protect the Jewish ghetto from pogroms, and it was sentient enough to be terribly lonely in the end, and to kind of go berserk because of that, and needed to be reduced to dust, as people know. This is all folklore. 
Let me just be clear: I speak as if it really happened, but as is well known, it's folklore. That was a human likeness built to do a particular purpose, but lacking certain capacities, and as brilliant a mind on multiple dimensions as the Maharal of Prague was, he couldn't quite figure out how to get it right and had to return it to dust in the end. So if the Maharal couldn't get it right, we need to be very careful. Bryan Schwartzman: If you're enjoying this episode, please take a moment to give us a five-star rating or leave a review on Apple Podcasts. These ratings really help other people find out about the show, and since we were last on the air, we hit a major, major milestone. Our loyal listeners kept downloading us. We crossed the 100,000 total episode downloads mark. As of this recording, we're at 100,585 downloads, so that is amazing. Thank you for listening. Thank you for repeat listens. Spread the word. Now I'm eager to see how fast we can get to 200,000 downloads. Okay, now back to our regular conversation. In terms of ethics, are we supposed to be nice to robots and AI? Are we supposed to treat them ethically? Are we supposed to say please and thank you to Siri and Alexa? Mitch Marcus: I find that being nice to my car helps a lot. It keeps little things from going wrong. It's incredibly important to be nice to your toaster oven, I've discovered in the past. I joke, but I'm being serious in the sense that I think AIs are no different. My wife thinks it's very funny that I say please at the ends of sentences to Alexa. It turns out that I do it naturally, but also there's a technical reason that not dropping off the last word in your sentence that matters is actually a good idea. Having the word please at the end of the sentence means the volume of everything else stays up a little at the end of the sentence. I've done speech recognition work, so I happen to know that strange fact. So please is a good thing to end a sentence with, but I think that one problem with treating a machine as if it's a person is you can actually get confused about where the boundary is, and that scares me. On the other hand, kids are great at treating all kinds of objects as if they're people, but knowing the difference. They don't get confused past the age of two. They're still really nice to their dolls and they can be nice to objects, but they don't get confused. So maybe this is a human ability that matters. Are we under an obligation in some sense to be nice to our machines the way we're nice to people? The answer is no, because they're not sentient. What's more important is if we don't say thank you to the machine and we then mistake people, or classes of people, as being like machines and aren't nice, or let me not use the word nice, aren't appropriately ethical towards them, that's a big problem. So what needs to be very clear, I think, to get our morality right and our obligations to other human beings right, is to know what the line is between human beings and machines, because once that starts to blur, then you can see what might follow from it. 
In other words, I worry much less about us treating machines like people than I worry about us treating people like machines, which, of course, has always happened even without machines. Bryan Schwartzman: In terms of technology in our house, we've caved on some things and not others, and both my daughters have an Alexa-type device, and my older daughter especially has really grown up talking to it and using it and sometimes engaging with that conversation function, where I think some of her conversation skills were actually developed talking to this machine. Is it too early... I know you're not a child development psychologist, you are a linguist. Is there a sense, is it too early to know how this is developing communication or what the meaning of all that is? Or should I just, of all the things to be scared of, be less scared that my daughter's talking like a machine than of the nuclear stuff you talked about? Mitch Marcus: Kids interact with toys all the time. Kids have conversations with dolls. Kids are really good at knowing what's real and what isn't, but they learn behavior by interacting with things in the world and putting things on them, and so I think that I wouldn't worry too much about play. I think that one thing that I wonder about is the very limited ability of things like Alexa to actually have a conversation. It turns out having a conversation in a natural way means keeping track of what topics were being talked about, what topics we left suspended while we're on other things, who's been mentioned and who's going to be brought back into the conversation. If you study conversations as a linguist, you see that there's a huge amount of interesting structure that we know and we use in a very complicated way without knowing anything about it, the same way we use the grammar of a language intuitively and completely all the time and have no explicit knowledge of it. It's just there, and so these systems are very limited in what they can do, because getting this stuff right is something we have real trouble with. Bryan Schwartzman: Now, I didn't read it, I probably wouldn't have been able to understand it, but I couldn't help but notice that at some point, you co-authored a paper whose main title, at least, was Sorry, Dave, I'm Afraid I Can't Do That. Sorry, I can't do the computer voice, but that is, of course, a reference to a very famous scene in 2001: A Space Odyssey where the computer, which has already killed a couple of people, has locked Dave the astronaut out of the spaceship, presumably to die in space. And we've literally seen this movie before, and that's become such a trope. We've seen it in the Terminator franchise, in The Matrix. The visual medium is so powerful, I imagine. Do you think literally having seen these movies affects how we think about these issues as a society? Does it limit people's imaginations? Is it good that we've thought about this in this way? It seems like it has to matter on some level. Mitch Marcus: I think that anything that makes us think about how we can make a mistake with technology is a good thing. So the fact that that famous scene from 2001 might scare people about autonomous machines is a great idea. It makes you think about it. That's great. I need to say that that paper title comes from a then student of mine, Constantine Lignos, who's now an assistant professor at Brandeis, and what we were doing was building natural language interfaces for robots. 
We actually got something working that you could talk to and tell it, for a very simple situation, what you wanted to get done, not how to do it. And it would then compile that into a program a robot could use to do simple kinds of tasks, and what was important about that work was that it used a kind of mathematics in the backend which said that if the world was the way you had encoded it in the computer, then the computer model would be such that it would be guaranteed to get the task done, and if it couldn't get the task done, it could come back and give the user input that said, this task is not doable given what I know, and give an indication of why, and that was an important advance. I'll note that the kind of technology that that used is totally lacking from these ChatGPT models and we have no way to put it in. The fundamental technology was a kind of logic that encoded how things work, what things could do, and provided an explicit mapping from what you wanted to how to get it done, and then allowed the computer to compute a model of how to deal with what you wanted given every possible circumstance in a world, which, by the way, had to have a fixed number of things in it. It could be a lot, but it had to be a closed set of things and nothing else could enter the picture. We don't know how to solve the problem otherwise, and that kind of technology is absolutely missing from these ChatGPT models, and as of now, we have no good ideas that I've seen anywhere on how to put that in. So actually, first, it's important that a technology that could actually say, sorry, but that won't work, is beyond what these things can do now. Second, the scenario of a system saying, my principal need is to complete the mission, and if people get in the way, I've got to get rid of you, is, for a system that has logic, a very easy mistake for the programmer to make. In other words, if I build a car with a little radar unit in the front of it that will automatically stop the car if another car is coming at it, but it requires something the size of a car front, and a little kid steps in front of the car, the car won't stop, because I didn't build the device correctly for the real class of things I want to get done. The logic is a much fancier version of just the same problem. Again, the question comes down to how do you get a technology that can be certified to be safe by people other than the gee-whiz builders who are so excited that they have this wonderful toy? So I think the problem of Dave in the end is not an unknown problem. It's really not different than the problem of a new medication, and it's a problem that we'd better think about, and I've seen no evidence that any of the folks out there selling these models at the moment have really worried about this at all; probably there's some evidence, but I just don't know it, and there's not a lot of it. Their attitude is, we're really good, we're worrying about this problem ourselves, stay out of our way, trust us. Better to let technology move forward, and as always, capitalism will solve all problems with the right solution. Bryan Schwartzman: AI image generators have emerged as powerful tools. Are they something different in kind from language models? Do you see that as a different class of AI? Mitch Marcus: That's a great question. The short answer is no, they're the same. The language models assume they're getting an input in one dimension. 
What these image generators do is generalize that from one dimension, from a line, where the line in this case is words. Instead of colors, I have individual words, or if you want, even individual letters; I can do that and it'll work. I'm going to generalize it from one dimension to two dimensions, or to three dimensions, and then do it over entire images, and the math becomes much more complicated. But there's a trick that's added, a thing called convolutional neural nets, that knows how to deal with the fact that the area of something is much larger than the line itself, and deals with that complexity by breaking it up into smaller and smaller squares and uses that to handle things to some extent, but fundamentally, the neural nets that do images, or that do images and language together, are just extensions mathematically, but interesting extensions, of the same technology used for language. Bryan Schwartzman: You love our podcast, you've read our essays. Have you checked out one of our web conversations? The next one is November 29th at 2:00 PM Eastern time. We're going to have Rabbi Dan Ehrenkrantz, and with Rabbi Jacob Staub, he'll be discussing his Evolve essay, Where Are You?: A Beginner's Guide to Spiritual Practice, which is taken from his book of the same name, and this is a book where he leads the reader on a journey of inner transformation grounded in Jewish wisdom and practice. We'll leave a link in our show notes. You can find it on the Evolve website, evolve.reconstructingjudaism.org. We hope to see you there. Okay, now back to our interview with Mitch Marcus. I read that you're regularly studying Zohar. I don't know if you do that in, you're laughing, in a group or on your own, but I guess I'm wondering, does studying these mystical texts help you as a scientist think about these issues in a different way? Is it just a totally different part of your brain and spirit? What's the interest for a computer scientist in the Zohar? Mitch Marcus: I'm gear shifting rapidly. Bryan Schwartzman: Yeah, I guess there wasn't much of a segue there, but- Mitch Marcus: No, there didn't need to be. I blush to admit this, but I am currently in four Zohar havrutas a week, soon, at least for a while, to be five. I'm completely addicted to Zohar at the moment. I'm getting to the point where, from having very little language skill, I actually understand a bunch of Aramaic vocabulary and a bunch of Aramaic grammar. The result of this is that my Hebrew, at least biblical and prayer book Hebrew, is wildly better than it used to be. I kind of understand things a lot now, except for knowing lots of particular words, which is a problem. There's a lot of similarity between Zohar and the other things I do, which include, by the way, I also have a math havruta partner of 25 years now, a math study partner, and every week, we get together on the phone and talk about the math that we've individually studied during the week and worked through problems. We've been doing this for 20 years, and in a way, the Zohar and the kind of math we're doing are similar from my particular point of view, which is that there's all this rich, complicated structure that emerges that is beautiful. So the Zoharic image of the inner workings of the Godhead as being these 10 Sefirot, these 10 aspects. 
But in a certain way, what's more important is the flows between them and the little subparts and how each of the components fits and how all this works, and then the vastly complicated and impossible and awful question of how there can be evil in the world, which the Zohar just takes as a fundamental question of the universe that it has to address. From my wildly oversimplified beginner's point of view, the central question the Zohar is concerned with is how does creativity work, how does creation work, and how can creation be imperfect, how can evil exist? These are fundamental questions and they're not soluble. But these crazy people in the Middle Ages, in the late 1200s, really had an extraordinary take on it from my point of view, and it provides a rich system to analyze very, very complicated things and make parts of them understandable and highlight the mysteries. One of the great breakthroughs of computer science in the last 100 years was to come up with proofs of what cannot be known. We began the 1900s with the belief that we would be able to build a program (they didn't have computers yet, but they understood what the idea was) that could say whether any given mathematical statement was true or false. And this guy, Gödel, came along and proved that you couldn't do that, a mathematical proof of what's not knowable. He proved that there are certain true theorems of mathematics that could not be proven true or false. You could never say what they were, but they had to exist, and that he could prove. So that blows your mind about what's knowable and what's not knowable, and I think that the fact that there's stuff that's not knowable is inherent in all of this. I was going to the MIT Artificial Intelligence Laboratory for graduate school. At the same time, I was a member of Havurat Shalom, this first amazing little Jewish group that existed in Somerville, Mass., and I was spending my days at the AI lab and my nights studying, I don't know, Jewish [inaudible 00:58:41] with Joel Rosenberg, who, of course, did most of the translations for the reconstructionist prayer books. I was studying Zohar with Danny Matt, who was a graduate student too. I was spending my days studying AI and my evenings studying all kinds of Jewish stuff, including Zohar, and as far as I could tell, it was all part of the same thing. Language is a vast mystery, and how could it work? So I tend to view, wrongly probably, but I tend to view everything complicated as understandable up to some point, and I'm looking for some way to find structure and beauty in that structure, and the thing that the Zohar does so well is add all this beauty to the structure by embedding all of these amazing teachings in this beautiful narrative about this bunch of rabbis, this [foreign language 00:59:41], this fellowship, wandering around talking to each other. And these amazing teachings of the inner workings of the Godhead that look very technical occur in the middle of Rabbi Abba and Rabbi [inaudible 00:59:57] walking down the road, and Rabbi Abba says, teach me some Torah, and he starts out, and then this donkey driver in front of them interrupts with this interruption that pops the bubble of the argument, and then the donkey driver gives a much better and deeper account. So the whole structure of it, it's a model for learning, in many ways, and community. So I'm madly in love with Zohar and I see it all as being one thing. Bryan Schwartzman: Dr. Mitch Marcus, you've given us so much. That's a great place to end. 
You've given us so much to think about, maybe to keep us up at night, maybe not, but I really appreciate the breadth and depth of the conversation. Mitch Marcus: Thank you. I've really enjoyed doing this. It's been great fun. You ask interesting questions. Bryan Schwartzman: What did you think of today's episode? I want to hear from you. Evolve is about curating meaningful conversations and you're very much a part of that. Send me your questions, comments, feedback. You can reach me at my real actual email address, bschwartzman@reconstructingjudaism.org. We'll be back next month with an all new episode. Evolve: Groundbreaking Jewish Conversations is executive produced by Rabbi Jacob Staub and edited by Sam Wachs. Our theme song, Ilu Finu, is by Rabbi Miriam Margles. This show is a production of Reconstructing Judaism. I'm your host, Bryan Schwartzman, and I will see you next time.