["Hi, friends." That's what Scott Handelman says whenever he starts a talk or a podcast. He's done over 650 episodes of his Hanselminutes Podcast that he calls 'Fresh Air for Developers.' It's a tight 30-minute technology chat show that shares the same values that we do here at Greater Than Code. There's a huge library of guests for you to catch up on and a new high quality show every Thursday afternoon with a fresh face you may not have seen on other shows.] JESSICA: So go listen to it. AVDI: Yeah. JESSICA: After listening to this one because this was going to be great. SAM: Go listen in 3, 2, 1, now. Hello and welcome to Episode 114 of Greater Than Code. I'm your host Sam Livingston-Gray and I am also here with my doppelgänger, Avdi Grimm. AVDI: And I'm Avdi Grimm and I am here with Jessica Kerr. JESSICA: Thank you, Avdi and I am thrilled to be here today with Jean-François Cloutier also known as JF and I met him at Explore DDD Conference and I'm really excited to have JF here today but first, the bio. JF is French-Canadian who was born in Montreal in the year of the first spacecraft left Earth orbit. He has been an avid learner for ever and ever, from first grade to third grade science books and now, he has read another interesting book that he's going to tell us about but I'm not going to spoil that. JF has a Bachelor of Science in Physiology and he was going to go to med school but then he found programming and -- biomed school, sorry -- so, he begin the ComSci curriculum and he took AI classes and found Minsky's Society of Mind -- more on minds later, that's not a spoiler. JF has programmed in Fortran, in Lisp and in Prolog and Smalltalk -- a lot of Smalltalk because he was into OO before OO was cool. He's a total OO hipster. He's still in the OO that he's even written a collaboration IDE in Java after seven years of developing collaboration planning but then, he came across Erlang and the actor model and like actual OO but that's my opinion. Then his wife finance his six months sabbatical to learn Elixir. I think that may have involved robots because today, we're going to talk about Elixir-powered robots that simulate human brain processing. JF, tell us about your favorite cognitive model. JF: Oh, it's called the predictive processing and it's the dominant theory right now in neuroscience. I came upon it about a year ago, right after the previous Explore DDD Conference. It's absolutely fascinating. I would go up sure, to more details but later on. SAM: That's right because we've skipped over our usual intro question. JF: By the way, I did not write a book, nor do I plan to, so I don't want to disappoint anyone. JESSICA: That is very smart of you. That's on my anti-bucket list too. SAM: Some people want to write, some people want to have written a book. You want never to have written a book. JESSICA: I would begin by having written a book but I know better. Okay, so JF, what is your superpower? JF: Well, that's kind of a difficult question because I don't think of myself as having superpowers. I knew you were going to ask these questions, so I turned to someone who does have superpowers, I turned to my wife and I asked, "Liz, what are my superpowers?" and she immediately said, "You see the forest before the trees." I said, "Huh, that's interesting." I guess I do have a pretty good sense on systems. I'm a systems thinker. 
I approach everything as a system embedded into other systems embedded into systems, and I always try to intuit the emergent properties of any system I approach or any software I write. Software is always a system; even the simplest software is embedded into a larger system, including the environment in which it's used. I'll go with what my wife says -- that's always a good thing to do in a marriage -- and say, "Yes, I believe that would be my superpower" -- systems thinking. SAM: I would just like to take a moment and point out that you have, in fact, two additional superpowers that I would draw attention to. The first is asking your wife things and the second is listening for the answer. JESSICA: Rare and powerful skills. Do you have some concrete examples of things that you have done differently based on your view of everything as a system within a system? JF: There's something I'm working on right now. I have built this Elixir-based framework for a REST service backend. Early on, I had a sense that this should be event-driven because I could anticipate that there would be more and more components that would plug into this backend and I could see this developing into a system where various parts would complement other parts. Even though there was no immediate need for it, I pushed really hard and turned it into an event-driven system, so that we would have those nice systems properties in that backend framework. That's one example. The other example, of course, is how I approach programming my robots, which is entirely systems-driven -- the design of it. JESSICA: So like you're picturing and envisioning the forest, even while you're building the first tree? JF: Yes. As I build a tree, I think of it as a tree in a forest more than as a bunch of trees, which will eventually, maybe by accumulation, become a forest. It's hard to articulate but I think it's really a matter of being intuitive about the unpredictable consequences of any design decision because they have ramifications. They have impact, there's feedback loops everywhere. SAM: So where does intuition come from? JF: Intuition is cognitive processes that are inaccessible to representation. JESSICA: Is it a form of predictive processing? JF: Everything can be boiled down to, or at least involves, predictive processing but I'm not smart enough to answer that question. JESSICA: I think that's another superpower right there. JF: Not knowing? Well then, I'm very powerful. JESSICA: Yeah, like accepting that you don't know something and being okay with that. SAM: And being able to say it out loud. JESSICA: On a podcast? JF: Oh, well. I guess I'm doomed now. Nobody will employ me. SAM: That's okay. You can lead our legion of super-people. JESSICA: So, your robots, tell us about the forest they live in. JF: Maybe I'll start with the origin story of how I came to the robots -- JESSICA: So your robots, tell us the origin story of your robots. JF: Well, I'm glad you asked. It started maybe three years ago. I organized this Erlang/Elixir meetup here in Portland, Maine. At one point, I was like, "I need some fresh material. I want to do something exciting," and I saw this YouTube video from the folks at Erlang Solutions, who had found a way of putting Elixir on Lego EV3 robots so that you could program those robots in Elixir, and I thought that was extremely cool and I said, "I want to do that too. That would give me material for a number of meetups." I purchased a Lego EV3 robot kit. 
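[A minimal sketch of the kind of event-driven, plug-in backend JF describes above -- not his actual framework; the module, topic, and message names here are invented for illustration -- using Elixir's Registry as a simple pub/sub event bus:]

```elixir
# A tiny event bus: components subscribe to topics via Registry, and
# publishers dispatch events to whoever happens to be registered.
defmodule EventBus do
  def start_link do
    Registry.start_link(keys: :duplicate, name: EventBus.Registry)
  end

  # A component (a process) calls this to receive events on a topic.
  def subscribe(topic) do
    {:ok, _} = Registry.register(EventBus.Registry, topic, [])
    :ok
  end

  # Publish an event to every current subscriber of the topic.
  def publish(topic, event) do
    Registry.dispatch(EventBus.Registry, topic, fn subscribers ->
      for {pid, _} <- subscribers, do: send(pid, {:event, topic, event})
    end)
  end
end
```

[New components can plug in later simply by subscribing to the topics they care about; nothing that publishes needs to change, which is the systems property JF was after.]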
By the way, that kit is the greatest toy in the world, bar none. It's just absolutely amazing, and I set out to control motors and read from sensors from Elixir and that was pretty straightforward. I made use of a release of Linux that's augmented with drivers for the EV3. One thing that is really important is that you can actually override the programming environment of the EV3 by inserting a microSD card from which it boots. You can boot Linux on the EV3, and if you can boot Linux, you can talk to the actuators and sensors by, essentially, writing and reading files, and you can do anything from any programming language that will run on Linux. Once I figured that out, I was like, "Wow, that's really cool." What was I going to do with this? I was just going to have the robot run in a circle, and I said, "No, that's kind of boring," and then I remembered that in the mid-80s, when I was taking AI courses at McGill, I had come across Minsky's Society of Mind, and in a nutshell, it says basically that the mind is a collection of diverse simple agents that interact in simple ways, and from these interactions emerge what appear to be intelligent behaviors. I said, "Multiple agents? Elixir, which is a concurrent programming language? Match made in heaven. I've got to do something with my robots and create this society of mind," so that's what I set out to do. JESSICA: Oh, so society of mind is like a society within the mind. JF: Yes, exactly. Have you seen the Pixar movie -- JESSICA: Inside Out. JF: Inside Out, that's it. Well, that is a simple representation of society of mind. You have fear, you have anger, you have... What else? There's joy. JESSICA: Disgust. JF: Disgust, all these things, and they're kind of battling it out, and out of this battle of agents comes the behavior of that little girl. That's a very simple, cartoonish view of the society of mind but, I think, a fair one. That's what the society of mind would be -- a bunch of agents, and they interact, and out of these interactions come apparently intelligent behaviors. JESSICA: At one level, that could be like a neuron is an agent but I suspect that would be rather too many. What about the agents in your robots? JF: My first implementation -- not the one that you would see in my most recent Explore DDD presentation -- was an intuitive model of the mind. I sat down and came up with what I thought might be a working cognitive model. I had perception, which is what the robot sees, and then those perceptions build on other perceptions, so you perceive that you're close to the wall but then, you also remember that you were further away just recently, so you have a higher-level perception of "I'm getting closer to the wall" -- a hierarchy of perceptions. I had motivations: what do I want to do now? Am I hungry, am I curious, am I fearful? These might represent emotions, and emotions would select different behaviors, and those behaviors were finite state machines. How do I go about finding food if I'm hungry? And if I'm hungry, then my motivation to be curious is shut down. It's inhibited because there's something more important. If I'm afraid, my hunger goes away, so there's a kind of hierarchy of emotions, which is inspired by Rodney Brooks' robots, which are layers of behaviors that inhibit other behaviors. That was, in a nutshell, what I had built, and there was a memory in there. There's also an agent for attention, so that certain perceptions may not matter in the moment. 
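[Assuming the file-based interface JF describes -- the device paths, attribute names, and module below are illustrative, not taken from his project -- driving a motor and reading a sensor from Elixir can be as plain as writing and reading files:]

```elixir
# Hypothetical sketch: control an EV3 motor and sensor by reading and
# writing the files that the Linux drivers expose.
defmodule EV3 do
  @motor "/sys/class/tacho-motor/motor0"     # illustrative path
  @sensor "/sys/class/lego-sensor/sensor0"   # illustrative path

  # Set the motor's target speed and start it running.
  def run_motor(speed) do
    File.write!(Path.join(@motor, "speed_sp"), Integer.to_string(speed))
    File.write!(Path.join(@motor, "command"), "run-forever")
  end

  def stop_motor do
    File.write!(Path.join(@motor, "command"), "stop")
  end

  # Read the sensor's current value as an integer.
  def read_sensor do
    Path.join(@sensor, "value0")
    |> File.read!()
    |> String.trim()
    |> String.to_integer()
  end
end
```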
You shut down those detectors, and so forth and so on. All of these agents are firing all at the same time. They're all interacting through events. It's all event-driven. Imagine dozens of these agents just doing their things, inhibiting others, sending events, listening to other events coming from other agents, and somehow, that little robot would actually move around and find a source of food, which was indicated by a beacon -- so it was looking for that beacon -- and the source of food would be a sheet of paper on the floor, which it would detect, and of course, it would start eating. All these behaviors looked pretty good. It looked like the little critter was actually truly autonomous and purposeful, which is what I was after. That's what I did, and that led to a number of meetups, and eventually, I presented the results of that first cognitive model at Elixir days over two years. That gave me a lot of material and that's how I started on programming robots, and of course, it's just glorious fun and I just kept going. AVDI: You said that was your first model, first approach, so what changed and why? JF: At the Explore DDD Conference in 2017, I presented my initial model as an example of domain-driven design, as a DDD endeavor -- a DDD-rich project -- and I showed it and it went very well, and at the Q&A session, my good friend Bruno Ricard stood up and said, "I have a question for you." I said, "Yes." He said, "Do your robots learn and how would you go about having them learn?" And I went, "Blah-blah-blah... Neural networks, machine learning." I was totally mealy-mouthed. I had no idea, none whatsoever. SAM: That's the best kind of question. JF: It totally threw me for a loop, so I kind of messed up a nonsense answer and it just stuck with me. I went, "Man, this is really, really annoying. I have this cognitive model that I've built and there's no room in it for learning. What kind of cognitive model is that? Learning is just basic. It's a core capability of a cognitive system," so for about a couple of months, I was like, "There must be a way. There must be a way," and I really didn't quite know. Then as luck would have it, on my Twitter stream, I saw a reference to a blog called 'The Brains Blog' and an article about predictive processing, more specifically a book by Andy Clark called 'Surfing Uncertainty,' and the summary of the book and the concepts behind predictive processing made it very clear that learning is built in; it's core to this theory. I went, "Oh, my God. This might just be the ticket for me. This might be how I get out of my dead end and come up with a cognitive model that, for one, would not be my own homebrew cognitive model. It would be based on the latest and greatest developments in cognitive neuroscience." I bought the book, read it twice, slowly. It's an amazing book but man, is it dense. JESSICA: I totally read the introduction. JF: How did it go for you? JESSICA: Oh, it's lovely. The introduction is totally approachable. JF: It gets better. AVDI: I read the cover. SAM: Making me jealous, Avdi. How do you find the time? JF: Well, here's my trick. I like not working. As much as I enjoy programming, I enjoy not being in the mindset of a programmer. I enjoy being contemplative. I enjoy thinking large, instead of thinking logically and somewhat narrowly, as you must when you code, so I need something to motivate me to do extracurricular coding. 
The way I get myself to do it is I submit a proposal to a conference and present it as if it's work already done, and then it gets accepted and I'm on the hook. JESSICA: Conference-driven development. JF: Exactly, so that's how. AVDI: I have totally done this as well. How did this new approach to robotics work out? JF: It worked out pretty well. My goal was to recreate basically the same behaviors that I had with the prior cognitive model but with a completely new model, with some aspects of learning built in, so that the behaviors could improve over time through practice. That is, the robot discovers what works and what doesn't. Long story short, I got some interesting results, I recreated the behaviors and I could see an improvement in the robot's abilities. SAM: What's an example? JF: How effectively does it find its way to the food, given various stimuli like obstacles and distractions, like being in a dark spot, which makes it fearful? Does its behavior look random? Or does it look purposeful? In the first implementation, everything was kind of not exactly hardcoded but there was very little variability. Behavior was described as a finite state machine and a finite state machine is a finite state machine. The new implementation was based on predictive processing, which I may want to get into a bit more later, but it's all about making predictions as to what you are about to perceive, finding out whether or not those predictions are true and, if they're not, then initiating actions so that the next round of sensations fits your predictions. In the world of predictive processing, the brain and nervous system's principal task is to predict what it's about to perceive. Imagine the predictions going down, meeting the wave of perceptions coming up. If the predictions are correct, that's it. No more. We don't worry about it. But if there's a prediction error, the prediction errors bubble up and whatever made those predictions sees those errors and says, "Oh, I need to do a better job on the next cycle to better predict the sensations that are always coming in, wave after wave after wave." It's this constant attempt to predict what I'm going to perceive, and having some of the predictions work -- as many as possible, hopefully -- and some fail; prediction errors bubble up. That's what you actually become aware of. That's what actually gets perceived. At the same time, your generative models -- what in your brain creates those predictions -- learn from their errors so that they predict better, so learning is core to this very process. What's really interesting is that the behavior of the robot is about removing prediction errors, so it's not a finite state machine just doing this, then that, then that. It's like the robot is making a series of predictions; those predictions are verified or not. If they're not verified, then there's compensation: the robot will act, will do things, in order to correct those errors, to make those errors disappear. It's a very, very different way of looking at behavior. JESSICA: So, prediction errors -- there's kind of a conservation of attention thing going on here, where as long as we're right, we don't notice anything? JF: Correct. JESSICA: And then when we're wrong, there's two things going on. One, sometimes we change our prediction, but you also mentioned we take action to make our predictions true. JF: Yes, and that's really freaky. For example, imagine you're about to grab your cup of coffee. 
What happens is that you generate the predictions that you are feeling what you expect to feel when grabbing that cup of coffee, but you haven't moved yet. You haven't moved yet. You're not feeling that cup of coffee in your hand, and the result is prediction errors are firing, and then your motor cortex says, "Let me correct that and move my hand so that it feels the cup of coffee," so your body's movements -- what you're doing is you're fulfilling prophecies. It's a self-fulfillment of prophecies. JESSICA: The story of humanity. JF: Yes. My robot moves. Exactly what it does is, "I predict that I am over the food. If I'm not over the food, what can I do to make that prediction come true?" It will be given a series of options on how to move; it will pick one and actuate the move, and if for some reason, that gets it over the food, then the prediction is fulfilled and the robot remembers, "When I try to be over the food, it works well when I do this versus that," and over time, it finds out that for correcting certain prediction errors, these actions work better than others. It will favor these actions, which means that over time, the robot will pick the right action-option to make a prediction error go away, and it would do better and better and better over time. Now, my model is extremely simplistic and it seems to work, but I can do so much more, which I intend to do. JESSICA: What conference have you submitted to in order to drive you to actually do more work? JF: I like the number three. In terms of the drama, three is great. First, you present something that looks really good but then, it doesn't quite work. With number two, you try to compensate and you go only so far, and then, three is triumph -- you succeed beyond wild expectations. I'm hoping that's what's going to happen, so I'm going to be presenting at Explore DDD for a third time, so that I have a nice arc. JESSICA: Yeah, I need to submit to that conference this year too. It's fantastic. SAM: Sorry, are you predicting seeing yourself presenting at this conference and then, generating actions to get yourself there? JF: Yes. Thank you. Exactly. I can feel it but I'm not there, so yes, I will take corrective actions. SAM: This is a fascinating model, but my first thought when you started talking about this -- and I even put this in the side chat -- was about comedy and stage magic, and these are things that humans do to entertain other humans by messing with their predictive models. JF: Yes. SAM: And we like it. When do you get to tell jokes to your robots? JF: It's called magic, right? That's what great magicians do. They mess with your predictive processing. Absolutely. AVDI: As I was watching your talk, your explanation of predictive processing is great there, and it was completely baking my noodle. In retrospect, it's very consistent with this other stuff I've seen about neuroscience, where it's like if there's one truth of neuroscience, it's that the brain is incredibly lazy. This kind of slots into that, where if the brain can figure out a way of not doing work, it will. I feel like we mostly imagine that our brain is taking all these inputs in and then building a model around them, but that would involve constant work, so it sounds like what you're saying is that actually, the brain is just sort of imagining the world and then, any time the physical world actually collides with the imagination, then it actually has to do some correction, otherwise it sort of proceeds blithely along at low energy levels. 
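[A very stripped-down sketch of the loop JF is describing -- predict, compare against what is actually sensed, act to cancel the error, and remember which actions tend to work. All of the names, and the simple tally used for learning, are invented for illustration; his robots are far richer than this:]

```elixir
# One cycle of a toy predictive-processing loop for a single belief,
# e.g. "I am over the food". `actions` maps an action name to a function
# that moves the robot; `history` counts how often each action has made
# this particular prediction error go away in the past.
defmodule PredictiveLoop do
  def cycle(prediction, sense_fun, actions, history) do
    if sense_fun.() == prediction do
      # Prediction fulfilled: nothing bubbles up, nothing to learn.
      history
    else
      # Prediction error: try the action that has worked best so far,
      # then check whether it cancelled the error and remember that.
      {name, act} = Enum.max_by(actions, fn {n, _} -> Map.get(history, n, 0) end)
      act.()

      if sense_fun.() == prediction do
        Map.update(history, name, 1, &(&1 + 1))
      else
        history
      end
    end
  end
end
```

[A fuller version would also explore less-favoured actions now and then and would adjust the generative model itself, but the shape is the same: it is the errors, not the raw sensations, that drive both action and learning.]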
The interesting thing that this makes me think about, kind of with regard to software, is that I feel like all the software thinking that I've encountered has been guided by this idea that humans are building a model of the world based on their inputs, so our software should contain a model of the world that's constantly updated based on its inputs. That thinking guides a lot of the way that we model things for software, but what you're saying is that there's a difference between having a model that's constantly updated -- constantly scanning and constantly updating a model based on that -- versus a generative model which is occasionally adjusted. Does that have implications for how we approach software as well? JF: The general question is maybe hard to answer, but in this specific case, yes, the story is one of frugality. I prefer frugality to laziness because frugality is smart, lazy is just lazy but -- AVDI: Oh, so you studied marketing? JF: This concept of predictive processing -- or predictive coding, more specifically -- is used to do video encoding. If you can predict the next frame based on the previous frames, then you only need to encode what were your prediction errors. You can be not very smart about it and just take a static approach and say, "I predict that the next frame will be exactly like the previous frame," and you only encode the differences -- that's MPEG, right? But if you are smarter and you have a video where things are panning in the background, you can predict that in the next frame the background will be shifted a little bit to the left, and you may have predicted it correctly, in which case you're good, even though the whole frame has changed, since you've predicted the panning. Maybe all you're capturing is the fact that the horse has raised its hoof a little bit more than in the previous frame, and that's all you're encoding, so you can do some really serious compression using predictive coding. That's analogous to what the nervous system does: by improving its prediction of the incoming sensory flow, it only needs to concern itself with where the sensory flow deviates from the predictions. Going back to your notion of representation, we don't represent the world in our head the way AI thought we did in the early days of robotics. I don't know if you've heard of it, but one of the early AI projects was to build a robot called Shakey, and Shakey had a video camera and it would build an actual model of the space around it: blocks and cubes and where they are and whatnot. Then it would build a plan according to this model and then execute that plan. The loop was so heavy -- it was so expensive to build that model, and then of course, that model would be invalidated as the robot moved, so it needed to rebuild that model again -- that the poor thing would move like a few inches a minute, and in the beginning, they thought this would be like a summer project; it ended up being a multi-year project and it was never really satisfactory. I think it's quite clear that the notion that we're representing the world in terms of logical statements, logical assertions in our brain -- that has not panned out. That is not how it works. But it's really interesting that you mentioned generative models; let me just explain a bit more clearly what a generative model is: a model of something that can generate instances of it. Not just recognize, not just classify, but actually generate instances of it. 
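[To make the video-coding analogy concrete, here is a tiny sketch of "encode only the prediction errors," using the naive prediction that the next frame equals the previous one. Frames are just lists of pixel values; the module is invented for illustration:]

```elixir
# Naive predictive coding: predict "next frame == previous frame" and
# keep only the positions where that prediction was wrong.
defmodule DeltaCoding do
  # Encode a frame as the list of {index, new_value} prediction errors.
  def encode(previous_frame, frame) do
    previous_frame
    |> Enum.zip(frame)
    |> Enum.with_index()
    |> Enum.filter(fn {{old, new}, _i} -> old != new end)
    |> Enum.map(fn {{_old, new}, i} -> {i, new} end)
  end

  # Rebuild the frame by applying the recorded errors to the prediction.
  def decode(previous_frame, errors) do
    Enum.reduce(errors, previous_frame, fn {i, value}, frame ->
      List.replace_at(frame, i, value)
    end)
  end
end

# When the prediction is mostly right, the encoding is tiny:
# DeltaCoding.encode([1, 1, 1, 1], [1, 1, 2, 1])  #=> [{2, 2}]
```

[A smarter predictor -- one that anticipates the panning JF mentions -- would leave even fewer errors to encode, which is exactly the frugality he is pointing at.]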
Imagine now, we've grown, we're adults, we've developed this sophisticated multi-layered set of generative models, then we're tired, we close our eyes, we go to sleep. What happens? Those models are still running and we're dreaming, and it feels real because why wouldn't it? It's the same as when we are awake, except we don't have our senses telling us whether our model is off a little bit or not. That's why when we dream, it feels very real: suddenly we start flying and that feels real, but in real life, we could not feel ourselves flying because we feel our feet on the ground. Unless there's a disturbance in the balance between predictions and sensations. If your predictions are too strong and they're not annihilated by the sensations, and your predictions kind of overwhelm the sensations, then you start hallucinating, because reality is not strong enough to tell your predictions, "No, you're wrong. Here's what's really going on." So this imbalance between your predictions and your sensations -- if the imbalance is too great, then you might be suffering from schizophrenia because your predictions are too strong. If the reverse happens, if your sensations are too strong and your correct predictions cannot cancel the sensations, all sensations go through as prediction errors [inaudible] shouldn't, and your brain is overwhelmed with, "Everything is new all the time. Everything is surprising all the time." That might explain some aspects of autism. That was really, really interesting. JESSICA: I thought of an example of being overwhelmed by prediction errors -- and other people not being -- because these days -- I'm going to reveal my age here -- you see a lot of videos where like every half a second it's changing to something different. Isn't that painful? We're all 40 here, close enough, but you know, the kids are totally fine with it. They have like a prediction that, "Oh, yeah. The video is going to change every half second, no bother," and I'm like, "No, you should not be changing." JF: Oh, yeah, it drives me completely nuts. AVDI: I'm going to show you some [inaudible] videos later. SAM: Oh, and at the same time, if you try to go back and watch a TV show or a movie that was shot in the 80s, for example, it's so slow now. I remember loving those things and then I go back and I'm like, "Okay, can we get there yet?" So my model, at least, has changed -- my expectations have changed about the pace of things -- and so my model has adjusted to that over time, and it's interesting to sort of go back and recalibrate. JF: Yeah. It's about calibration of attention. If you pay a lot of attention to detail, then you make more precise predictions, so there will be the possibility of a greater number of prediction errors. You can calibrate, and I think, when you watch one of those videos or ads where the image changes every two seconds, it essentially calibrates your attention down. You don't pay attention to the details. You're not overwhelmed by the changes because you're not predicting much. There aren't that many prediction errors flooding in. We kind of become detached. I think there's the same amount of stimulation but we don't pay attention, so they feed us more and more and more changes and we pay even less attention, and I don't think that's a good thing. It's an arms race of sorts. JESSICA: How well can we ignore them? AVDI: This is a bit of a change, but I can't help but ask a little bit about this because this is like from one OO dork to another. I was looking at your bio. 
I've also had a long, long history of thinking about object-orientation, and I realized recently that I literally got my first job because of thinking about OO. I basically just said polymorphism a lot and they were very impressed by that. I've been thinking about this stuff for a long time and I've also come to a very similar conclusion to you, that probably the stuff that's going on in Erlang and Elixir is about the closest we've ever come to the actual object-oriented model. But the thing that I have a hard time putting words to, especially as I go on and look at other programming systems -- and I wonder if you have some insight into this -- is why does OO matter, why does that point of view even matter? Your thoughts on that. JF: When I first encountered OO -- and I'm going to age myself ridiculously, this was in the mid-80s, early 80s actually, when Smalltalk came out -- I was very excited because it felt like a very conceptual way of programming. I could implement concepts and it felt very, very friendly to me. Where I think it started going wrong is that we tend to fall into ontological traps when we do object-oriented programming. We look for a classification tree -- a beautiful conceptual tree -- of the code we're writing: classes, subclasses and sub-subclasses, so that we have beautiful polymorphism in all that, and the inheritance of code. Then you hit a feature request and it completely demolishes the assumptions about what's variant and what's invariant in your beautiful, polished ontology, and then you start putting square pegs into round holes and making weird subclasses that override too much, and eventually, it turns into a monster, something that doesn't have that nice, logical structure you had before. You need to explain what you did with lots of buts and ifs and exceptions -- but in this case, but in that case -- and I think that has been the danger of inheritance, specifically, in object-oriented programming. It's very easy to go down an ontological path that becomes a dead end, a little bit too far, when it's too late to rethink your ontology and completely reorganize it, because at that point, I don't think it's plastic anymore. I think inheritance is essentially cement that you've poured into the veins of your software. JESSICA: Did you say that inheritance is essentially a cement that you pour into the veins of your software? JF: It can be, and it is pretty quickly. Yes, unfortunately. JESSICA: That's pretty quotable. JF: Thank you. SAM: But it is all about inheritance, right? JF: But, no. That's very nice, and that's what Jess and Avdi said: that maybe the actor model is what object-oriented programming ought to be, and that seems to be what Alan Kay, who is one of the inventors of Smalltalk, has been saying for a while: "Really, it should not have been called object-oriented programming. It should have been called message-oriented programming," with the emphasis on individual actors exchanging messages and thus engaging in conversations, but not necessarily with the notion of inheritance. I think the messaging is key; the inheritance is seductive but dangerous. And one thing is, I had not answered that question yet: as I made the move to Elixir, which is the actor model with functional programming, after doing a decade or more of Java programming, I was happily writing code in Elixir and then I said, "Why am I not missing what I had in Java? Why am I not missing inheritance? Am I hobbled because I don't have it? I don't feel like I'm deprived, so why is that?" 
In trying to really answer that question, in fact, I realized that I don't need it. It does not matter. I don't miss it at all. I'm very happy I can tackle tremendous complexity by defining simple data structures and defining transformations on these data structures, and defining actors and agents that have a simple state and have a clear purpose and simple interactions with other actors. I find that this is entirely sufficient to build complex systems. As a matter of fact, it makes building complex systems that much easier. I find it liberating, so I don't miss object-oriented programming, and when I have to go back to it, I'll slip into that mode of thinking and I'm careful about not overdoing my ontologies, not creating complex inheritance trees for the sake of a little bit of reuse, keeping very flat hierarchies. But when I come back from it, back into functional programming with the actor model, I feel like I don't miss anything. That was a surprise to me when I got into the world of Elixir, that I would not miss object-oriented programming at all. Not a bit of it. No. AVDI: Now, do you ever find, as you separate the agents more completely -- because that's one of the things about Elixir versus traditional Java or OO, really separating the agents that are messaging each other -- do you find that you start running into cases that are difficult to reason about, because the cause is all tied up in a series of messages that pass through a series of agents, and tracing back the interactions between them? Has that become an issue or not? JF: I expected early on that it would be, but it hasn't. AVDI: Interesting. JF: Here's the thing. In the old days of procedural programming with GoTos, when you were debugging, you would ask yourself, "How the hell did I get here, at this line of code? How did I get here?" When we do object-oriented programming, it would be like, "How the hell did I get to this state?" and it would be a very complex thing to unravel. But with functional programming, you're not "anywhere." It's just simple transformations. As long as the inputs are the same, you can expect the output to be the same. There's no getting lost somewhere when you're doing functional programming. And when you have actors, the state tends to be very simple, and I haven't found myself getting lost yet, because the communication between actors is through immutable data structures, and once you understand that when you send messages to an actor, these are immutable data structures that tend to be simple ones as well. They're processed in order by the actor as it scans its mailbox, so there's a very simple model of how the actor processes its messages. That does not make things complicated. When I program my robots, there are a lot of agents and they're all firing, and there's a lot of concurrency, and different kinds of agents, different kinds of actors -- one would expect that I would get lost, but I didn't. I didn't, because it was easy to reason about what happens when a given agent receives a given message. In isolation, it's very straightforward to reason about what's going to happen, what will be the change of state of that actor. Since the change of state is isolated to that actor, and an actor cannot change the state of another actor without sending messages, the simple model makes it actually quite straightforward to reason about the behavior of the system. Now, if your system has emergent properties, that's inherently difficult. 
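[A small sketch of what JF means by actors with simple, isolated state: an Elixir GenServer whose state can only change by processing the messages in its own mailbox, one at a time. The agent, its states, and its events are made up for illustration:]

```elixir
# A toy "motivation" agent: other agents send it immutable event messages
# and it updates its own small state, one message at a time.
defmodule Motivation do
  use GenServer

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :curious, opts)
  end

  # Messages are plain immutable data; senders never touch this state.
  def notify(pid, event), do: GenServer.cast(pid, {:event, event})
  def current(pid), do: GenServer.call(pid, :current)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast({:event, :food_detected}, _state), do: {:noreply, :hungry}
  def handle_cast({:event, :in_the_dark}, _state), do: {:noreply, :fearful}
  def handle_cast({:event, _other}, state), do: {:noreply, state}

  @impl true
  def handle_call(:current, _from, state), do: {:reply, state, state}
end
```

[Because each agent's state lives only inside its own process and its mailbox is drained in order, "what happens when this agent receives this message" can be reasoned about in isolation, which is the property JF is relying on.]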
JESSICA: You mentioned at the beginning that you're particularly good at seeing the emergent properties. JF: Intuiting them, maybe. They're always surprising. That's the nature of a complex system. JESSICA: And do your robots have emergent properties? JF: Yeah, their very behaviors. Their very behaviors are emergent properties. I can't predict how my robots are going to behave, because not only is it the result of emergent properties, but there's learning involved, so it knows which actions tend to correct the prediction errors better than others, statistically. I can't predict that either, but I can ensure that the whole of the interactions between the agents makes sense. AVDI: Here's a question based on that. If the robot does something that's emergent, unexpected and clearly antithetical to working the way you want your robot to work, do you try to figure out all of the message interactions that lead up to that happening? Or do you fiddle with things until it seems to get it right? JF: You fiddle with things, because that's the nature of a complex system. When you touch it, you cannot completely predict how it's going to react. That's where intuition comes in, and experimentation. AVDI: Yeah, I feel like this is very different from how we're traditionally taught to operate as programmers. JF: Yes. When you create a complex system, you essentially make a bit of a deal with the devil. Think of the Wall Street market. Nobody understands how it works. It's become complex enough, with a sufficient number of feedback loops -- or the economy as a whole -- that if you change interest rates, you don't know exactly what's going to happen. You may say, "In the past, when we made this kind of adjustment, we got these general properties to move in these general directions," but when the system becomes sufficiently complex, you cannot reason in detail about cause and effect. You experiment, and maybe, if you do the same thing twice, you get a different result. That's a possibility, but that's already the case in everyday programming, right? Fixing a bug -- JESSICA: In real life, yeah. JF: In real life, when you fix a bug and the system is sufficiently complex, it has a lot of side effects and feedback loops. You make a fix here, you break five things over there that you didn't even know existed, so we're already there. JESSICA: Yeah. There's been, forever, this move toward striving for simplicity. We want to be able to reason about our programs all the way through, not just in the small, and that's great -- get that where you can, fantastic -- but hello? That's never going to be sufficient. SAM: Because our programs are dealing with input that comes from humans and humans are just messy. JF: Oh, there's that, and then when your software reaches a certain level of complexity, cause and effect are no longer traceable, and even worse, with all this deep learning software, we have absolutely no clue. We can't trace how an input leads to an output. It's opaque. SAM: Bias laundering. AVDI: I feel like this is kind of validating to the sort of beginner's-mind approach to programming, because when you're a beginner, especially if you get into it from a nontraditional angle and you're not going through, like, a computer science curriculum, you fiddle with things. You don't know what's going to make it work. You fiddle with things kind of randomly. 
I think in computer science, we kind of denigrate that approach, and this is a validation of it, saying that, "At some level, you're just going to get bogged down trying to backtrack a problem all the way to root causes." JF: Yes. Actually, you said the word 'backtrack.' It reminded me of when I was doing some Prolog. In Prolog, you describe the problem with logical rules and logical assertions. Then if you write the program well, you ask your program to find a solution to, let's say, a Sudoku puzzle, and you basically program the rules of Sudoku. My first shock was that I would write a program that absolutely worked -- not complicated, 20 lines -- and it would come up with solutions that I had not thought about. I was writing programs that apparently were very simple but surprised me. I think the notion that programs can be surprising starts even with very simple ones. Of course, when you get into very complex systems, then surprises are the order of the day. JESSICA: You mentioned enjoying not being in the mindset of a programmer and thinking in the large, and yet, when we're doing this poking, [inaudible] that causes it, with this fiddling and then experimenting to see what happens in a complex system, we're not in that super logical, specific space of a programmer. Yes, we need to be able to go there, but if we try to live there all the time, I don't know... we're really limiting the jobs we can do. JF: Yeah, we can be very narrow, logical, everything predictable in the small, and we must be, of course, otherwise we're incompetent. But when we get into the large, then that's when intuition comes in. That's when I do my best work in the shower. You're in the shower, water is flowing, and then you just put your mind to sleep and then -- oh, you have an idea. You don't know where it came from, and by the time you're done rinsing yourself, you're a different person than you were when you entered the shower because now you've seen something. Your intuition has opened up your mind, and then of course, you dry off as fast as possible and try what you just imagined. At some point, intuition is a key skill. JESSICA: And how do we get that? Do we just do stuff more? JF: Yes. Read, do, take showers. JESSICA: Practice and reflection. JF: It's a good sign when you come up with solutions in your sleep. Your intuition is working beneath the harness of consciousness and directed thought. JESSICA: I come up with solutions in my sleep all the time. I would be amazed if one of them ever worked, but my sleep predictions are not that good. JF: I think we're smarter in our sleep than we are awake, sometimes. I have dreams where I compose music and I wake up and it's pretty good. Darn, if only I could do that when I'm awake, but when I'm awake, I know I can't do this. When I'm sleeping, for some reason, I forget that I'm not capable of doing something and then I do it -- but not always. JESSICA: That was beautiful. There's a self-fulfilling prophecy for you. "I can't do this." JF: Yes, you're always right when you say that. That's true. JESSICA: Yeah, because we're making predictions and making them true, and there's always one that you can make true for yourself. SAM: Unfortunately, the inverse is Dunning-Kruger, so you know, "I can do this. Wait, no, you really can't. Yes, I can." JESSICA: That's true, and then the counteraction of that Dunning-Kruger is if you can notice the prediction error of, "No, actually you didn't." JF: I've been listening to your podcast while I do the dishes. That's when I listen to podcasts. 
I'm on dish duty every day here at home and I don't mind because it's my podcast time. I remember in a previous podcast, you were talking about Beer... What is his name? The systems... JESSICA: That was Stafford Beer? JF: Yes, Stafford Beer. He came up with this systemic view of adaptive systems and applied it to organizations, and I just started thinking, "What would happen if we applied the insights of predictive processing to organizations as systems?" Just a thought. I haven't thought deeply about that, but it seems that this strategy of predictive processing has been adopted widely in the living world and is very successful. There must be reasons why it's very successful, and those reasons might generalize to the world of organizations. We tend to see organizations as living systems, right? So what would it mean to apply the principles of predictive processing to organizations -- human organizations? I don't know. That's the question. JESSICA: Well, would it be, "We come up with a one-year plan and then we make it happen?" JF: Yes, but that's like you're walking really fast in a crowded place and you open your eyes once every hour. JESSICA: I guess that gets into the octopus bit. The predictive processing doesn't just take place, essentially, in your brain. It takes place in your eyes, right? JF: Yes, exactly. There are cells just behind the retina that predict which pigment is going to be stimulated next, and if that's correct, then the prediction is fulfilled. There's no neural traffic coming up, but if it's wrong, then of course, there's neural traffic. The predictive processing is not centralized in the brain. It's distributed throughout the nervous system and also, in and of itself, it is fractal. You make a large prediction and then smaller predictions and then even smaller predictions, and your generative models are organized fractally as well. You have very specialized generative models. You have higher-level generative models: short-term ones predicting how it's going to feel to open my hand, and long-term ones, which are, "Is this person going to smile after I say this joke?" It's fractal in terms of scope -- spatial scope, time scope -- in terms of the domain itself that it tries to predict, in terms of the sensations that it will generate for us. That's not very well stated, but yes. SAM: You talking about fractals makes me want to try to unpack that just a little bit. An example might be if you're reading a book. You're looking at a page and you are seeing patterns of light and dark on the page, and elements of your visual cortex are seeing, like, lines and then a curve next to the line, and they're saying, "That's not necessarily this pattern of pixels. It's a line," and then it's next to this other pattern of pixels, and then something up above that says, "Oh, those two things together, that's a 'D,'" and then something up above that sees several different letter patterns together and says, "Oh, that's a word," and then you read those words and you're building a sentence, and as you read and understand the sentence, you're putting it in the context of the narrative that you're reading as you read the book, and so on. JF: Yes, and that's why finding a missing semicolon in our code is so hard, or a typo -- noticing a typo is so hard -- because we don't read letter by letter. We have expectations that this word is going to be the name of our variable, but instead of an 'I' it's an 'L' and we don't see it because of our expectations. 
JESSICA: The prediction errors for that are at, like, the scale of a letter or a pixel or something, whereas the expectations of what we see in our code are at a much higher level. JF: Yes. JESSICA: So it's hard to get that little error up to that level. It gets overridden by what we expect to see. JF: Or we calibrate our attention. To go back to the example of reading a book: when you read the book, you could look at the quality of the font -- "Oh, this font is really good at doing 'A's but it sucks at doing 'E's." We don't, because that's not where we calibrate our attention, but you could calibrate your attention and go for the details. The same thing when we're debugging: we calibrate our attention at a pretty high level, looking for patterns of logic in our code, and then we say, "But it doesn't do what it's expected to do," so now we recalibrate and say, "Am I writing the words correctly or are there typos?" and then -- JESSICA: And it's so painful, zooming in and out to calibrate your attention. AVDI: It sounds like we haven't created our programming tools to match the way we think, because we're used to accepting a whole bunch of error at the noise level. We're used to accepting that, like, at the dots-of-ink-on-the-page level, there could be lots of errors and we can safely ignore that while reading a book and understanding the narrative, because the errors are swamped by the good signal. Programming, at least the way we've traditionally done it, kind of breaks that for us, because suddenly, a single error at that dot level -- JESSICA: And that stupid semicolon, or if you're in Clojure and it's like, "Is it a parenthesis or a square bracket?" or anything that uses, like, those symbols, because you've imbued so much meaning in a tiny symbol -- AVDI: Yeah, suddenly something at this low level of it, of the fractal -- JESSICA: Regular expressions. AVDI: One stupid backslash. JESSICA: Argh! How many times do I have to escape that backslash? Do I have to escape the escaping backslash? [inaudible] a string, is it a backtick? A double-quote-delimited string? Awww... SAM: I just dropped a link into the chat and we'll put it into the show notes, but it's an article from Cambridge and I'm just going to read you the bolded abstract. It says, "According to research at Cambridge University, it doesn't matter in what order the letters in a word are; the only important thing is that the first and last letter be in the right place." No word in that sentence that is more than three letters long is spelled correctly and I just read it at full speed. Our brains are apparently really good at pattern-matching, even when we get bits of the pattern wrong, so yeah, of course we can't find freakin' semicolon errors. JESSICA: And auto-correction. SAM: Yeah, we fill in what we expected to see. JESSICA: -- And we just pass back up the corrected information. JF: There's another problem, I think, which is that our programming tools are good at a certain level of that fractal universe, a certain dimension. Anything more detailed is very hard for us to find, but at the higher level -- the architectural view, the relationships between the various pieces of the software -- there are some tools, but I think most of the time, we live at the level of the text file. We're at the level of the tree and we miss the forest, and we can't see the leaves very well. JESSICA: Or the leaves that grow out of the tree when it's running in an actual real-world system. JF: I'm thinking of the leaves as, like, semicolon leaves. JESSICA: Oh, okay. 
SAM: Okay, you said the magic phrase, which is living at the level of a text file. I know at least three of us on this call have written at least some amount of Smalltalk. One of the things that was really different about Smalltalk was that everything was in this unified environment, where everything on the screen was controlled by the same language and you could jump up and down between different levels of abstraction. You could inspect anything that you saw. How did that change the way you approached software, as opposed to jumping around between a bunch of different text files? I assume you've done both now. JF: When I first encountered Smalltalk, I had an epiphany. I felt, when I was travelling through the code the way you describe, the way I would feel travelling inside a cathedral: I could feel the architecture. It was like I could touch it. It was like listening to a great piece of music and hearing and perceiving the way it is constructed. The first time I had a sense of architectural beauty was with Smalltalk, and more than that, it was the first time I had the sense that there's a community, that all the contributions are around this shared culture. There was a sense of culture and there was a sense of community sharing this culture, because that image is the contribution of so many different people, and it is so different from my experience of learning a programming language before that, which was: you learn the primitives, you learn the idioms, you get a sense of how you're supposed to use the language with some examples, but it's all very analytical. It's all very piecemeal. But then you go into Smalltalk and this whole thing already exists and it has this architecture to it, this sense of being immersed in the culture. That, I thought, was absolutely amazing, and that's something I wanted to recapture as much as possible, but I think we've kind of lost that by going back to text files. IDEs are trying to give us the sense of things coming together and the whole greater than the parts, but I don't think they're doing it very well. AVDI: I feel like I grew up in the post-apocalyptic wasteland that followed that culture, because I remember discovering artifacts of it, discovering WikiWiki and all these old Smalltalkers talking about patterns on WikiWiki -- the original Wiki -- and all of these little bits and pieces sort of crumbling together, and then I spent a ton of time in the Ruby community and there was definitely a big Smalltalk influence on that as well. It's very much the sense of, like, post-apocalyptic: there used to be a civilization here. JF: Oh, totally. I get you. I call it the great wasteland myself, because I came of age as a programmer in the golden age and I learned APL, Prolog, Snobol, Smalltalk, and there was this effervescence of ideas and paradigms. If you haven't looked at APL, please do. It is to die for. All of this was going on, and then Smalltalk, which was just like Xerox PARC for programmers -- all this effervescence of ideas -- and it was truly a golden age, and Smalltalk is part of that golden age. Then came C++ and Java and corporations saying, "It's either one or the other," and suddenly, all of this richness collapsed into a wasteland of programming languages, and that lasted for a decade and a half. Then with Ruby, we had this renaissance, and now we have Ruby, Elixir, we have Elm, and we have this proliferation of programming languages and paradigms, and it's now okay to choose the best tool for the job, because now it's not about corporate stodginess. 
It's about killing it, getting to the MVP as soon as possible, and you have this person who says, "I can do it, but let me use Elm and Elixir and I'll get this done and I'll do it quick," and the answer is go ahead, because I don't care what you're programming in. I want something that runs and that I can show to VCs. I want something where I can move fast. The corporate stodginess is gone and we have this 'let a thousand flowers bloom' era, and I hope it lasts as long as possible because I love it. JESSICA: That is a pretty good place to start talking about reflections. Okay, so we do a thing where each of us talks about something that particularly stood out, or something that we didn't get to talk about but wanted to, or something that we'll follow up on later. AVDI: Well, the thing that stood out to me, JF -- I don't remember exactly how you said it -- but the thing about trying not to do work but sometimes finding ways to allow yourself to be pulled into doing work. That kind of stuck with me. I like that. JF: It works. You know, we have basic fears: we have fear of death, fear of insignificance and sometimes, fear of public speaking, but the fear of ridicule is a powerful one, so when you make a commitment, that's a strong motivator. The public commitment is a strong motivator. JESSICA: So the systems thinking there is to [inaudible] and arrange the system as such. SAM: A couple of things came up for me during this talk. One was this idea of generating predictions and then, at least theoretically, correcting your model when those predictions turn out not to be confirmed by sensory data, which is another way of saying that you see what you don't expect to see -- which is not the way I would describe many humans. I was thinking about this idea of how humans are really good at letting their predictions override factual or sensory data. Then you got to this other thing where you mentioned that Alan Kay talked about how he shouldn't have called it object-oriented programming, he should have called it message-oriented programming, which is something that Avdi has brought to my attention a couple of times before, and then I started thinking about computer science education and how computer science as a field really wants to be math, and the kind of people who teach computer science want to be mathematicians, and they're really looking for big ontological models and systems of classification, and so, when they saw object-oriented programming, certainly the name was misleading, but they saw these incidental features of inheritance and thought, "Ah, that's what this is about. I can teach this and I can make myself feel smart and I can make my students suffer." Okay, maybe I'm making up that last part, but it got me thinking about how this human effect informed and changed the way we think about this foundational paradigm of computing. I'm not really sure where to go with that, but that's what I got. JF: If it's okay to reflect on someone else's reflection -- JESSICA: Oh, yeah. JF: -- I'd like to jump off on what you just said. It so happens that one of my first jobs, fresh out of college, was to teach teachers and their students two very different programming paradigms: Smalltalk object-oriented programming and Prolog logic-oriented programming. 
My belief at the time was, yes, learning to program is a really, really good way to develop your problem-solving skills, which is why in Quebec, we were introducing programming in high school based on Papert's Mindstorms ideas, that it really builds up and strengthens your problem-solving capabilities. But I thought, if you expose people to multiple paradigms of programming, if you help them look at a problem from, let's say, a conceptual object-oriented perspective and also from a logical, rule-based programming perspective, then the ability to switch between these two paradigms -- knowing that there is more than one, just knowing that there's more than one way of looking at a problem -- would be a meta-skill of great worth. I did that work with students and teachers, and I spoke at the first French community conference on the ethics of computer science, and that's where I proposed this. Actually, it created a bit of a stink, because it seemed to go against the received thinking that what really matters is to learn the one best possible programming language. JESSICA: Oh, man. JF: It's like, no. Multiple paradigms, I think, is something that's very important. I'm slightly worried that a lot of software developers coming to the field are essentially just learning variations on a single paradigm. They will learn JavaScript and then they will learn Ruby and then maybe Java, and this is all object-oriented, procedural thinking, but they're not exposed to functional programming. They're certainly not exposed to logic programming, and they lose the benefits of just knowing that there are multiple ways of looking at a problem. I think that's really important. JESSICA: Because if you have multiple ways of making a prediction, then it's a lot easier to accept the errors that you see in one of them. JF: Yes, actually, you're willing to make different predictions if you're in different paradigms, so your very perception of the world will be multidimensional. JESSICA: Yeah, and then you can use the one that makes better predictions in a given situation without being married to it. JF: Yes. JESSICA: The thing that stood out the most for me was when you talked about Smalltalk feeling like a cathedral, where you get a sense of the architectural beauty, and in particular, the meaning of beauty there for you included a sense of culture and community, of depth -- this Smalltalk image has been useful to, and made more useful by, many, many people. I watched a talk by Eric Evans the other day, which we'll link in the show notes, about 'Good Design is Imperfect Design,' and he challenges the audience to broaden your aesthetic, specifically to learn to see this kind of depth, this kind of deep, complex system that has been influenced by and has had value to many people. Learn to see that as beautiful, and this applies to a lot of buildings. I really like old cities and looking at the buildings in Paris or Budapest and the chips in the walls and the 18 different colors underneath. It applies to legacy software in any language, really, and I think it applies to other people, in that we're all different every time we step out of the shower or every time we interact with each other, and that doesn't make us easy or simple to get along with, but it does make us interesting. I'm looking for a word to describe that and I have yet to find a really good one. I think we'll have to coin one. JF: Pretty much all of the really exciting software developers I've worked with were artists. They were musicians, they were painters, they were poets. 
They all have this very strong and very developed sense of aesthetics -- the sense of beauty and -- JESSICA: Beyond just math. We can also appreciate math, but that is not the only beauty. Eric, in that talk, also talked about how if code is too consistent, if it's too perfect and the names are precise everywhere, then it gives a sense of finality, and then you're afraid to change it, and then you can't see those prediction errors that are coming from your actual user base. JF: Yeah. Math is extraordinarily beautiful, but I don't think an accountant looks at math as something very beautiful. They look at it as something practical and don't look at the aesthetics of it. I'm afraid that too many programmers may be like accountants, and they might be too practical. Does it do what it's supposed to do? Yes. Is it beautiful, elegant? I don't know what you're talking about, so I -- JESSICA: Yeah, either that or, "I find it elegant and therefore, it's right regardless of what you say it's supposed to be." JF: Very well. Yes, that's true. My elegance is not your elegance. JESSICA: JF, do you have any reflections? JF: Well, I reflected on someone's reflection, but if I were to give another one, I would say that something we didn't talk about is my extracurricular activity. JESSICA: I totally want to do... I to... [trailed off] I was hoping to ask you about Aikido. JF: Aikido is one of my passions. I spend way too many hours at the dojo and I'm constrained by my wife, who says she will not become an Aikido widow. Otherwise I would spend much more time than I do right now, and I spend a lot of time. I appreciate her forbearance. But Aikido informs everything I do. It's the Zen koan of martial arts. It's a martial art that evacuates conflict. It's a martial art where you don't have antagonism, or you eliminate antagonism. There's no competition. When someone attacks, your goal is to become one with the attacker, so that you move as one -- for a while, at least -- in a highly empathetic connection with your assailant, and then you protect yourself and you resolve the attack in a way that also protects the attacker from harm. It's a martial art where if you want things to happen, if you're greedy about the outcome, then the outcome is not what you want. It's self-defeating. It's a martial art that really goes against ego. Your ego gets polished to nothing if you practice long enough. Also, to go back to predictive processing, a lot of Aikido is informational warfare. You're essentially trying to hack the predictive processing mechanisms of the person attacking you. You don't want to push or pull where you're grabbed, you don't want to manipulate the person who's grabbing you; you want to create no prediction errors, basically, when you're doing the technique, which leads to interesting situations where you commit to a full strike -- you're attacking a master, you're committing to a full-strength, big strike over the head and moving at full speed -- and then before you know it, you're flying in the air and asking yourself, "How did I get here? I did not feel anything." I love it to bits, and it also changes the way you deal with people, the way you deal with problems. You don't think of them as a conflict. You think of it as: how can I join this person's worldview and how can we move together in a way that's beneficial to both? It's a very different world and I think it colors everything I do now. It's also where I can do some acrobatics at my age -- take falls, have my feet over my head going over the mat -- and just get my ya-yas out. 
So it's not like I'm only being philosophical about it, either. JESSICA: That's beautiful. Awesome. JF, thank you so much for joining us. JF: Thank you. It's been a real pleasure. AVDI: Thank you. SAM: Thank you. This was a lot of fun.