REIN: This episode is brought to you by PagerDuty. In an always-on world, teams trust PagerDuty to help them deliver a perfect digital experience to their customers every time. With PagerDuty, teams spend less time reacting to incidents and more time building for the future. From digital disruptors to Fortune 500 companies, over 12,000 businesses rely on PagerDuty to identify issues and opportunities in real time and bring together the right people to fix problems faster and prevent them from happening again. We're like the central nervous system for a company's digital operations. We can analyze digital signals from virtually any software-enabled system, and help you intelligently pinpoint issues like outages, as well as capitalize on opportunities, while empowering teams to take the right, real-time action. To see how companies like GE, Vodafone, Box and American Eagle Outfitters rely on PagerDuty to continuously improve their digital operations, visit PagerDuty.com. REIN: Welcome to Episode 150 of Greater Than Code. I'm your co-host Rein Henrichs, with my co-host, Jacob Stoebel. JACOB: Hello and I am here with our guest this week, Brian Lonsdorf. Brian is best known for his work teaching functional programming via JavaScript under the moniker of Professor Frisby. He is an architect at Salesforce and works in machine learning applied to UX. Brian, welcome. BRIAN: Hey. JACOB: We'll start the way we always do by asking you: what is your superpower and how did you acquire it? BRIAN: You mentioned that you'd asked me this question. So I was like, "I'm going to think about this question." And then I was like, "All right, I guess it was that I happen to have learned functional programming earlier than others." And then I was like, "That's not really a superpower." I ended up making a lot of little artsy, fun videos. I've been super interested in that. And I was like, "I think I'm creative. I can do this."
There's so many more creative people that can draw and do Claymation and After Effects and all these great things that I can't do. So, I guess maybe that's not it. I would say my superpower is I can communicate and listen in a way that others seem to think I'm good at. So I'm going to stick with that. [Chuckles] REIN: Yeah. BRIAN: Which I guess makes me want to get into teaching and stuff because people are like, "Oh, when I talked to you about it, it makes sense." I'm like, "Oh, cool." And I guess I acquired that because I need everybody to like me and ended up being a mediator growing up and [chuckles] definitely understanding that realm. REIN: So what would you say is the role of empathy in teaching or communicating? BRIAN: I try to understand other people's modus operandi, why we are doing what we're doing, and trying to get down to their motivations. And I think empathy is part of that for sure. Like, what are you feeling right now and how can I help you navigate tough conversations or hard things that you're trying to learn or whatever it is. Putting yourself in other people's shoes gets you down to kind of their level and what they're going through so you can better communicate. I think that's really, really important. What would you say though about empathy? I'm just curious because that's such a great question and I feel like you know a lot more about it than I. REIN: There are different forms of empathy, and one of them is called process empathy, which is being able to empathize with the experiences of others. And then another one is called empathetic rapport, which is being able to develop a trusting relationship with someone else. And for me, I think what you're describing is more along the lines of understanding how people experience the world, process information, that sort of a thing. BRIAN: Right. But yeah, definitely, I think it's important at every stage. Like you mentioned, it's not just the one version. But that's cool.
And I think that tends to happen a lot when I try to teach stuff because I'll pick something really ridiculous to talk to people about or try to teach them about, and then you get out of your comfort zone pretty quick. And so, it helps to have someone who can empathize with that. JACOB: You mentioned just a minute ago that you happen to have learned functional programming early. And I think that's a really empathetic way to put it because I think we have an industry where we put a lot of value in what you know and when you got in on it, and it's almost like it's a stock or something that you get in early. And yeah, I think just sort of acknowledging that you happen to know something is very different from making you an inherent genius. You know what I mean? BRIAN: Yeah, exactly. And I think there are people who are quite good at picking the next big thing, and so there is something to it. But at the end of the day, we follow our paths and we have our interests and sometimes the chips land in our court and whatever, to mix metaphors. [Laughter] REIN: What is it about functional programming that made you latch onto it and decide to devote so much of your energy towards learning it and teaching it, that sort of a thing? BRIAN: Well with that, I'm sold on the whole idea of it. I can give you the quick spiel. You write a program, tens of thousands of lines, and it's just procedural instructions. And this is where we begin and we're like, "This is not maintainable. How do we pull out chunks of code so that we can maintain this application, reuse parts and further the understanding of readers?" And so what you're doing is you're taking a chunk of instructions and putting it in an abstraction, that's what we do, right? And if that abstraction does not have any laws or rules of composition and that abstraction is rooted in its environment, then you're doomed from the beginning.
You can go so far with metaphors and learning and documentation and such, but having a solid unit that you can reason about, because when you're saying reuse, you mean compose. You want to be able to compose it later. And so functional programming is like, "Okay, we will establish rules around this unit of code and how it composes." Now in practice, it's still a big mess and everybody's still trying to figure it out. It's just as good and bad as object oriented or logic or imperative. But that's why I'm sold on it. So when I came to terms with that, I was like, "Okay, this is the thing I believe in." But other people tend to not believe in that or understand it. And most of the time actually, from when I started, they were thinking and talking about procedural code until React came out. So basically I wanted to spend a lot of time on it because I was just so sold on the idea and everyone around me was not. And I was like, there's got to be a way to explain this in a simpler way and then help us all as an industry level this up, because I'm still not satisfied with functional code. If we're all working on it, like we are on object oriented code, then I think it has a lot more potential. REIN: So you said, and I'm going to sort of condense and paraphrase it here, so tell me if I'm misrepresenting you. You said that functional programming is about sort of lawful composition. BRIAN: That's what I would boil it down to. After all these definitions I've heard over the years, I kind of landed on lawful composition, meaning if you have a function, it's a mathematical function, it has laws, it composes certain ways, it has guarantees. And the more you learn about functional programming with category theory and functors and profunctors and arrows and all these things, they're just fancy names for different ways you can compose these things within the world of laws.
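As an aside, the "lawful composition" Brian lands on can be sketched in a few lines of JavaScript. The `compose` and `id` helpers and the example functions below are hypothetical illustrations, not from any particular library; the point is the two laws that make regrouping safe.

```javascript
// A minimal sketch of lawful composition.
// compose and id are hypothetical helpers, not library code.
const compose = (f, g) => x => f(g(x));
const id = x => x;

const inc = x => x + 1;
const double = x => x * 2;

// Associativity law: how you group compositions doesn't matter.
const left = compose(inc, compose(double, inc));  // inc after (double after inc)
const right = compose(compose(inc, double), inc); // (inc after double) after inc

// Identity law: composing with id changes nothing.
const withId = compose(id, inc);

console.log(left(3), right(3), withId(3)); // → 9 9 4
```

These two laws are the kind of guarantee Brian is pointing at: they are what let you refactor a pipeline by regrouping its pieces without changing behavior.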
REIN: So Richard Bird, who recently wrote a book called Thinking Functionally with Haskell, which is one of my favorite books on functional programming, talks about compositional thinking. And it seems that his goal in that book is to get people to think compositionally as opposed to about application. BRIAN: Yeah. That's a beautiful mind shift because you're breaking down the problem, but then you're recomposing it. And most people break down the problem, but don't often also think about how to recompose it. REIN: Yeah. There's some interesting parallels here. For example, Russell Ackoff, who was a management theorist and operations research person and whatnot, he actually called himself an applied psychologist, which was interesting. One of his [inaudible] the interactions between pieces are at least as important as the pieces themselves. BRIAN: Oh yeah, that's it. That's the gem. The morphism. Very, very true. You can capture, I mean, it's a superset. You can't understand the interactions if you just have the pieces, but you can usually understand the pieces if you have the interactions. REIN: And so in category theory, you get it sort of drilled into you pretty early on that category theory is about the arrows and that the objects are really just there to connect the arrows. And for me this is sort of the basic essential difference between object oriented programming and functional programming. Although if you listen to Alan Kay, Alan Kay would say that the messages are what is important in object oriented programming. And the messages are the arrows between the objects in a somewhat rigorous sense. But a lot of object oriented programmers seem to focus on the objects and a lot of functional programmers seem to focus on the data types. BRIAN: That's super true.
I noticed very, very early on that like, "Oh wait, we're right back into dynamic dispatch on these type class lookup methods," or protocols or whatever functional language you're picking up has ways to say like, "Oh yeah, this function works with this data type. And when it calls it on this other one, we have polymorphic behavior," and you're right back to where you started. And then they run into all the same problems, like the typical, what's it called, the row versus column issue where you extend an object with more methods or you extend a function with more types. That problem persists in both paradigms. REIN: The expression problem. BRIAN: The expression problem. Thank you. In my mind, if you're focused on the interactions between things, that's a great start. But if there are no laws around how they interact, then it becomes very hard to continue to compose and build on something. And so, with Alan Kay's approach, he's brilliant and I totally think it's great, but I also feel like that, for me, is missing there. And that's why I gravitate much more towards the category theory side, and why I'm still not satisfied with my current programming skillset and paradigm, because I'm like still feeling pain daily [laughs] in functional programming. So, it's not a panacea by any means, and I think that might be one of the reasons. REIN: Okay. I have a question for you. If you want to do compositional programming, why are you doing it in JavaScript? BRIAN: [Laughs] REIN: And I don't say that to crap on JavaScript. I say that because JavaScript is a language that doesn't make composition ergonomic like Haskell does, for example. BRIAN: I do promote and very much enjoy PureScript and Haskell and other languages like that. I typically have not found a job that I want to do that's in those languages. And when I try to sneak them into my big company jobs, it doesn't usually fly. So I've found myself repeatedly being dragged back down into JavaScript land.
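The "row versus column" trade-off named above, the expression problem, can be sketched in plain JavaScript. The shapes and functions here are invented for illustration: with operations that switch on a tag, adding a new operation is cheap, but adding a new data variant means editing every existing operation. An object-per-variant style has exactly the opposite trade-off.

```javascript
// Hypothetical illustration of the expression problem.
// Data variants are tagged objects; operations switch on the tag.
const circle = r => ({ tag: 'circle', r });
const square = side => ({ tag: 'square', side });

// Adding a new operation is easy: just write one new function.
const area = s =>
  s.tag === 'circle' ? Math.PI * s.r ** 2 : s.side ** 2;

const show = s =>
  s.tag === 'circle' ? `circle(${s.r})` : `square(${s.side})`;

// But adding a new variant (say, triangle) would force edits to
// area, show, and every other existing operation. That is the
// other axis of the row-versus-column trade-off.
console.log(area(square(3)), show(circle(1))); // → 9 circle(1)
```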
And I just believe in the paradigm, so it kind of ended up that way. However, I will always, always first choose a language like Elm or PureScript if I can. And I should say though, those languages are really hard. They're really hard to learn. They're really hard to write programs in. Even if you get really good at it and it's [inaudible] just second nature for you, it's not as easy as many other languages for your teammates or members to learn and work with. So, empathy comes into play there for sure. Even doing this in JavaScript is harder. REIN: But you also want to teach, you want to sort of evangelize this style of programming. Stephen Jay Gould has this idea that he applies to biological evolution and cultural evolution where you have this population that has a bunch of variety, and this variety can span between some range, and there are limits imposed on that [inaudible] response time for a server, there's a left wall at zero. A thing can't take zero time, right? But the right wall is potentially infinitely far away, or in fact it is infinitely far away, because the server can just never respond. Or look at sports performance: there is a right wall to, let's say, batting averages in baseball. No one's going to hit .400 anymore. It's not going to happen. So there's a right wall imposed by the structure of what people are doing, by biology and so on. I think there's a right wall somewhere in the ergonomics of compositional functional programming, where Haskell is pretty close. It is pretty easy to do compositional programming in Haskell, and JavaScript is not as close. There is a lot of room to teach people how to do compositional programming in JavaScript. So if you're looking for a place to have an impact in terms of being a teacher or a communicator, there's more opportunity in JavaScript than there is in Haskell. BRIAN: Yeah, that's interesting. I don't know, this is kind of silly.
But back in the day, I went through my punk rock phase and I went through my hip hop phase and just got into all different types of music, and there was always a band that kind of was the gap, the lily pad into the world. So for punk rock, some people got into NOFX or Blink-182 or something and then they got way into gnarly underground stuff right after that. But with hip hop, you're like, "Oh yeah, we'll go with Dr. Dre and Wu-Tang." And then later on, you're actually listening to decent music. I mean, those are great too, but you get into stuff that people just aren't typically familiar with. And I see JavaScript as that bridge, like the gateway drug to other things. So I had a silly idea at some point where we just come up with the worst possible programming ideas: what could be a terrible language feature, one that would just ruin applications? And now, let's make it. And there's two options I would've gone with. One is Lisp and the other one's JavaScript, to try to implement language features that don't actually exist yet, because they're so flexible that you can kind of get away with a lot of stuff. So just to throw it out there, you could come up with a language feature where you have a 12-sided die and it's like a data type and it just randomly grabs one of the pieces of data off that 12-sided die. You can never actually understand what's fully in there. So it's like data production by impossible non-determinism. But you can implement that in JavaScript and pretend it's a language feature. But anyway, the point is JavaScript is really flexible and I think you're right, there's a big range there to hit a big spread of people who may not be familiar with or want to even learn Haskell but might want to benefit from the idea of lawful composition. JACOB: I'm one of those people. I write JavaScript and TypeScript in my day job. I don't really think I can say I know anything about functional programming.
What problems of mine can I solve by learning functional programming? BRIAN: For me, the thing is people will push back on abstraction often in this day and age, and it's wise. People are like anti-DRY programming. They're into like, "Okay, let's just repeat yourself because it's way worse to have one thing trying to act like more than one thing." And when you start to create abstractions and you're using objects as metaphors for the real world, you start to find yourself naming things like Processor and you're like, "Well, there's no meaning to this." We just couldn't come up with a name that makes any sense because it's not a thing in the world. And so you stop inventing abstractions that need more knowledge around the context of them. Sarah Mei has a great talk about Livable Code where it's like you're a bunch of roommates in a house and you're all sharing this stuff together, but you kind of have to know the rules of the land. You have to understand every metaphor and every name. And when you start to go into the functional realm, you can pretty much benefit right off the bat from treating things as input-to-output functions and starting to capture some of the ideas from abstract algebra in your code that enable you to understand how this thing could assemble at the calling time. And that just tends to build up on itself to where all these rules carry through as you build more. So I think, just to answer your question, you would benefit almost immediately from lawful abstraction and people being able to just come right in and look at a type signature and work with it from TypeScript without having to understand any metaphors. But then depending on how far you want to go with it, you can get into way super pure code where it all composes all the time. But you're maintaining this massive composition machine and then you end up right back where we started.
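The abstract algebra ideas Brian alludes to can be made concrete with a monoid, a type with an associative `concat` and an identity element. This `Sum` type is a made-up sketch for illustration, not from any library, but the pattern is what Fantasy-Land-style JavaScript libraries standardize.

```javascript
// Sketch of a monoid: an associative concat plus an identity (empty).
// Sum is a hypothetical illustrative type.
const Sum = x => ({
  value: x,
  concat: other => Sum(x + other.value)
});
Sum.empty = () => Sum(0); // identity: concat with empty changes nothing

// Combining Sums only ever produces another Sum. No new kind of
// thing appears, so anyone reading the type signature knows how
// the pieces assemble without learning a metaphor.
const total = [1, 2, 3, 4].map(Sum).reduce((a, b) => a.concat(b), Sum.empty());
console.log(total.value); // → 10
```

Because `concat` is associative and has an identity, a fold like this can be regrouped, parallelized, or given an empty input without special cases.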
[Chuckles] JACOB: So, objects that are named something, and baked into that name is meaning that may or may not be documented. And so we're trying our best to not do that. [Chuckles] BRIAN: Right, you have to decode and understand this world of metaphors with objects, versus having in code a kind of known mathematical structure to things. And there's a handful. If you know about 10 of these, if you understand groups or you understand semirings or whatever, you can... REIN: [Coughs] Monads. BRIAN: [Chuckles] Yeah, monads. You start to pick up different abstractions, and then with almost anything, you end up becoming more of a detective. You're like, "Ooh, I have this thing. What is it? Let me look at its type signature. Let me try to generalize it." "Oh, okay. Look, oh my goodness! I have an operator here that can work across many things. I can do this in parallel, it becomes a monoid or whatever." And so by doing that, you start to build on abstractions in a very principled way. Just to quickly summarize, if I have two objects, an A and a B, and they're both of the same type and I combine them, I'm just going to get a new one that's of that same type, and I don't have any new abstractions. And that's typical of almost all abstract algebra programming: you're trying to compose two things into one new thing of that same type. So if I take two functions and I compose them, I get a new function. If I take two accounts and I compose them, I get a new account that's probably merged accounts or whatever. When you start to go into objects that are much more metaphor based, the composition typically lends itself to adapters, and you end up with extra pieces as you go, and it just continually grows. So you go back to naming processes and then putting rules around them, rather than continually naming processes and stopping there. REIN: Gabriel Gonzalez has a blog post, and I think this is what you might be alluding to. It's called Scalable Program Architectures.
BRIAN: Totally the one. Read that one. REIN: Yeah. I really like that one. I'll just quote the first two things because it's a pretty good explanation. Conventional architecture is: combine several components together of type A to generate a network or a topology of type B. Whereas what he calls the Haskell architecture, but what I think of as the monoidal architecture, is: combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts. And so this idea that you can build up stuff without having to create a new kind of thing and learn what it's like is, I think, really interesting and important. BRIAN: Right. And I think that's totally exactly what I was thinking of when I was saying that. So, good call. I think what's interesting about this whole kind of functional programming stuff is I'm shifting over into more of a machine learning data science role. And I'm finding that a lot of these mathematical ideas carry through: trying to talk about distance functions and different ways to handle probabilities and [inaudible] search spaces and distributions as monoids. These ideas carry over, whereas I guess the single responsibility principle and all these things are still available for thought, but they're not as useful daily. So I think that's really, really important knowledge to have and learn, and it carries with you. Did you ever learn basic lambda calculus and think, "When is this ever useful?" And then all of a sudden you're dealing with type-level functions and you're like, "Oh my gosh, it's useful now." [Laughs] REIN: That level of thing is really interesting to me because we've moved from the realm of something that is very simple and abides by known rules in a predictable way, composition of functions, to the realm of something that is too complex to fully describe.
We use machine learning because we don't know how to program a decision tree to make a program do the thing. BRIAN: Totally right. And in fact, we have a rule on our team that you have to write the program first before you try to beat it with a prediction. Because you might be able to get away with 99%. Like it's too complex, but the stuff you couldn't capture is so irrelevant at that point that it doesn't matter. Or you can write a great rule set that behaves perfectly for a certain domain, but as soon as you expand that domain, you can't do it anymore. And that's still beating it through generality. But I think that's a really great rule because you don't have a benchmark if you're just doing predictions. You're like, "Hey, I got 96%." And it's like, "But how does it really work?" [laughs] But you're totally right. We're moving into that realm and it's actually really cool. You can do non-determinism with monads, the list monad, and you can kind of capture probabilities and combine probabilities with a monoid and so on. And so you can even start to use stuff like lattices to climb up and down, and it becomes a really valuable set of skills that are beyond just, "Now my programs are easier to read," if you're familiar with all of these things. [Laughs] I think it's really cool. As far as the teaching and all that goes, I think that the JavaScript stuff definitely is a different set of walls. You'll hit a wall in JavaScript where you just can't do some of this stuff. Recursion schemes come to mind. I tried to encode recursion schemes in JavaScript because I was very excited to walk the entire DOM and hit it with multiple functions and try to do stuff like comonadic annotations on nodes. I was very excited. And come to find out, both Scala and Haskell have really nice implicit coercions if you can provide a function, like a witness to the natural transformation or whatever.
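The list-monad non-determinism Brian mentions is directly available in JavaScript: `Array.prototype.flatMap` is the list monad's bind. The coin example below is invented for illustration.

```javascript
// Sketch: the list monad models non-determinism. flatMap is bind:
// each value branches into all of its possible outcomes, and the
// branches are flattened back into a single list.
const coin = ['H', 'T'];

// Two coin flips: every first outcome branches into every second outcome.
const twoFlips = coin.flatMap(a => coin.map(b => a + b));
console.log(twoFlips); // → [ 'HH', 'HT', 'TH', 'TT' ]
```

Each element of the result is one possible world, which is why people describe the list monad as "pick one from a number of choices."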
So you can just say like, "List can be represented like this or this," or, "My array can be a list." Boom, do it. And in JavaScript you're continually working to convert your functions, your types into fixed-point types if you're going to do recursion schemes. And it's just so not fun, and you end up with types that are really hard to work with. So, JavaScript is not great for stuff like that, which went way, way, way further in Scala. I don't know if you've seen, but Matryoshka in Scala, it's amazing. REIN: One of the first books I tried to read on category theory was Mac Lane's Categories for the Working Mathematician. And that was a mistake, it turns out. [Laughter] REIN: But one of the first things that he says, what he basically said, is that the whole reason he invented category theory was to get to natural transformations. And he also said that Kan extensions, which are an elaboration of natural transformations, are everywhere. And it's interesting, you're talking about how you can get up the abstraction ladder in Haskell all the way up, or as far as you need to go. And in JavaScript, you get stuck below what a category theorist would say is where all of the interesting things are. BRIAN: I mean, you can encode Kan extensions in JavaScript, but they're not going to do much for you. And I think when you see Edward Kmett's code kind of exploiting all these ideas in Haskell, some of this stuff ends up being really useful, and some of it's just a fun academic exercise. Actually Chris Penner, big fan of his, has just been doing so many cool practical things with this really heavy machinery from category theory. And the allure, if you're in the same boat as me, is that if you can have these lawful abstractions, awesome, let's use them as much as possible because then we can keep building on those. But as soon as you stray away and just have some kind of code that does whatever, then it might be better for that one specific case.
And there's this tension between the very general lawful version and what you're actually trying to accomplish in your program, and sometimes it's not quite right. I think Elm's creator Evan Czaplicki was talking about that in a forum at some point. He was like, "All these type classes are great, they're always just almost what I want, but they're not exactly what I want." And I think you could shoehorn your problem or formulate your problem in certain ways to get that to work. But a lot of the higher category theory stuff feels like that to me. I get there and it's not quite what I need. But there's amazing applications like transforming lazy to eager, for instance with representable, just the [inaudible] stuff. And I think Kan extensions, left and right: one of them is eager, one's lazy in a lot of cases. This is where the limits of my knowledge start to hit. Like those are intense. [Chuckles] REIN: Even in Haskell, the vast majority of what practitioners of Haskell are doing does not explicitly use Kan extensions. It does not explicitly use the machinery of category theory that's available to them. It's generally true in most disciplines that there is a research-practice gap. Pick a field where there are both researchers and practitioners, and the researchers don't talk to the practitioners and the practitioners mostly don't talk to the researchers. And it's true in functional programming. Folks like you that are attempting to build a bridge between what Stephen Shore calls the two islands in the same sea, I think, are doing a useful service. BRIAN: If nothing else, it makes people aware of some of the cutting edge ideas, but there's no feedback loop. It's always this one way. It's like, "Go look at academia." And for me, I have to fight through a white paper for a week, sometimes skipping parts that I don't understand and revisiting and watching another video to come back and try and understand a part.
There are people that are much better at that than I, and they can just sit down and read it in one sitting. REIN: I don't know that that's true. Edward Kmett gave me the best advice I've ever gotten about how to read a paper. He said read the abstract and the first section and file it away in your brain. And do that for a lot of papers. And then what will happen is you'll see a problem and you'll go, "Oh, I know something that might be relevant here." And then you'll go find that paper. And then the second piece of advice he gave me was when you read the paper, pick fights with it, challenge it. If the paper says, "When this, this, and this are true," say, "Why does that need to be true? What happens if it's this way instead?" And he pointed out that if you do this, then one of two things will happen. One, you will have a new result that could be published. Or two, you'll better understand why the paper is written the way it's written. BRIAN: Exactly. That was really cool. I remember Phil Freeman did counterexamples for type classes. And without them, it's really hard to understand. Like, "Here's all the lawful ones," and you never see counterexamples for some of these. It's such a great exercise and I think that's great advice. And it's how I read books too. I'll read a book and be like, "Yeah, yeah, yeah, yeah, yeah. This is talking about this, I'll come back if I need it." So, it's good advice to look at abstracts like that. Although I think for me, when I usually read a paper, it's because I'm like, "Okay, I know it. I want to learn this. I've found the paper I want." So I think I need to do a better job at indexing other things so I can go find them. Usually I find something I'm very interested in learning and then I have to [inaudible] for a long, long time. And then, it helps to code it in a programming language. You take the math or the ideas or whatever, and then you make it work in a program the way you can make it work.
And that leads to much, much better understanding, or solidifies your misunderstanding. But typically, if you encode an idea with your own code, it really helps me learn a lot. REIN: So you were talking about finding applications for some of these concepts? BRIAN: Yeah. REIN: I think we have good reason to believe that these concepts are generally applicable because of the Church-Turing correspondence between Turing machines and the lambda calculus. So it seems likely that when we're programming Turing machines, we can do it with lambda calculus, which is a category. So it seems reasonable to assume that category-theoretic concepts apply pretty directly to programming, but programming isn't an end. It's a means for humans to do stuff. And it's less clear that human society is organized as a category. You know what I mean? It's less clear that the things that computers can do are the things that we need to be able to do. It seems more that we've organized human society around [inaudible] rather than the other way around. BRIAN: Right. I think that hints at the empathy of being like, "All right, now we're all going to learn this rigid style and we're all going to stick with it because our language can't express everything." And then we all have to rely on each other to express this stuff. Or it's like TDD, right? How do you know that everybody is writing perfect coverage and all their stuff is tested without rigid code reviews? Yeah, it's a really good point. Especially if you're in a language that helps you be correct more, like we're writing Agda code or something, you could end up being much less reliant on other people, but then again, it's harder for people to join in. So anyway, there's these tensions. REIN: So I'm going to try to tie this into machine learning now. BRIAN: Okay.
REIN: The strong AI period that started with Turing in the 1950s was based fundamentally on the assumption that there was an equivalence between human brains and Turing machines. And it turned out that that's not true. And then we had the AI winter where we were very pessimistic about AI. And modern advances in this area are not based on the assumption that you can encode a brain as a very complex decision tree. They're based on something pretty different. Machine learning is not like that in a fundamental way. BRIAN: Right. Very much agree. So yeah, when we're talking about categories, I'm just keeping this all in my head. So we've got categories and humans, and a disconnect there, even though there was some equivalence with Turing machines, and now artificial intelligence is not based on that. REIN: Modern artificial intelligence is in some way a way to recognize what's happened in cognition research and such in the last 50 years, where it seems to be the case that human cognition is continuous and we are still working with computers that are discrete. But if you look at the behavior, like if you look at a neural net and you watch what it's doing, its behavior does not seem discrete, I think. BRIAN: It doesn't seem discrete in that, yeah, it can fill in the gaps and fill in the blanks. I'm extremely interested in generative AI, much more than predictive, discriminative kinds of models. And that also hints towards that. You can project into the future really far, almost as if the computer has an imagination, and that generation fills a lot more gaps than "this is what I've seen in the past, this is what I think will happen." REIN: For a while now, there's been pretty good scientific evidence that human cognition is continuous on the scale of weeks to months. Like our changes in beliefs and attitudes are continuous. We don't have one belief and then at some point in the future, now we have a different one. We move gradually from one place to the other.
But there's also a growing body of evidence that human cognition is continuous on the timescale of milliseconds. That mental states are actually places in a high-dimensional space and that we move continuously through that space. And that, for example, recognizing a word is an approach towards a particular attractor within that space. So there's this growing idea of cognitive gravity. BRIAN: Interesting. I've never heard of this and this is amazing. Thank you for talking about it. REIN: There's a paper that I'll link to. BRIAN: To tie it back to functional programming, you could totally encode spaces with comonads, and that's a fun tidbit. I've been spending my time trying to apply ML to generative models and design and UIs and other kind of areas in that realm. And it gets really exciting when you think about it that way: we're in this space and there's all these spatial dependencies, and neural nets are capable of learning -- to come full circle -- the morphisms, the arrows in between the points, and filling those gaps. I forgot who it was, the father of AI or something, said everything just amounted to curve fitting. But I take much more of an optimistic view: we could train machines to work a little bit more like we do. I don't know if we'll ever get to our full cognitive abilities, or maybe we'll go beyond it in certain verticals. But as far as the generative side, I think that spreads that space much more than a prediction given a bunch of features. REIN: Speaking of generative computer things, you know what L-Systems are? BRIAN: Yeah I do, but I don't. REIN: With L-Systems, you've seen the images of this thing that looks like a plant with multiply repeated but slightly different variations of leaves and such. That can be generated by L-Systems. L-Systems were originally used to study the growth of cells.
And so what an L-System is, is a parallel rewrite system. You take an alphabet, a set of characters, and you take a string of those characters, and you have a rule for turning one character into zero or more characters, and you apply that in parallel to every character at the same time, generating a new string. And that's the next step in the generation of this L-System. And then you keep doing that iteratively over and over again. BRIAN: So it's like a comonadic milling machine or something. REIN: It is. Exactly. So the step is a monadic bind. BRIAN: Nice. REIN: So the monadic bind for lists, in the case of strings, replaces a character with zero or more characters. BRIAN: Oh, there you go. REIN: And it does it at the same time for every character in the string. So the monad for lists is a parallel rewrite system for lists. BRIAN: Interesting. I like that. I've heard of this -- I think probably learning about Markov properties and reinforcement learning it must've come up, and I've seen the term a lot. I've never heard the definition so clearly. That's wonderful. Thank you. REIN: The interesting thing for me there is that this abstraction, the monad, which gets some bad press, is often sort of reduced to being about computation in some way or being about containers in some way. But there are many different interpretations even for the same monad. So a lot of people think of the monad for lists as being about non-determinism. You mentioned this earlier. The monad for lists is pick one from a number of choices, right? BRIAN: Yes. REIN: But the monad for lists is also about parallel rewriting as an instance of tree rewriting in general. BRIAN: Right. I totally agree with that. I think there's all those different -- you kind of learn this abstraction on an abstraction and like, "Okay, I know what a monad is. Now I need to know what the concrete use case is in Either, and then what do I use Either for?"
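The parallel rewrite Rein describes -- with the step as the list monad's bind, which is `flatMap` in JavaScript -- can be sketched like this. The rewrite rules here are made up for illustration, not taken from any particular L-System:

```javascript
// Hypothetical rewrite rules: characters without a rule rewrite to themselves.
const rules = { F: "F+F-" };

// One generation: every character is replaced, in parallel, by zero or
// more characters. This is exactly the list/string monad's bind:
// flatMap each character into its replacement, then flatten back.
const step = (s) => [...s].flatMap((c) => [...(rules[c] ?? c)]).join("");

// Iterate the rewrite to grow successive generations from an axiom.
const generate = (axiom, n) =>
  n === 0 ? axiom : generate(step(axiom), n - 1);
```

Applied to the axiom `"F"`, each generation rewrites every `F` simultaneously -- the "parallel rewrite system for lists" reading of the list monad.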
And then there's more abstractions. It gets really hard for people to keep that in their head. And it can be worthwhile to walk through different ways of thinking about each of these and apply them in a bunch of different situations, being like, "We can think of it this way. Let's use it like that. We can put it this way, let's use it like that." There's not enough of that out in the world. REIN: We were just talking about generativity. And the thing that I really wish I could convince people of -- we're talking about systems with laws, and a lot of people think of these laws as creating limits on what can be done, on what's possible. But if you look at the monad, for example, and all of these different interpretations and all of these different ways that it can be used, many of which weren't conceived of by the category theorists who first came up with the abstraction -- these systems of laws are generative. BRIAN: Yeah. Are you saying in terms of how they interact with each other? REIN: That, and also in the sense that they don't reduce the space of possibilities, they actually enlarge it. BRIAN: Right, yeah. Constraints liberate, liberties constrain. Exactly. And it's hard for people to feel that. Like, "Oh, I'm putting handcuffs on my program," but all of a sudden now I have all this freedom in refactoring and understanding how people use it and flexibility. It's counter-intuitive for a lot of people. REIN: Yeah. One of the other things that monads are for, and you can't see my quote, but I'm doing air quotes, is they provide a principled way to refactor [inaudible] ways to move things around that are correct in procedural programs. BRIAN: Yeah. That's really interesting to me too because, funny enough, it comes up quite a bit with Async/Await and Promises in JavaScript. People tend to be like, "Oh well, this is a new feature. I'll just switch my Promises to Async/Await." And you're reverting to procedural when you do that, effectively.
And then you get people that are typically like, "Oh, I want my code to read nice and I don't want this big pile of [inaudible] that I have to climb through to read my code." But the big difference there, I think, is the mechanical refactoring of this step by step process where it's just spilling variables out every which way as it goes. As you read your instructions, you just get more and more state because they're assigning your async calls to a variable. Whereas in a promise, you're capturing it in a minimal closure and you can throw away the variable from then on out. We're done. That's usually when I tell people, "If you need a lot of variables at the same time -- that is, if your for comprehension is going to be a lot of lines, and you're going to continually grab more state and use all of it at the end -- then you should consider Async/Await." And if you're not, then that's probably the wrong thing to do. You should always gravitate towards the constraints rather than the liberties because then you have more things [inaudible]. REIN: I have a couple comments on that. I think that's a really interesting way to think about it. I mostly agree. I would say that Async/Await is the do-notation encoding of Promises. That is the first thing. And the second is that in Haskell, do-notation, this procedural way to work with monads, desugars directly and without any fuss into monadic operations. And so, it is very easy to show for yourself that the change you made to this procedural bit of code in a do block desugars into something that you can prove is a correct transformation by applying things like the monad laws and the free theorems you get from the types of the things you're working with. And one of the things that's great about working with Haskell is you can say that, whereas in JavaScript you can only sort of suggest it. BRIAN: Right. Again, you can't often enforce laws -- or you can't at all yet, even in Haskell.
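A concrete sketch of the trade-off Brian describes -- the function names here are invented for illustration, not a real API. The Promise chain keeps each intermediate value in a minimal closure, while the async/await version accumulates variables in scope until the end of the function:

```javascript
// Hypothetical async steps, stubbed with resolved Promises.
const getUser = (id) => Promise.resolve({ id, teamId: 7 });
const getTeam = (teamId) => Promise.resolve({ teamId, name: "core" });

// Promise chain: each closure captures only what the next step needs,
// so `user` is out of scope by the time we're looking at `team`.
const teamNameChained = (id) =>
  getUser(id)
    .then((user) => getTeam(user.teamId))
    .then((team) => team.name);

// async/await: reads procedurally, but every awaited result stays in
// scope as a variable for the rest of the function.
const teamNameAwaited = async (id) => {
  const user = await getUser(id);
  const team = await getTeam(user.teamId);
  return team.name;
};
```

Per Brian's guideline: reach for async/await when many steps genuinely need each other's results at the end; otherwise the chain's tighter scoping is the constraint worth keeping.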
And so there's still a gap there, but it helps you so much more. And the compiler will take the place of your constant anxiety. REIN: In Haskell, the laws are external to the system. They take the form of documentation. They're not encoded into the types, like they can be in other languages. BRIAN: But yeah, you're right. That's the tool. You have your procedural style that desugars into a nice composition, and you can choose the composition when you'd like to get rid of these variables, the stale variables kind of growing in your code. And that's usually the guideline I follow when I choose one or the other. But I'm interested in your thoughts on when you said "mostly agree" -- if there's other times that you would rather choose the do-notation. REIN: Well, I'm not sure if I would rather choose do-notation. I personally think that the choice to use monads that Philip Wadler made, I guess it was back in the early 90's or something like that -- I don't remember exactly when the Monads for Functional Programming paper came out. But it was an application of category theory to the practice of programming. So, it was an attempt to bridge the research-practice gap that we talked about before. But it's not the only way that you can conceive of effectful computation. There are some other ideas that I think are more compelling that haven't really bridged the researcher-practitioner gap yet. A lot of them come from Conor McBride. He has a programming language which he called Frank. So named to be -- I'm not going to be able to do his prose justice, but the paper introducing Frank is called Do Be Do Be Do because he always has the best paper names. And it is basically about how, I'm sure he's going to disagree with me, but I think it's about how [inaudible] between doing and being. BRIAN: Interesting. So almost like you're splitting up your declarative program into expressions that are just facts versus expressions that are actually going to go do, kind of like... REIN: I think so.
I'm not sure that I really understand Conor McBride. BRIAN: [Laughs] I saw one of his talks and he's like, "Here's my functor kit," and it's just like, boom! All of a sudden, higher-order functors. I was like, "Oh my gosh, okay, here we go." REIN: I like the one where he talks about how much he likes to define binary functions that take three variables, like if they're higher-order. BRIAN: Nice. REIN: So the idea of Frank is: what if effectfulness was the default? What if the ambient environment in which computation was performed was effectful, and what if certain computations could select which effects they allow or produce? BRIAN: Yeah, fair enough. Sort of like coeffects where you're declaring in the type -- well, there's a bunch of -- REIN: Yes, but the duality here is that it's a form of sort of effect inference, almost, where I know what effects I need because of the effects that previous computations required, and the effects that my computation produces determine the effects. So in other words, if I'm a computation that has to read from the file system, I have to be executed in an environment which allows reading from the file system, or it could also allow other arbitrary things. BRIAN: Yeah. I remember reading a little bit about that. There was a lot of excitement maybe three years ago about coeffects and environments. Coeffects I think now have definitely kind of solidified as, like, try-catch generalized or whatever. But for a while there, people were talking about how you can inject an environment or type-verify an environment in the same breath as coeffects. So, I'm wondering if the term split. REIN: He's pretty explicit about this not being an effect system and being an alternative. I'm not going to attempt to justify that claim because I don't understand it really well enough yet. But I definitely recommend reading the paper. I also wonder how many of our listeners we just lost. [Laughter] REIN: But let's try to bring this back.
Let's try to relate this back to something that people who aren't functional programming nerds might care more about. I would love to just talk about this for two hours, but I suspect that not everyone feels the same way. BRIAN: The way I look at it is, essentially what we're talking about when we abstract [inaudible] into compositional units is you have to capture their effects or else they don't compose. There's lawlessness. It's total anarchy if they're able to do whatever effect they want, because you don't know if you can call this function a whole bunch. And perhaps there's a new set of laws -- like idempotent, not the mathematical version, but the programming version of that -- that might hint to you, "Okay, I can't call this function more than once. It's probably doing something bad." But I think when we're talking about tackling effects in programs, which is the big divide between pure functional programming and imperative programming, it becomes really difficult if you pick one of the five strategies I've seen out there: coeffects aren't really ready; free monads are tons of boilerplate; I don't even believe that object algebras or final tagless is a fully baked idea -- seems like it's just subclassing, I don't know. So you can go with interpreters and instructions, or you can go with monad transformers and [inaudible] giant thing. Or you can go with the ZIO/RIO approach where you kind of inject your effects via the reader monad. There's a handful of ways to achieve effects and all of them still fall short of being something that we really want. So I think if the industry is more aware of these and working on that together -- ideas like Frank; I'm going to go check out that paper -- hopefully we'll have more of those ideas. We still want to capture the laws, still want to have simple composition, but we don't want to have to manage all these really complex types to do it.
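The last option Brian lists -- injecting effects via the reader monad, in the style of ZIO/RIO -- can be sketched in JavaScript as "a program is a function from an environment of capabilities to a result". The environment shape here is invented for illustration:

```javascript
// A reader-style program: it never performs effects directly, it asks
// the environment for them. Swapping the environment swaps the effects.
const program = (env) =>
  env.readConfig().then((config) => {
    env.log(`loaded ${Object.keys(config).length} keys`);
    return config;
  });

// A test environment wires in fake effects; production would supply
// real file reads and real logging with the same shape instead.
const testEnv = {
  readConfig: () => Promise.resolve({ port: 8080 }),
  log: (_msg) => {}, // swallow logs in tests
};
```

The appeal is the same one Brian raises: composition stays simple (plain functions of `env`) while the effects remain capturable and swappable, without the heavier type machinery of transformers or free monads.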
REIN: I also think it's interesting that there are these freer-effects libraries and things like that, concrete things that practitioners can use. They are in some sense bridging the researcher-practitioner divide, but they stop at some point before it's really practical -- not for the people who write libraries, but for the people who just want to use libraries to get their job done -- to pick them up and use them to do their work. And I wonder, like, part of the gap between these two islands is getting smaller. We're building the bridges, but the gap now seems to be more in the area of: we know how to make a thing in a programming language that embodies this concept, but how do we actually make it something people want to use? And that's where we get back to empathy. BRIAN: Yeah, that's a great tie-in to that. REIN: I did it! BRIAN: Yeah. And I think that is the key: if you're an academic, putting yourself in the shoes of someone just trying to get something done, or you're trying to get something done and you're putting yourself in the shoes of, how do we further our industry, how do I do this the best I can possibly do it? Perhaps there's empathy on both sides that could really help bridge the gap even more -- to get academics to think like practitioners and vice versa. Maybe we could start to move the needle a little bit that way. But at the same time, I think a lot of this is really fun side project stuff, and part of the allure for me at least is like, "Ooh, what if I find the best use for this idea and we come up with the coolest app?" Promises have been around forever and then somebody decided to port them to JavaScript. Awesome. Some of these new ideas that are coming out -- if somebody brings them over sooner, we could have so many more toys to play with and ways to solve our problems that are nicer.
And if we have the community chiming in, I think that creates that conversation that could lead us towards the APIs we enjoy. Denotational semantics can help bridge that gap also, I suppose, because you can use it to help understand APIs. By the way, have you seen Conal Elliott's talk on functional programming for machine learning yet? REIN: No, I haven't. BRIAN: I haven't either. Can't wait to watch it. [laughs] But I suspect some of this is in there. REIN: I would like to interview him again. BRIAN: He comes down to the Hacker Dojo in, was it Redwood City or something? Mountain View, I think, every once in a while for Haskell meetups. Nice person. REIN: He is a nonviolent communication practitioner, and genuinely one of the nicest people I've met. And yeah, big fan of his. BRIAN: Yeah. I'm always blown away by his work, nonstop, which is really cool. I think I've lost a lot of motivation myself in doing some of the stuff he does because he's just so good. I'm just going to wait for him to do it and then learn from it. REIN: He's also a great communicator. Another interesting thing that happens is that people take great ideas and then there's a telephone game that happens between the idea and the implementation. So, you could pick almost anything for this -- agile, for example. One of them is functional reactive programming. It must be very difficult for Conal to resist the temptation to tell everyone doing FRP to get off his lawn. BRIAN: [Laughs] Yeah. That is a testament to his ability as a great communicator and listener. REIN: He's like, "FRP is about this very specific thing." And everyone's like, "Oh, it's about other things. Great. Cool. We're still going to call it FRP though." BRIAN: Yeah. If I remember his papers correctly, it was really about capturing the essence of time through [crosstalk]. REIN: Time being continuous, by the way. Continuous time.
BRIAN: Continuous time as a concrete, tangible data type that you're working with, I think. And then you end up with all sorts of, "The revolution of Reactive Programming in Scala. We're going to merge the actor system with these event streams and call it a thing." REIN: So this is another interesting possible example of a thing we could have an entire other show about, which is what I think is a general human tendency towards discrete representations of continuous phenomena. Human brains like it when things are discrete; we're very good at taxonomization. We like to put things in categories even when they don't fit. Most things don't. [Crosstalk] say, "Here's this boundary. These things are on this side of it. These things are on the other side of it and it's an impermeable boundary." But the real world is almost never like that. BRIAN: I got pretty into simulation theory for a while there and yeah, there were people out there searching for those limits. And that speaks to the human tendency to want to find those limits, if not to prove that we're in a simulation. Like, "Aha, I found the limit." REIN: If you look at computers, what did we do? We made a discrete machine in a continuous environment. The whole thing about digital is you take a signal that can be not quite one or not quite zero and you represent it discretely. BRIAN: Right. And perhaps we can move beyond that with quantum computing or something. But it does help quite a bit to make things discrete. And then perhaps there's some kind of category theory construct out there that can discretize the continuous. REIN: That's interesting. You can't argue with the effectiveness of this plan. It's just that, like everything, it has a particular sort of operational range, and humans don't want to keep it in that range. We want to apply it to all sorts of things. BRIAN: Right. And it's interesting to take the opposite of that too.
Like, this is much bigger, but we're going to crunch it down; or, this is really small and we're going to break outside of those limits. I think it's interesting too, though, that it's really inspirational. Like if somebody comes up with an idea and then you just abuse it, you might end up with something new and exciting, rather than playing by the rules all the time. So I don't want to discount it, but you should be pretty explicit about your abuse, I think. REIN: So this has been great. I think we're sort of ready to start winding down. BRIAN: Cool. REIN: Is there anything else that you'd really like to talk about? BRIAN: No, this is good. I originally intended to talk about how anytime I learn something, I want to teach it immediately because I'm so excited about it. But I'm not qualified to teach it yet because I haven't really sat with it and felt it around or understood the concepts all the way, to really be able to teach it from every angle, and answer all the questions, and really, really grasp it myself. And teaching is a way to grasp it, but you're out there, you might be misleading a lot of people with that. So what I found to keep myself interested is to make teaching art projects, or try to think of it as, "Oh, I have a new way to express this," that I think will resonate with people. And then I get motivated again. But at the same time, sometimes that is fun and interesting for me to teach, but then ends up being very difficult for people to follow, because I'm like, "Oh, I made this art project that I want to teach you, [inaudible] visualizations that I think are good, but then you might not think are good." So that was my intention coming here. But I think our conversation was much more fun and I learned a lot.
REIN: I want to tell you a thing that I think might help you with this whole 'I'm not ready to teach' thing, which is that teaching and learning is a false dichotomy: the way that knowledge actually happens is that it's constructed through reciprocal relationships. Every teacher also has to be a learner. If nothing else, you have to learn, through empathy like we talked about at the beginning of the show, where they're at, how they learn. And you use that to build, from both ends towards the middle, the understandings that you're trying to share. BRIAN: That's a great way to put that. I forget the quote somebody told me a long time ago, about how a great teacher teaches you how they see the world, but the best teacher teaches it the way you see the world, or something along those lines. And I think that it takes a lot of creativity and a lot of empathy, and learning about the other person to start, but also learning about just the ways of teaching: how to be creative, how to be agile, on your feet, to be able to leap onto things. It's almost like a psychology thing where if somebody says something, then it'll instantly tell me where their misunderstanding might lie. And not having that ability to really be in tune with people and be focused and learning from them is a big flaw in teachers. REIN: So this idea of learning as a conversation comes from Gordon Pask, who wrote a book called The Cybernetics of Human Learning and Performance. And it is based on constructivist theories of psychology and sociology, which say that humans construct their experiences together. And one of the things that we construct together is knowledge. BRIAN: Right. And it's really rewarding. One of the most rewarding things, I think, is getting to that place where you feel like you've constructed the full picture and you're like, "I get it. I get this now." If I understand that correctly. But yeah, what was his point there, in the book, if I may ask?
I was kind of filling the gap myself. REIN: The book is about a whole bunch of things. The point, I guess, of the conversational theory is that learning requires relationships and it requires bi-directional communication, i.e. conversation. And that teaching isn't just a thing that I do at you. Learning and teaching is a thing that we do together. BRIAN: Gotcha. So yeah, I assumed as much, I just wanted to make sure I understood correctly. But yeah, it's very interesting. So to keep it interesting: either I'm feeling fully qualified to teach something, and then it stays interesting by that adventure together or new ways to try to capture knowledge. Or the other side of it is you're teaching when you're not fully understanding something, but you're very explicit about that. And you're like, "This is how far I understand it. But I'm so excited to tell you about it, and I'm not so far away from shore that I can't still understand everything you're going through trying to learn this too." REIN: This frame that I think you found of, "Here's the thing that I found, let's discover it together. Let's build something together so that we can learn together," is sort of what it's about. That's how it always works. BRIAN: That's beautiful. I like that. I think that's a great way to end this show. REIN: So, this is the part of the show that we call reflections, where we do that thing. A lot of these shows are really insightful for us and sometimes also for the guests. And it can be difficult to boil that down into a sentence or two. We've talked a bit about the perils of reductionism. So instead of trying to boil anything down, I'm just going to talk about something that was really significant for me in this conversation. We usually let the guest go last, but why don't you go first? BRIAN: All right, cool.
You explained L-Systems so simply and beautifully that I can take that with me now and use it as a launching point to look into it further, rather than having no idea of what's going on. I'm very excited to read about Frank because I'm in my endless pursuit of easier effects. It seems like something that I'd be really interested in. And finally, yeah, the ranges that we impose. I think it shifted my thinking a little bit when you mentioned that, because actually when you first brought up the range in Haskell and JavaScript of capacity for teaching functional programming, I was thinking more of, Haskell gives you many more tools to teach it. But JavaScript, if you made a pie chart of the population of people using it, the range is much bigger there. And thinking in terms of those ranges, I think, is worthwhile when you make decisions in your career, just day to day, in the same way that thinking in probability has affected my life. I'm going to throw in ranges now. REIN: One of the things about this idea of ranges: implicit in the idea that there's a left wall and a right wall and that there's variation between them is the idea that you can actually rank things along a dimension. But dimensionality is a way of making things discrete. We say, "Here's a thing we can measure on a line." That's discrete now. In fact, I don't think variety is dimensional. Dimensionality is, like every other form of taxonomization, something that humans impose on the world. It's not a part of the structure of the world. But it's very useful for us that we do this. For example, stereotypes and generalizations of all sorts -- we need them to survive in the world. It's a useful heuristic, but I also think we need to be very conscious of the fact that it is a heuristic, that it is an imposition on the world that we use to understand it and not a part of the world itself. BRIAN: Yeah. Earlier I was like, "Oh yeah, there's got to be a category theory thing that goes back and forth."
And there are adjunctions where you have this continuous space and you can go into this fixed realm and then get back to the continuous somewhere along the line. REIN: By the way, I mentioned the word cybernetics, and Gordon Pask in that book defined cybernetics as the art or science of creating defensible metaphors. And Stafford Beer, another founder of cybernetics, very explicitly talked about how we create isomorphisms between systems that are structure-preserving. One of the ways I think about cybernetics is as an attempt to apply category theory to basically everything, by people who didn't know that category theory existed. BRIAN: That's kind of awesome. It's a formalization around the idea, or one of the formalizations. I'm still working on homotopy. I've got no idea what's going on there. [Laughter] REIN: Interestingly, homotopy is a way to make something discrete into something continuous. BRIAN: Oh, there you go. REIN: So equality in homotopy type theory is not a point. It's a path. BRIAN: Right. I do know that. And I have worked a bit with topology, but I have struggled to really understand how homotopy applies to type theory in a way that, to carry that metaphor over, is besides just kind of automatic coercion of type structure. [Laughs] REIN: This has been really fun. BRIAN: Yeah, super cool. I'd love to come back on the show and continue these conversations. I'm learning so much from you. REIN: Also, we have a Slack. And if you haven't been invited, you'll get a follow-up email with an invite, and we're all in it. We have a really great community there and we'd love to have you. BRIAN: Well, I will see you on the Slack then. REIN: Awesome. Thank you so much. BRIAN: Thank you.