JESSICA: Welcome to Episode 104 of Greater Than Code. I am here today with Janelle Klein.
JANELLE: And I am here with my good friend, John Sawers.
JOHN: And I am here and I'm happy to introduce an episode all about Sam Livingston-Gray.
JESSICA: Yay! Finally, we cornered him.
JOHN: Sam Livingston-Gray has been a dad since 2008, a Rubyist since 2006, a Portlander since 2001, a programmer since at least 1998, a juggler since 1988 and a human since 1974. He's keenly interested in writing software that makes other humans' lives easier, in making technical topics easier to understand, and in helping increase the number and variety of humans in technical spaces. Welcome, Sam.
SAM: Thank you. I'm so glad to be here and I'm so glad that you all ambushed me with this, so that I didn't have to be all anxious about it for like three days coming up.
JOHN: It's a clever ploy.
SAM: Thank you.
JESSICA: I have a question. Before you were a human, were you a jellyfish?
SAM: Not directly, no, but I came from a jellyfish, or at least part of me did.
JESSICA: Which part is that?
SAM: That would be my brain, and your brain too, and all of the other signaling mechanisms we use inside our bodies to coordinate them and feel things and laugh and cry and all that good stuff.
JESSICA: Does that give you jellyfish superpowers?
SAM: I wish. No, actually, as it happens, jellyfish just freak me the fuck right out because they float around in the water and they're blobby and squishy and slimy and they can hurt or even kill you, and they don't have brains, so they don't even know they're doing it.
JESSICA: They don't have brains, and yet we have them in our brains.
SAM: Yeah. This is a really random factoid that I ran across at some point: essentially, jellyfish, as I understand it -- this may be apocryphal, I may be spreading falsities, but we'll go with it -- were the first to evolve neurons, which they needed to coordinate the way that they move, because a jellyfish is just a bunch of clear cells that can expand and contract, and if they all expand and contract in the same pattern, then the whole thing can move itself smoothly through the water. In order to coordinate those, they developed these neurons, which are these horribly slow chemical signaling things, but they worked well enough. And because evolution is fond of taking things that work just kind of well enough and then adapting them and kludging them into other things that may also work well enough, we still use them today.
JOHN: Are they solely chemical, and did we add in the electrical signaling later on?
SAM: No, I think they're electrochemical, so the electrical signal goes through a single neuron but then there's a chemical interface between that and the next one.
JOHN: So we tried to get you to talk about your superpower and then you totally steered us off course into this jellyfish thing, so --
SAM: I blame Jessica for that.
JESSICA: Okay, so if you don't have jellyfish superpowers, what is your human superpower?
SAM: My superpower used to be being able to spot actors who were on Babylon 5, even if they were only on like one episode under heavy alien prosthetics. But I think a lot of those actors are no longer working and so that superpower has kind of become useless. But really, I think my superpower is making connections between weird things.
JESSICA: Such as Babylon 5 and people in real life?
SAM: And jellyfish and the way we treat each other as people.
JESSICA: No wonder you're on the show.
SAM: Really, what I'm saying is my superpower is ADD.
JOHN: It has some advantages.
SAM: I'm sure glad it has some.
JANELLE: Taking that, you've got this metaprocess of being able to make connections, like seeing the relationship between jellyfish and thinking, and these chemical interfaces that you describe like software, right? If you were to describe this superpower that you have of making connections between all these weird things -- Babylon 5 and jellyfish and people -- what is the magic between all of these things? Let's see you work with your superpower.
SAM: If anything, I would just say that this is maybe another property of the way that we organized our jellyfish cells together: brains are weird, and the architecture in which we store our memories is extremely weird, and we still don't know how it works. For me, I make a lot of weird associations between specific facets of things that just don't necessarily make sense to other people. I make a lot of intuitive leaps and it drives my partner nuts, because she'll say something completely random and it happens to have like three words in common with a movie quote, and I'll just say that movie quote and she's like, "Really? What?" My brother and my dad and my stepmother and I can all have entire conversations in movie quotes. It's great.
JANELLE: One of the things you mentioned, in terms of quotes that stuck out to me, is that it's really slow but it works well enough, and working well enough is enough of a process to drive evolution. I definitely see that pattern in software. What are the lessons that we can learn from the evolution of jellyfish that we might be able to apply to the software world?
SAM: There's probably a really interesting vein of insight around test-driven development and tests as a fitness landscape for your code, and then also customer requirements as a fitness landscape for your tests. Maybe we can come back to that, but what I really think might be more interesting is the idea that evolution produces organisms that exist and thrive in a specific environment. You take an organism out of one environment and put it into another, and sometimes it can adapt and sometimes it's just so highly specialized that it fails. What this means for me in terms of software is that you cannot judge a piece of software "objectively" -- and I'm using heavy air quotes here. You have to judge a piece of software within the context of the business that created it, or the organization, or whatever else. It's not all business software; that's just what I write, so that's what I think of first. You can't judge it outside the context of the organization, the people who wrote it, the other constraints that they were under. What I guess I'm saying is that there is no good code or bad code. There's just code that works well enough that somebody was able to move on to do something else.
JESSICA: More or less suited to purpose.
SAM: Yeah, and arguably some things could probably be better suited to purpose than they are, but unless you're looking at a piece of software where everybody left the project at the same time and it was abandoned -- then it's an artifact, really -- you still have to judge it within that context, because software really is living.
JESSICA: Yeah, and if the team has abandoned it, then it's a zombie if it's still running.
SAM: Yeah.
JESSICA: Karl Popper says that every theory exists in a particular problem environment, and you can widen "theory" to mean "solution," so software is a solution to a particular problem environment. Organisms are this way too: organisms are solutions to a particular problem environment, which is their ecosystem. Then when you have a piece of software, if it's an interesting one, it changes its own environment. And even if it doesn't, its environment changes anyway, so you get a new problem environment and then you have to come up with new solutions.
SAM: Because what worked before doesn't work now.
JOHN: And that goes along with the environment itself evolving outside of the organism on its own, and the organism then having to adapt, in this back-and-forth influence process.
SAM: Right, because each organism changes the fitness landscape for everybody else around it.
JANELLE: Can you break that one down -- fitness landscape? I heard you say it a couple of times and I'd love to hear you unpack that one.
SAM: Okay, yeah. I want to come at this from the perspective of evolutionary algorithms. The way that those work is that you have some piece of software that is able to generate candidate solutions to a particular problem. For example, the neighborhood that I live in is not part of the overall city grid. It's this weird diagonally-shaped thing with lots of loops and circles. Some years ago, there was a person who held a bike ride around the neighborhood and the ride was scored, so that every little segment of alley or street that you rode down -- basically, from one intersection to another -- would get you one point. But if you backtracked, if you rode across the same segment more than once, you would lose a point. For those of us who see things this way, this is obviously a graph traversal search problem. I was fascinated by this and I sat down to encode it, and I didn't realize just how many intersections there were in my neighborhood, but there are over 100 of them just within the bounds of this one exercise. I started writing some code to find an optimal path that had as minimal backtracking as possible. Then a buddy of mine took a look at my very iterative code and he wrote this really quick thing that would generate two random paths and breed them. Essentially, he would take segments where those paths overlapped and generate other paths from those, and keep the ones that yielded the highest scores. Basically, what an evolutionary algorithm does is it generates candidate solutions to a problem. It evaluates them for their fitness -- you have to have a fitness score that you can use to evaluate a particular solution. It takes the solutions that it has, takes the ones with the highest fitness score, uses those to randomly generate others, and so on and so on. It's this very iterative process, and it can go for thousands of iterations before it finds a reasonable solution. As for fitness scoring: in my simple example, we have a neighborhood that has a fixed number of segments and there is probably a theoretical maximum score that you can achieve, but that doesn't change. When I say a fitness landscape, what I mean is that you're doing this single process in the context of a whole bunch of other things that are also doing that same process, and they're all interacting with each other.
JESSICA: Is this about defining what is better?
SAM: Right, except that there's no objectively better.
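(For readers following along, here is a minimal sketch, in Python, of the kind of evolutionary search Sam describes. The toy graph, walk length, and population numbers are all invented for illustration -- the real neighborhood has over 100 intersections -- but the scoring follows the ride's rules: one point for each new segment, minus one for each repeat.)

```python
import random

# Hypothetical toy graph standing in for the neighborhood (the real one has
# over 100 intersections). Keys are intersections; values are their neighbors.
GRAPH = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C'],
}

def random_path(start='A', steps=8):
    """Generate one candidate solution: a random walk through the graph."""
    path = [start]
    for _ in range(steps):
        path.append(random.choice(GRAPH[path[-1]]))
    return path

def fitness(path):
    """Score a ride: +1 for each new segment, -1 each time one is re-ridden."""
    seen, score = set(), 0
    for a, b in zip(path, path[1:]):
        segment = frozenset((a, b))
        score += -1 if segment in seen else 1
        seen.add(segment)
    return score

def crossover(p1, p2):
    """Splice two parents at an intersection they share, keeping a valid walk."""
    common = set(p1) & set(p2[:-1])
    if not common:
        return random_path()
    node = random.choice(sorted(common))
    return p1[:p1.index(node)] + p2[p2.index(node):]

def evolve(pop_size=50, generations=200):
    """Generate, score, select, and breed candidate paths, over and over."""
    population = [random_path() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the fittest half
        children = [crossover(*random.sample(parents, 2))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```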
JOHN: In the reading that I've done about evolutionary algorithms, one of the key difficulties is in properly defining that fitness landscape, or the fitness function that can determine what the successful end result is. Once you get to a certain level of complexity -- like, "I would like to evolve Groupon from first principles" -- defining that fitness function is even more work than just building Groupon as you normally would.
JESSICA: Yeah, I think with a lot of what we're good at as humans, we don't realize how hard it is: defining better. I watched a talk the other day about playing with neural networks, where neural networks are coming up with names for ponies. It's easy to write the neural network compared to picking the names that are funny enough for the slide.
SAM: Yes.
JESSICA: Yeah, so you can make the computer come up with a bunch of random pony names, but who picks the ones that are funny enough to put on the slide? Hold on, I need to get some examples now.
SAM: For our listeners at home, there's a site called AIWeirdness.com, which has a bunch of these hilarious examples and it's exactly that. It's somebody feeding a neural network a corpus of thousands of examples of a thing and then letting it generate its own. A little while back, there was a hilarious one about pumpkin spice beers. Okay, Jessica, are you ready?
JESSICA: Blue Puss. Raspberry Turd. Pocki Myer. Anyway, that fitness function is really, really hard to write.
JANELLE: Thinking about jellyfish, though: we started this discussion using organisms as a metaphor to think about evolution in software, or the evolutionary process in other things. If we think about the fitness function that drove the evolution of neurons, the signaling mechanism, how would you describe the fitness landscape of the evolution of neurons?
SAM: Well, those things worked well enough for jellyfish, and I haven't looked into this well enough to be able to trace where neurons went after that. But another example that I can give you, another one that I read about a while back, was our eyes. Fish were the first creatures to develop eyes as we think of them in ourselves, and they were really well adapted to functioning underwater. When creatures evolved that started going up onto land, the eyes didn't work as well as they did underwater, but they still worked. They still provided a competitive advantage, and evolution never wants to throw away anything that even sort of works, so we just keep using these eyes that originally evolved to function well underwater. There's been some adaptation since then, but still, it works well enough and it keeps going.
JESSICA: That's weird because I hate opening my eyes underwater.
SAM: Right.
JANELLE: I'm sort of thinking about your magical powers and how we can draw them out, trying to come up with questions that I know are a little hard but that might produce a flash of insight, and you just did something like that, where you shifted gears and brought up eyes because it's a connection. It's a piece of the puzzle. I'm thinking about this arrow of better and what that fitness function looks like. There's a definition of good enough to be useful, of having utility. We've got the evolution of a model and the evolution of a technology in the abstract sense, which has utility in solving a survive-and-thrive problem within the context of a particular environment.
It seems like that arrow of better within a particular bounded context is the nature of a fitness function, and then when you breed these things together, it gives you the opportunity to try out different solutions to the environment problem and potentially find different peaks.
SAM: Yes, and I'm so glad you said that, because this absolutely leads into one of the problems that evolution has, which is this idea of what's called a local maximum. Really, a fitness landscape is probably this 50-dimensional space that we can't actually directly conceive of, but if we take things down to a really simple two-dimensional example -- like the number of points that you would get for riding a particular route around my neighborhood -- you can figure out some way of plotting the fitness function, and you can have some curve that goes up for a while and then goes down, and then goes up a little bit higher and then goes down again. If you wind up on the first part of that curve, evolution is really good at doing what we call hill climbing. It will explore left and right of whatever point it's at, and it'll find out that over to the right here, we do a little bit better, so it'll move up that way. It'll move up and move up until it reaches that inflection point where any change in behavior will cause it to become less fit for the environment that it's in. Let's say you find yourself on a hill in this fitness landscape and you've got little valleys on both sides, and on one side it goes down to the sea but on the other side, it goes up to this huge mountain. You, as a human, can look at that and you can say, "I'm going to consciously make the tradeoff to go down for a while and then back up, so that I can get higher up," if height is my goal. But evolution will not ever be willing to make that tradeoff.
JESSICA: Because it can't see that part.
SAM: Right. Evolution is working with individuals, and the real fitness test for them is: do they survive long enough to reproduce, and do they reproduce and compete well enough that they can continue to do that?
JESSICA: And that's one reason that it helps to have several different populations and then to interbreed them. Kind of like you, when you make leaps between two very unconnected ideas and you interbreed them and explore a whole new piece of the solution space that is probably completely irrelevant, but now and then, it's --
SAM: Or if you cloned like 10 of me and brought me up in different environments --
JESSICA: Then we would get 10 completely weird jumps into interbreeding solution spaces.
SAM: Exactly, and that actually comes back to one thing that's useful about evolutionary algorithms, as opposed to the actual evolution that produced us, at least within the bounded context of a single planet: with evolutionary algorithms, you can start again with different random inputs. If you think about it as, again, a two-dimensional curve, you vary X and see what Ys you get. If you start a bunch of different times, you start out at different parts of the hill, and maybe one of those runs will find the actual maximum.
JESSICA: Which happens in nature, historically, when you have many different populations that eventually get to compete on a population level. And to be depressing about it: now that humans are globally interconnected, we really only have one population of us, so if this one doesn't work out, we're in trouble.
SAM: Yeah. I hope we don't turn out to be our own best predator, but it is kind of looking that way.
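(Again for readers following along: a minimal sketch of the hill-climbing trap and the random-restart workaround that Sam describes, assuming a made-up one-dimensional fitness curve with a small hill near x = 2 and a taller one near x = 8. A single greedy climber stops on whichever hill it starts on; restarting from random points usually finds the taller peak.)

```python
import math
import random

def fitness(x):
    # Made-up 1-D landscape: a small hill near x = 2, a taller hill near x = 8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-((x - 8) ** 2) / 4)

def hill_climb(x, step=0.1, iterations=1000):
    """Greedy local search: only ever move to a better neighbor."""
    for _ in range(iterations):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:  # no neighbor is better: a (possibly local) maximum
            return x
        x = best
    return x

print(hill_climb(1.0))  # starts near the small hill, stops near x = 2

# Random restarts sample different parts of the landscape, so at least
# one run usually climbs the taller hill near x = 8.
starts = [random.uniform(0, 10) for _ in range(10)]
print(max((hill_climb(s) for s in starts), key=fitness))
```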
JANELLE: One thing that's interesting about humans in particular is the level of self-awareness we've reached. If I think of myself as a bounded context -- just me, myself and I, the memories in my brain -- here I am on the top of a hill. I've got valleys below, I can look across the landscape and I can see other people on different hills that I can have relationships with. The things that have helped me make a leap -- to run down the hill that I'm on and check out another mountain in life, following my passion and dreams -- sometimes cause lots of change in your life. Anyway, I'm thinking about the things that have pulled me to other hills, and it's been connections with others that become a catalyst for making those discoveries, for allowing evolution to take place in the context of a lifetime, within the bounded context of me, myself and I.
JOHN: I think that gets to the point where evolution is switching from the physical -- like evolving new limbs and new senses -- to the social and the emotional, where it can change much more quickly, and it can go between individuals and within an individual lifetime and still have that same benefit of combining interesting disparate things. That can allow much more explosive and discontinuous growth, because you've got exposure to all the other interesting individuals in the world.
SAM: I don't remember the exact quote because I actually haven't watched it in too long, but there's a thing from Babylon 5 -- hey, hey -- where one of the characters says that we are the universe splitting itself into pieces to understand itself.
JESSICA: Yeah, that comes up in Alan Watts: that the universe is one whole that is playing hide and seek with itself.
JOHN: Nice. I want to go back to one of the things that you said earlier, Jess, when we were talking about fitness functions: that you can build something that will give you all the options, but then picking the best options is something very human. I think this is something that Sam, you mentioned on an earlier episode, where there are physical brain disorders that disconnect emotions from input into decision making, and when that happens, decision making becomes nearly impossible, because you have no way to assign any valence to a choice like "should I have the appointment today or the appointment tomorrow?" If they are equally, objectively useful, there is no, "Today's a little bit easier," and so without that emotional input into decision making -- which is, I think, something that humans do uniquely well -- you start to get this broad, undifferentiated, equally-objective set of things that you're trying to choose between but can't really make that choice.
SAM: That's such a wonderful segue. Thank you. Because it gets into how humans make decisions at all, right?
JESSICA: How do humans make decisions?
JOHN: At all?
SAM: Badly.
JESSICA: No, we actually do an amazing job, seriously.
SAM: No, it's true.
JESSICA: Like, I think it's [inaudible] this thing about cognitive biases -- can we move past the idea that those are bad? They all exist for a purpose: to help us make decisions. Because if we did try to do the complete evaluation of every possible move, and objectively which is better, it doesn't freakin' matter. What matters is choosing something: termination.
SAM: Yes, exactly.
My favorite talk that I ever gave, a couple of years ago, was about cognitive biases. I had started collecting them before that point, but I really spent a lot of time thinking about why we have them at all, and I came to a similar conclusion: we have these cognitive biases because at some point, they provided us with an evolutionary advantage. Our fitness landscape has changed drastically as our culture has undergone a really huge transformation in the past couple of centuries, and evolution just hasn't caught up to that yet. Some of those cognitive biases may now be maladaptive, but many of them are still pretty good, because -- here's another fun factoid I picked up doing the research for that talk -- our brain accounts for 2% to 3% of our body mass and 20% of our caloric consumption. Our brains run on rocket fuel, essentially. I first ran into this in the context of that tired, old meme that we use 10% of our brains. There have been movies and TV shows that are predicated on this exact premise, and it's [inaudible] because there is no way that evolution would saddle us with an organ that takes 20% of our caloric intake and only gives us a payback on 10% of that. We have these giant brains because they give us advantages, but in order to make them work -- and in order to make them work using ridiculous jellyfish cells -- evolution has had to take a lot of shortcuts.
JESSICA: Like what?
SAM: Here's another fun little neurological foible that we have.
JESSICA: Yay, foibles!
SAM: Yeah. There's this thing called a saccade, and this is what happens when you shift the focus of your eyes from one point to a different point. We don't see the way that a camera sees. A camera has to capture all of the light field that's coming into it, because it can't predict where we're going to look, so it has to capture the entire image. Most of our vision is concentrated in a small point about six degrees across, and everything else is peripheral vision. In order to get details about something that we're interested in, we have to shift our gaze around that area quite a bit, and our eyes actually make the fastest movements that a human body is capable of. You can shift your eye from all the way on one side of your field of vision all the way to the other in something like two-tenths of a second, approximately, and --
JESSICA: Is that like the deliberate one or the accidental one?
SAM: I'm not sure I follow.
JESSICA: The two-tenths of a second thing, is that like when you move your eyes on purpose?
SAM: I don't think it matters.
JESSICA: Now, we're all looking back and forth.
SAM: Right.
JESSICA: Don't try this while you're driving.
SAM: Yes, please. So in that up to two-tenths of a second, the center of your vision, at the fovea centralis, is getting a lot of smeary input. What your brain does is it actually just edits that part out. It shuts down visual processing while your eye is moving and then picks up again when your eye stops. This is all very smoothly coordinated inside your brain -- we've had millions of years to make this thing work -- and what it means is that every time you move your eyes, you are effectively blind, but you don't think that you're blind, because your brain continues to provide you with the illusion that you're looking at a continuous image.
JESSICA: It like fills in the part that you don't literally see, right?
SAM: Right.
Because it wouldn't be useful for you to look from Point A to Point B and then be temporarily distracted by this wonderful smearing effect of all the pretty things going by. Because if you were distracted by that, you might go, "Ooh," and you might do it more, and you might find yourself looking around for 30 seconds, making all the colors blend into each other, and then suddenly, you're eaten by a lion. This is one of those instances where evolution has taken a tool that doesn't work very well and kludged together something else that has its own disadvantages, but mitigates the disadvantages of the first thing well enough that you can survive long enough to have kids.
JOHN: There was a novel by Peter Watts called Blindsight that actually used this as a major plot point: there was a creature that would only move when it detected that you were in the midst of a saccade, so it was effectively invisible.
SAM: Yeah, that book was excellent, and it's so, so bleak.
JESSICA: Dude, you can actually do that with AR. They can use that saccade to shift the landscape around you, to make it look like you're moving faster than you would if it moved while you were focusing forward.
SAM: There have been experiments where people look at a computer-generated page of text, and as you move your eyes around, the computer knows where you're looking and will change bits of the text that you're not looking at, so as you're reading this page, it continues to refresh, but you never actually see it. There's another really simple experiment you can do, which is to look at yourself in a mirror and look yourself in one eye, then the other eye, then back to one eye, then the other eye, and you will never see your eyes move. I ran across this wonderful quote in --
JESSICA: Oh, but it doesn't work on your phone, because there's enough of a delay. If you're using your phone as a mirror, then you can see --
SAM: I haven't tried that. That's amazing. Thank you. But I ran across this wonderful quote about saccades in the research for the aforementioned talk, which is something to the effect that your brain not only hides information from you, but it hides the fact that anything was hidden. This was one of the things that I thought was important to understand about cognitive biases: if you don't know they're there, they will rule your life. If you do know they're there, they will still rule your life, but you at least may have some chance of offsetting some of them.
JOHN: Yeah. I was watching a talk about negotiating salaries, and they were talking about the anchoring effect, where all the other numbers are based on that first one, and the problem with this is that even though you know it happens to you --
SAM: You do it anyway.
JOHN: -- you're still subject to it, so you have to be really careful about where that first number comes from. If you can find an excuse to say some other very large number instead of the salary you're asking for, you can mess with that. If they say, "What was your last salary?" and you can't get out of saying it, you can say, "Well, when I was working on this last project, we served six million patients and it was really helpful," and then all of a sudden, you've anchored the conversation at six million going into the salary you're asking for.
JANELLE: What's fascinating to me about your comment there -- your brain not only hides information, it also hides the fact that anything was hidden -- is that it means the hiding of the hiding has utility. It factors into fitness functions.
What do you think is the implication of that -- of why hiding information is useful? Or hiding the processing of how the knowledge was derived, if you will?
SAM: Well, if I had to guess -- which I do have to guess, because I don't know how this happened -- my guess would be that organisms that didn't hide the information at all were at a competitive disadvantage to those that did. Inferring from that, I would guess that organisms that hid that information, but then were aware that they hid that information, may have also gotten distracted, and so evolution probably added the second layer of, "No, really. Don't think about that. It's not good for you. Focus on this other stuff instead." It's just a wild guess.
JANELLE: The thought that went into my head -- again, just a guess -- is that passing information along requires processing, and our brain already runs on rocket fuel and consumes a massive amount of energy to run. The more we're able to filter out the noise, the things that don't matter, the less processing we have to do and the more capable our brain is of processing the information that matters, such that filtering and compression as a generalized feature is evolutionarily advantageous.
JESSICA: Yeah, absolutely. Filtering out data is crucial, compression is crucial, forgetting is high energy and really important -- or high entropy production, rather, I think, which is weird.
SAM: Yeah. I forget the source now, but I ran across an essay that culminated in the phrase: your brain is the shell script running on top of all of the data that thinks it's the data. It was basically making that same point, that there's a massive amount of information coming in and your brain is filtering and processing most of it for you before it gets to the conscious level.
JESSICA: Did you read the thing about how our eye cells react to change, rather than sending constant signals? They only send signals that they think are relevant -- which, in eyes, is typically that something changed -- and then at every level on up, you're constantly filtering out and sending only the bits that seem important.
SAM: I did not read that, but it sounds totally plausible.
JESSICA: I should try to figure out where I read it, then, so I can leave it in the show notes.
JOHN: You know, I sort of wonder about the second layer of filtering that happens, where you filter the filtering, and whether that's useful for constructing a consistent narrative of the self. Like, if you had all these disjointed flashes of information where you know there's stuff missing, you might go looking for it, and then you'd lose the benefit of filtering it out. But I also wonder if just having this sort of faux experience of continuous input and continuous selfhood is part of that.
SAM: We only have memories in the first place because they help us avoid things in the environment that might kill us, or find things in the environment that will help us. We eat something, we throw up, we develop an association that makes us not want to eat that thing again. That actually brings me to another really interesting cognitive bias, which is the unit completion effect. Basically, this is our tendency to eat all of the thing, even if we realize partway through that we're not hungry anymore. I think it may apply in other contexts as well, but that was the --
JESSICA: Well, some people actually want to finish books.
SAM: Right, yes.
Well, the sunk-cost fallacy --
JESSICA: In a podcast episode, if you have to listen until the end of the music at the tail end of this podcast, you might be a completionist.
SAM: Very nice.
JANELLE: It sounds like the Gestalt principles, like the closure principle as a motivational force. That's kind of what I'm hearing with the unit completion thing. Like, I start a thing, I set an intention to do something, and it becomes a transaction that's running, that wants to finish.
SAM: Okay, but why? Why did we evolve that?
JOHN: I'm not sure where exactly the idea comes from -- it might be from autonomics -- but I remember hearing a talk about a model of this sort of thing, where you start a task, or an input comes in and you need to decide what to do with it. It's sort of like the system is at rest, a perturbation comes in, the system is now open and needs to find closure in some way, and that closure comes either from completing the task or from resolving the input into some decision. Our systems are built on always finding that closure in some way.
JANELLE: Yeah, I kind of think the same thing: that the fundamental driving force of life is closure and intention. When you talk about your eyes knowing where they were going to point next, there was already an intention behind the motion of your eyes. We're not just passively sensing. We're reading in inputs that are relevant to some sort of problem we're solving, whether it's trying to understand the space we're in to get clarity, such that we can make a decision about how we're going to move, or whether it's whatever I want to do with my life. It goes at all these different levels of abstraction, but I feel like there's this pull, this gravity of home, like a homing signal almost, that is this closure of life. What is the narrative of my life going to be? If I read my gravestone, what is it going to say? If I look at my life backward, I've got this dream of intention that becomes this motivating arrow, this feedback loop that I think drives the whole sensory system. See, this is what happens when I read 'Zen and Motorcycles.'
SAM: Very nice.
JESSICA: Completionism is probably one of those things that humans do where, as a rule, it's probably not usually a win, but once in a while, it's a big win. Often, there are a lot of things that are net negative for the individual -- say, entrepreneurship: completely irrational on an individual level. Or military service: how is that individually a good idea? But societally -- or in the case of military service, at the nation level -- there are benefits, because as a society, we totally benefit from entrepreneurs taking huge risks. Thousands of them just go broke and go bankrupt because they don't have health insurance, but a few of them do something amazing that changes the world. The world as a whole benefits from people taking that risk, and completionism might be one of those things too. In software, that totally applies, because people will write open source projects and libraries and spend an irrational amount of time on them, and yet most of those just end up on GitHub forever, quietly festering, but a couple of them become incredibly useful in those communities.
SAM: A way to bring it back around. Nice.
JESSICA: Books, too. Why do people write books? I'm glad they do, but they are so much work.
SAM: Why do people write talks? They're so much work.
JESSICA: No, talks are over.
There's a very clear definition of finished with a talk, and it has nothing to do with the talk being the best it can be.
SAM: It has to do with: your time on stage is now.
JESSICA: Exactly.
JANELLE: With No Fluff, though, it's the same material, grinding it over and over again. It's like, I got to do it 50 times.
JESSICA: So it actually gets good.
JOHN: Yeah, I've done my talk 10 times, so I've gotten up to version 2.7 now. I did realize that I wasn't following Sam for the first few.
JESSICA: And then again, you get into the hard trick of defining what is better, and then you need surveys and you need feedback from the conference attendees, because pretty soon, I know my material so well that I can't tell whether it's better. Is this joke funny or not? It's not funny to me anymore, but they're still laughing.
JANELLE: Different audiences definitely play into that too. I had some jokes that completely didn't land in Iowa, but everyone laughed everywhere else I went except Iowa. It was like we were on a different wavelength. It was this dead silence thing where I'm making jokes but nobody's laughing at my jokes.
JESSICA: Different fitness landscape.
SAM: Ouch.
JESSICA: Okay, Sam. We've talked a lot about interesting theories about the world, but I want to know about you. Was there something that you learned as a software engineer that you don't think you would have learned in another career?
SAM: I would say the benefit of changing my mind. Because if I worked in some physical medium -- if I designed cars, for example, or realistically if I designed some small portion of a car -- I would only have one opportunity a year to design something that maybe was slightly different than the thing I had done before, but it really still had to fit all of the same constraints, and maybe I would make it like 2% or 3% better, and I would get however many chances to do that as there are years in my career. Whereas in software, I can take a couple of hours and I can explore a completely differently shaped solution and I can see whether it works or not. Even if I decide that it doesn't work, I now have the experience of having tried a different thing, and in the process of deciding that it doesn't work for the thing I wanted, I now have a better idea of some other thing it might work better for.
JESSICA: Wow, so like an evolutionary algorithm.
SAM: Yeah, I suppose.
JESSICA: Yeah, that speed of feedback on software -- you don't get that when you're bridge building.
SAM: Yeah, it's malleable. It's relatively fast feedback. That is one of the things I really like. The other thing that I might not have really picked up, and had hammered into me so effectively, is the importance of clarity of communication, because code is a medium of expression. The first couple of years of my career, I was just trying to make the damn thing work at all, and then I actually went back and got a degree in computer science. Some parts of that helped me understand how to make things better, and some parts of that started me on this path of understanding how to communicate with other people. Since I finished my degree, which was 11 years ago now, I really have spent a lot of time focusing on what my code says to the people who will work on it after me.
JESSICA: The meta-conversation.
SAM: Yeah.
JESSICA: Not just what are you saying, but why are you saying it, or what does that say about you and about your problem space.
SAM: I read comments very differently, you know?
I mean, we can talk about self-documenting and self-describing code all we want, but I find that I read comments very differently than I did when I started, too. Anymore, I will write a comment that says, "When you see this, you're going to think X. This is misleading because of Y." I have been known to leave comments like, "Sam, you're going to want to refactor this. Don't."
JESSICA: Nice. Yeah, you learn to recognize what's going to be a [inaudible] and either avoid creating it or document it carefully.
SAM: Right. You're going to get confused about this. This is why it is that way.
JOHN: Yeah, the "but why" is so important there. I find that's where a lot of my comments end up these days too. It's just, "This is a little bit weird. This is why it's weird," so if the why changes, we can change it, but for now, leave it.
SAM: Right. Actually, some years ago, after I spent about a year and a half flailing around with Git and not understanding what the hell was going on, I came to a point where I felt like I understood what was happening, and I wrote this website saying, "Here's all the stuff that I banged my head on because I didn't know any better. Here's what I understand now. I hope this will save you some of that time," and that too was sort of a meta-conversation about the tools that we use. Curiously, somebody emailed me the other day because they'd read that site and they were having a problem with Git that they didn't understand, and they said, "Can you help me out?" As it happens, I do have some time, so I spent an hour or two just yesterday helping them out.
JESSICA: Oh, Git puzzles. I love those.
SAM: Yeah, and it was funny, because they gave me a lot of details in email and I was like, "Oh, this sounds like it might be kind of hairy," and then when we finally got into a screen sharing session and I looked at the thing, I was like, "I can give you three commands to run and we can be done in five minutes" -- which I did not do, because that wouldn't have helped them at all. Instead, I wound up slowing down and talking about theory and saying, "This is what's actually happening. This is how that came to be. This is all the information that you need to know to be able to get yourself out of this mess in the future." Writing software first taught me to explain to a computer how to do a thing, and then, really, what it's taught me is how to help other people do that same thing.
JESSICA: Something I've been thinking about: to teach people, to really change the way other people do stuff, we need to work alongside them. Because you screen shared, and you worked alongside this person long enough to understand both where you needed to go and where they were, and how to show not just how to get there, but how you know how to get there.
SAM: Yeah. Modeling the problem was the challenge I had in the first couple of years of my career, and once I got slightly better at that, the real challenge was modeling not just the problem, but how other people think of the problem, and how other people think, and how to get them across some of those gaps.
JESSICA: My God, this is hard.
SAM: But it's the only way I'm going to be able to scale my own skills as an engineer, right? If I don't have to do all the work.
JESSICA: Yeah, and by scale, you don't mean AWS levels, not in the thousands. But we do mean a couple here, a couple there, to get into the dozens, and then they spread that into a giant pyramid scheme of knowledge.
Now, everyone does Git like you do --
SAM: If only.
JESSICA: Sam, did you say you have a story about jellyfish at LivingSocial?
SAM: No, I don't have a story about jellyfish at LivingSocial, but John brought up Groupon, and actually, I've sort of forgotten the context. John, what exactly was it you said about evolving Groupon?
JOHN: Oh, I was talking about it in the context of a fitness function: it would be incredibly difficult to build a fitness function that would allow an evolutionary algorithm to evolve into Groupon.com. It would probably be more work to build that than it would be to just spend a couple of years building Groupon.com.
SAM: Yeah, which, as an aside, makes me really appreciate the way that the universe works and how it's put together out of relatively simple parts and yet has produced all this interesting, complex behavior. But one other interesting thing that happens in evolution is this idea of mimicry: one organism will evolve an interesting defense mechanism. For example, we have yellowjackets, which are just the assholes of the insect world, but they advertise themselves as assholes. They have these bright yellow and black stripes, and when you see one, you know it's coming. There are also other insects that look like yellowjackets but are relatively benign, and they're just taking advantage of the fact that nobody wants to fuck with a yellowjacket. I'm not saying this is exactly the parallel. I'm just saying that mimicry is an interesting thing to talk about in its own right, and that leads into the LivingSocial story, which was that I went to LivingSocial when there were really interesting things happening there. They had been hiring like everybody you'd ever heard of in the Ruby community, and I was like, "Wow, there's something happening. I want to go and check that out," and I was lucky enough to be hired on. It was a great experience, but the funny part was my very first day. I was in the Washington, DC home office and we started an orientation session at nine o'clock, and it was me and two other software developers and then 20 salespeople. As I recall, the very first thing the person doing the training did to start it off was say, "Who would like to take a stab at explaining what we do without using the word Groupon?" and I just about lost it.
JESSICA: Was LivingSocial a competitor to Groupon?
SAM: I'm sorry, yes, it was. They are now owned by Groupon. Maybe that was only funny to me.
JESSICA: Oh, no, it just says how, as humans, we can really only understand things in terms of things we already know.
SAM: Yes, this is true.
JESSICA: It's really hard to get across a new abstraction, and yet it's really hard to know how hard that is, too, when you already understand the thing.
SAM: Yeah. I ran across something on Twitter last week. Somebody was saying, "Even reading is just talking, but for people who aren't in the room."
JOHN: What you were saying about abstraction, I think, is the key difficulty of any communication, but teaching in particular, because figuring out where the disconnect is, and how to connect the abstraction to something that the student already knows so that they can start incorporating it, is probably one of the hardest parts.
JANELLE: Sam, earlier you were talking about modeling.
Modeling the problem was what you did in your first couple of years, and then you switched to modeling how others think about the problem: this process of learning how to communicate by seeing the delta between the model in your head and the model in other people's heads.
JOHN: Oh, that reminds me of theory of mind, which is one of my favorite psychological concepts: the ability to conceive that someone else has different thoughts than you do. It's something humans develop, I think, around the age of three or four, but before that, they don't really have it. They don't yet think that this other person has this other way of looking at the world, that they have facts that I don't have and vice versa. It's something that just develops in humans at some point, but it's not common, I think, in most other animals.
SAM: Yeah. There's a really interesting effect of that, which is that you can tell exactly when your kid has started to develop a theory of mind, because they start lying to you. Because without theory of mind, there's no point in lying, because everybody knows the same things.
JESSICA: That's probably also where we start to feel alone.
JANELLE: Wow, that's pretty profound.
JESSICA: Sorry, I am bringing all these sad things today -- at least two, which is two more than I usually do.
JANELLE: I've been thinking a lot about the same things. Just thinking about loneliness and connection, and what we're essentially talking about parallels all the research in my book, Idea Flow: taking an idea from one person's head and getting it into another person's head, and learning the theory of mind that someone has in order to communicate. But the essence of that communication is connection, and when we find ourselves unable to understand where people different from ourselves are at, and we can't get ideas across, and we have something we're trying to express, then we feel like people can't hear us, don't understand us. We put on a shell that we end up hiding under, because it's the social thing that you're supposed to do. All of that just creates this disconnection from being able to communicate and be heard, and sets that loneliness in.
SAM: Yeah, and while we're on the topic of depressing things, I think this is one of the things that frustrates me so much about the current state of American politics, which is that so much of the... I don't even want to say discourse. So many of the words that are flowing around really just illustrate how people are leaning on their cognitive biases, on those in-group/out-group distinctions, on basically doing anything that they can to avoid having to think about other people as people.
JESSICA: Thinking is expensive -- rocket fuel.
SAM: Exactly. There's a totally reasonable evolutionary basis for it, but it's playing out in a harmful way right now.
JESSICA: It's a lot of work to really model a person.
SAM: But it's so worth it.
JANELLE: At the same time, with the concept of a fitness landscape, we can think of ourselves as bounded contexts making these decisions, and each one of us has some fitness function that we're optimizing for. We can look at the different people in the world who are hurting and trying to survive. They're under the pressure of their social tribes. They're seeing the world the way that they're seeing it, they're in pain, they freak out, and they're all trying to optimize their fitness functions.
I think recognizing that our brains are wired that way -- to compress these things, to try to make sense of and justify the decisions we've made, the mountain we've decided to climb because it's too scary to go down the other side -- means recognizing that a lot of people get stuck in these patterns and that we're all just humans trying to survive. It's unfortunate that these things are happening, but at the same time, being aware of it creates this opportunity to take a step back and look at what we should be optimizing for. We talked about this idea of scale that meant something very different than scaling up a bunch of servers. We're talking about scale in terms of scale of wisdom, really. If you think about teaching and our opportunity to contribute, and what scale means in that context, what generativity means in that context, we've all individually got this opportunity to explore these different paths, to explore these different peaks, and once we get to the top of the mountain, we can go, "I can look down at the valleys and all these other people around me and where they're at, and I can take the wisdom that I learned on this mountain and go figure out how to share that wisdom with others. I can build a mental model of where they're at and where I'm at, and try to figure out how to get the ideas in my head into other people's heads, so we can all share that wisdom." I think ultimately it's that awareness of where we want to go, both as individuals and as a community: learning how to cooperate toward wisdom, learning how to shift our fitness functions toward surviving and thriving as individuals but also thriving as a community, because having a baseline of surviving in the world is kind of a sucky low bar. I want to thrive and I want other people to be able to thrive too. That, I think, is a good definition that we can agree on for a better place. Let's just try to move toward thriving as a community, as a global world.
SAM: Yeah, I think that's really great, and especially, it ties in well with the way that we evolved as a social species in the first place, which is that groups of humans do better than a single human does on their own, and that's generally true of most primates. I do want to point out that there is one danger in the path that you described, which is that you get to the point where you are thriving and you look at other people who aren't, and some people get there and decide that paternalism is the way to go: I know how I got here, and so now I know what's best for you and I'm going to make you do that.
JESSICA: Different fitness landscape.
SAM: I just want to point out that that's a trap that a lot of people do fall into when they get there.
JESSICA: Yeah. What people want for their children is their own definition of success.
SAM: Yes.
JANELLE: So principles of freedom, of following your own arrow, need to be part of the way we look at things, within this general concept of a bounded context. If I am a bounded context, then having a bounded context means that I get to have freedom over all the things that are inside my bounded context, maybe as a first principle.
JESSICA: There are principles that we live by, and then there are the meta-principles, like "my principle is that you need to choose your own principles." I think we equate those sometimes. We equate values that are useful for making decisions in the current problem space with meta-values, like "it's important to me that you be able to find your own values, and I respect those."
We don't have a different word for that. Politically, I think that gets us confused, because some people are like, "The country should be like this, and I have the values that it should be a place where you can get a manufacturing job and work hard and that's going to be enough," whereas at another level, other people have a principle of, "I want this country to be a place where people can make it what they want it to be." It's the difference between having a very concrete principle that's useful on a particular day and having a principle that you're going to choose your own principles.
SAM: Actually, this comes back to one of the things that I learned in my trip through college -- actually, my third trip through college. My first two were miserable failures because I didn't know I had ADD.
JESSICA: If you know how your brain works, you can work with it.
SAM: Yes, exactly. One of the most impactful courses that I took was actually a 100-level writing course that really talked about rhetoric. I know I've plugged this book before, probably on this very podcast. It's called 'Everything's an Argument,' and the mind-blowing thing I took away from this book was that when people say things, they are saying those things for a reason, and there are probably reasons for those reasons as well. Interacting with people just on the basis of the things that they are saying may not be the most effective thing that you can do to get what you want, or to change somebody else's mind, or to learn something that will change your own mind. Interacting just at the level of what somebody is saying -- you pick your own example -- means that you're engaging with them on their terms and agreeing to conditions of a conversation that may or may not work out for anybody, but if you can understand what value somebody has that is causing them to make that statement, then maybe you can get somewhere.
JANELLE: Is it possible to define some common ground of principle? Like, there's a difference between sharing all our first principles and having some foundational shared principles that we can agree upon.
SAM: Yeah.
JANELLE: Do you think it's possible to have a shared foundation?
SAM: I think it's desirable. I think it can be possible. I was thinking about this the other day: there are certain people -- there are many people on Twitter, for example -- that I just cannot have a conversation with, because we don't start from the same place of "all people deserve the same rights," for example. If we can't agree on that axiom, none of the rest of our conversation is going to get anywhere.
JANELLE: It's amazing to me that this is even a conversation that we're having.
JESSICA: We're having a conversation about the meta-conversation of meta-conversations? And at a meta-level, Janelle was aware that we're having this conversation.
SAM: It's meta all the way down.
JESSICA: Up! Up!
JANELLE: So back to jellyfish --
SAM: Must we?
JESSICA: And local maximums. We talked about local maximums, right?
SAM: Yes, we did.
JESSICA: That's good, because we want to tell you about this new podcast called 'The Local Maximum.' It's hosted by Max Sklar, who was a machine learning engineer at Foursquare. He has a lot of fascinating topics: AI, building better products, and the latest technology news from his unique perspective. Max interviews engineers, entrepreneurs and creators of all types, with half of the guests being successful women in software and tech. Subscribe to The Local Maximum podcast, wherever you [inaudible] it.
JOHN: Flawless.
JESSICA: Remember that not every local maximum is a good place to be. Take, for instance, jellyfish. They've clearly gotten as far as they're going to go in their direction of evolution, and as a result, many people get stung.
JANELLE: Oh my gosh, Jess, you are so hilarious, I swear.
JESSICA: I kind of think all of us share this superpower with Sam. I bet it's totally something we have in common: that we can make connections between disparate things, and that we enjoy that, and that's one of the things that binds us as a group.
JOHN: Indeed.
SAM: That's what makes these conversations so much fun, yes?
JOHN: Definitely.
SAM: I'm really glad that we are now, like an hour and a half later, at this place where we can understand what I'm about to say. I sometimes wonder if we would do better at treating people as individuals if we weren't still stuck using the jellyfish's signaling mechanisms. I was thinking, like, imagine how much better we'd be at treating people as individuals if we had more computational power to work with, if we weren't stuck with something that evolution decided was good enough.
JESSICA: Damn! That is a great place to end it. For our listeners, we're not going to do reflections today. If you want to hear our reflections, hit rewind and listen to the episode again.
JOHN: Or pay any amount of money to our Patreon at Patreon.com/GreaterThanCode, which will get you an invite to our private Slack channel, which will allow you to have conversations with all of us and the other wonderful people in this community at any time.
SAM: And we can all reflect together.
JESSICA: Yeah. We actually do answer questions in there and we get really happy about it.
SAM: Yeah, we have some great conversations in there. It's totally worthwhile.
JESSICA: Yeah. Thank you for joining. Sam, thank you for allowing us to make you our guest today, and now you get the benefit of having checked off the task of being a guest on your own podcast, and you never have to anticipate it again.
SAM: Why, thank you. This actually has been a lot of fun.