The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you.

===

Benn: [00:00:00] Some people tend to think that because we now have all of these AI bots that can write code for us, we don't have to think about stuff in the trenches anymore. We're all thinking about strategy and big picture and, like, what does the app that we're building do. And I think that's sort of true, but really I think it makes us more like Steve Jobs, where Steve Jobs was famously not someone who stayed out of the weeds. He was someone who was going down and berating people for tiny little details. He wasn't writing code, but Steve Jobs showed up and made sure that every pixel was in the right place.

Hugo: That was Benn Stancil on why AI is turning us all into Steve Jobs. Not the visionary who delegated the details, but the one who showed up and berated people over pixel placement. As AI takes over the doing, Benn argues, our job becomes obsessing over the polish. In this conversation, Benn asks whether now is actually a terrible time to start a company. If the tools you build on today are obsolete in six months, at what point does the head start stop mattering? He also tells us why technical [00:01:00] debt may be self-healing, since future models can just untangle the mess today's models made. And he argues that all the context engineering you're doing may go the way of Boolean search. Remember learning Google query syntax in the nineties? We also dive into Benn's thoughts on why Claude Cowork can't work. AI has these uncanny tics you can't beat out of it, so anything it writes will smell like AI. The solution isn't better AI writing; perhaps it's to stop pretending we write to each other at all. If I wanna send you a message, perhaps my AI should just update a database that yours can read. Why bother with the social decoration in between?
If you enjoy these conversations, please leave us a review. Give us five stars, subscribe to the newsletter, and share it with your friends. Links are in the show notes. I'm Hugo Bowne-Anderson, and welcome to High Signal. Let's now check in with Duncan Gilchrist from Delphina before we jump into the interview. Hi, Duncan.

Duncan: Hey Hugo, how are you?

Hugo: I'm well, thanks. So before we jump [00:02:00] into the conversation with Benn, I'd love for you to tell us a bit about what you're up to at Delphina and why we make High Signal.

Duncan: At Delphina, we're building AI agents for data science. Through the nature of our work, we speak with the very best in the field, and so with the podcast we're sharing that high signal.

Hugo: We covered a lot of ground with Benn, Duncan, so I was wondering if you could let us know what resonated with you the most.

Duncan: This was a fun episode. You know, in tech we've all internalized the mantra "ship fast and iterate." But as Benn points out, everyone's fast now. AI can vibe code a CRM with a single prompt in a few minutes, so speed just isn't the moat anymore. What's left? Benn argues it's the obsessive, almost unreasonable polish. It's the Steve Jobs energy that matters, really honing every individual pixel, because when creation costs nearly nothing, the moat is taste. And maybe our codebases aren't even meant to last. Maybe they're disposable specs for the next AI. Let's get into it. [00:03:00]

Hugo: Hey there, Benn, and welcome to the show.

Benn: Thanks for having me.

Hugo: Such a great pleasure to have you here. And you're currently in Brooklyn, New York City, correct?

Benn: Correct. Uh, apparently, like, in a cave, but a cave in Brooklyn, it looks like.

Hugo: There are caves in Brooklyn.

Benn: Yeah.

Hugo: I saw some wild footage of New Year's Eve parties under the abandoned railway tracks on the Hudson in Manhattan, which looked pretty wild. And I'm in Sydney, Australia.
I still marvel at the fact that we've been able to do this for some time, that we can have a chat, record it, and put it out while being on different sides of the world.

Benn: Yeah, the miracles of modern technology that don't have AI in them.

Hugo: Speaking of which, we are here to talk about technology and also to talk about magic. As we both know, Arthur C. Clarke's third law was that any sufficiently advanced technology is indistinguishable from magic. You reminded me recently in a blog post, which we'll link to in the show notes, that Teller, of Penn [00:04:00] and Teller fame, said magic is just someone spending more time on something than anyone else might reasonably expect. And things do seem magical in many ways at the moment, but in this blog post you argue that the magic of great work is often just the willingness to do the tedious, hard things others don't. So I wonder if you can tell me: in a world where AI makes the doing feel effortless, where's the hard work? Who's doing the hard work, and what is it?

Benn: Yes. Obviously there's a lot of trends now about people being like, we're gonna vibe code stuff and we're gonna make all our own software. And people talk a lot about, why would you ever buy SaaS products anymore, any sort of product anymore, when you can just prompt a Claude Code or a Lovable or whatever, and it'll build you the thing in 10 minutes? And that is to some extent true. You can certainly say, go make me a CRM, and it will make you a CRM. It'll take some time and stuff like that, but you can get a functioning [00:05:00] CRM pretty quickly. But the thing that makes products good, and that makes people wanna buy them, often, to me, is not the fact that it works. It's that it has all of the little bits of polish and things that you really like, that make you enjoy being in that product.
And it's the opposite of what people complain about with Salesforce or Jira or whatever. Like, you use Jira, it's got a lot of features; you could imagine vibe coding all of those features pretty quickly. But then you end up with Jira, and everybody hates Jira. Jira is a product that people are basically looking to get away from, to make something better than that, to make something where it's Jira but you enjoy it. And the example of this is something like Linear. And the example of a Salesforce that people enjoy is, there's a few different sort of modern CRM-type things that people claim to like working in. That stuff still takes a lot of work, even if you have something going off and writing the code for you, because you still have to come up with all of the little interactions that make it good. You have to keep harassing it and being like, no, make that transition better, make that thing better, make this UI part better, make it faster. Keep doing things that basically improve polish. It [00:06:00] still feels like we're a long way away from one-shotting your way into making a product that is as good as a Notion or a Linear or a Figma or whatever, the trendy products that people like to use because of the quality of them. And so the sort of example I'd use with this is: some people tend to think, oh, we will move up the stack of sorts, where, because we now have all of these AI bots that can write code for us, we don't have to think about stuff in the trenches anymore. We're all thinking about strategy and big picture and, like, what does the app that we're building do. And I think that's true, but really I think it makes us more like Steve Jobs, where Steve Jobs was famously not someone who stayed out of the weeds. He was someone who was going down and berating people for tiny little details.
He wasn't actually changing them, he wasn't writing code. But the reason all these Apple products are good, allegedly, or so the story goes, is because Steve Jobs showed up and made sure that every pixel was in the right place. And so I think it looks more like that, where you still have to make sure all the pixels are in the right place. And that's [00:07:00] still a lot of work, and it's still a lot of effort, and it's still a lot of energy, and it's still spending more time on something than most people would think could reasonably be expected. That's what makes it good. That's not strategy, that's just attention. And so, to make something that feels like a magical product or a magical piece of software, you still have to give it the attention. The attention may look different. It may not be you staring at code all the time, but it's still attention. And so I don't think you get to, oh, you vibe code something amazing overnight. Yeah, you can vibe code a sketch, but a sketch is not the thing that ultimately people will choose to use. A sketch is the thing where people will be like, I will vibe code my own version of your sketch, 'cause I'll make the same thing in a day. Whereas if you make something that's like a Figma, people aren't trying to get off of Figma with their vibe-coded versions, because someone put the time into making Figma as good as it is.

Hugo: Makes a lot of sense. And I feel like there's some sort of barbell, or you're moving up the stack and down the stack, or into the stack. We do get to think more creatively and at higher product levels and strategy, but to your point, at the same time we're moving into the stack of becoming [00:08:00] incredibly detail-oriented. And because it's unsexy and weird, we don't talk a lot about what that looks like. And because it's one-on-one now. When the internet exploded, we all shared some sort of experience, although it was personalized; our conversations with agent systems are currently one-on-one, and we don't talk about the screaming we do at these systems to move pixels around. Cursorily, pun unintended, the conversations, and working with agents, can be quite bizarre, right?

Benn: Yeah. I mean, everybody has their various stories about this sort of stuff, and if you play with it for a while, you start to understand what makes 'em finicky and whatnot. And they have obviously also gotten a lot better. You can give it instructions about stuff to do, and sometimes it does a terrible job: you're like, center this text, and it centers it in a weird way, and sometimes it kinda understands what you mean. And I think those will get better. If you say center this text, it will probably just get a lot better at doing what you want it to do pretty quickly. But you still are gonna have to tell it to center that text. You're still gonna have to [00:09:00] tell it, this little bit of polish isn't good enough, and the bar for that probably will get higher. Another way to frame the magic thing is: there will always be somebody who is spending more time on something than you, or just spending more time on something than somebody else. So, great, say we have all of these AI robots that can write all of our code for us. That doesn't mean somebody won't have spent ten times more making their version better, doing even more interesting things, doing things even better and nicer and more ergonomic than you did. And so what is an unreasonable amount of time, or an unreasonable amount of effort? Sure, it used to be "I have a functioning app," because it was hard to make any sort of software.
Now that bar is just higher. But I don't think that really changes the expectation: unreasonable is still more than you think it is. Okay, great, you can make functioning software in a day. Now you gotta spend a whole bunch more time on everything else, because the functioning part isn't hard anymore. Thirty years ago, a functioning piece of software was an unreasonable amount of time, 'cause that took forever. So I think it's still a relative thing, where whoever [00:10:00] spends the most time on something will almost inevitably make it better. And that will feel like magic next to everything else that people spent reasonable amounts of time on.

Hugo: Where should people be putting their time?

Benn: I mean, I don't know. It depends on what you're making. If I knew that, I'd be doing that.

Hugo: You wouldn't be talking to me.

Benn: Yeah. It certainly feels like a lot of people now are spending time on two things. They are spending time on trying to reinvent whole swaths of stuff: how do we take the way the world used to work and rewrite it totally? And then a lot of people are spending time on taking existing concepts and just blowing them up massively. An example of the first would be: ah, we want to change the way law works. We need to build basically robot lawyers that do everything; law is gonna be totally different. So they spend a bunch of time trying to figure out what a totally novel way of doing various legal stuff is, or how you do your finances, or whatever. All of that I think is probably reasonable. Maybe you find something very interesting. Maybe you really do come up [00:11:00] with something that's totally novel and cool and people are excited about it. I don't know. That's hard, right? It's hard to reinvent stuff, but presumably that's where the big winners are. I think the second one is kind of a mistake.
The second one is: say you are building a CRM or whatever. It used to be that you would slog through a million features. I come from the BI world; if you're building a BI tool, you slog through a million features. You have to build a million types of charts, and a million different ways to alert people of what's going on, and a million different ways to interact with data, and self-serve stuff and permissions, and different ways to connect to different types of data, and all that sort of stuff. It's this enormous list of features, and it takes a lot of time. I think now people are mostly saying, ah, I can build a BI tool in six months, because it doesn't take me a month to build a new feature, it takes me a day, so I would just massively blow up the surface area. And that takes a lot of time, but I think it gets unwieldy and messy and hard really fast. Even if AI can handle the context or whatever, the point is: [00:12:00] have you built a good product? Or have you just built this really expansive thing? I don't know if that's time necessarily well spent. I think time well spent is, again, trying to come up with something more novel, like, how do we reinvent some stuff? You probably will miss, but you might hit something interesting. And time well spent is making the thing really good. Again, if you want to build Figma-quality stuff, you can do it a lot faster than Figma did. But that doesn't mean you are building every feature. It means you are trying to build something really high quality, and you're spending tons and tons of cycles making it good. You're sanding down the thing a lot faster; you're still doing a lot of sanding, but you have a sander that's electric now instead of having to do it by hand. Instead of, oh, I just have a chainsaw.
I'm gonna cut a bunch of sloppy boards, and now I can make, not a chair as fast as you can, but an entire dining set. Well, you make a janky dining set. Whereas if you also have a power sander, you can maybe make a really nice chair, because you can sand way faster than people could before. That's a bit of a rough analogy, but...

Hugo: No, I actually really like it, [00:13:00] 'cause it dovetails with where my mind went in this conversation. With the ability to one-shot, three-shot, n-shot things, whether we wanna call it vibe coding or not, we're able to create software far more quickly. The creation of a janky table is an interesting analogy, because I was thinking about the technical debt that you take on by building software this way, and then what maintenance looks like after the fact. And the reason we buy SaaS products is the ability to trust and then outsource that type of stuff as well.

Benn: Yeah. And I have somewhat of a suspicion, I'm not totally convinced, that technical debt is just a non-thing now, really. My rough definition of technical debt is something that your future self is upset that you created.

Hugo: I create that every day for myself using AI-assisted coding as well.

Benn: You and everyone. However, you could go two different directions with this. One is: if your future self is Opus 5, Opus 6, ChatGPT [00:14:00] 7, Gemini 8, how much can they resolve technical debt if I say, unwind all of my messes? They may be able to do that very quickly. And so creating technical debt now may be a thing where, yeah, I don't mind it, because I will just tell the next thing that's smarter to fix the thing that the slightly dumber model made, and it'll be like, oh yeah, sure, this was pretty dumb, I'll fix it. It may just do that.
It does not seem unreasonable to me to think: fine, layer on technical debt now, because who cares? The smarter thing will have a better way to solve it. If I am a lousy engineer, which I am, I'm not even an engineer, and I try to make something, and someone who is a very good engineer shows up and looks at it, and they have infinite time, they can unwind my technical debt pretty quickly, 'cause it's probably pretty simple tech debt. You just did a bunch of messy stuff; fine, I'll just sort it out and straighten everything up. I can't tie that tight of a knot, whereas something that comes along that's way smarter probably can untangle it reasonably quickly. So I don't know that technical debt is actually that big of a problem. It could be, I don't know, but it doesn't strike [00:15:00] me as obvious that it is. The second point, though, which is the inverse of that: there's another way to look at it, which is that technical debt is a huge problem, because the next iteration of the model will want to do things entirely differently than the previous, slightly dumber model did. The other way you can look at it is: I write something, and an engineer comes along and says, I need to rewrite this. They'd be like, this is such trash, I will rewrite it from scratch. They basically use my code as a spec. And so on one hand, maybe that's really bad, because you have to rewrite things all the time. On the other hand, I'm not sure that it is, because what's a better spec than a functioning app that does everything you want, but just does it poorly? It's: make the exact same app, but rewrite it to make it better. The better and faster these things get, that's not a bad spec. Here's an app, rewrite the whole thing.
But good. Again, if you take a really competent engineer, give them something I write, and give them infinite time, they can rewrite it pixel for pixel, but better.

Hugo: Yeah, I find this so fascinating. There are several [00:16:00] things that come to mind. The first is, I'm using Agent Skills a lot at the moment, and I find it a wonderful way of sharing context with agents, especially the progressive disclosure of context. And it isn't just Claude. So that's working with context in a number of ways, and I'm doing a lot of work with agentic systems to get all my workflows into skills. There may be a model soon that makes all my skills stuff totally irrelevant, though. Right. Um, similarly, Ethan Mollick wrote something recently, and I don't quite agree with this, but I'd be interested in your take on it with respect to this conversation: that organizations who haven't really adopted AI, who haven't really got their AI strategy working, that could actually end up being a good thing, because they haven't built all the stuff which is gonna be irrelevant, like all the harnesses and that type of stuff. So they're able to start right now, or in the near future.

Benn: Yeah, so I would agree with that, actually, for the most part. I'd say three things. One is, I absolutely think that [00:17:00] is true, that people who are over-optimizing everything about how they use Claude Code are wasting their time. Not exactly, they are going faster now than they would otherwise, probably, but in six months none of that stuff will be useful. You saw this early with these things: people were trying to basically engineer all of these harnesses and these loops, things like LangChain, to be able to do this stuff, and suddenly the models just do them internally.
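The "progressive disclosure" pattern Hugo mentions can be sketched in a few lines. This is a hedged illustration, not any particular product's API: the skill names, descriptions, and helper functions below are all made up, but the shape is the point, the agent always sees the cheap index of names and descriptions, and only pulls a skill's full body into context when a task calls for it.

```python
# Sketch of progressive disclosure for agent skills: cheap metadata is
# always in context; the full skill body is loaded only on demand.

SKILLS = {
    "release-notes": {
        "description": "Draft release notes from a list of merged PRs.",
        "body": "## Steps\n1. Group PRs by area.\n2. Summarize each group.",
    },
    "sql-review": {
        "description": "Review a SQL query for common pitfalls.",
        "body": "## Checklist\n- Watch for accidental cross joins.\n- Check NULL handling.",
    },
}

def skill_index(skills):
    """Cheap context: names and one-line descriptions only."""
    return "\n".join(
        f"- {name}: {meta['description']}" for name, meta in sorted(skills.items())
    )

def load_skill(skills, name):
    """Expensive context: the full body, pulled in only when invoked."""
    return skills[name]["body"]
```

The design choice is just a two-tier cache for context: the index costs a few dozen tokens per skill, so you can carry many skills without paying for any of their bodies until one is actually used.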
And there's a bunch of, like, how do you prompt engineer everything? Actually, it doesn't really matter anymore; they're pretty good at figuring out what you mean. There's a bunch of stuff like that that the models just seem to absorb, or the immediate products around them, the Claude Codes or Cursors or whatever, just absorb. And so "I'm gonna invest a ton in my exact setup, with the exact right set of skills and the exact commands and all this stuff" is, eh, do you need to do that? Probably not. It seems like the skills will end up being just a giant dump of some things you like, and it'll figure out the rest. So I think [00:18:00] having some context of the things that you like and what your preferences are probably continues to be useful. Over-engineering exactly how it works, where you're like, I've got this complicated thing where it calls on these skills in these ways and does it exactly the way I want, nah, that'll probably just get solved. The second thing is, I have talked about this some before, I wrote a post about this, where I do think there is some question here of: is now a terrible time to start a company? And the reason for that is what you're saying, which is that you are building on today's tools. In six months those tools will be updated, and so actually, and this goes on forever, six months from now is a better time than now, and a year from now is better than six months from now, because you won't have to engineer these weird systems; the thing just does it. If you were to start a company a year and a half ago, and you built your entire product using GPT-4 and Gemini 2.5 and Opus 3.5, all of those things feel ancient now, and God knows what you would've built. There is a point at which you're moving so much faster [00:19:00] now.
How far back would you have been better off starting? How much of a head start do you need to stay ahead of someone building with the newest stuff? It's a lot, probably. And so there's a case where each person starting at a particular point can probably go faster than the people who started previously. Where do those lines intersect? So I do think there is some question of, what's the point of getting really invested in these things? That's not to say don't build a thing, but it doesn't make that much sense to invest in the cutting-edge stuff now. Just use whatever's the latest thing, don't overuse it. The new thing will come out and make you better; make sure you're able to use the new thing. Don't spend a ton of time engineering your harness that is gonna be obsolete in six months. The last thing I'd say is, I think there is, very roughly, an analogy here with Google. If you grew up in the nineties, basically when Google came out, there was a fair bit of: you need to learn how to use Google. Here's how to write queries, and search syntax, and all the various stuff that goes into it, include [00:20:00] these sites, don't include these, the kind of Boolean operator stuff. Nobody does that. Nobody knows. Nobody cares. Google just got better, and you just throw random sentences at it and it figures it out pretty well. And I think this is the same thing. There was an early thing with LLMs of, we all need to understand the depths of these technologies and how they work. It's like, no you don't. You just need to get a feel for the machine. You need to understand, kinda, what it does, and what it's weird at, and what it's good at, and you'll figure it out. And the kids who use Google today are way better at it than we ever were, because they grew up with the thing and they have a really good sense of how the machine works.
And I think that's basically what AI skills are: you just use it a bunch and you get a good sense of the machine. All of the particulars that help you take advantage of the kind of edges of it today are a waste of time in six months.

Hugo: Totally. Man, not only did I come up in the nineties, I miss the nineties, man. I miss cassettes. I miss Ninja Turtles. I miss...

Benn: Ninja Turtles was pretty good.

Hugo: ...watching season four of The Simpsons for the first time. Monorail! [00:21:00] I agree. And I am wondering, and I'm being slightly provocative here, whether an emergent part of the new skillset is the ability to re-architect. I'll link to a podcast we did with Lance Martin from LangChain, where we talked about how Manus rebuilt their harness five times last year, and how Anthropic tore out the Claude harness three times in six months. And I'm half joking, but maybe one of the highest paid tech roles will be "re-architect" at some point.

Benn: I mean, it does not seem crazy, given what I was saying before. I don't know at what level that is true. Certainly if you are Figma, you're probably not gonna re-architect all of Figma. But there certainly is a point where, for relatively skinny products, and a Claude Code type of harness certainly seems like that, especially because it's sitting on top of the core technology underneath, the LLM, which is itself changing, it's way cheaper to rebuild the whole thing. If you had built the first version of Claude Code and you're like, ah, we now have new models and stuff, [00:22:00] why would you use that at all? It's not that big of a product. Just rebuild the thing, and use the first one as essentially a spec. And I don't know how much that really holds up, but it does not seem that far away that you would say: here is Figma, re-architect all of it. Make Figma, but make it better.
And it gets you pretty far. Everybody says the vibe coding stuff is pretty good if you give it specific enough prompts, and what's a more specific prompt than that exact product? Again, I don't think it works exactly that well, but there is something roughly like that, where there is no better spec than the thing itself. Don't copy the map, copy the literal territory. They're not there exactly. But if you vibe coded a fun little app that you used six months ago, probably do not update that thing. Just tell it to do it again: show it the app and be like, make this, but make it better and faster than it was before. And it probably will do a better job there than it does at trying to update the thing as it was.

Hugo: Finally, something I've found perhaps more successful in some cases, [00:23:00] less in others, is getting Opus 4.5 or Gemini 3 to look at the current software and write a plan of how you would build it. Then simplify that plan with the LLM, and then give that plan back to Opus 4.5. So it doesn't see the original codebase, but it has essentially an extended and detailed PRD for it.

Benn: Yeah. And they're not great at some of this stuff. They obviously get a lot of things wrong now, and I'm sure they will continue to, and who knows, maybe they hit limits of how good they get. I don't know, maybe they don't actually get that much better at any of this stuff; that certainly seems possible. That is far above my pay grade. But given the pace at which they have been developing, it does not seem crazy to do what you're saying, where they get better at picking apart: here's a product, figure out how to make it. You don't have the code, but you have turned this product into a PRD type of thing. It does not seem at all implausible that they would continue to get reasonably good at doing that.
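Hugo's three-step workflow, derive a plan from the code, simplify the plan, then rebuild from the plan alone, can be sketched as a small pipeline. This is a hedged sketch: `llm` is a stand-in for whatever model client you actually use (Anthropic, Gemini, etc.), and the prompts are illustrative, not tuned. The key property the code makes explicit is that the rebuilding call sees only the simplified plan, never the original codebase.

```python
# Sketch of the plan-then-rebuild workflow: the original code is used
# only to produce a plan; the final rebuild prompt contains the plan
# but not the code.

def rebuild_from_plan(llm, codebase: str) -> str:
    # Step 1: turn the existing code into a build plan (a PRD, not code).
    plan = llm(f"Read this code and write a build plan (a PRD, not code):\n{codebase}")
    # Step 2: simplify the plan, keeping behavior, dropping incidental detail.
    simple_plan = llm(f"Simplify this plan; keep behavior, drop incidental detail:\n{plan}")
    # Step 3: the rebuilding model sees only the plan, never the original code.
    return llm(f"Implement this plan from scratch:\n{simple_plan}")

# A trivial fake client, just to show the call shape without a real API:
def fake_llm(prompt: str) -> str:
    return prompt.splitlines()[0]
```

With a real client you would swap `fake_llm` for an actual API call; the pipeline itself stays the same three calls.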
Hugo: And the other thing, there are a few things, and I won't go into all of them, but there are a few [00:24:00] pretty simple hacks you can do to make these things far better. For example, give them up-to-date documentation. If you're building with Gemini at the moment, it doesn't always choose the correct Gemini API, and one of the APIs has been deprecated recently. So give it the up-to-date documentation. If you're working with Pydantic Logfire, give it the URL to the documentation, or get it to use a web search tool to find it. This is context of sorts, but there are a few things you can do to make your job and its job a lot easier.

Benn: Yeah. And I think that is true, and I suspect it will stay true for a bit. But those are also the kinds of things where, how long will Gemini use the wrong API? Probably not that much longer. Providing it all of that context, the "do the right stuff here," that stuff starts to feel like it gets baked in, or at least that is the pattern that seems to have happened so far. I don't know. I can imagine it stalling, and I'm sure it will find new weird edges that it struggles with. But any of these things where we have [00:25:00] to build little bridges for it over the places it constantly gets tripped up, it seems to get tripped up less and less. Not to say it will never get tripped up by anything, but the places where it does seem to be things that improve with each round.
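Hugo's "give it the up-to-date docs" hack amounts to prepending fetched documentation to the prompt so the model doesn't reach for a deprecated API from its training data. A minimal sketch, with everything about the fetcher and prompt wording being illustrative (you could back `fetch` with `urllib`, `requests`, or an agent's web-search tool):

```python
# Sketch of docs-as-context: fetch current documentation and put it at
# the top of the prompt, with an instruction to use only those APIs.

def prompt_with_docs(task: str, doc_urls, fetch) -> str:
    docs = "\n\n".join(f"# Source: {url}\n{fetch(url)}" for url in doc_urls)
    return (
        "Use ONLY the APIs shown in the documentation below; "
        "older APIs may be deprecated.\n\n"
        f"{docs}\n\nTask: {task}"
    )
```

Injecting `fetch` keeps the sketch testable and lets the same function work whether docs come from a URL, a local file, or a search tool.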
Hugo: Yeah, without a doubt. I love that we've been talking about Anthropic's products, and Notion. Notion has done wonderful things with AI. Of course Anthropic has also. But I wonder your thoughts on the challenges of being a pre-AI-era product, and whether being really natively AI will become increasingly important.

Benn: Yeah, probably. The easy answer to that is: if you're a legacy thing or whatever, your temptation is to stick a chatbot on top of the thing, and that's all it does, and it works, and the "put a chatbot on it" kind of mentality is obviously not great. That's what people did two or three years ago; I think there's a lot less of that now. I think that [00:26:00] the thing that, at least to me, makes pre-AI versus native AI is a transition that actually hasn't quite happened yet, which is this: basically every app that people build, and you start to see this some, is basically a database with a UI on top. I mean, what is Notion? It's a bunch of files and a bunch of fields. Salesforce is literally a database with a UI on top. A lot of apps are basically databases with a thick UI on top that allows you to interact with that database in interesting ways. But people tend to roughly ship that model of essentially a relational database. There are files, there are folders for those files, there are relationships between different objects that are connected in certain ways, there are tags that you put on top of the files, there are statuses for each of the files, done or not or whatever. There's this very sort of relational structure that everything is defined around, and [00:27:00] AI's not really like that. It's all sort of more networky: loose connections, on-the-fly connections that get generated, like, gimme a thing and kind of co-locate what's around it. And it functions, again, much more like a human, where we don't organize things in our heads in exactly a relational way. To the extent that we do, we basically do those things on the fly. There is no folder in your head that is your friends, but if I say, who are your friends?
You think "friend," and then you think of what's near that, and it's this and this and this. I could ask you who your friends are today, and I could ask you who your friends are tomorrow, and I could ask you who your friends are next week, and you'll give different answers each time based on whatever else is in your head. That's sometimes good and sometimes bad, but there's a lot you can do with that type of thing, where there are no categories and everything is just this on-the-fly sense of connection, as opposed to everything being a giant lookup in a database. And I think software will start to look a lot more like that. You start to see it in some things, like note-taking apps or ticketing apps, where you're just throwing stuff in a [00:28:00] bucket and extracting the things you need from the bucket on the fly, when you need them. And it does a pretty good job of poking around the bucket and finding the interesting things. It doesn't really need a relational database; you don't need to put things in categories. Why have tags at all? Why have any of these? You don't need any sort of organization. It's all searchy. And so I think there will be a lot of things that start to do that kind of process, where they realize: "everything is a relational database under the hood" has some usefulness, obviously there's a lot there that's valuable, but it also puts a lot of constraints on the way you think about how to make something. True AI-native software will be software that is not necessarily built on everything as a set of relationships. Hugo: Something we're talking around is whether we're building for humans or building for AI and agents, right? We've had conversations around: text-to-SQL is good, but does it really need to know the semantic model, and these types of things.
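Benn's "throw stuff in a bucket and extract it on the fly" pattern is basically ranked retrieval instead of a schema. A toy sketch, with word-count cosine similarity standing in for the embedding search a real bucket-style app would use, and all the note text invented:

```python
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts: a crude stand-in for the
    embedding search a real throw-it-in-a-bucket app would use."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = sqrt(sum(v * v for v in wa.values())) * sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def search(bucket: list[str], query: str, k: int = 2) -> list[str]:
    """No folders, no tags, no schema: rank everything on the fly."""
    return sorted(bucket, key=lambda note: similarity(note, query), reverse=True)[:k]

bucket = [
    "call with Hugo rescheduled to Thursday",
    "groceries: eggs and coffee",
    "podcast prep notes for the Benn episode",
]
print(search(bucket, "when is the call with Hugo?", k=1)[0])
# -> call with Hugo rescheduled to Thursday
```

Nothing here was filed anywhere; the "organization" happens only at query time, which is the contrast with the tags-and-folders model.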
And something I'm hearing there is maybe that was something we needed for humans, and we need a whole new [00:29:00] paradigm of how to give this context to AI and LLMs on the other side. Earlier in this conversation you talked about maybe an engineer coming and looking at your code and being like, ugh. Perhaps we're not optimizing code for senior engineers anymore, in a lot of ways. So I suppose I'd love your thoughts on what we will increasingly be building for, in terms of who will be ingesting what we create. Benn: I have no idea. Look, I was just talking to someone else about this. I have a half-written thing about it that I haven't finished; it's an idea that I probably need to think about a little bit more. One way to think about it, to me, is that we probably increasingly create things to be read, not written. To your point, are we writing code for senior engineers? Maybe. But what seems more likely is that we write code with the intent of understanding what it does; we don't actually ever want to write it. A lot of code is designed... I'm much more familiar with the [00:30:00] data world, and I have no idea how code works, I'm not an engineer, I don't know what I'm talking about. But in the data world, SQL is a popular thing. People try to write new versions of SQL all the time, try to reinvent it, all that sort of stuff. And primarily when they make those efforts, what they're trying to do is make it easier to write. There are all of these annoying things you have to do: weird stuff where you're trying to do joins in certain ways that are annoying, or even simple, case-statementy stuff that is annoying. There's a lot of stuff in writing SQL where you have to write things in frustrating ways, and so they try to put a bunch of shorthand on top of it that makes certain things easier.
There's annoying stuff with nesting of queries and all that, and so: let's write this in an easier way, so we can shorthand the stuff we have to do over and over again, to make it easier to write. That does not feel like the thing we need. I don't care what the thing writes. I want to tell it what to do, and it'll write whatever messy code it wants to write. I don't want to read it; I don't care what it does. What I want to do is understand what it did, and then tell it to do something different if it didn't do what I wanted it to do. I don't necessarily [00:31:00] need to read the code to do that. I need to read something that's an explanation of the code. And plain English is not very good at that, because code is complicated and very information-dense; the idea of reading an English description of the SQL query is a nightmare. But I don't really want to read the SQL query either. I want something else there, something that is basically a human-readable version of the nonsense that this thing has written. I don't know what that looks like, or what that means, or how exactly to expand on it, but I think that looks a little bit more like the intent of building stuff. If AI is writing all of the code, be it SQL or Python or Rust or whatever, I don't want to read it; I don't care, so long as it works. But how do I understand it? How do I understand it so that I can help guide it, not necessarily even write the code differently, but understand what the thing does and correct it in places where it doesn't do what I want it to do? So I think there's that sort of approach, potentially, where we read a lot less of it, but we need a better perspective into what's going on.
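What Benn is asking for, an explanation of the code rather than the code itself, can be gestured at with even a toy script. This regex sketch only handles trivial SELECTs and is purely illustrative; a real tool would use a proper SQL parser (or a model):

```python
import re

def explain(sql: str) -> str:
    """A toy 'explanation of the code' generator: surface a simple
    SELECT's intent instead of making a human parse the SQL. Only
    handles trivial queries; a real tool would use a proper parser."""
    cols = re.search(r"select\s+(.*?)\s+from", sql, re.I | re.S)
    table = re.search(r"from\s+(\w+)", sql, re.I)
    where = re.search(r"where\s+(.*?)(?:\s+group\b|\s+order\b|;|$)", sql, re.I | re.S)
    parts = [f"Reads {cols.group(1).strip()} from {table.group(1)}"]
    if where:
        parts.append(f"keeping rows where {where.group(1).strip()}")
    return ", ".join(parts) + "."

print(explain("SELECT name, total FROM orders WHERE total > 100;"))
# -> Reads name, total from orders, keeping rows where total > 100.
```

The point isn't the regex; it's the direction of the interface: the query stays machine-shaped, and the human only ever sees the summary.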
And you [00:32:00] can think of this as: higher-level programming languages are basically this for lower-level code. I don't want to read machine code that's impossible to understand; show me something that makes it easier for me to understand. And you could certainly imagine that sort of thing happening in a rough way here. Hugo: I'll link to a blog post which I think you may have written, and I paraphrase: you wouldn't mind if it was even written in Pig Latin. Benn: Right. Hugo: So... Benn: Yeah. Hugo: Going back to the nineties as well. Benn: Yeah, yeah. The Klutz book, where I learned what that was. But yeah, I don't care what the thing writes. And we already do this: when you write something in Python or Rust or whatever, you don't care about the layers below it. You aren't going to look at the layers below that; you just know that the thing you wrote will turn into something that probably works below it. And the same thing sort of applies here. I don't think English is that; English does not feel quite like the right thing there. But it certainly seems possible that someone will figure out an intermediary that is English-ish, more structured English, so you can read it and you can get what it's doing, but [00:33:00] in the way you don't care what sits below the Python abstraction: what's a layer of abstraction that sits above it? Hugo: Did you catch Wes McKinney's post a couple of days ago, "From Human Ergonomics to Agent Ergonomics"? Benn: I did. He was basically like, I don't think I wanna write Python anymore. Hugo: Pretty much. And it's a wonderful, provocative post. The reason it's relevant, I mean, it's incredible.
It's incredible that someone who's worked in Python so long, and who has had many frustrations with it (you work that much with Python and you get very frustrated with Python as well)... He wrote, put another way, that human ergonomics in programming languages matters much less now, and that he's having so much more fun building software in Go, even though he hasn't personally written a line of Go in his life. And he goes as far as to say that Python isn't going anywhere as long as its moat as a language for data work and ML, AI inference and training, persists. But he says that's also partially due to inertia as well. Benn: Yeah, and [00:34:00] this stuff probably won't happen that quickly. And I think the data one is tougher, because, as we've talked about before: if you are building software, you do not need to understand what it does. The thing Wes said: he's building software in Go. He's never written Go. He's probably not even looking at the Go. It doesn't matter what the Go does; he doesn't care. It's just nonsense. I mean, I'm sure he understands it, but it is largely nonsense, and again, it could be Pig Latin, it doesn't matter. The point is he seems to be building a high-quality product, he likes what the product spits out on the other side, he can test the product, and the product works. Whatever the Go does, who cares? That doesn't work for data. You don't have any way to check it. If I tell it to make me a chart and it spits out a chart, the only way for me to know if that chart is correct is to look at what it did. The right chart and the wrong chart look exactly the same, whereas a product that works and a product that doesn't work do not work the same. I can test a product externally to the thing itself; I can't do that with a chart.
[00:35:00] And so there is probably more need for readability there. I don't know exactly what that looks like, but it seems like there will always be a limit on what you could "vibe analyze," because you still need to know what the thing did to know if the analysis makes any sense. Whereas you can vibe code an app, because you never actually need to look at the code; you can just poke the app long enough, and eventually you'll be happy with it. Hugo: I'll link to Wes's post in the show notes, and I'll also link to something I built with AI called SpicyTakes.org, which gives the TL;DR and quotable quotes from blogs. Benn Stancil's blog is there among many others, and it even uses AI to give spice ratings to a variety of posts, which is pretty cool. Benn: Yeah, I have seen these. Spicy Takes is useful for remembering things that I have said. I stand behind all of them. Hugo: I certainly hope so. Speaking of spicy takes, Benn, you recently published what I consider spicy [00:36:00] but incredibly thoughtful; you ride the Pareto front of spice and thoughtfulness wonderfully, and have for years. It's a blog post called "Why Co-work Can't Work," and of course Anthropic released Claude Cowork very recently. It's in the Wall Street Journal; it's taking off, and it's occupying the cultural consciousness in a variety of ways. Your post is called "Why Co-work Can't Work: The Future Isn't Collaborative." So Benn, my question is: if the "co" in Cowork isn't short for "collaborative," what is it short for? Benn: Uh, I mean, I'm sure that it is, and, you know, "Why Co-work Can't Work" felt like it had a rhythm to it. You gotta sometimes write the clickbait things. Hugo: No, but you did have another word. Benn: Okay. So this is another sort of, I don't know, bizarre way to imagine the world, but it seems like where we're gonna go.
My view of the Claude Cowork thing, one angle of it anyway, [00:37:00] is this: when ChatGPT first came out, everybody was like, ChatGPT has these weird-isms. It uses "delve," and it uses em dashes, and it always lists things in certain ways, and there were all these telltale signs of someone using ChatGPT. And they, AI companies broadly, have changed that some, but for the most part these things struggle not to talk in this sort of uncanny, slightly mechanical, little-bit-overwrought way. It is very hard to... Hugo: You raise a really good point, Benn. Benn: Yeah. Hugo: Sorry, that was me just being an LLM. Benn: Oh yeah, exactly. They have the same little isms, and it's really hard to beat them out of them. People have these threads of, "oh, I tried to tell it a thousand times to stop using em dashes," and it replies, "Thanks, I will no longer use em dashes," with an em dash right there in the reply. My God. My view is basically that there's a certain amount of pre-training in these things that you just cannot get out. [00:38:00] It is really hard, especially just with prompting, to beat the pre-training (and probably some of the post-training) out of these things. They will fall back on kind of their natural way of being, inevitably. And that is a particular character of writing that has a very particular smell. I am sure that is true in code too. People will periodically complain that you can always tell code written by an AI, because it has a bunch of comments, and it's overly verbose about certain things, and it does stuff in very particular ways. Again, it has these isms. However, if you don't care about the code that you write, if the point is "I'm putting code in a repository, and if it works, then everybody's thumbs-up on it," the isms don't matter. Whatever.
Write mechanically and in weird ways and with obnoxious tics; I don't care. I'm not really gonna read it, or I'm gonna read it through an AI that reads it for me and tells me what's up. So code basically works that way. There's this shared repository of facts (the code itself) that we read less and less of, but that we're all kind of interpreting. I'm putting facts in the repository, you're putting facts in the repository, we're both interpreting those facts, but we never actually see them. And because we [00:39:00] never actually see them, we don't care that much about what it does; we don't care about its tics. Cowork is kind of saying, hey, we will write your emails for you, write some text messages, things like that. It's not exactly that: it also will organize files on your computer, it'll do a bunch of stuff, it'll write a report for you on how you work. And that stuff is all fine and good, and it's why I don't actually think Cowork can't work. But the idea of producing an artifact that is not code largely means producing an artifact that I'm saying I made. Write a blog post for me, or write an email, or write a report that I will share with somebody else: I'm presenting that to other people as me. I am saying I did this, and if that's the case, those tics matter a ton. I write a blog. I am not gonna write a blog the way Claude writes, because my God, you'll be able to tell. It will be weird and awkward, and I will hate it, and everybody will be like, did you write this thing, or did Claude? And I'll be like, oh, Claude did. And everybody will be like, shame. Or it'll just be bad. And the reason for that is you cannot beat these tics out of it.
Or if you can, it takes a really long [00:40:00] time, and you certainly cannot, so far, get it to write in a voice that sounds like you. This is true for emails or anything: it's really hard to make it sound the way you sound. But you could imagine a world where every communication we have where we care about the personalization, because it's coming from us, I'm signing my name to it, I want you to think it was from me, actually goes through the same sort of intermediation that code does. I write code, I put it in a repository, you never read the code; you read your own kind of view of the code. What if that was emails? What if I write an email and it doesn't actually go to you? It gets basically dumped into a bucket of facts, and you read the facts however you wanna read them. You don't have to worry about the tics in what I write, because who cares? You're not reading it anyway. There's a version of this that's: I wrote two bullets and it turns into an email, my email gets sent to you, and it turns back into two bullets when you read it, and ha ha ha, look at how dumb AI is. Yeah, there's a joke there, and I think that's fine and well and good, but there's [00:41:00] actually a little bit of a useful thing there too, where... Hugo: Incredibly. Benn: You don't have to worry about how you say stuff. It's also Black Mirror and bad, but... Hugo: It's a very banal dystopia. It's gray mirror; it's not even black, right, this version? Benn: It's not entirely... if we never speak to each other except through these intermediations, that's pretty weird. But I think that's part of why Claude Code works: we stop speaking to each other except through the intermediation.
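The intermediated messaging Benn describes can be caricatured in a few lines: the sender only ever writes facts, and the receiver only ever reads a rendering of them. All the names and formats here are invented for illustration:

```python
# A caricature of the "bucket of facts": the sender never writes prose,
# the receiver never reads the sender's prose. Names and formats are
# invented for illustration.
bucket: list[dict] = []

def send(sender: str, facts: list[str]) -> None:
    """'Writing an email' is just appending facts: no greeting, no decorum."""
    bucket.append({"from": sender, "facts": facts})

def read(style: str = "bullets") -> str:
    """The receiver renders the same facts however they like to read them."""
    lines = []
    for msg in bucket:
        if style == "bullets":
            lines += [f"- {f} (from {msg['from']})" for f in msg["facts"]]
        else:  # a terse one-line digest
            lines.append(f"{msg['from']}: " + "; ".join(msg["facts"]))
    return "\n".join(lines)

send("benn", ["can't make the call", "reschedule to Thursday"])
print(read("digest"))
# -> benn: can't make the call; reschedule to Thursday
```

Neither side ever sees the other's "writing," so the tics of whatever drafts the message never matter; only the facts cross the wire.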
And so it is hard to imagine there not being some cases where this happens, where we largely stop speaking to each other except through these intermediations. I write facts, they get dumped in a bucket in whatever format Claude wants to write them; I don't care. You read the same facts out of the bucket; I don't care how you read them, because that was your own doing, how you interpreted them. All we are doing is exchanging these facts. Does that happen? It certainly doesn't seem crazy, and it certainly doesn't seem crazy given the way so much stuff is getting written now: we have to create so much context for agents to do things. Basically, we're all just trying to put context in a bucket. If we have agents that go off and do stuff for us, a lot of it is: I dump the context in my head into a bucket, something else picks [00:42:00] it up, smashes it together, does all these things, and pulls out whatever context it needs. It is a little bit inhuman. It is a little bit weird and non-collaborative that we don't ever actually interface with each other directly. But if you squint at it, it's what we're starting to do. Hugo: I find that so interesting. And let me give you two examples of things I use AI for. You could imagine, instead of getting an AI to draft an email for me, I chat with it a bit and say, can you just tell me the bullets that I need to include? And then it gives me the bullets and I draft it. To Benn Stancil, also: I don't view emailing you, after all these years, as a chore or something, like "oh, I gotta email Benn." I actually enjoy writing an email to you, and that's true of a lot of people I do that with. But I do think most emails have never needed to be sent or received as well.
But also, something I do is, after we record this, I have an agent skill which takes the transcript of this conversation and from it generates some candidate cold opens. So that'll be a clip of you saying something, [00:43:00] and then it will generate a few bullets that I can use in the introduction, and it gives me a little bit of the introduction, the thing I say every time, like "this is High Signal, I'm Hugo Bowne-Anderson," and so on. It does speed me up in those respects. The other data point I think is interesting with respect to this conversation: we had a prep call, right? Mm-hmm. Where we chatted about what we wanna chat about on this podcast. Previously I'd take notes (and I still do take some notes) and then I'd go away and draft topics so we have a skeleton for the conversation. Now I put the transcript into Gemini and get a summary of the types of questions we discussed on the prep call. But I don't let it decide the structure of the conversation or anything like that. I think it's very important, for me at least, to jump in and think about what conversation I want to have with Benn. Mm-hmm. And then I use what I've written down, and what it's done, to craft those questions. The other thing worth [00:44:00] mentioning here is that I really enjoy doing those things, and perhaps we shouldn't start giving up the things we actually enjoy doing to agentic systems. Benn: Well, go ahead. Hugo: Just your reflections on those data points and processes. Benn: So I think there is an optimistic way to look at this, which is kind of what you're describing: you're like, oh look, I enjoy writing those emails, I don't want an AI to write 'em. Great, sure, don't let it write those emails. There are, though, a lot of emails where that is probably not the case.
My interpretation of this may be wrong, but my interpretation is that a lot of the time those chores exist not because you don't know what to say, but because you have to say it in a way that's a little delicate. There's decoration. You have to be like, oh, I haven't talked to this person in a while, what do I say? "Oh, hello, how are you doing," whatever. You don't mean any of that. Not to say people don't mean it, but there is a certain amount of courtesy and stuff that goes into a lot of communication. If you're writing a Slack message, [00:45:00] an update you're posting to a big channel, you kind of massage it. You're like, I wanna say this the right way, I don't wanna offend anybody. There is a lot of communication that is trying to say things in a particular way not because you want to, but because you have to, because there's no other way. You can't put three bullets of facts into a message; people will be like, who is this guy? All of his emails are just three bullets of facts in brain-dumpy fashion. How rude of him not to write sentences. Execs will do this. You'll notice some execs do this because they're too important for it, and we tolerate it, because what else are we gonna do? But that's the most efficient way to communicate: brain dump, bullet, bullet, bullet, send, don't care how it's received at all. The rest of us can't do that, because there's decorum that goes into everything. Some of that stuff, again, is personal: if you're writing an email to somebody you care about, you might care about exactly how you say it. But in a lot of emails you write, the decorum is only there because that's how I think it works.
[00:46:00] If you can put three bullets into a thing, and it gets sent to me, and I can read it in a way that is different from the three bullets, I never see the three bullets you wrote; I don't know what you said. All I get are the facts, presented to me in some relatively flat way. But I understand that you didn't write those facts; this is an intermediated set of facts, so I don't read that much into it. I'm just like, whatever, okay: I didn't know these things before, and now I know them. When I Google something, I don't get upset about how Google told it to me. I'm not like, how offensive, it told me this in a rude way. No: I asked a question, it gave me three answers. You could certainly imagine that happening here, where a lot of this communication gets cleaned up and sterilized, not to totally remove the human element from it, but because the things we did to make it nice and polite, nobody cared about in the first place. Or we did, but only because that's the culture. There are certainly places you can work where it's radical transparency and honesty and all that, and this stuff doesn't exist; there are no niceties in emails, [00:47:00] and it's basically a shared thing where everybody's like, yeah, we agree to that, we understand nobody's trying to be rude, this is just how we communicate. It's very hard to impose that on people, but if you have systems in the way that intermediate everything, you start to maybe be able to have that everywhere, if that is what you want. Hugo: Totally. And it went so far that we have memes (and because we put this on YouTube, I won't use the cuss language) around "I hope this email finds you well." I once actually sent someone an email saying, could you reply just to let me know you got this?
And they sent an empty email as a reply, and I really appreciated that also. Benn: Right. And that's all you're trying to say: yes, I saw it. And there are now thumbs-up emojis or whatever. But if you get that, you have to be like, how do I respond? Do I say "yes, I got it"? Do I say "yes," exclamation point? That's a very simple example, but there's a little bit of overhead, and all I wanna say is just "yes." Don't interpret the fact that it has a period or doesn't have a period, or that it's capitalized. It's a yes; we move on. None of that matters; I'm just trying to say yes. And thumbs-up reactions are basically that: little shortcuts. [00:48:00] What if we just had a lot more of that kind of thing? I hadn't thought of it this way, but in a way it's like if we communicated through emojis. There is something that happens with the emoji reaction where a lot of the nuance gets basically blown away, because an emoji is pretty basic. And that makes it a little easier, because nobody's gonna worry about exactly what this particular thing means; you're just trying to give a fast reaction, and everybody understands that. And you kind of get that where you can do it with any message. I can give you five paragraphs of something, but it's the same effect as my emoji reactions, just in a very complicated way. Hugo: And that of course is why, when we work closely with people in organizations, we move away from email; maybe we use it for external stuff, but we move away pretty quickly. I am of the opinion that, for the most part, email is a scourge and pathology on our civilization. I will put a link to Cal Newport's book A World Without Email, which I'm a big fan of, in the show notes as well.
I am interested in your thoughts, though: [00:49:00] I hinted at this briefly earlier, but a lot of AI interactions are one-on-one currently. So I'm wondering your thoughts on the emergence of multiplayer AI: it's starting to be embedded in Slack, in pull requests and CI/CD, and all of these things, AI being on teams, being team members, I suppose. Benn: I don't know. I don't have that much... people seem to try these things, and they seem to get weird, basically. I don't have that great a sense of it. When people have tried the collaborative "put ChatGPT in a group thread" thing, what happens? It seems to be hard to get that right. Hugo: I think it's overfitting, because of its training, the instruction tuning in particular. It's so much post-trained on question-answer pairs between a user and the system that perhaps doing some post-training on multiplayer conversations may fix that, but I honestly don't know enough about it. Benn: Yeah. And [00:50:00] the multiplayer thing, I mean, on the pull-requesty stuff and things like that, I don't know. This again feels like, really, what it is underneath: underneath the AI is a giant set of facts. This thing knows a bunch of stuff, and we are just trying to surface the things from underneath it that we wanna surface. Particularly on these PR-type things, the point of having it in those conversations is that anybody can ask a question, and it will surface things in a way we can all see. Whereas if we worry that it will say different things to different people (which it kind of does), we don't have to have that set of problems. But there's another way of looking at it, to me, where one of the ways you solve that is you have multiplayer AIs.
Another way you solve it is you just have people that don't talk to each other. Again, the code example is a little bit of what this is, where everything is intermediated by this AI. The code is [00:51:00] reality: that is the real thing, that is what's running, that is reality, but nobody actually looks at it. Everybody just gets their lens into it. And in conversations it's a bit weird, because you and I can have a conversation, and the thing we're talking about is reality in a sense; this is one layer of things that are happening. And then if AI stuff pops into it, it's pulling from a source underneath us that is a different layer of reality. And so by having us talk to each other, you end up with conflict, basically. Whereas if everything were just pulled up from the source underneath, we would never have that problem; we would always just see the same thing. The problem is there can be a reality this thing sees and a reality that we see, and they're different. In some ways, one way to solve that is you don't allow us to see different things. Hugo: Yeah, I like that framing a lot, because one thing you are really talking about there is doing the work to establish shared reality. And I think when the main paradigm of information technology was broadcast media, we had these big groups that had shared reality: TV dinners, radio, [00:52:00] listening to the same shows, that type of stuff, three cable networks, whatever it was. The internet fractured that in a variety of ways. And now, with a lot of these one-on-one systems, how do we as humans establish shared reality? And I am interested in your "Why Co-work Can't Work" post; this is related. I'm just gonna quote you to yourself. You wrote: a new repository of knowledge is starting to emerge underneath us.
Dozens of tools are absorbing all the things we say to each other and presenting it back to us in a chat bot or search bar. It's a second world, a map to the territory that lives in Google Drive and Slack and Outlook. How will we maintain both? If we are doing our work by asking what's on the map, or having robots do this, why wouldn't we just update the map directly? Why wouldn't the map become the territory? And you hinted at this earlier, but I suppose I'm wondering what this new repository of knowledge looks like, and how we interact with the map and the territory. Benn: Yeah, this is basically what I was just saying, though that was probably better said than I'm saying it on the fly. Take the way a lot of things [00:53:00] work. Take Notion AI; you mentioned the Notion thing. The way that works, essentially: we have a message in Slack, we're trying to figure something out, we're trying to schedule this call. We talk back and forth in Slack, and we figure out what time the call is, and we say, oh great, it's gonna be Wednesday or Thursday or whatever day it is where you are, too (I don't know how time zones work). We have a time; this is the time the thing is. And now that is an artifact of the conversation we had. And then somewhere along the line, Notion AI comes and reads that Slack message and sucks it into its giant repository of whatever Notion AI is doing. And theoretically someone could be like, "hey Notion AI, when are Benn and Hugo having their conversation?" and it would say, "well, I interpreted it from this message: they're having their conversation then." And so there is a reality, in the sense that we agreed to something in Slack, and then there is this kind of inferred reality underneath it, which is the giant database of understanding (database in a loose sense), a giant repository of information.
That Notion AI has absorbed from all the things that are happening in the physical reality, [00:54:00] of sorts, in which we're having this conversation. Okay? However, if we increasingly are just asking "when are we having this conversation" of that sort of AI-generated reality underneath us, why are we bothering to have it in the first place up here? Why not just say, hey, and this is where the conversation part doesn't quite work, because we had to figure it out. But say I wanted to tell you something. Say I wanted to say, hey, I can't make this call, can we reschedule to Thursday? If I wanted to tell you that, we could have that conversation, and then Notion can absorb it, and now it maybe understands that. And if you ask when's the call, you have to go to Notion, which has reinterpreted the thing that we said, can we reschedule to Thursday and all that, and is pulling that fact back up. But why don't I just update that brain underneath us directly? Why don't I just say, hey, giant Notion brain, reschedule the call to Thursday, and never tell you that? Because that sort of creates this weird conflict where, does that brain know something different than what is happening? Like, did it interpret? I don't know. And so you sort of could start to imagine a way where it's like, actually, if all of us are just getting information from this brain underneath us, which [00:55:00] seems to be more and more what we're doing, why are we bothering to update things in a reality above it and then trying to have the brain create a map of that reality? Why not just say, ah, what the brain says is reality. That's the map. There's nothing else. And again, this is how code works. It doesn't matter what I tell you the code is. What the code is, that's reality. Hugo: Mm-hmm. Benn: That's reality, nothing else. The conversations we have above it are not reality. The code itself is reality.
And there's like another set of generalized facts that you could see the same way, where it's like, I'm just trying to update the reality of generalized facts. They are real 'cause that's what I say, and let the AI interpret 'em, rather than them being this exhaust that trickles down from the conversation layers that we have about it. Hugo: We're gonna have to wrap up in a second, sadly. But we talked about what product building looks like now, what it may look like in the future, the viability of building products and building companies. And we had Tim O'Reilly on the podcast a while ago; the episode's called The End of [00:56:00] Programming as We Know It. And his framing is that the explosion of what's possible with software is so much more interesting than so many things we've done so far. And something he mentioned to me, and I don't know whether he'd say the same now, is that he felt OpenAI and Anthropic were the AOL of the AI world, and we're yet to see the Googles and social media networks emerge. And I'm wondering your thoughts on how to think about how to create value in the future and what these companies could look like. Benn: I mean, if I knew that, I guess I'd be doing it. It seems like, you know, if you're the person who figures out how to create the Google or Facebook of, whatever, 2040, maybe you should probably be doing that and buying yourself an island in 2045. It doesn't seem wrong to be like, OpenAI could be MySpace or AOL a little bit. The way that I've put this before is, I don't know to what extent a [00:57:00] foundational model is much of a moat, like, when you're paying infinite money for the people who are developing 'em. People largely do not seem to have edges in those things long. It seems to be like, yeah, it was Google once and now it's OpenAI and now it's whatever.
And like, you spend a ton of money developing a foundational model that is the best one for four months and now it's obsolete, and you have to basically keep spending enormous amounts of money to do it. So it feels like you basically then end up trying to create moats around other things. And you see that with OpenAI and Anthropic. Like, I think fundamentally OpenAI's moat is ChatGPT was the first one to do it, and so they're really popular, so people habitually type ChatGPT into their browsers. Does that last? I don't know. Maybe. It doesn't seem crazy to think it doesn't. And you can see that a little bit in the stuff they're doing with ads, which is clearly like a, we need to make money thing. I mean, obviously they would say this, but the Google people, I think today or yesterday, said something like, well, good for them, let them try ads, we're just gonna keep making the product better. And their implication [00:58:00] was, I guess they needed the money. And, you know, they have made a lot of promises about how much money they're gonna make. Hugo: Google's been there as well, right? Benn: Yeah, yeah. I certainly could see there being, you have to find a way to be durable as a foundational model, and that probably isn't just "we have the best model, we are better than everybody else." That probably isn't it. Anthropic has a slightly different angle: if we have the best model for creating itself, maybe we have a compounding advantage if the model can write itself. Not in an AGI way, but just, we can develop code faster than anybody else because we have tools to do it. That seems, I don't know, possible, certainly. But yeah, OpenAI doesn't seem like it's in the best of spots right now. There's a lot of stickiness to having a really popular consumer app that people have a habit of using, so it doesn't seem like it's in a terrible spot by any means. And obviously it's done fine.
But yeah, I'm a little bit like, these enormous, seemingly destined-to-be-winners companies, there are ways they can decay pretty quickly, and it does seem like "we [00:59:00] have the best model" is a pretty quick path to that. Doesn't mean it's inevitable, but yeah, it seems like there has to be another business behind it other than, we spent enormous amounts of money making the best model when somebody else will come along in six months and do the same thing. Hugo: Yeah. And it's super early days as well, right? Benn: Yeah. Hugo: Thanks so much for a wonderful conversation, Benn. It's always fun to chat. Benn: Sure. Um, yeah, thanks for having me. Hugo: For people who want to check you out, I encourage everyone to subscribe to benn.substack.com, B-E-N-N. Any other places people can connect? Benn: There is benn.website, which has links to the various other Benn-dot properties in the, I don't know, Benn-dot conglomerate. I dunno, it's like we've got links to various social medias and contact information, if you are so inclined. Hugo: Fantastic. We'll put all those links in the show notes as well, and I'll link to the blog posts that you wrote that we discussed from your Substack, but I do encourage everyone to check them out. Thanks once again, Benn. Super fun. Benn: For sure. Thanks for having me. Hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this [01:00:00] episode, don't forget to sign up for our newsletter, follow us on YouTube and share the podcast with your friends and colleagues. Like and subscribe on YouTube, and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.