ai-isnt-breaking-pm-teams-overload-is-explained-stanford-phd-cpo-jen-wang === [00:00:00] All right, Jen, welcome to the show. Good to have you on. Hi Jen. Nice to be here. This is a little bit different than we normally do. I don't think everyone knows this. We typically have a whole outline, research, and all these things, and we're kind of just doing this one live because, well, that's what AI has done to all of us, right? We're all having to figure it out on the fly. For a quick background, I think you're a great guest for this: you got your PhD at Stanford, joined the product team at ThredUp and moved your way up to VP of Product and Growth. Now you're CPO and head of GTM for the digital side over at Framework, and you have a great behavioral sciences background. So we can address both technically what is going on with AI, but also the people side of it. I hear about burnout. I hear about drinking from the fire hose. So give us a quick background on how you came to be where you are, 'cause I think it's really interesting. Yeah, I mean, I think you've covered a lot of the highlights, but I have a very interdisciplinary background. I kind of have a background in [00:01:00] biology. I worked on environments and natural resources at the World Bank. I've always been really interested in systemic change and how we can have positive impact through systems change. And then, yeah, I got my PhD at Stanford. I originally had applied to do some really heavy computational modeling and then took those interests and applied them to human decision making through judgment and decision making, and also organizational science. It was a very interdisciplinary degree, but I'm kind of obsessed with how people think and make sense of things. Turns out it was actually a really perfect background for applying to product, and so I ended up joining ThredUp, which is one of the largest marketplaces for secondhand clothing.
Obviously that thread about sustainability and climate change and systemic change was very much a part of that story. With ThredUp, we saw a lot of change, kind of went through the IPO with them and built up a bunch of functions. Now I'm at Framework, which is a company that sells modular consumer electronics, so desktops and laptops, and we're going to expand our scope [00:02:00] over time, but really with the mission of trying to remake consumer electronics to be friendly both to people and to the planet. I think one of the things that has really attracted me to both of these companies is that in order for those businesses to exist, their very existence actually challenges the dominant paradigm of consumption and product. So with ThredUp, in order for that company to go public, as well as a number of other players in the secondhand space, there was this education that had to happen over many years of teaching people that this was a viable sector, that the way to measure and think about it is actually very different from how we would measure traditional retail, for example, and that consumers wanted it. I think we see something similar with Framework, where there's a dominant paradigm of planned obsolescence with electronics. I won't even get into the environmental impact of that, but it also very much fits into this right to repair movement that we have seen grow over time. So I think there have been a bunch of themes across my journey so far, but I'm really happy to be here [00:03:00] today. I think there's obviously a lot to dig into. That was a lot of what we were gonna talk about on the whole planned outline — we had like an eight page outline — and instead you came to us and said, hey, we're actually thinking a lot about how to rethink how we do product in the world of AI and what this means to us. And I want to be careful, 'cause anything I answer now is going to be really, really different very shortly.
And I said, great, let's just talk about that. That sounds fun. And here we are just shortly after that. Let's jump into it. What happened? What's going on over at Framework? How do you guys think about this? 'Cause I think this is something — I've talked to hundreds of product leaders, and everyone feels like they're behind. It's the funniest thing. This is the first time I've ever seen this kind of thing where everyone feels like they're behind, but it's not "I need a mobile strategy" or "I need a cloud strategy." This is a more emotional feeling of "I feel like I'm behind" than I think I've ever seen before with these kinds of things. Yeah, I'm excited to talk about it. I think maybe one caveat to say at the beginning, which I think a lot of people can identify with, is, one, anything we talk about now might be outdated by the time this comes out. By the time we're [00:04:00] done with this recording in an hour. Yeah. And then number two, I will try to bring in as much of a human perspective as I can, from everything I've spent time thinking about around human behavior and cognition and things like that. But I think the industry as a whole, both individuals and the collective, is engaged in this very intense period of sensemaking right now, where we're just trying to make sense of what's happening. And the rate of information — I'm sure everybody can identify with this — the rate of information, how fast it's coming, how much is changing, the pace of change, is faster than anything we've ever seen. And I think the capability for people to absorb this much information and change is really a challenge. In the same way, I think when COVID happened, one of the conversations I often had with friends and those around me was, you know, there's always actually quite a large degree of uncertainty in the world.
You know, we are walking around in the world, and we actually don't know what's gonna happen in any [00:05:00] given moment, but it's actually quite unnatural for humans to be faced with that uncertainty very explicitly, in their face, all the time. And I think that's one of many reasons COVID was extremely challenging over the time period that it was. We're kind of facing another real cognitive and emotional challenge with AI just because it is so intense, and as individuals walking around in the world, it is very hard to zoom out and see the bigger picture. And I think all of us are trying to make sense of that right now. There's a guy, aji Dewe, who speaks at a lot of our dinner events across the country with us, and he has an interesting framework around this idea of the three speed problem. Historically, product teams could move pretty quickly; the bottleneck was always engineering teams moving slowly, 'cause it took time to code, it took time to build this stuff. So that was your most expensive part. And then go to market was always kind of the joke, which I identify with, leading marketing here at LogRocket — we were always waiting on engineering to launch the things. And now in a world where that middle [00:06:00] part has sped up so fast, even at companies that aren't Anthropic — and then you get Anthropic — you run into the problem that go to market is capped by people in the end being able to adopt your products. It's a human element versus a building element, which is kind of uncapped — you can just throw agent swarms at it and things like that, as long as you're building the right thing. I know at times I'm just like, I don't know what to do. I can't keep up. There are like five things I need to know and I don't know any of 'em, 'cause they all happened yesterday. So how are you guys looking at this? What have you found this means? Or what was that step-back moment?
You know, I should start by saying, I feel like over the years, especially in tech, but also previously in the different industries that I've hopped around, I've obviously seen different ways of leading through change, and I've learned a lot from others. I think one of the most influential frameworks for me has probably been from Ronald Heifetz. I talk about his book a lot, Leadership Without Easy Answers. One of the concepts that he talks about is, when you're trying to get people to absorb change or move through change, there is this optimal level of [00:07:00] stimulation that people can take. If you are under that, people are understimulated, and you don't get the type of growth or outcome that you might necessarily want. But also, if they're above that zone, you very easily walk into overwhelm. Mm-hmm. And that's not an ideal situation either, because — I'm sure many of us can identify with it — at that stage it's very hard to focus, or know how to prioritize, or really keep that conviction of what you're doing and stay focused on it. So I've been using that idea and that visual actually quite a lot, because I think as all of this information comes in, it's good to be grounded in some sort of model that is realistic about what is actually possible. As humans, we can have all these tools to help us absorb information, but there's also, in leading a team, being aware of where I am personally. I can tell when I go above that, and I'm sure you can and others can [00:08:00] as well. But I think that's been interesting. And then I think the other implication of that is, at Framework we obviously have a lot of different teams. We have hardware, supply chain, logistics, finance, customer operations, digital, firmware, everything in between.
And it's been interesting to observe how different parts of the organization adapt at different rates. That has a lot to do, of course, with how they're operating right now. But also, one thing I've been thinking about a lot recently is the culture that you started with, both across the company and across functions. The more comfortable your team culture is with iteration and change, the better and easier it has been to adapt to the changes that we're seeing in AI. So there are a lot of little tactics that we use on our team to engender this culture of continuous improvement. Like, we'll ask for feedback during meetings to help improve them, and we do it on a very, very regular basis, actually more than I've seen in many places. Part of why we did this in the past is, oh, we wanna build this culture of psychological [00:09:00] safety — we want people to be able to bring new ideas, and anybody can speak up. But inadvertently, I think in this age where change might be coming at a pace of every month rather than every year, I have been so grateful and feel really lucky that we had planted these seeds, because it made change management way easier now. And even then, I think we're still constantly pressing up against that zone of absorption. The peak of the zone of absorption — it's tough to know where it is. I'm on LinkedIn probably more than I should be, but you see everyone there, like, "I got rid of all my SDRs, salespeople, marketers, and product team, and I just have agents now." And I've talked to all these companies and it's just patently not true, but it gets clicks on LinkedIn. And the other side is it causes this spike in stress or cortisol for people who are reading that, going, "I'm not there. Oh no." And like you said, you can only absorb so much, and it puts people into that above-the-peak zone of absorption.
So I guess to double click on what you were just saying, what does this actually look like over at Framework, and how have you moved forward through this? I know one thing [00:10:00] you said is that you as a leader have worked to take a couple days where you can work as an IC and go deep and understand it, so you can also lead through it. But what have you done, and then what has it enabled the rest of the team to do as well? Well, I think there are two things that come to mind for me. One, more broadly taking a step back, is this principle I often come back to, mainly from sociology: meaning is socially constructed. There's the actual technology, but how we use it, what it means, what competence and success look like — that is really socially constructed. And I remember in grad school I got very, very good advice from David Kelley, who was the founder of IDEO. I went to him with a question, I think about, should I take this job or that job, and how would that play out? And I remember him saying to me, "Jen, I don't know. I'm not in that industry. But if you don't know the answer yet, it's because you haven't talked to enough people and you haven't wrapped your arms around the social space that is constructing the meaning for that industry." I'm thinking about that a lot right now, because [00:11:00] the second idea that I often think about is, well, what is the job of a leader, and what's the job of both managers and leaders? Somewhat distinct terms, but a lot of overlap. What's our job? And I think one of our big jobs is probabilistic forecasting, right? Trying to think ahead and see what different paths things could take, and then, probabilistically, where do you want to make your bets? The more experience you have, or the more that you've seen, or the more that you learn from other people,
the better your calibration for what you think is a likely outcome, and you can allocate your resources that way. So in this current day, there are two things that I've been trying to spend a lot of time on, although I will admit it's very difficult sometimes. One is getting really hands-on with the actual tools — without this ultimate goal of, oh, I need to build the ultimate stack, or that kind of thing, but actually just trying to get a sense of where things are moving, how fast they're moving, and where things are changing. So I've been working a lot with our engineering manager, and one question that we come back to every single week [00:12:00] when we chat is — I kind of ask him to calibrate for me — how do you feel we are, and the team is, on experimentation versus narrowing? Mm-hmm. At what point should we narrow in on a tool set? At what point is the cost of experimentation actually costing us much more, because we're not unified in how we do things? And I think as a smaller company we have the advantage that we can go a lot faster, and we can also experiment a lot more liberally. So we're trying to take advantage of that structural advantage that we have, but we also know that there are costs to not narrowing in. When you say tool set, are you saying, like, everyone on Claude versus ChatGPT or Gemini? Or do you mean specific tools? How do you mean that? I mean, starting with our engineering team, we've been trying to push on AI for quite a while now. Yeah, and I think when we first made more dedicated investments at the beginning of last year, the mandate actually was: I want everybody to experiment with whatever tools. My job is to get you guys whatever budget you want, and then I want you to experiment as much as possible. And then we have [00:13:00] these learning forums where we're constantly sharing best practices.
I've been trying to work really closely with our engineering teams to understand from them what's working and what's not working. So I think at the beginning of this year — we did a survey at the end of last year — the number of tools that we were experimenting with was over a dozen across our little team. And it was interesting this year to watch that there had been a convergence, right? Last year there was a lot of switching between Cursor and Claude Code, and as different models came out, people were also getting more sophisticated about token usage and all of that. This year we're now starting to try to converge towards a way of working, because I think things are starting to narrow, but also we have enough practice switching that I feel confident our team can actually converge on something, and then if we need to switch, we have these checkpoints where we can talk about it and make a decision as a team. Whereas before, I think it was much more individual, from the ground up. I think it was Nan Yu, who leads product over at Linear, who [00:14:00] talked about the idea that there are harnesses for models and there are models, and in the future we just need to start looking at some of these things knowing some of 'em are just gonna have to hot swap. Some of 'em may be products, some of 'em are infrastructure, basically, and how do we tell the difference between these things? I think one thing that we've been trying to really do is empower our own ICs to propose and suggest where we want to go. I mean, there's a lot of chatter in our AI chat about the latest thing that has come up, et cetera. One thing I was actually talking about with our engineering manager this week was, you know, traditionally we think of engineering resourcing as being, let's say, 80% product, 20% tech debt. And one of the things I was talking to him about is,
should we actually start to think about a model where it's 20% tech debt, 10 to 20% process debt, and then 60% actual product work — which, if we get our process right and our tooling right, that 60% could give you 80 to 100% of the output anyway. And what do we need to think about to help people develop the processes to do that? I think that's the [00:15:00] engineering side, which I would say is the most advanced in terms of using these tools. And then on the product side and design, one of the things I've been trying to emphasize is that I'm kind of agnostic about which tools and which stack our PMs and designers use. Again, I think it's changing very fast, and also the nature of different product teams actually necessitates different tooling. We use Gemini internally, so that is obviously very helpful for linking to our docs — we use Google Drive as our internal stack, so that's been really helpful. And obviously we have a lot of concerns around privacy and safety and that kind of thing internally, but I have also encouraged our PM and design team to go out, experiment with things that are not proprietary to our company, and bring those learnings back. The thing that has come out from a lot of that experimentation, which I think a lot of product leaders can identify with, is that it reminds me a little bit of when A/B testing tooling became really available, about 15 years ago, in the era of Optimizely. Around that time, you [00:16:00] started to see this real popularity in A/B testing as being the means to measure outcomes. What I observed is that there was this huge movement where it's like, okay, now we have the tool, and then everything looked like an A/B test and we tested everything. I'm laughing 'cause I very much remember this. And then I actually feel like there was then a backlash, right?
Where people were like, oh, well, you really need to develop product intuition and make decisions based on that product intuition. That became this very large trend, and I think it was a pushback against how much people had started to rely on A/B testing as a way of doing product, rather than using it as a tool to verify their hypotheses about the product they were actually trying to build. The parallel for me is that where A/B testing enabled a lot of really fast experimentation and measurement, AI takes that principle and makes it crazy — you can now do end-to-end prototyping really fast. You know, I can generate a dozen designs in however many minutes, [00:17:00] and the volume of prototyping and experimentation is great. I'm super grateful that we have this tooling. But it actually makes the core skills around product even more important. Mm-hmm. Which is really understanding what your user needs are. And I know that's talked about a lot, but when I talk about that, it's really: what are you actually trying to move, and is what you're measuring actually measuring the right thing? And if you get a result back that is not indicative, or is negative compared to what you thought it could be — was it that you were measuring the right thing? Was your test correct, or able to measure the right thing, or were you even testing in the right area? Right. And this is something I've been thinking about for a long time. And I think that with AI, this actually becomes exponentially more difficult. And so we talk a lot on our team about problem identification. Do you have the right problem at hand? Is your method for tackling it actually getting at the problem? And I think that is becoming [00:18:00] harder because there's a lot more potential distraction. I've heard, like, in the world of infinite code, where code is cheaper or pretty much free — sure,
you can build anything. But like Jeff Goldblum said in Jurassic Park, you were so busy worrying about whether you could, you didn't think about whether you should. But also there's a trust issue. You can very quickly erode trust with the wrong thing if you're not careful. Yeah, I think it's been interesting, 'cause prior to this latest generation of gen AI, a lot of what we spent time talking about on our team was: how do you actually prototype really quickly? Mm-hmm. How do you build a set of tools where you can do low fidelity prototyping and prototype like 30 different ideas, versus prototyping in production, or testing in production a hypothesis that was underbaked? Now that's not the problem. Now you can do low fidelity testing very easily, but the question is, what are the needs that you're actually trying to address? And I think this is obviously compounded by the fact that AI technologies are rapidly advancing. So there's a lot of talk out there about how, [00:19:00] to build robust product experiences, you're often actually trying to invent or come up with products for which the technology does not exist yet. We have experienced this directly, actually, coming into 2026. We had a roadmap for the year, and then came the advent of some of the latest model advances. For me especially, the moment where I thought, oh, we need to go back to the drawing board, was really seeing how long these models could independently run on their own, and the projections for where that was going to go this year. So I think it was February, like two months ago, I went to the team and I said, I know we did all this work on the roadmap, but I think we're gonna have to actually scrap that, start over, and reimagine how we want to spend our time in the second half of the year.
And we actually need to change the way we think about building to just assume that the technology will get there. What are the core needs that will still remain after the technology is there? So for us, on the hardware side, no matter what [00:20:00] happens, people still need to figure out how their hardware works in person. Mm-hmm. It's a physical object. Our big mission and our whole value proposition is that it's modular, right? You can own it. You can fix it yourself. You can personalize and customize it as you want. You can swap things out as new memory comes out — if you can get your hands on memory, you can swap that out as well. And so there are still some very core customer needs in that journey that we will have to meet. And I'm excited — I think with the latest model capabilities and what I anticipate will come out later this year, we're gonna be able to offer some product experiences that were just not possible before. I'm reminded of, you know, the Mike Tyson quote: everyone's got a plan until they get punched in the face. It's kind of like everyone's got a roadmap until AI punches them in the face. At some level, you can build and go just ridiculously fast right now, but I think we agree there's some level of directionality you wanna adhere to. At the same time, if you move too slow, with too much planning, you're just never getting anything done; you're gonna move slower than the people who are winning. How do you balance speed and planning? You have to plan [00:21:00] somewhat. You have to know where you're going, but it's probably not the way we used to. How do you balance your product team having an idea of where you're going — a roadmap, data collection, knowing the right things to do — with this speed of execution that I think is way outta whack right now, that we need to get back under control while understanding the new reality quickly? Yeah, I mean, I think it's hard, you know. I'll start by saying that. Yes. Yeah.
With all the clear caveats that this is the best answer we can all manage right now, and it might be different in a week when Claude releases some magic new thing. The way that we've been trying to think about it is that, in many ways, AI capabilities make product building harder to defend — you know, people will talk about this: it's the easiest time to start a company and the hardest time to scale it. So defensibility, I think, becomes one of the hardest things in this world, because the technology is changing. If you're not a frontier lab, you're building on top of models, and you don't know what their capabilities will look like in three to six months. So we've been spending a lot of time thinking about: where is our moat? Mm-hmm. Right. And [00:22:00] as AI models develop, where will our moat still remain? So I think one thing that's probably universally true about most companies is that when we think about the type of things AI is really good at, models are obviously very good at working with explicit rules and context. So anything that you can document — I know I'm preaching to the choir here — anything that you can document does not create a defensible moat. It's available to other people as well. But any data that you have internally, or any insights that are implicit to your organization or the way that you work — that is potentially defensible. And then if you can productize something around that, that is something that is not just defensible but actually becomes a value add for your customers. And so we've been thinking long and hard about, okay, we obviously have our hardware that we can offer, but on top of that, what do we know uniquely, and what experiences do we have as a company that previously we could not productize [00:23:00] in a software sense, because it was too expensive to do so or the quality would not be good enough?
There's a lot now that was not possible before, and I'm excited, 'cause it means that from a software and digital perspective, we can add a lot more value to the holistic experience of buying a modular laptop than was possible before. It gets back to something we started to talk about. The problem we've run into here is that so many historical tech problems were just tech problems, right? Like, okay, your board wants you to have a mobile plan. Your CEO wants to know how we take advantage of cloud for cost reduction. But these were not inherently soul-crushing questions. These were, let's just think through a mobile app. This world of AI is different. It's good, 'cause I think the right focus is how do we build better products, how do we solve people's problems better this way. But we have the human level of people looking at it and going, I am so far behind. I don't understand what they're talking about. I don't have Claude skills built. I don't have agents running all my emails for me and replacing 10 people on my team. [00:24:00] I'm a failure. And that kind of behavioral change is really, really hard and really, really scary. And that's the difference here. Do you have tips people can look at, of, how can I tackle this? How have you guys tackled those problems of solving some of this and moving forward in a practical way — let's build stuff, let's do stuff, let's take advantage, not be scared, kind of thing? Yeah. Two things come to mind, and I'm always curious how other people are doing this as well. But I think the first one is that for me, when I first started, I had made the assumption that, oh, I need to understand all of the technical underpinnings of how this works. I pulled out my linear algebra background and I went really deep into matrix transformations and things like that.
I mean, it has been useful to have some basic understanding, but I would say probably the most important thing is just to get started and play around with the tools. I think the way that I'm thinking about gen AI, in the most optimistic lens, is that it really could have the potential to be transformational for — I'm very passionate about this subject — especially how adults learn and [00:25:00] develop. Mm-hmm. So we basically know that, in general, past any formal education, the main pathways through which adult development and learning happen tend to be through close, intimate relationships or through the workplace. But as many of us can speak to, there are aspects of those mechanisms that are very effective and aspects that are not as effective. And I think what's really interesting about gen AI is that it opens up the possibility of learning in a way that has never been possible. The internet also was a version of this. I remember being in college and just being mind blown by how much I could learn on my own, with no payment, as a student. Gen AI is like that, but at a crazy exponential rate. So in my most optimistic view of AI, I'm really excited about this possibility, and I hope that people take that view and just use it to learn. There's no better time to take a beginner's mindset, and I really don't think anybody needs any background to get started, because you actually have a tutor at your hands, be it through Claude or [00:26:00] something else, that can teach you and personalize that education to you. So that's, I think, the optimistic view. And something I would encourage everybody: don't worry about the credentialing. Get started, and very soon you'll be in the mix of things. It's very normal, I think, to feel overwhelmed.
And then on the other side, the thing that's obviously very top of mind is that this technology is very powerful, it's moving very fast, and it has tons of implications for safety, for how people engage with each other, and for how it's going to change the fabric of our society. There's been a lot talked about on the economic front, of course, and my focus has really been thinking about how it changes human behavior and how we should think about these implications. And on that side, I would really urge people: there are obviously always transformations happening, but now is really a time where we have agency and a voice to try to push the direction of this change to be more positive. So one of the things that I've been really heartened by is [00:27:00] connecting with, and reading about, and trying to connect others who are really concerned about AI safety. And the last thing I'll maybe say on this: I'm Canadian, I grew up in Canada, and I was recently catching up with friends, and one of them had a ring on his pinky. He's trained as an engineer, and he was telling me the history of it. If you graduate as an engineer in Canada — and I think this is now also in the US — there is this long-standing tradition of giving this ring, the engineer's ring, the iron ring. You can read about it more, but it's a reminder and a physical symbol of the professional responsibilities of that profession and what it means to actually sign off on a design. It kind of reminds me of the Hippocratic Oath in medicine, and I'm just throwing this out there to the world — I'd be curious to hear what other people say — but increasingly, I really wish there was a sense of that kind of professionalism, and of the ethics of what it means to be [00:28:00] somebody working in software and product.
The products that we put out into the world have real impact. They really change how people interact, not only with the objects they have, but also with each other. And my personal urge is: we have the ability to influence that, and I hope that we take up that mantle in a thoughtful way, especially in this current time. I think that is a great way to end. I appreciate you coming on, chatting through it with us, and just being honest about where everything's at and how you guys are thinking through it. Yeah, it'd be great to have you on again, maybe down the road, to see how things have progressed, and we can talk more about it. Thanks for having me. And you know, I actually realized as we get to the end, I didn't even talk about how memory is impacting us on the hardware side. I know that your audience is mainly software, but that's probably been one of my major stressors at the company. But that's fair. So we're feeling it from all angles, and I've been really appreciative of people out in the world who have been very real and candid about what it's like to go through this. So we're all going through it together. This is fantastic. Can't wait to talk again. [00:29:00] Hope you have a good rest of the day. Thank you so much. Thanks, Jen.