AMOS: So, Mikal, we’re not that organised. CHRIS: This is very standard for us. This is basically how this whole thing goes. MIKAL: Okay. So it’s already started, yes? ANNA: Yes, it’s started. AMOS: Oh, it’s definitely started. Now how much of this will make it into the show is uh… CHRIS: He’s already picked up on one of our classic bits - ‘Is this the show? Are we gonna edit this out?’ AMOS: ‘Are we on yet?’... So should we start out with you teaching the world how to pronounce your name? I mean, us Americans, we all think that it’s just Michael, so… MIKAL: I’m generally fine with Michael, or whatever your local version of that name is, ‘cause every language seems to have one. But the original would be [Mikal]. AMOS: Mikal. MIKAL: Yeah. AMOS: Well, I’m glad that I got it at least partially right. Sixtieth time is the charm. CHRIS: How is the conference - first of all, before we get to that, thank you for joining us and, like, recording with us. MIKAL: Thank you for inviting me! AMOS: Thanks for coming. I already wonder when you sleep. It seems like no matter what time of day a Twitter message goes out, you respond to it. You and Sasha. I don’t think Sasha ever sleeps either. MIKAL: Yeah, that’s possible. ANNA: Yeah, your timing was great yesterday, Amos. You Slacked me saying I needed to meet Mikal, and I was like ‘oh, he’s standing right next to me, I just did’. But the conference is good. It’s going really well. The talks have been really good. CHRIS: Anything that has stood out to you, or that has caught your eye so far? MIKAL: So I attended a talk about OpenCensus - it’s this new approach to tracing across multiple services that you connect. It’s very interesting as a way to integrate that more deeply into the language. CHRIS: Yeah, I’m really excited to see some more coming out about that. I know Ben Marx has been really interested in that, so we’ve been talking about doing more of that at Bleacher Report. So, I’m excited to hear that there’s a talk out there that I can watch soon. AMOS: How quickly are these normally out? I know some conferences are like ‘day of’, but… MIKAL: I think they’re usually pretty quick about publishing them. ANNA: I think so, usually. Usually pretty quick. AMOS: And you spoke today, Mikal, right? MIKAL: Yes, I spoke today. AMOS: What did you talk about today? Config? MIKAL: No, that’s still ahead of us, I assume. Uh, I talked about optimising… It was basically lessons learned from things I did last year, building the Jason library and also contributing some optimizations to Elixir and to Erlang and things like that. AMOS: So, if there’s one takeaway you could give somebody over audio about optimizing for the BEAM, what would you say? MIKAL: Well, I would say, measure first. That’s, like, the thing - optimizing blindly is very hard and you usually get it wrong. And this was also part of my talk. I also talked a bit about tools you can use for monitoring, and then for measuring and benchmarking performance and things like that. I think it’s very important, before you start doing anything, to recognise where your problems are and set some goals for what you want to achieve. CHRIS: You kinda have to set your hypothesis, right? You kinda have to call your shots a little bit. Or else, you’re just flailing around blindly, trying to find what the problem is. MIKAL: Yeah, exactly. ANNA: Are there any particular tools for the folks listening that you would recommend that you were talking about?
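Mikal’s ‘measure first’ advice usually starts with a benchmark. He doesn’t name a benchmarking tool at this point in the conversation, but a common community choice is Benchee; assuming it were added as a dependency, a micro-benchmark comparing two candidate implementations might look like this sketch.

```elixir
# A sketch of "measure first" using the community Benchee library
# (assumed as a dependency; not a tool named in the episode).
list = Enum.to_list(1..10_000)

Benchee.run(%{
  "Enum.map then Enum.join" => fn ->
    list |> Enum.map(&Integer.to_string/1) |> Enum.join(",")
  end,
  "Enum.map_join" => fn ->
    Enum.map_join(list, ",", &Integer.to_string/1)
  end
})
```

Running something like this before touching any code is the ‘set some goals’ step: you know the baseline, and you know whether the change you are about to make actually matters.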
MIKAL: Um, so for the simplest things, I’d say Observer is a very useful tool. And there’s also, like, the CLI version - Observer CLI - you can use that in the shell directly. And then there’s the web-based one, so it works in different setups and you can run it in production just fine. And once you have information about the system, the second step would be profiling, and there are the three profilers that are built into Erlang, which are fprof, cprof, and eprof. And they all have mix tasks wrapping them, which are pretty nice, I think. CHRIS: It seems to me that one of your biggest contributions - at least visible contributions - is going in and fixing libraries and optimising stuff and, like… so much of when I’m lurking in IRC, you’re talking about how we should do this or that, or here are ways to optimise these different situations. I know that was part of it with Jason - when you’re approaching those sorts of problems, do you just have an intuition about this stuff, because you’ve done it enough now? Or do you go look at the code, or… how do you start to formulate that hypothesis? MIKAL: Well, I’d say those things happen mostly accidentally. So I’m just exploring some code that we use at work or somewhere in Elixir, and I just go ‘that’s not how I would do it. Let’s try finding out which way would be better - maybe it could be faster or something like that.’ And after doing that enough times, you get some intuition about what things are common but not the best for performance, for example. So one of the optimizations I contributed - something similar - to a lot of projects was handling binaries. So, imagine a situation where you’re transforming one binary into another. So there’s a string. And there’s often a situation where large parts of the binary are not actually changed. For instance, HTML escaping - most of the payload is actually not touched at all. So instead of copying everything byte by byte until you find the bytes that you need to escape and then doing something, what’s better to do instead is to count the bytes that you do not need to change, then create a sub-binary from that, and then build an IO list from that, instead of copying everything into one binary. CHRIS: Right. Because IO lists are gonna be optimized at that point, ‘cause they let you do things like swap out pieces of the list. MIKAL: Yeah, exactly. For most of it, you have those sub-binaries, and you haven’t copied that, so it’s actually more efficient. And in many cases, you can actually just keep rolling with those IO lists - you don’t need to concatenate them. So HTML escaping is a great example, right? ‘Cause then you push to the socket and it can handle IO lists just fine. This is a core part of Jason as well - the way Jason handles escaping works like this - like, most parts don’t need to be escaped. And something similar is, for example, in Elixir’s Integer.parse - there it’s even better, because you just have to count how many digits you have, then use Erlang’s binary_to_integer function, which is pretty fast because it’s implemented in C. CHRIS: Right… It feels like so much of optimizing for some of this stuff is figuring out how to get down to the pieces of Erlang that break all the rules. There are so many little escape hatches of things that are special-cased to make them fast. And it feels like so much of it is trying to work your way back down to that level so you can take advantage of it. MIKAL: Yeah, I think that’s true.
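To make the sub-binary and IO list idea concrete, here is a minimal, hypothetical escaping function in the spirit of what Mikal describes - counting untouched bytes, slicing them out with binary_part/3, and emitting an IO list instead of copying byte by byte. It is an illustrative sketch, not the actual Jason or Plug implementation.

```elixir
# Sketch of the technique: escape `&` and `<` without copying the
# untouched runs of the input binary.
defmodule EscapeSketch do
  def escape(binary), do: escape(binary, binary, 0, 0, [])

  # `skip` is where the current unchanged run starts, `len` is its length.
  defp escape(<<?&, rest::binary>>, original, skip, len, acc),
    do: escape(rest, original, skip + len + 1, 0, [acc, binary_part(original, skip, len), "&amp;"])

  defp escape(<<?<, rest::binary>>, original, skip, len, acc),
    do: escape(rest, original, skip + len + 1, 0, [acc, binary_part(original, skip, len), "&lt;"])

  defp escape(<<_byte, rest::binary>>, original, skip, len, acc),
    do: escape(rest, original, skip, len + 1, acc)

  defp escape(<<>>, original, skip, len, acc),
    do: [acc, binary_part(original, skip, len)]
end

# IO.iodata_to_binary(EscapeSketch.escape("a < b & c"))
# #=> "a &lt; b &amp; c"
```

Because sockets, ports, and files accept IO lists directly, the result never has to be flattened into a single binary unless you explicitly want one.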
AMOS: So, it sounds like a lot of this is algorithm design too - deciding how to deal with functional data structures and figuring out how to minimize that copying aspect of functional programming. So what have you used yourself to figure out more about functional programming and data structures? Or is it just hard-won experience? MIKAL: So, I don’t like to admit it, but I’m not really a programming-books person - I rarely read programming books - but I do read a lot of articles and papers. Probably everybody knows the thing where you’re on Wikipedia looking for something, and you end up half an hour later looking at something completely different. So yeah. That’s how I do it mostly. Not only Wikipedia, but blog posts and things like that. AMOS: Click all the links until they’re all a different color. That’s kinda my go-to. I’ve been trying to read some books lately, but those are slow going for me. But you said papers. I know Jose has talked recently about John Hughes-driven development, so I know those papers have been pretty influential. What is an influential paper for you? Maybe one of your favorites. MIKAL: I’m not sure I have one… But recently, for example, I was exploring the different papers around the HAMT data structure, and the way Erlang maps work - Erlang maps and, like, Clojure maps and sets, and all of that - so there are a couple of papers that were quite interesting. And they also give you ideas, like how you could do things in a functional environment with those data types and use them for other problems - not exactly this one. You can reuse some parts of the techniques that they use. CHRIS: The hash array mapped trie stuff is really, really fascinating. And I think it’s so interesting - the original Bagwell paper - Ideal Hash Trees, I think is the name of that one - shows how to do this stuff, but if I remember correctly, they weren’t pure - or, they didn’t support immutable data structures - and it was Rich Hickey who figured out how to do that and put it into an immutable context, but he didn’t write anything about it. He just did it. And the underlying Clojure data structures were all based on these ideas. And then since then, everybody has just had to go look at how those data structures were made, and go rip them out and use them. ‘Cause it’s so good. It’s just a good way of doing things. AMOS: So go read data structures built into programming languages. That’s the ideal way to learn, if I’m understanding. ANNA: Yes, exactly. And Mikal and I were talking a little bit yesterday - and I know we’ll eventually get to this - about the discussion we were having last week about config things. MIKAL: The elephant in the room. CHRIS: I was just gonna avoid talking about it the whole time. ANNA: But that’s why we invited him on the show. Among other reasons. CHRIS: I thought that would be like a really hilarious troll though. Like, we just bring him onto the show for this, and just don’t talk about it at all. AMOS: We could title it Config Part II and then never talk about it. CHRIS: No, yeah, so let’s talk about config though, right. Twitter is what Twitter is, and I’m not convinced that we actually disagree all that much. I think it’d be good to get that nuance and extra context from that side - from people who experience a whole different set of problems than we do. MIKAL: I think it would be useful to first state what I think the problem is with the way it is currently done. So I think that, in general, most libraries abuse the application env for config.
So, for example, you have, like, an API client or something like that. You shouldn’t use the application env. It should only accept, like, the API keys, for example, as arguments. That’s it. And I think we all agree on that. AMOS: Yeah, yeah, I think… moving on. Check. CHRIS: At this point, I don’t think anyone in the community disagrees with that point. MIKAL: But the problem is, we have a lot of libraries that do that, and we have to deal with them somehow. And the way mix config works is that it’s mostly a compile-time thing. In the case where you run things with mix, it becomes a runtime thing as well. And I think this is the big problem, right? It is both of those things. Personally, I think that we should go for splitting the runtime and the compile-time config. Jose does not agree with me, so… yeah. We’re not a unified front. CHRIS: So, we joked about it - and I joked about it on Twitter as well - the quickest way to solve the problem is to stop running the mix config scripts at boot, and the whole problem goes away - or, the problem doesn’t go away, but it makes the problem obvious. But that’s not a reasonable solution either. MIKAL: The primary principle we want from a solution, if we arrive at something, is not to punish library users, but to try to push library authors in the right direction - but, like, try to make it as easy as possible for the users even if the library is not doing the right thing. Because there are so many more library users than authors, so you’re confusing more people if we impose strict rules that everybody must follow. AMOS: It’s too late in the game to just throw in a really hard rule. CHRIS: Well, as silly as a giant breaking change like not running mix config on boot would be - it’s such a breaking change that you’re talking about Elixir 2.0 at that point - that’s the realm that you’re entering into when you do that. So I understand the desire to not want to break any of that… So I have a - this question is not as leading as I mean it to be - I mean, mix config has behaved this way for a long time. Either you don’t use a release - I think a lot of people just don’t use releases because of this - and for those of us that have been using releases, we’ve just figured out ways around the problem at this point - so, I’m curious to know - why is this a thing we’re fixing now, so to speak? And I don’t mean that to be as critical as it sounds. I’m legitimately curious. MIKAL: I think that’s a good question. So one thing is that you have a new person come into the community, and they want to run their Phoenix app on Heroku, and one thing that you have to do on Heroku is configure the port dynamically and make the base URL dynamic. And they go to look at some documentation, and some say use system tuples, some say use init callbacks. Some say… And it’s, like, confusing. And maybe that’s one reason - maybe we can arrive at one solution that we recommend. It could be, like, the one recommended way to do it. And I’m not sure we can arrive at something like that that would be ideal. So, that’s one part of it. And the other part is, we figured out how to do this with reading from system environment variables. What if you want to read from somewhere like Vault, or even from a JSON file? It’s not very easy with a JSON file, because you don’t have the Jason library available when you run the config scripts. We learned to manage the problem, but that doesn’t mean the problem does not exist. CHRIS: Yeah, I agree with that.
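As a concrete version of the point everyone just agreed on, here is a hypothetical client that takes its key as an argument instead of reading the application env, leaving the caller free to fetch the key from wherever makes sense at runtime. The module and function names are made up for illustration.

```elixir
# A library client that is configured by its caller, not by the
# application env (hypothetical names, sketch only).
defmodule Weather.Client do
  # Preferred: configuration is an explicit argument.
  def new(api_key, opts \\ []) do
    %{
      api_key: api_key,
      base_url: Keyword.get(opts, :base_url, "https://api.example.com")
    }
  end

  def forecast(%{api_key: key, base_url: url}, city) do
    # ... issue the HTTP request using `key` and `url` ...
    {:ok, {url, city, key}}
  end
end

# The application decides, at runtime, where the key comes from:
# client = Weather.Client.new(System.get_env("WEATHER_API_KEY"))
# Weather.Client.forecast(client, "Kraków")
```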
I ended up talking to Jose on IRC after we recorded the last episode, and he broke it down really nicely, I think, into three distinct categories of problems. The first is the compilation configuration stuff. These are things like - and you’ve alluded to this stuff too and said this is gonna change in Ecto 3 - but right now, Ecto needs to know what adapter you’re using at compile time so it can add the correct code and that kind of stuff. And there are other use cases for that, like Logger right now. It uses some of the values in mix config at compile time in order to purge log statements and stuff like that. The second tier of that is bootstrapping. Which is - I’m gonna get part of this wrong, but as far as I’m concerned, bootstrapping is just ‘I’ve got to turn on all my dependencies, those start before my application starts, I’ve got to start kernel if I’m using distributed Erlang, I’ve got to configure this before that comes up’… So that’s, like, part of the bootstrapping process, and that’s a whole different set of config issues. And then you have the final thing, which is - you know, your application configuration. Being able to configure your application at runtime. That’s kinda the easiest one to figure out. ‘Cause, ya know, it’s your application. You can do what you want at its start and just kinda start pulling in values, and you can make it call to Vault and choose to shut your app down or keep running based on the response you get from Vault. So… having broken those things into the different tiers… You know, for me, I think the hardest thing to solve is the second one - it’s the bootstrapping problem. If you have to configure dependent applications, like… What’s the right way? What’s your take when it comes to configuring that bootstrapping problem? MIKAL: I’m not really sure. So, like, one idea I had at some point was maybe you could have a non-unified config. Maybe you say something like ‘run this config before this application boots, and it requires this and that application’. For example, you could have some application start before the Ecto application, and run some config there. But yeah, I’m not sure there’s a good solution for it. And interestingly, you mentioned kernel, which is, like, even worse, because it has to be configured by the time the VM boots. And I talked with Ingela at the conference. She’s the one working on the ssl application and the TLS distribution. And they have a similar problem, which is that kernel starts the distribution, but the TLS distribution - the secure distribution - is in the ssl application, and when kernel boots, the ssl application is not started yet, right, so you have - it’s hard. CHRIS: How much of that is just the legacy of how Erlang uses applications? You wrote a really good blog post on config - it was a while ago, but I think it’s relevant even now - and you talked about the different ways in which library authors could manage config, and I think you pointed out you can provide a behaviour that the user can start in their application… and one of the things you talked about in there was applications. Like, if you’re supplying an application, this is the worst possible - I don’t think you used these words. I’m using these words. But it’s - this is the hardest way to configure this. If you use an application, it’s the least flexible. Like, it’s the hardest to configure.
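For the third category - runtime application configuration - a minimal sketch of what Chris describes might look like the following: a hypothetical MyApp reads its values when the supervision tree starts rather than when the project is compiled. The child modules here are assumed to accept these options; none of this is from a specific project in the episode.

```elixir
# Runtime application config: read values at boot, pass them down the tree.
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    # Read the environment when the node starts, not at compile time.
    port = String.to_integer(System.get_env("PORT") || "4000")
    database_url = System.get_env("DATABASE_URL")

    children = [
      {MyApp.Repo, url: database_url},
      {MyApp.Endpoint, port: port}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

This is exactly the easy case Chris calls out: it is your application, so you can decide here whether a missing value should crash the boot or fall back to a default.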
How much of that is legacy of Erlang, and to that point too, is that a thing that we, as a community, should be trying to find new patterns for or move away from? Those sorts of things. I’m asking you a lot of very pointed questions. MIKAL: So, my quick response is that I agree with you that a lot of it is coming from Erlang. My longer response is, the reason why you have those libraries shipped as applications is because it’s easier. You’ve got your global process and you’ve got your atom, and it just works, right? Designing it so you can start multiple instances of the thing is much harder. And I don’t think it’s that we don’t know how to do it, right? Just design your processes around the idea that you can start multiple of them - they are not singletons. And it’s not very difficult. It just takes slightly more work to do. And mostly you don’t think about the fact that maybe you’ll need more of them. CHRIS: Right, and applications - and doing things inside of applications often - it gets tricky, right? Because you have multiple libraries that all want to depend on that single application. It works fine if you have one of those - let me explain this correctly. Say you have a dependency, application C. And you have libraries A and B that you specifically depend on, but they both depend on C for their configuration - well, now C’s config is global, ‘cause it’s just an app. And now you can have collisions, and it’s trickier to figure out how to collaborate and work together, you know… And that’s hard to deal with. MIKAL: Yes. CHRIS: Sorry, that wasn’t really a question. More just me making a statement. AMOS: You killed it, Chris. CHRIS: Sorry. AMOS: That’s alright. I’m sitting here taking notes. I’ve got like two pages of notes, so I’m not saying much today ‘cause I’m just writing all these things down that I need to go look up. CHRIS: So I wanted to address one of the more - I don’t want to say controversial, ‘cause I fully understand where y’all are coming from and the problem that y’all are trying to address. It seems like - just because there are so many other competing things going on - like, I know that forum thread kinda got closed with ‘we need to regroup and think through this again’ - one of the main solutions was kinda that on-boot section of the mix config, and I have - haha - *mixed* feelings about that. AMOS: You should have seen his face when I brought that up in the last podcast. CHRIS: Yeah, I mean, so… I’d love to chat with you about that now that we have more than 140 characters. MIKAL: I talked with Jose yesterday about it. And so - we want to write releases with Elixir. That’s the goal, and why we started addressing the config problem. Because we want releases in mix - we want to have it all the way through. We don’t want it to work sometimes - we want it to just work. It’s not really clear now how they are going to work. Maybe it’ll be that whenever you run mix run, it’ll actually run a release. That’s a possibility we were considering. So, if that’s the case, you need a completely different design for config. And that’s where these discussions should go: let’s maybe solve the release problem first and see what constraints that imposes on the config solutions we look at. CHRIS: I think that makes a ton of sense to approach it from that perspective.
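Mikal’s point earlier in this exchange about processes not being singletons roughly translates to a library exposing start_link/1 (and a child spec) so the user starts it in their own supervision tree and can run several differently configured instances side by side. A hedged sketch with a made-up HTTPPool module:

```elixir
# A library shipped as a startable component rather than a global
# application (hypothetical module, sketch only).
defmodule HTTPPool do
  use GenServer

  # The caller names and configures each instance explicitly.
  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: Keyword.fetch!(opts, :name))
  end

  @impl true
  def init(opts), do: {:ok, Map.new(opts)}
end

# In the user's own supervision tree, two pools with different settings:
# children = [
#   {HTTPPool, name: :internal_pool, size: 5},
#   {HTTPPool, name: :external_pool, size: 50}
# ]
```

Because each instance gets its options at start time, there is no shared application env for two dependencies to collide over - the situation Chris describes with libraries A and B both configuring application C.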
I have no idea what the implications or the level of effort that this would require, but I think the idea of always doing mix run and having that run a release is a - let me back up and say, even if that is not what we normally think of as a release - even if that’s different from what we would call a release today - if it’s a release, meaning like an executable thing, that seems like a really good place to be. That seems like a real good win. MIKAL: So, interestingly, I talked yesterday with a guy from WhatsApp - I forgot his name, I’m sorry - and we also touched a little bit on this, but the way they use Erlang is very, very different - they do hot code loading of modules, they’re doing things in the shell all the time, and they don’t run releases. They basically start a shell, compile modules in the shell, load modules dynamically, then start them. Or at least, that’s what I understood. So, yeah - whether we should use OTP releases or maybe some other form of releases, that’s also an open question. I think what you’d like to have is: build your application as a single thing that is compiled, place it on your server, and then you run it. I think that’s the goal - to have a feature like that. But so far, OTP releases are probably the best contender - the most proven solution. CHRIS: Right, yeah. It’s easily the most used - the most battle-tested. It seems like, anyway. Like, trying to come up with a new thing and just saying ‘well, we’re gonna invent our own releases’ - I mean, the Elixir community - I know you’ve already had firsthand experience with this - already gets accused of reinventing the wheel a lot. ANNA: Yeah, someone said that to me already today. MIKAL: Someone said that to me a couple times today as well… CHRIS: There’s a lot of that going around - you know, ‘these Elixir upstarts. They’re ignoring all the good we’ve done in Erlang and they’re just doing it all themselves’. MIKAL: But even releases in Erlang - the way relx does it, right, the tool that I think is most popular, and the base for - it was the base for the predecessor to Distillery, exrm. Before that, there were many other tools in Erlang to build releases. Even OTP has two tools to build releases. There’s reltool and systools, and they both kind of build releases, right? Yeah, I don’t think it’s a closed question. It’s a very hard problem, basically. CHRIS: I think as much as we try to rely on the stuff that’s out there for Erlang - and I actually do think that the Elixir community does a really good job of that - I do think sometimes we have to - it’s sometimes good to step back and reevaluate the problem and say ‘is this actually working for us? Does this actually do the thing that we need it to do? Or are we just shoehorning it in there ‘cause it’s been built and it’s easier to reuse somebody’s effort?’ And that might be the case here - I mean, we might just need to rethink - just kind of a top-down approach on how to do releases overall. I don’t know. And again, I don’t even know the level of effort that it would take to do that. I’m basically sitting here and assigning Paul and whoever else a bunch of work. MIKAL: As I said before, OTP releases - the relx way and the Distillery way - seem to be the established solutions for now.
But even, for example, with hot code loading, so… One idea I had - what if, for example, we did releases with upgrades but just restarted all the applications, right? Then you don’t have all the problems of, like, ‘how do you upgrade running processes? Do you have to write all that code?’ And you still get this idea of upgrades - maybe in some limited way, but it’s much simpler and it works. Right, so there’s a lot of nuance there and, like, a lot of solutions that could be made. CHRIS: Do you have a sense of how many people in Elixir are actually using hot code upgrades? Because it seems like the constant refrain from almost everyone is ‘they’re a lot of work, they’re complicated, don’t do them’ - ya know, all that kind of stuff. Like, that’s the thing I hear most from folks, so I wonder if we’ve turned off a lot of people from that. AMOS: And they’re not that difficult. CHRIS: I’m not saying one way or the other. I’m just saying that that’s what you hear. ANNA: Yeah, you definitely hear that paradigm quite a bit. AMOS: I wanted to do it so bad that I purposely went out and did it after hearing everybody say how bad it was, and I thought, ‘this is not really that…’ Unless you have to change a whole supervision tree - if you’re just changing a process, it’s really not hard. You get some state in, you transform the state - just like every other function - and you return the state back out. Hey… MIKAL: You can just change some pure-function modules, right, and not do anything. It just works. I think a lot of the fear of upgrades is also inherited from the Erlang community, where they use records for the state of the process, and upgrading from an old version of the record to a new version is hard and error-prone - because records are a compile-time thing, you can only have one definition loaded at a time, so you have to manually unpack and repack the tuples, and it’s very easy to get it wrong. That’s my understanding of one of the reasons. CHRIS: That’s really interesting. So this idiom might make no sense, but down here in the south, we always have a joke about ‘that’s why Mom cut the end off the roast’. Have you ever heard that? AMOS and ANNA: No. CHRIS: Oh. So this is a southeast - this is some home-spun southeastern wisdom right here. AMOS: Dandelion greens and black-eyed peas… CHRIS: No, so it’s like - every time Mom makes the roast she cuts the ends off of it, and you’re like ‘why do you do that, Mom?’ and she’s like ‘I dunno. It’s what my mom always did’. And then you’re like ‘Grandma, why’d you always cut the ends off the roast?’ And she’s like ‘I dunno. It’s what my mom always did’. And then you’re like ‘hey, Great Grandma, why’d you cut the ends off of the roast?’ and she’s like ‘oh, it wouldn’t fit in the pan otherwise’. MIKAL: I think there was a related social psychology experiment - I think it was with monkeys. I have no idea if I remember this correctly, but there was something like a ladder with food on top, and they would sprinkle the monkeys with water if they tried to climb the ladder, and so basically whenever one of them tried to climb, the others would pull it down because they were afraid of the water. And after some time, they stopped - and they were changing the monkeys in the room constantly, and after some time, they switched off the water and they weren’t sprinkling them with it. But whenever one monkey would try to climb the ladder, the others would still pull it back.
And even at the point when no monkey was the same as the initial ones with the water, they were still doing it, and they had no idea why. And it’s the same story. CHRIS: Yeah, exactly. All the monkeys were cutting the ends off the roast… But I also wonder to what degree - I mean, this is tough, because this is such a ‘hacker news’ way of looking at the world, you know - I wonder to what degree Elixir companies, if they’re using Docker or deploying to Kubernetes, how much they can actually take advantage - or are ever going to take advantage - of upgrades? ANNA: I’ve heard that before, right? As soon as you start talking about hot reloads, someone says Docker and deploying to Kubernetes, and it gets hard. But, I mean, I think you choose which - sometimes you can’t use all of them, but you choose what tools to use, or a particular set. And sometimes you have to make tradeoffs. CHRIS: But I have to imagine it’s one of those things where it adds a - having to handle upgrades as part of the release package or procedure - I assume it would add a lot of complexity to what’s out there. As opposed to building something that’s the equivalent of an escript. Like, if you could package your whole thing as something equivalent to an escript and just, like, not care about how the upgrade process works - if that was the entirety of the release tooling, that’d be fine. But because you have to deal with a world in which someone might be using upgrades, it adds that level of complexity. ANNA: Yeah, that’s a good point. AMOS: I use upgrades for a web app that’s running Elixir and Phoenix. CHRIS: You’re such a hipster. AMOS: Well, most of my stuff is Nerves, so it doesn’t matter. But the one Phoenix app that I have going, I deploy to a DigitalOcean droplet with Distillery and I do upgrades. From day one, I said, “I wanna be able to do hot code reloads, so I’m gonna do it this way.” And that was more just to go against everybody saying they were hard. It’s not been that bad, though. I got away from Docker and Heroku right away because I wanted to be able to do this, so I would like to make sure that the tooling still supports that, and maybe see more people in the community starting to try out that hot code stuff. When I was doing Java and Ruby and other web development stuff, it was all ‘hey, how can we minimize our downtime, and is there a way we can upgrade with no downtime?’ MIKAL: I think another problem with upgrades is what to do when you’re upgrading your dependencies. Right? Because most packages on Hex don’t publish appups and don’t handle code upgrade callbacks in their modules. So it’s another problem when there’s a bug in one of your dependencies, or even a security issue, and you want to upgrade it, but it’s not prepared to be upgraded, and now you need to restart your system anyway. If you have to be prepared for a complete restart anyway, there’s a question of whether it makes sense to go with upgrades at all. It’s tradeoffs, obviously. Like everything. But this is another component to consider. And most libraries, when they’re published, don’t include upgrade instructions. I think the only ones that do are the applications that are part of OTP. CHRIS: Well, and to tie this back into our configuration discussion, if you do support upgrades, how does that play with potentially running mix config again, or running the on-boot section again, or, like… how is that all gonna work out?
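Mikal notes that most packages don’t ship appups or handle upgrade callbacks, and earlier described hand-migrating record-shaped state. For readers who haven’t seen it, this is roughly what a GenServer code_change/3 migration looks like, with a hypothetical Counter whose old state tuple gains a field - a sketch, not code from any library discussed here.

```elixir
# Migrating process state during a hot code upgrade. The old version
# stored {:state, count}; the new one adds a step field, so the tuple
# has to be repacked by hand - the error-prone part Mikal describes.
defmodule Counter do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts), do: {:ok, {:state, 0, 1}}

  @impl true
  def handle_call(:bump, _from, {:state, count, step}),
    do: {:reply, count + step, {:state, count + step, step}}

  # Invoked during the upgrade with the *old* state shape.
  @impl true
  def code_change(_old_vsn, {:state, count}, _extra), do: {:ok, {:state, count, 1}}
  def code_change(_old_vsn, state, _extra), do: {:ok, state}
end
```

If the module held only pure functions, none of this would be needed - which is Mikal’s point about the easy cases just working.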
MIKAL: Yeah, and, like, how do you enforce - if a process started and read the config into its state - that it rereads it after you upgrade the config? That’s yet another problem. CHRIS: So this is why, to me, potentially coming at the problem from a different direction - this gets to why I created that project called Vapor, right? Which is aptly named, because there’s zero code. But I’m thinking about ways we might be able to come at the problem from sort of a different direction, assuming you can bootstrap the app. ‘Cause then maybe the solution is to have a process that you can control, that maybe you can do upgrades on if you want to, that can handle code change or whatever you need to do. This is where I feel like we might start to add a lot of complexity to mix config if we start to say that it’s gonna handle all these cases or it’s not. You’re still going to have to figure out the specifics for people, or it’s still going to be surprising. MIKAL: I agree with this. I think for the first version of the releases, the plan is not to handle upgrades. I mean, even without upgrades - like, if you use Vault, the reason why you use Vault is that you can rotate the configs. So, what happens when you rotate them? For example, you have the Ecto repo that you started with a database URL that you took out of Vault - how do you change that, and so on? Those are still questions that are not answered. And I don’t think the config discussion would answer them. CHRIS: Right, ‘cause it’s too big. It’s too big for the core team or for mix config or whatever to be able to handle all of the use cases that people will want to try to use it for at that point. ‘Cause yeah, exactly to your point - maybe sometimes I want to talk to Vault, and maybe sometimes I can and sometimes I can’t. And how do I handle those errors, and when is it a hard stop and when is it not, and when do I get a new config… it seems like that’s a lot of app code, ya know? It feels like that’s going to be very dependent on your application. I don’t feel like there are a lot of conditions we can stipulate for people for that - for, like, real complex use cases like that. AMOS: I think a lot of what we talk about with config, and the problems that we discussed last episode and on Twitter and in the forum and every other place… It really is an education issue, and maybe an experience issue - we need to figure out how we want it to be done, then how to spread that education, ‘cause I don’t think that a lot of the things we talk about are gonna be solved with technology. They’re gonna be solved with an approach to using technology. MIKAL: I think it goes a little bit back to the blog post you mentioned, Chris, in the beginning, that discusses building those components so you can start them in your own supervision tree instead of having full applications as dependencies. ‘Cause then you have more control as a user, and you can choose when to start the process, or restart it after the config changes… basically you’re already in control. And I think that’s what libraries should strive for - a design that places the user of the library in control, and not the library itself. AMOS: I think that’s where compile-time config really hurts you. ‘Cause you don’t have a choice as a user at that point. If it’s a runtime config, there are ways that I can build my own compile-time config on top of that and just pass the same stuff if I want to.
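Chris’s ‘process that you can control’ idea can be sketched without any library at all: an Agent owned by the application that loads values at boot and can be told to reload when something like Vault rotates a secret. This is an illustrative sketch only - it is not the Vapor project, which at this point has no code.

```elixir
# An application-owned config process: load at boot, reload on demand.
defmodule MyApp.Config do
  use Agent

  def start_link(_opts) do
    Agent.start_link(&load/0, name: __MODULE__)
  end

  def get(key), do: Agent.get(__MODULE__, &Map.fetch!(&1, key))

  # Called when you know the underlying values changed (e.g. a rotation).
  def reload, do: Agent.update(__MODULE__, fn _ -> load() end)

  defp load do
    %{
      port: String.to_integer(System.get_env("PORT") || "4000"),
      database_url: System.get_env("DATABASE_URL")
    }
  end
end
```

Because the application starts and supervises this process itself, it also gets to decide what an upgrade or a failed reload should do - exactly the kind of policy Chris argues is too application-specific for mix config to dictate.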
But if it’s a compile-time config, getting it to act as runtime is really difficult. CHRIS: I mean, you could conceivably do it at compile time, but it means recompiling modules, like, on the box… AMOS: You mean hot code upgrades? MIKAL: So, with recompiling the modules - I recently changed how the MIME library works, ‘cause it’s another example of a compile-time config. There’s - I think there’s a pull request we could link to - but the way it does it is it records the options at compile time, and if it boots with different options, it will print a warning and recompile the module with the new options. Printing the warning is something, I think, we all could agree on. But the recompiling - I think it’s a good feature. CHRIS: Yeah. I mean, some people are definitely gonna… MIKAL: At least having the warning is a good thing to have - ‘hey, please recompile this, ‘cause we changed things for you at compile time’. AMOS: It’s a nice low-hanging fruit - pretty easy to implement - put in place. MIKAL: So, when I saw that, I started fantasizing about solving this automatically. Basically, whenever you read some configuration at compile time and then you start your application with the config changed, maybe you should issue a warning or something. It’s very hard to do automatically, especially because when you have, like, a keyword list or something as the option, and you read everything at compile time but only use part of it… CHRIS: You’re gonna have so many false negatives from doing it like that. Like if you had something like Ecto or the Phoenix endpoint. There are so many things that go into that. It’s hard to know what you’re actually using versus what you’re not… I know y’all have to go soon, it sounds like. AMOS: There’s a party, right? That’s what I heard? CHRIS: Well, is there anything else that y’all wanted to throw out there before we sign off? MIKAL: I think it’s good we’re having these discussions in the community. Even though it became very heated at some points, I still think it’s a good thing that we’re having the discussions and learning about different perspectives on things. I don’t think we should stop, but remember there are always people on the other side. That became very cheesy. ANNA: No, but it’s true. AMOS: I didn’t feel it was too heated in the discussion, but maybe I wasn’t one of the people that got all heated up. ANNA: Maybe you weren’t receiving the brunt of the opinions. AMOS: That’s probably true too. I did a lot less talking - I always just try to take a step back and say ‘well, why would they want it that way? Why would they want that in their development that makes that solution right for them?’ I think that we should all do that once in a while. Chris. CHRIS: I wasn’t even all that fired up! Hang on. Wait a minute. ANNA: Healthy discussion is good, but maintaining perspective so that it stays healthy… Well, Mikal, thank you so much! CHRIS: Thanks for being on with us and chatting with us. This was so fun. AMOS: And now, official friend of the show! MIKAL: That’s great. AMOS: Well, thanks! Enjoy the party. ANNA: Bye!