Local-first Apps with ElectricSQL and React with James Arthur
===

[00:00:00] Before we start the show, we wanted to let you know that the State of JavaScript 2023 survey is live. There is a "which programming-related podcasts do you listen to?" question, so if you love PodRocket, we'd appreciate a vote there. A link to the survey is in the show notes. All right, onto the show.

Hi there, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. Try it for free at logrocket.com today. My name is Paul, and joining us is James Arthur. James is the co-founder and CEO of ElectricSQL, and we're here to talk about local-first apps, specifically with ElectricSQL and React. Welcome to the show, James.

Hey, thank you for having me. Nice to be here.

It's great to have you here, and it's nice to talk about local-first development. We've had some folks on in the past who are putting out frameworks and other tools that are helping [00:01:00] push this narrative of local-first development. And now we have something coming out that you're working on with your company and organization that is poised to make it reachable for folks who maybe haven't even thought about local-first development at all, so it's really exciting. Before we hop into your product, ElectricSQL, and how it pushes that narrative, let's maybe rewind a little bit and just talk about what local-first development is. Maybe people didn't listen to our other pods here on PodRocket. But yeah, let's talk about what local-first development is, why it's so important, and why it got you personally interested in the space.

Yeah, cool. So local-first is a pattern for building applications. There's lots of interesting aspects to it and there's lots of reasons why you would choose to develop local-first. If you step back and think about how most software applications are built today, if they're built with things like, say, REST APIs or kind of web service APIs, it's this category of cloud-first systems. And if you have the network on the interaction [00:02:00] path, then the user experience of your application, the availability, it's all actually out of your control as a developer. How well your application performs and the kind of experience your users get is at the whim of their network connection and whether you have backend uptime.

And so local-first is this pattern where you take the network off the interaction path. You have your application code talk directly to a local embedded database, and so you get instant reactivity, there's zero latency, and the apps feel just super snappy, and then data syncs in the background. So you have this background sync where typically what you're aiming for is active, conflict-free sync. And so that brings multi-user collaboration, and then applications work offline, and what you're trying to do is make them work offline without conflicts. So in a way it's a different way of doing the state transfer piece of an application. You might swap out something like REST or GraphQL and you move to this local-first pattern where [00:03:00] you write to the local database first and then data syncs. And it just makes your apps more reliable. It improves the user experience.
And then probably also a lot of people may have come across local-first through, for example, the research group called Ink & Switch, who wrote a manifesto back in 2019. It's a fantastic document, so Google it and check it out. If you Google "local first", it's the first link that comes up. And what that talks about is a bit more of the ethical considerations around, say, data ownership and privacy. So you look at SaaS software and big cloud systems and the business models behind them: you have data that's siloed behind the cloud service, and for lots of businesses their business model is to basically exploit that data. And so, how do you have software that, for instance, has the same conveniences, but has user ownership of data built in? This local-first architecture, where writes start on the local device and then they only sync if users actually want them to, is a way of changing the dynamics of data ownership and exploitation on the internet.

So it sounds like we're [00:04:00] just taking data, writing it locally, reading it locally. But of course, if you have an application, you need that data layer, whatever it be, Postgres, MySQL, in the background. How do you view the narrative that we're describing right now as different from, let's say, local storage in the browser? Because that's pointing at the same thing, but maybe doesn't have the same end result and the same hygiene when somebody is building an app.

Yeah, I think so. In a browser, say, you have a local storage API, and also if you have things like native applications, like mobile apps and desktop apps, you can store data locally. And that's fantastic, and so you get a local-only application. And a local-only application starts off with some of the same benefits: it works offline because you don't need to go over the network to get data in and out of local storage, it's more resilient, and you have the kind of data ownership and privacy built in because the data is not leaving the machine. But what you typically have with software nowadays is you're putting data in there and you maybe want to share it [00:05:00] across devices, or you want to share it with other users, and you have teams and collaborative software. What do you use software for? If it's things like project management tools or messaging systems, they don't really work if the data just stays on your device and doesn't get shared with other people.

So the local-first approach is that you start off local-only, and it's equivalent to, say, storing data in local storage, but then how do you add this background sync when you want to start syncing data off the device? And what's interesting is that it's a great fit, because as a developer experience you can do local-only first. You can build out an application just working with, say, local storage, or in our case we do embedded SQLite. And then when you're ready, you add this real-time sync once you want to go from a single-user paradigm to a multi-user system. But then when you do that, when you have this combination of local writes and multi-user concurrency, that's when all the challenges come in and you have a whole bunch of distributed systems concerns that you suddenly need to engineer [00:06:00] around.
What are some of the industry players out there right now, when we talk about a distributed data store and the replication behavior between their shards or tenants or whatever they want to call them? What's the industry standard out there in the playing field?

Yeah, there's so many cool projects and the landscape moves really fast. So, for this type of thing, of how do you move the data from the front end to the back end, you have traditional state management or state transfer protocols. REST is a state transfer protocol, representational state transfer. You have GraphQL. GraphQL builds in primitives around subscriptions, and they also have an optimistic write model with systems like Relay. So that moves things forward a little bit, whereas REST, like older systems such as RPC, is typically just a sort of online web service API. With GraphQL you start to get a few more abstractions in there that allow you to move the experience on a bit.

You have a load of cool stuff in the real-time data sync space. Systems like Firebase, [00:07:00] Supabase, Liveblocks, PartyKit, etc. There's lots of really cool projects where, instead of doing just static queries, you start to move into more of this live data binding between the client and the server. You've then got a lot of cool projects in this local-first space. So you have things like Vulcan and Evolu doing relational sync with conflict-free data types in the data model. You have, say, Replicache, which is a super cool system for doing this real-time server-authoritative sync. So there's a whole bunch of cool stuff emerging in this sync-layer technology, which in many cases is building on the same type of core technology.

And then you've also got the more distributed database systems and edge systems. So you've had systems like CockroachDB and YugabyteDB and Google Spanner for a long time. And the slight difference is that if you take a system like Cockroach or Spanner, they went from a single-node database to a multi-node database, and they use replication consensus: [00:08:00] basically all the nodes talk to each other to keep the data consistent. And what these offline, local-first sync systems now do is they don't do this talking to each other in advance. Instead, you accept the write locally, and then you have strategies to handle things like conflicts or to get to a consistent data state.

If you're familiar with it, there's this famous CAP theorem, and Cockroach and things are on the CP side of the CAP theorem, which is this strong consistency where everyone talks to each other. And what's been happening in this space is that the AP side of the CAP theorem, where systems can work independently offline, is just becoming a stronger and stronger programming model. And I think that's one of the things we're seeing in the space now: where you didn't really used to be able to do this stuff without getting into quite a lot of problems, there's now a whole batch of these new AP, local-first systems where the programming model is actually becoming quite sane. You can actually code against it quite simply, and you don't need a PhD in distributed systems to build out a Figma- or Linear-style experience on it.
[00:09:00] And speaking of Figma, that must be something they've put a lot of resources into, because they have the desktop and the web, and then it has local-first sort of behavior. It feels live, but then it's synchronizing to the cloud. So behind the scenes, there's probably a lot of complicated things like we're talking about here with the local-first and synchronizing. And I would guess they rolled their own solution.

Yeah, and they've done some great blog posts about how they built out their sync technology. Similarly, companies like Linear, for example, have done some great posts as well. There's the new Facebook Messenger; there's some nice engineering posts about switching to a similar architecture. But you look at Figma and design software: design software used to be Photoshop and Illustrator and standalone desktop kind of stuff. It started moving into the cloud, but Figma has taken it to this new level where you actually have a proper collaborative experience through the browser. And in their case, to be able to do that, so you could have, for instance, multiple users editing the same part of a design at the same time but being able to [00:10:00] resolve it without conflicts, it was a bunch of custom engineering for them. They developed a CRDT-like kind of sync system, and I think they put a lot of focus into building that as a core layer of the software architecture, but it's been totally transformational in terms of the product experience they can deliver. And then if you look at that whole design software category, Figma has just totally won, right? Literally everybody in the industry is using them; it's a standard platform. I think Adobe is buying them for 20 billion. So that's such a great example of the commercial success of putting the effort into engineering this type of sync-layer technology. Moving to a local-first architecture makes the product so much better that it actually completely disrupted the whole industry.

Now, if you're a developer and you want to reach for that sort of synchronization behavior, that local-first experience, you're bringing to the table right now ElectricSQL. It's poised to allow you to do things similar to what Figma is doing that we're talking about, but with Postgres, everybody's favorite kind [00:11:00] of open source backend. What made you pin this whole build around Postgres? And I don't want to say pin, because that might be an inaccurate representation of the adaptability of the product and framework. But why is PostgreSQL the poster child of the use case of ElectricSQL?

Yeah, pin is kind of right. We're definitely all in on Postgres. And I mean, there's a short answer, I think, which is that, for me, I'm a generalist developer: you start a new project, no matter what you're going to build it with, you just reach for Postgres unless you've got a reason not to, right? It's just the standard relational database, and there are so many benefits to building on such a solid technology. For us as a company it's quite an interesting story, because when we started the company, we thought we were actually going to build a new type of database. We were working with some academics and building on some of the research work that they had been doing for quite a long time.
There's two of the three inventors of CRDTs, for example, Marc Shapiro and Nuno Preguiça, [00:12:00] and there's also Professor Annette Bieniusa, who was the lead developer of a system called AntidoteDB. AntidoteDB is a very algorithmically advanced geo-distributed database. It's built in Erlang, and it combined a whole load of research work into basically making this AP side of the CAP theorem stronger. So it provides what has been formally proven to be the strongest possible consistency and integrity guarantees for a database system like that. And so it's totally cool, right? And so we were looking at that and going, Antidote's awesome, let's build CockroachDB for the AP side of the CAP theorem. And we looked into basically doing that, and we started looking at what we would do to productionize the system.

And we realized that the sorts of propositions that you could deliver around low latency and consistency, in terms of who they really mattered for, you have to be running quite big global workloads for it to matter. But also, building a new database, it kind of takes 10 years to build it, and it takes 10 years for people to trust it. So [00:13:00] we basically thought about that and we looked at references like, say, Supabase and Hasura. Their systems are basically built out on existing standard open source databases; Supabase builds on Postgres. And you can see how actually for developers it's really attractive, because you get to use the kind of standard tooling you want, and then they build additional stuff around it. So we rethought what we were doing, to use this AntidoteDB system as the replication layer behind existing open source SQL databases.

Now, that took us into basically doing this kind of system we're doing on top of Postgres. But over time, well, Antidote is super advanced algorithmically, but it's actually quite operationally complex to run. You run this intermediate database where you have durability requirements and you have to run a kind of distributed system, almost like running ZooKeeper to run Kafka or something, in order to run that layer. We did a bunch of work to basically be able to provide the same convergence and consistency [00:14:00] guarantees but do it directly on Postgres. And so our system then became much more operationally simple and much more Postgres-centric. So that was our journey into working with Postgres. And nowadays, yeah, we're basically a kind of local-first sync engine for Postgres. You drop Electric onto Postgres and it allows you to do this kind of active replication between Postgres in the cloud and web and mobile apps where you have a local embedded database. And we do this sync system behind the scenes that deals with the concurrency challenges and means you can just code against the local database as you would against a backend database.

So that's really interesting. You had the first kind of MVP build leaning heavily on, forgive me, what was the other database name that you mentioned?

AntidoteDB. It's a sort of naming convention: there's a protocol called the Cure protocol, which is the cure for consistency under partition, which is the sort of CAP theorem failing.

Oh, look at that, CAP theorem challenges.

And so Antidote was, again, the similar kind of naming convention.
And then over time you folks were [00:15:00] realizing that the big value add wasn't necessarily in leveraging Antidote, but in taking learnings from how it dealt with the distributed data sync and then building that as a direct layer for Postgres.

I think, maybe if we'd been left to our own devices... Antidote is an incredible system, algorithmically extremely advanced, it's probably the most advanced geo-distributed database out there, it's a really rigorous implementation, and it has a bunch of great tooling around it. If we could have, it would have been amazing to build that out into this next-generation geo-distributed database. But as it happens, we're a startup, we're on a funded trajectory, and so you have to find a pathway where you can do something that people want faster. And the thing that you could do with Antidote as a global database system was just challenging to get to the maturity level fast enough with basically a totally new database system. So it just made more sense to work with standard open source tools.

And I think for me as well, my background is I'm a generalist software [00:16:00] developer, and we went on this journey and we found ourselves in a place where we realized that this cool replication technology could be used to just help people make apps in this different way. And it came full circle for me, because that was the sort of stuff I was used to doing. I build a lot of apps. I've played with lots of different state transfer systems, database systems, things like CouchDB, LiveView patterns, GraphQL, different ways of wiring apps up. So it was quite nice to come through this journey and arrive at a place where we realized what we had was this very cool technology that could be used to really just simplify a kind of optimal pattern for building software applications.

James, I really appreciate you taking the time to go over the history of distributed systems and local-first development. I know for our listeners, maybe this was a big first half of the episode, a big dive into some history and technicals without even talking about ElectricSQL specifically, but that's what we're about to get into, because we want to take everything we just talked about, the benefits of local-first development given the [00:17:00] challenges, and then talk about why ElectricSQL brings this to the masses in such a palatable way.

Right before we do that, though, I just want to remind our listeners that this podcast is brought to you by LogRocket. So if you're building an application, no matter how big or how small, you can use LogRocket to find errors and debug your application faster. That means spending more time building your app and less time in the console. Things like session replay, heat maps, AI to surface errors that you may not have otherwise noticed, and all sorts of other good stuff that just help you find bugs, find inefficiencies, and build better products. So head over to LogRocket.com today. You can try it for free.

So turning to ElectricSQL. We talked about the landscape. If a developer wanted to take advantage of local-first development, what are the steps to integrate it? What's the quick start to get it going if they already have a Postgres back end?

Yeah. [00:18:00] So say, for example, you've built out a mobile application and you had a Postgres-backed kind of backend. You've probably got a backend server.
Maybe it's a Node.js thing, maybe it's Rails, and then you've got some front-end code, and maybe at some point it's React components or React Native components, right? We swap in as this state transfer layer, so if you had something like a GraphQL layer or REST APIs, you basically swap that piece out. And literally what you do is: we provide a sync service, a kind of stateless web service. You basically deploy that and connect it to a Postgres database with a connection string, just the same way you'd normally connect a backend framework. It bootstraps some stuff inside the Postgres, so it sets up a schema and some shadow tables and things. And then you basically opt tables into our sync machinery. So if you've got an existing data model, say you've got projects, issues and comments or something, you come along and you say, I want to electrify the [00:19:00] projects table and I want to electrify the issues. And it means those tables are available to the sync machinery and then in your front-end application.

So the first thing we do is we build out a type-safe data access client based upon your database schema. You basically run a command that pulls down the electrified subset of your database schema, and it turns it into a sort of Prisma-like, type-safe TypeScript library. Then you import that into your application and you basically pass it a DB connection to a local SQLite instance. We work for web applications, JavaScript-based mobile applications, Flutter applications, and all those different target environments support SQLite, but they have a slightly different driver. In the web browser we use a project called wa-sqlite, which is a super cool kind of WASM IndexedDB SQLite. In React Native, you just have a native driver. So you basically open up a connection to your SQLite as normal, just using the standard driver for your environment, and you pass it to [00:20:00] an electrify function along with your database schema. And it basically just sets up this sort of magic two-way sync, and it gives you a standard, Prisma-style data access client, so you can do queries and writes to the database.

Now, there's a couple of interesting things. One is that if you've been building an app previously on a more static model, where you maybe do explicit data fetches, this is a real-time sync model. So instead of doing static queries, you end up binding live queries to your components. Say you had a component to show a list of projects. You basically set it up where you get a handle on a database connection and then you say, I want to set up a live query to get me all the projects, and it's full SQL. You have a query builder interface where you can just drop down to raw SQL, and that queries the local SQLite database. It binds the results to a local state variable, and then your component just works and naturally re-renders when the state variable changes. And [00:21:00] what's nice with that is, if you then do a write locally, you just write directly to the database, you don't need any state management layer, because the changes are just automatically picked up and the components just re-render. So in a way, in the client, certainly for certain circumstances, it can replace some of the state management patterns you have with things like, say, Redux or MobX.
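To make that flow a bit more concrete, here is a minimal sketch of what the wiring described above can look like in a React web app: open a local SQLite connection, pass it to an electrify function with the generated schema, bind a live query to a component, and write straight to the local database. The import paths, driver setup, and method names here are illustrative assumptions based on the description in this conversation, not a definitive copy of the ElectricSQL API, so check the project docs for the exact calls.

```tsx
// Illustrative sketch only: import paths, driver setup and method names are
// assumptions based on the flow described above, not the authoritative API.
import { electrify, ElectricDatabase } from 'electric-sql/wa-sqlite' // hypothetical browser (wa-sqlite) entry point
import { useLiveQuery } from 'electric-sql/react'                    // hypothetical React bindings
import { schema } from './generated/client'                          // generated, type-safe client schema

// 1. Open the local SQLite database and "electrify" it with the schema,
//    pointing it at the deployed sync service.
const conn = await ElectricDatabase.init('my-app.db')
const electric = await electrify(conn, schema, { url: 'wss://your-sync-service.example.com' })

// 2. Bind a live query to a component. The component re-renders whenever the
//    underlying rows change, whether from a local write or from changes that
//    stream in over the replication stream. (In a real app you would normally
//    pass `electric` down via a provider/context rather than module scope.)
function ProjectList() {
  const { results: projects } = useLiveQuery(
    electric.db.projects.liveMany({ orderBy: { name: 'asc' } })
  )

  return (
    <ul>
      {projects?.map((project) => (
        <li key={project.id}>{project.name}</li>
      ))}
    </ul>
  )
}

// 3. Writes go straight to the local database; no separate state management
//    layer is needed, because the live query picks the change up.
async function createProject(name: string) {
  await electric.db.projects.create({
    data: { id: crypto.randomUUID(), name },
  })
}
```

The point of the pattern is visible in step 3: the write is local and feels synchronous, and the sync to Postgres happens in the background.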
And you just use this embedded database as your client-side store. But also, you get the same reactivity if anybody else changes the data somewhere else, because if it streams in over the replication stream, the same reactivity stuff fires and everything just stays live and up to date. So that's the first picture.

Now, there's one other really important element. Maybe actually there's two, if you'll allow me. One is that you basically have a database schema that's shared between the embedded SQLite in the client and the Postgres database in the cloud. To evolve your database schema, what you do is you actually manage it through Postgres. You apply migrations to Postgres to evolve the DDL schema for your database, and all of that just propagates [00:22:00] through the replication machinery and naturally updates within the local applications. And actually, when you build a static app, you also just bundle in a copy of the database schema. So there's a one-way flow for evolving the database schema.

And then there's another really key thing. Say you imagine you're building a SaaS application: you would typically have all your customers' data, all your users' data, in a server-side database, but somebody logs in and they obviously only want to see their data. You're replacing that query model to get the data with this real-time sync, so you need a system that controls which data syncs onto the local device and which data syncs off the local device. We have a system which is called Shapes, and Shapes are a bit like an ORM query with an association graph. With the example I was giving earlier, you could say, I want project 1234, and I want all of its issues and comments, and you say keep that live. So you establish these shape subscriptions, and that controls what data actually syncs onto the local device. And then all the [00:23:00] data on the local device is reactive and live and just bound to your components. So the Shapes layer is a declarative way to say what you want to live subscribe to.

And is that bi-directional?

Yeah, exactly. So it's bi-directional active sync. And the shapes are dynamic, right? You actually call an imperative API to establish a shape subscription, and then you can either close shape subscriptions or add new ones. So as you navigate around an application, if you know that once you go into a certain route or a certain subset of a component tree it needs certain data, you establish the shape subscription. You can await a promise to resolve to be sure that data is synced in and then carry on. And then the system aggregates all of the individual shape subscriptions into a sort of unified shape that's synced into that client. So if you imagine just starting from scratch: you've opened an application, but you don't have any data on the local device, and you establish a shape subscription for project 1234. What happens first is [00:24:00] that gets synced to the server, the sync service queries the database, gets all the data and streams it over the replication stream into the local database. Then the promise resolves and you know that you can query that data.
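As a rough illustration of the Shapes idea described here, using the same project 1234 example: the shape is declared once, the returned promise resolves when the initial data has synced into the local SQLite, and after that the rows are just local data you can query and bind live queries against. As with the earlier sketch, the exact method names (sync, synced, and the nested include syntax) are assumptions drawn from the description above rather than a definitive API reference.

```ts
// Illustrative sketch: establish a shape subscription for one project plus
// its related issues and comments, then wait for the initial sync.
const shape = await electric.db.projects.sync({
  where: { id: '1234' },
  include: {
    issues: {
      include: { comments: true },
    },
  },
})

// Resolves once the data for this shape has streamed over the replication
// stream into the local SQLite database, so local queries will see it.
await shape.synced

// From here on the data is local, reactive and kept in sync in both
// directions; a plain local query will now find the project.
const project = await electric.db.projects.findUnique({
  where: { id: '1234' },
  include: { issues: true },
})
```

In a routed app you would typically establish the shape when entering the route that needs it, and close or replace it as the user navigates, which matches the imperative subscribe and unsubscribe model James describes.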
But then once that subscription has been established on the server, it's watching the replication stream, and if any data then comes along which should sync into that shape, or if there are any changes to the data which mean data needs to be removed from the shape, the system just takes care of keeping that in sync.

It feels like a very good fit with React Server Components, the new paradigm, because you can just load up your server component, await the promise for the shape, and then feed that into your client component. And then it'll be live, with all the handlers ready to go.

It's interesting, because I think there's a use case where, for example, yeah, if you're doing server-side rendering and you want to render a component which would render equivalently on the client, it is interesting to just be able to say we sync the data onto both sides of the network. And so you render against the server-side data and then you [00:25:00] would render the same against the client-side data. I've chatted a bit to some of the guys working on server components with React, and actually they've pushed back a bit and gone, look, if you have this local-first architecture, maybe it isn't such a great fit with server-side rendering, because in a way you don't really need it for the same sort of reasons. So I think they maybe see this local-first architecture as more of an alternative, where you go all in on this local embedded database pattern and, rather than doing the server-side rendering of components, you're moving the rendering more explicitly client side.

But you're keeping the data in the custody of the customer in that different paradigm. So that is fundamentally different. I get what they're saying there; it's interesting to hear your side of it. So what are some apps out there right now using ElectricSQL that you feel are a good example showcasing how this is used? We mentioned Figma, but they're probably using their own custom-rolled stuff. [00:26:00] What's out there in the wild for Electric?

There's quite a few projects that have been building on it. There's a cool kind of platform called QuickMix. There's a team from Manabox building stuff. I think if you think about what people are building on it, there's a category of stuff where people are motivated by the quality of the user experience. You get this instant reactivity from the local writes, you get built-in multi-user collaboration, and you get the combination of those. And so that's people building things like interactive dashboards or professional collaboration software, that kind of thing. And then there's a category where you have this resilience and this offline capability. They're local-first, you don't have the network on the interaction path, the apps can function even if the connection is down, even if the back end is down. And so that often maps to situations where offline is a hard requirement. That's things like outdoor software, things where you're moving around, logistics, kind of retail software. Also just systems where you really don't want to block stuff just because the internet's having a [00:27:00] problem. So for instance, we're doing a few projects where it's point-of-sale software or checkout systems, and you don't want to block taking the transaction because the network's wobbly.
So those are the two families: in some cases you're trying to get this Figma- or Linear-grade user experience, and in other cases you're maybe just trying to build a system which is properly tolerant to working offline.

What are some of the next milestones that you guys have in the next quarter or two?

We've got lots to do. We're still in public alpha as a platform, right? So we've got quite a big platform in terms of the stuff that's working at the moment, but we made a bunch of enabling simplifications in order to get to a platform that you can actually build applications on now. And there's still a bunch of things that we're trying to fill in the implementation on to get to what we would think of as, say, a beta scope, where you can properly build applications you would deploy on the internet and they're reasonably secure and performant. So one of the things is we've designed a permissions system. It's a bit like row-level security; [00:28:00] you have sort of database rules. That's one of the things that's in progress that then adds proper security when you deploy the applications. And also our implementation of the shape-based sync that I was talking about: at the moment we have some simplifications on that, so it over-syncs data, and we're doing some work to make that more efficient and to actually fill in the expressiveness of the Shapes API. So those are a couple of major things.

We've also been working on things like support for data types, so you can just drop it onto your data model and it just works and you don't have to change things. And there's quite a lot of tooling around it, so just making it easy to come along, run a starter script, have that work on your local computer, and be able to, for instance, iterate an application by generating out standard components and data models. So we have this core replication layer technology, this sort of database replication style stuff, and it's quite hard distributed systems work, but there's quite a lot of productization around that to then actually make it an easy developer experience, so when you come [00:29:00] along, you can integrate it with your favorite tools and it's a nice, efficient way of building applications.

If there's one thing that a lot of developers have heard, it's that distributed systems are hard. And so I can imagine there's a lot of work that needs to go into creating less and less friction between the actual reality of what's happening and the APIs and surface controls that you guys are presenting. One thing you mentioned, James, in work that has been done and work that you're still doing, is being able to drop a data model on and not have to change much in the world of Postgres. One reason why folks will go reach for that is the JSON type. It's infinitely useful, but everybody knows that's not necessarily supported out of the box in SQLite. Is that bridged right now? If not, are you looking at bridging it? And how do you see that happening?

Yeah, we just actually dropped support for... dropped as in we just actually shipped support for JSONB types in Postgres, which then come through the replication stream and get hydrated into JSON objects in your JavaScript [00:30:00] code, and stored so that you can query them with JSON functions in SQLite.
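For a rough idea of what the JSONB support described here might look like from the client side, here is a small hedged sketch. The table, column and client method names are hypothetical, and the raw-query entry point in particular is an assumption; the point is just that a Postgres jsonb column surfaces as an ordinary JavaScript value locally, and raw SQL can reach into it with SQLite's built-in JSON functions.

```ts
// Hypothetical Postgres table behind this sketch:
//   CREATE TABLE documents (id uuid PRIMARY KEY, content jsonb);
// Table, column and method names below are illustrative assumptions.

// Writing: the jsonb column is passed as a plain JavaScript object and
// replicated to Postgres in the background.
await electric.db.documents.create({
  data: {
    id: crypto.randomUUID(),
    content: { type: 'doc', blocks: [{ type: 'paragraph', text: 'Hello' }] },
  },
})

// Reading: the value comes back hydrated as a JSON object again.
const doc = await electric.db.documents.findFirst()
console.log(doc?.content)

// Dropping down to raw SQL, SQLite's JSON functions (json_extract ships with
// modern SQLite builds) can query inside the stored value locally.
const rows = await electric.db.rawQuery({
  sql: "SELECT id, json_extract(content, '$.type') AS doc_type FROM documents",
})
```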
So we just have proper support for JSON integrated into the Postgres model now. But there is more to do, because what we want to do is extend the concurrent update logic to support more granular updates nested in the JSON object. So technically it's making it more of a proper JSON CRDT data type. But yeah, I think that stuff's super important, right? When you think of this type of conflict-free sync stuff, you often think of libraries like, say, Yjs, or collaborative text editing. So you have a popular system called ProseMirror, or TipTap, which is a really nice way of dropping a collaborative editor into your application, and that actually stores stuff out in a kind of JSON object. And so one of the things for us is iterating on this JSON support to be able to have really nice, efficient support for this collaborative text editing, with that sort of stack on top. So there's more to do, and I think it's a really important thing, where you can then have these unstructured parts of your [00:31:00] application, or more flexible data types, but have them integrated into your main Postgres model. So you don't have some sort of silo, instead of having some real-time system for collaborative editing that lives in another database and then you're having to bridge it over to your Postgres. One of the things that we're trying to do, since we're open source, standard Postgres, standard SQLite, is just do it so you don't have these kinds of silos in the data model.

And to your point before, yeah, it's our job as a platform to try and deal with the distributed systems complexity, keep the complexity in the box, and make the programming model as simple and as standard SQL as possible. And it's not my specialism, right? There are some awesome guys and girls on the team who are really smart and very good at the distributed systems stuff. And also we are standing on the shoulders of giants, because there's just been so much great research in this area over the last 20, 30 years that we, and other projects, are now lucky enough to be building on, to actually be able to finally do this stuff and deliver on that promise. It is just a drop-in; you can keep the complexity in the box. Whereas if you think of earlier systems, for instance [00:32:00] CouchDB, you have explicit conflict resolution and it bubbles back up to the app. Or if you have some sort of server-authoritative sync, then you have rollbacks and tentativity, and again the complexity leaks back into the app. With these sorts of proper local-first, conflict-free systems, now, finally, you can code against them without having to have a PhD in distributed systems, which I think is one of the cool things about why so many people are starting to adopt this as a development pattern now.

It's a natural next step. You're improving user feedback cycles with your front end and you're increasing ownership of data. And it's the easier way to get your feet wet in it and actually build something cool. One thing that I want to touch on too is, we have a bunch of folks come on here and talk about platforms and frameworks and all that, and there's always the self-hosted, open source question. So we've got to discuss the deal with ElectricSQL. You mentioned you could take the layer and go host it somewhere. So you can do that, you can host your own ElectricSQL layer, right?

Yeah.
So it's Apache 2, it's permissive open source, and it's explicitly designed for self-hosting. [00:33:00] We're not operational hosting people, so we're not trying to run a managed service around it. What we wanted to do was provide it as close as possible to a sort of software library that's a tool in your toolbox, and you can integrate it into your stack. So for instance, if you're using Supabase to deploy a project quickly, you can just run Electric in front of Supabase. You can deploy it where you like. It could be DigitalOcean, AWS, whatever your preferred platform is; you integrate the app publishing with your stack. It's all the way from that through to working within some more complex kind of Kubernetes and Terraform deployment, a private cloud deployment. What we wanted to do is not have some sort of lock-in around it and just make it easy for people to install it in their environment. Also because it's not our Postgres, right? We're a sync service on top of the database, and the database is always going to be something that matters, it needs to be within your network security. So it doesn't really make sense for us to be somehow running a service and routing stuff over the public internet. [00:34:00]

On that topic though, do you guys have a cloud offering, or plan to? If not...

So we don't, and we don't plan to have one relatively soon. I mean, what we are doing is we're working on iterating some of our deployment recipes. So some of the things I was just mentioning: we've just been setting up better one-click deploy to DigitalOcean. I think if you look at platforms like, say, Supabase, Fly, DigitalOcean, Render, these platforms are just awesome for deploying software. That's what they do. And so for us, it just makes much more sense to integrate with a platform like Render or DigitalOcean to deploy, rather than us running the software. And we're a startup, so we obviously have to have a monetization strategy, and that monetization strategy for us is more about, first things first, we need to build this system so it works and it delivers value. Then we'll be able to build some tooling on top which helps people run it at scale and solve some of the harder problems around this type of architecture. So we're looking at, in time, in future, we'll build out a commercial product, but it needs to be predicated on the open source edition delivering [00:35:00] proper business value. And that's not something that we want to offer as a service; it's just much easier to deploy it wherever you'd prefer to deploy your own software.

I know that's going to sound like a breath of fresh air to a lot of listeners, the fact that there's not even a cloud offering, because that's not the ethos of what you're trying to do. It's completely free to use. And then of course, if people find anything happening, they can go submit an issue and raise it with the team.

Yeah. So we've got a Discord. We're trying to be as helpful as possible. There are still quite a lot of sharp edges, right? We're definitely here to chat if you're trying to build an application on it. We'll get on the phone and chat it through, chat about the roadmap, try and help out on any sort of design stuff. And yeah, there's increasingly more people building applications, and that's giving us really good feedback in terms of bug reports and user feedback.
And so that just guides the roadmap. Hopefully we shave off some of the sharp edges, so it becomes a nice, seamless experience.

James, if people wanted to keep up to date with what's coming with Electric, does Electric have a Twitter, and do you have a Twitter or another medium by which [00:36:00] you post?

Yeah, so there's an ElectricSQL Twitter, we're @electricsql, no spaces, but also there's the Discord community. So we're electricsql.com; if you go on there, there are Discord, GitHub and Twitter links, but the Discord is a great place just to keep up with development. We're also sharing some of the pre-releases there, and it's a good opportunity to just chat about the project. So I think the project Twitter and Discord are great. Personally, I'm thruflo everywhere on the internet, but I think following the ElectricSQL one might be more interesting.

James, it was great having you on. I'm sure some people are going to listen to this and feel excited that they don't need to have two different database schemas to support real-time messages and things like this. Head over to the ElectricSQL GitHub, the Discord, check it out. And thank you so much for your time coming on, and also for talking about the history of how we got here with distributed systems; I really enjoyed that first half.

Thanks for having me. And yeah, please do check out the project. We'd love to get you in there and hear any feedback you have. So thanks for [00:37:00] having us on. Yeah, I really enjoyed it.