Nader Dabit ===

[00:00:00] Hi there, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. Try it for free at LogRocket.com today. My name is Paul, and joining us is Nader Dabit. He's on once again here on PodRocket to talk about React Native AI and indie hacking. Nader is currently the Director of Developer Relations over at Avara. Welcome to the show.

Thank you for having me — really happy to be here talking about this stuff today.

Yeah. Indie hacking got me excited too when we were going over what we were going to put in the pod this time. Indie hacking is something that feels like it's getting more and more popular these days, and I'm excited to hear what you have to say about it, because we're also going to be talking about your project, Nader: React Native AI.

Yeah, it's easier than ever, I think, to be an indie hacker. So it's a cool time to be a developer — as it's always been, I think, but even cooler today.

Let's hop right into [00:01:00] indie hacking, why not? Because last time we had you on, our heads were pivoted in a slightly different direction: we were talking about data, we were talking about some web3 stuff. When we say indie hacking, we're not necessarily talking about hacking computers — we're talking about just building stuff, right? So how would you define indie hacking? And when did you start considering yourself an indie hacker — was it this project, React Native AI, or was it brewing for a while?

I think that a lot of developers that I know, and me myself, got into software because we thought of ideas and we wanted to build those out. We wanted to have a way to maybe launch an app to the App Store and make money in some way. And then along the road, you start realizing how challenging that sometimes is, and sometimes it's actually much easier to work for someone else who just pays you a salary, and you make money regardless of what happens with the code that you're writing. It's a more comfortable life sometimes than being out there trying to build your own [00:02:00] products and get users and make money and stuff like that. From the very beginning, I've always been hacking on various different things, putting out open source. I haven't really tried to monetize a lot of the stuff that I've built, because I've done a lot of consulting and I've had a lot of full-time jobs, but here and there I've thrown out different ideas and different projects and products. And with the AI LLMs and everything that became accessible to developers like me in the last year, it's just been really fun again to hack on new ideas. It's made it, I think, easier than ever to build — and to easily build a product that the average person in the world can find value in. So it's kind of renewed my personal interest in this space of being able to just go out and build a product, launch it, see how people like it, and then go from there.

Now, when you say go ahead and build a project — when you say indie hacking — are we talking about getting funding? Are we talking about [00:03:00] bootstrapping? Or are we talking about "I'm in my basement, just throwing something together"? Where on the spectrum would you peg indie hacking? And for React Native AI, where do you find yourself?

Yeah, that's a good question. I mean, I would say yes to almost all of that.
But really, the essence, I think, of being an indie hacker is just being lean, building something, maybe even being a one-person team — you're an independent developer. And maybe, if things do really well, you do consider going into raising money. But I think a lot of indie hackers are just bootstrapped, and they don't have a lot of liability, they don't have a lot of overhead. They're just out there creating a product, and they're trying to improve and increase their monthly or their annual recurring revenue from the thing that they've built.

For me, React Native AI actually came out of another product that I launched called AI Buddy. AI Buddy is in the App Store, and it's making a few thousand dollars per month. It's my second AI app that I've launched publicly that's done [00:04:00] pretty well. The first one was called Roam Around. Roam Around was something I launched eight or nine months ago that was just a really simple way for people to build out travel itineraries. No one had really built a nice UI on top of an LLM to do this, so when people saw it for the first time, it just went viral on TikTok — and sometimes that's all it takes to get people's eyes on your product. 60,000 to 70,000 unique users per day. So that did really well. Ultimately, it ended up costing me a lot of money in infrastructure costs, because I had not monetized it. So I entered a deal where I sold most of the company and kept a small portion of it — someone else became the CEO and the current founder and was then able to raise money. It's still in development; people are using it still, obviously, and it's still being improved. They just raised at, like, a $10 million or so valuation. It was the first [00:05:00] AI product that I had built that did that well.

And then AI Buddy is my attempt at building another product that doesn't run into that issue of not having thought about monetizing up front. So in order to use AI Buddy, you can download it for free from the App Store, but once you hit 50 requests you have to subscribe — or you can subscribe on day one. Therefore, no matter how many people download it, it's always making money.

So that's where React Native AI came from. When I launched AI Buddy, a lot of people were asking, "Is this open source?" — because pretty much everything I'd done up to this point had been open source. When I launched Roam Around, I made it open source, and someone actually copied everything about it — including the name, including the design, including everything — and launched it to the App Store without changing anything. Now, I don't really care if someone literally takes every piece of code I wrote, as long as they at least change the name. But when someone does it exactly like that, [00:06:00] it rubbed me the wrong way and made me rethink whether I should open source AI Buddy. So I did not open source AI Buddy. I built React Native AI, which is essentially a framework that you could use to build something like AI Buddy.

Yeah, let's hop right into talking about React Native AI. We touched upon indie hacking and how you see yourself in the builder ecosystem, which is interesting. And I also love to hear that you're emphasizing the low responsibility, the low risk — it gives you the breathing room to really create, being an indie hacker. You've done so many projects to date, and it's remarkable that you're not weighed down
by investors, by a board, and that you can just iterate quickly and continue on these ideas. So yeah, focusing on React Native AI — it's a framework, right?

Yeah, it's a mobile framework — a full stack framework for building on top of different LLMs and different types of image models, for building full stack apps.

And so this came after AI Buddy, which was your previous [00:07:00] endeavor, right?

I had quite a bit of good response from AI Buddy, from people that were wanting to build out similar apps. And then — because I have a decently sized network — there were people wanting me to do consulting for them: "Oh, can you build us something like this, or build this feature into our app?" And quite a few people were saying, again, "Is it open source? How can I check out your code?" So with React Native AI — when I built AI Buddy, I ran into a bunch of repetitive stuff where I was thinking, man, it would be so cool if this was just abstracted away, so that in the future, when I want to build another app like this, I can just grab these pieces and these components and not have to rebuild them from scratch and waste days and weeks at a time. So for instance, when you're building on top of LLMs, you have chat interfaces; when you're dealing with images, you have to think about downloading and uploading images on the server and dealing with [00:08:00] API keys — all of that, along with some really repetitive server-client interactions. They're not hard to write for someone that's somewhat experienced, but they're just tedious. So the framework abstracts all of that away. You don't really have to think about "I need this function to download an image, then upload this image, then take that image and return it to the client," or think about proxies for your API so you don't have to store your environment variables on the client — you'd otherwise have to build out the server yourself. It just takes all of that and gives it to you in a package that you can then theme, add your own logos to, and modify to suit your own needs. And you can launch an app in really a few hours or something like that, I would say.

Wow, that's amazing. And so what type of apps are you primarily targeting to help launch? Obviously AI-powered stuff, but how far is the breadth and reach of the groundwork that you're laying here?

I think that with the flexibility of what you can do with these custom LLMs, there's quite a bit [00:09:00] of flexibility in what you can build with React Native AI. There are two main use cases. I've only talked to maybe four or five people that are doing this with the framework, but to me that's a decent amount considering that it's fairly new. One group basically forked React Native AI: they already have a React Native app and they need these features implemented in it, so they're copying and pasting different views and different backend components and integrating them into their existing app. Taking the components that are there and using them in a separate app — not actually using the entire codebase — is one use case. And then the other use case is people taking the whole framework and building on and extending it, by either removing, adding, or changing different components that are in there. So for instance, you can now easily train your own custom AI using a ChatGPT assistant.
So if you wanted to come up with some [00:10:00] really cool new way to interact, you could basically train just the backend, or change the backend endpoint slightly, change the logo, and then ship your app to the App Store with everything else already there — without having to change a lot of code. It would be really simple to ship that way. And the other group of people that I've talked to so far are building in that way. So I would say about half so far have used it to completely take the whole framework and ship that, and the other half seem to be taking different pieces and components and using those.

But essentially, I could imagine having a bunch of Lego bricks that allow me to interop with all these LLM APIs that are out there these days — and anything that could use those Lego bricks, this could be well suited for.

Yeah, totally. And it's interesting, because even a few months ago there was only really one [00:11:00] LLM that everyone was using, right? It was ChatGPT. Fast forward six months later, and you have Claude, you have Cohere, you have Gemini, you have Bard, and I think six to twelve months from now there are going to be even more. I think we're going to have probably 10 or 20 that are good, and then maybe you're going to start seeing more and more specialized ones: this LLM is good for coding, this LLM is good for that. So AI Buddy was a way to aggregate all of those into a single app, so you don't have to keep jumping from the ChatGPT app to whatever new app comes out — and it's a framework that's already ready to roll. That's what React Native AI also enables, if you want to give users the option to choose between different models. I want to continue adding new models. Right now it supports six different models, and I want to continue adding more — for instance, when Gemini has an API, I want to add that. And I'm sure we're going to see, like I mentioned, probably a lot more out there that people can then also [00:12:00] easily integrate through what I've done so far.

I'd love to ask about some of those six different models, because you've had the rubber meeting the road working with them. Before we do that, I just want to remind our listeners that this podcast is brought to you by LogRocket. If you're building a web app, no matter how big or how small, and you want to spend more time building and less time debugging in the console and dev tools, head over to LogRocket.com today. You can use tools such as AI-powered trend finding — fitting, since we're talking about AI — plus heatmaps, session replay, and all sorts of error tracking and analysis. So head over to LogRocket.com today and check it out for free.

So Nader, you mentioned you're working with six different AI models in the current iteration of React Native AI, and you also touched on, hey, maybe ChatGPT is good for this, Claude has a strength in that, and another is good for coding. What would you say are the most commonly reached-for wrappers or API integrations that you've built — maybe [00:13:00] ones you've seen issues raised about, or your current users asking questions about? And secondarily, have you seen any interesting dividing lines between the strengths and talents of the different models?

Yeah, definitely — and obviously ChatGPT and OpenAI is the most well known one.
They've also shipped more cool features that everyone else doesn't quite have yet — Code Interpreter and, I would say, Assistants, retrieval, vision; all of these features are now part of the OpenAI/ChatGPT platform, even DALL·E. So I think in terms of overall breadth and quality, ChatGPT and OpenAI have the number one spot. But personally, in terms of getting help with writing and speech and organization, I'm a bigger fan of Anthropic's Claude. Claude is similar to ChatGPT; it has double the size token window, [00:14:00] though — 200,000 tokens, meaning you can copy and paste most novels into it and then ask questions about them. And Claude also has a turbo, or faster, version of what they have, similar to how ChatGPT has — I forget what it's called, Turbo or something like that. Yeah, I don't even remember the name, but they basically have the really good one and then the fast version, and Anthropic's Claude also has something like that. It's sometimes not quite as good at coding as ChatGPT is.

And then there's Cohere. Cohere is good, and they have a really strong team, a lot of backing, and a lot of smart people there, so it's one that I'm also integrating and keeping an eye on. One of the cool things that Cohere shipped that isn't really available elsewhere — at least to the best of my knowledge, even trying things out this week — is something called Cohere Web, which automatically integrates every single website that's indexed, in existence, into [00:15:00] the results. What you cannot do with ChatGPT is get up-to-date information, but with Cohere Web I can say, "Summarize the top news articles from the past week," or "What are the coolest new restaurants that opened up last month in Manhattan?" — things like that. So I use each of these models for what they offer that the other ones don't, and having them all available in a single app is pretty cool to me.

And the last thing I'll add is that the API for Assistants is actually, like, an order of magnitude more complex than the regular API, for people that haven't built with it yet. With ChatGPT, it's really simple to use their API: you just send a prompt with an API key and it sends you a response. Really, really simple to use. But with Assistants, you actually have to send five or six recurring API calls back to back, and then you have to have this listener that kind of waits for the response to be ready, and then you have to fetch that response. Anyway, it's super complicated [00:16:00] to build compared, at least, to the regular API. Having that built into React Native AI has been a big value add, I think — a lot of people who were getting started with the Assistants API, regardless of whether they're using React Native at all, have actually checked out my implementation of that, said they really liked it, and copied basically all of it. I don't know, I think I got off track a little bit, but the main thing was that, for each of those different models, I outlined what I like about them the most, and I think we're going to see even more segmentation and more differentiation going forward.
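[Editor's note: for reference, here is a minimal sketch of the multi-step Assistants flow Nader describes above, written against the openai Node SDK's beta Assistants endpoints. It is not React Native AI's actual implementation — the function name, assistant ID, and one-second polling interval are illustrative.]

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Create a thread, add the user's message, start a run, poll until the
// run settles, then read the messages the assistant appended.
async function askAssistant(assistantId: string, prompt: string) {
  const thread = await openai.beta.threads.create();
  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: prompt,
  });

  const run = await openai.beta.threads.runs.create(thread.id, {
    assistant_id: assistantId,
  });

  // The "listener" part: wait for the run to leave queued/in_progress.
  let status = run.status;
  while (status === "queued" || status === "in_progress") {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const updated = await openai.beta.threads.runs.retrieve(thread.id, run.id);
    status = updated.status;
  }

  // Finally, fetch the assistant's reply from the thread.
  const messages = await openai.beta.threads.messages.list(thread.id);
  return messages.data[0];
}
```

Compare that with the single request/response round trip of the regular chat completion endpoint — that gap is the complexity Nader is pointing at.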
On the topic of your actual implementation for these different models in React Native AI — you mentioned DALL·E images, and everybody knows ChatGPT has image processing capability now. Can you talk to us a little bit about how you made images toolable in your stack and pluggable into different AI [00:17:00] frameworks?

Yeah, totally. When you're dealing with images, it's a lot more challenging than just dealing with text, because you're dealing with things like base64 encoding, you're dealing with buffers, different formats are supported in different ways, and the APIs to manage those are different. The actual APIs that deal with images on the backend are also different. Some of them require you to upload an actual hosted image, so you have to upload the image to S3 or to some other image service provider, and that service provider also has their own API and supports certain ways of uploading images. So putting it all together is just annoying as hell, to be honest. That's one of the things I liked about having this framework built now: when I want to spin up a new idea, I don't have to think about and deal with all of that — digging into the docs, dealing with file systems and file storage and the difference between fetch-blob versus the [00:18:00] file system API for Expo and React Native. And for someone getting started with React Native, I think it's super complicated and overwhelming to look at all of these different options and not know which one to use. When you're Googling, most of the results for dealing with files will show you how to use react-native-fetch-blob, which was a really great framework — I don't even know if you'd call it a framework, it was basically an npm module — that allowed you to do everything you'd need with a file: downloading it, changing the file type, uploading, and so on. But react-native-fetch-blob is not supported by Expo, and it hasn't even been maintained for a couple of years by the original maintainer. So you might actually build something with react-native-fetch-blob and then realize, oh, I can't actually use this feature that I want, because it hasn't been updated in a while. Anyway, not having to deal with any of that is pretty nice, and that was one of the things I wanted to make sure people didn't have to worry about [00:19:00] when dealing with that type of stuff. React Native AI just has it all built in for you, and it has a lot of opinions that are made for you, so you can go and look at it and say, okay, this is the right way to do things — at least the right way in my opinion.

Which services require a hosted image? Is it all of them?

I don't think all of them. I think some of them allow you to upload a base64-encoded image, and some of them allow you to upload a buffer. But most of them now, at least from what I remember, are — I would say not requiring, but preferring — an HTTP-hosted image. So you just reference the image source, and it gives you back another hosted image. And another thing is that some of these image services give you a temporarily signed image, meaning that if you want to share that image and you copy the hosted URL they gave you, it doesn't work a few hours later. That's something you have to consider. And what I've been doing [00:20:00] is just re-uploading that to my own file storage.
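[Editor's note: a rough server-side sketch of that re-hosting step, assuming Node 18+ for the built-in fetch and using S3 — which Nader mentions next — via the AWS SDK v3. The bucket, key, and function name are illustrative; this is not the framework's actual code, and a service like the one Nader uses would work just as well here.]

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// The image API returns a temporarily signed URL; download the bytes and
// re-upload them to storage you control so the link doesn't expire later.
async function rehostImage(temporaryUrl: string, bucket: string, key: string) {
  const response = await fetch(temporaryUrl);
  const bytes = new Uint8Array(await response.arrayBuffer());

  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: bytes,
      ContentType: response.headers.get("content-type") ?? "image/png",
    })
  );

  // A permanent URL under your own bucket (assuming public reads or a CDN in front).
  return `https://${bucket}.s3.amazonaws.com/${key}`;
}
```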
Something like S3, or — I forgot the name of the service I've been using lately — ByteScale, I think. Yeah, I like ByteScale. Using one of those, you can just re-host it. The image service will host it for you, but maybe only temporarily, so you go, okay, let me just go ahead and re-upload this so that the image lives forever for the user — because they might use it in their app, or just share it with a friend on social media or something.

So React Native AI — does it have ByteScale integrated into the backend?

It does. When you initialize a new React Native AI project, it'll prompt you for a bunch of different API keys, and one of those is ByteScale. Now, ByteScale is only needed for two of the different services, so it's not a requirement. I think the only two requirements for just starting with React Native AI are an OpenAI API key and a fal.ai API key, which is the image hosting AI service I've been using for the most part.

So, [00:21:00] of the different guardrails and polished surfaces that you've been laying down for React Native AI and integrating with these models — say somebody reaches for this framework with the mindset of "I need AI stuff, but I also need some other stuff," maybe because they're new to React Native and the fetch-blob module you mentioned. You have tooling built out to help people work with images; are there any other good use cases or modules that you spent time developing for React Native AI that you'd deem pretty useful, that people should look at whether they're using AI or not?

Maybe my theming implementation. I've used the same theming implementation in another app. When I was working at AWS, we had this framework for building conference apps. It was interesting, because I was looking for a way to build something that would show off all the different services I was a developer advocate for at the time, [00:22:00] but also a product that would be useful to a lot of people. I was going to a lot of conferences and speaking at a lot of them, and I was actually helping this conference called Chain React, which is a React Native conference in Portland — it's run by some of my friends, and to me it's the best React Native conference in the world. They wanted to add some features, so I jumped in and worked on their app for a couple of days, and I thought, oh, it'd be so cool if next time they — or anyone — wanted to launch a new conference, they could just get the app up and running in five minutes. Now, that sounds crazy, right? But when you think about it, the components that go into every conference app are pretty much the same: you have the schedule, you might have a form, you might have a map for people to get to the location — and that's it. Yet people end up building a new app from scratch over and over. So what I did was use infrastructure as code to describe the services you want in AWS, and then I added the ability to just go into this file and [00:23:00] change out the three or four theme colors and the logo. Then you can basically spin up a new conference app in five minutes, just with your theme and maybe five or six speakers' metadata — their name, their Twitter, their description, the time of their talk, and their talk title. Anyway, that was called Conference App in a Box, and it did really well.
A lot of people used it. But most interestingly, someone actually sold an implementation of that app to a client they were doing consulting for. They basically forked or cloned the app, ran it, and sold the deployment for $15,000 — and it only took them one day to actually make it, because all the code was already there. I thought that was a really cool story, because they did deliver $15,000 in value: the client was happy and got everything they wanted, but the consultant didn't have to spend a lot of time doing anything.

So the theming that I implemented there was really basic. It's really simple — you just provide the highlight color, the base color, those sorts of things. [00:24:00] And with React Native AI, you can go in and add a new theme in six lines of code and it automatically just works: you go into the settings and the theme is there. I think what you probably end up wanting to do — not all the time, but maybe if you're building a proprietary app — is set a theme that you want to use and then not offer any other themes to the user. You kind of remove that setting, because you might want to ship a unique design that no one else has, as opposed to letting the user choose their theme. Or maybe you offer a light and a dark theme or something like that. But the theming is kind of built in, and I think it was one of the most fun things to build into it. A lot of people like it.

What's next on your list in terms of features? Is it AI-specific stuff? Is it generalist setup like theming?

I want to continue adding a lot of the new LLMs, and I want to keep adding to and improving the existing features of what I have there over the next six to twelve months and just see where it goes. So far it's opened a lot of doors already — a lot of open source [00:25:00] does — with consulting opportunities, speaking opportunities, even this podcast and stuff like that. So it's a fun thing to work on, and it's also calling a lot of attention to AI Buddy. I've had a good number of new customers and signups there, so that's pretty cool. I'm trying to find maybe a better way to advertise AI Buddy within React Native AI — maybe I'll put a little link that says "this powers AI Buddy" and link to the app. I don't know, something like that.

And for folks listening — to get people brainstorming about how they can marry good ideas with this framework, like, "I could just pick up this framework that Nader made and spit out an AI app with a little engineering creativity" — I'm sure people are thinking, wow, what's possible, what's out there? Are there any passing thoughts you have in mind, where you think somebody's going to use this to do blank? I'm curious what one or two of those blanks are in your head, because you don't have time to do them, but you've been thinking of them.

Yeah, totally. I think the easiest way to start getting ideas is to actually [00:26:00] just start building with it and getting a feel for what creativity gets spurred through running the app and playing around with it. I think that's the first place you can start. All you need is Node.js installed; in your terminal, you just run npx rn-ai, and that's it — you're off to the races. You just enter your OpenAI API key.
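[Editor's note: to make the theming idea Nader mentioned a moment ago concrete, here is a hypothetical shape for one of those six-line theme entries. The property names are assumptions for illustration, not React Native AI's actual API.]

```typescript
// Hypothetical theme entry — add an object like this and pick it in settings.
const midnight = {
  name: "Midnight",
  backgroundColor: "#0b1020",
  textColor: "#e6e8f0",
  tintColor: "#7aa2ff",
  borderColor: "#1c2440",
};

export default midnight;
```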
And then you grab a fal.ai API key — which I also link in the terminal when you're running that command — and just run the app, start playing around with it, and start thinking of new ideas. Maybe change the logo, add your new theme color, and then maybe go into the OpenAI Assistants and create a new chatbot that's specialized in whatever your favorite thing is. Then shoot it to your friends and let them start trying it out and giving you feedback.

Very important: change the logo. Nader wants you to change the logo — and the name.

The logo is completely free and open for anyone to use, but, you know, you want to differentiate a little bit.

Of [00:27:00] course. So Nader, if people wanted to explore AIs and LLMs a little bit more — pivoting our heads away from React Native AI specifically — they could go to ChatGPT, they could go to OpenAI and check out their APIs, which I'm sure a lot of people listening have done. What are some other resources you'd recommend people poke into to expand their library of what to reach for?

Yeah, I would definitely start checking out some of these new services that are popping up that host all of the most cutting-edge models, beyond just large language models. fal.ai is the one that I've been using and have mentioned a few times. Replicate is also really great — you've probably heard of or used that if you're in this space at all. And then there's also Stability AI. So fal.ai, Replicate, and Stability AI are three really cool ones. Anytime a new model becomes popular on social media — some really smart researcher somewhere does a demo and then [00:28:00] open sources the model — within a few days those models are available for developers to start using via API from one of those services. So understanding how those services work is going to open the door to a lot of opportunity. I think just keeping an eye on Twitter for what's hot — because if a new model comes out and someone hosts it for the first time via an API, you could be the one to build the first cool user experience on top of it that anyone's ever used. That's how fast this stuff is moving. New models offer new types of UIs that no one has ever experienced before, and you can create them in a day, because all you need is that API endpoint and to upload an image or some text, and you have a new app that no one's ever had before. So keeping an eye on those different accounts on Twitter, following them, and seeing what they're tweeting about — that alone will probably give you a lot of cool ideas and opportunities.

Another great resource to follow is Nader himself. Nader, I know you have a [00:29:00] Twitter, along with a plethora of other socials, and you keep people up to date with what you're doing and what you're looking at. So what's your Twitter? And what's another place people can go to read about what you're doing?

Sure, I will plug. So, dabit3 — D-A-B-I-T and the number three — on Twitter, and you can go to nader.codes — that's N-A-D-E-R dot codes — which has all of my other links. It has my YouTube, my Substack, and just information about me, which is probably the most concise way for you to get all of the links.

Awesome. And if people wanted to check out AI Buddy, just to see what this framework can produce, that's on the App Store, right?

Yeah, AI Buddy is on the App Store. And if you Google React Native AI, the first thing that comes up is the framework.
So those are the two things we touched on the most here. Definitely check either of them out. AI Buddy is free to download, so you can try it out. React Native AI is obviously free to use — you just run the command in your command line — but also go to the GitHub to learn more; you can dive a little deeper [00:30:00] into some of the features, and feel free to submit questions, pull requests, and stuff like that.

Nader, thank you for taking time out of your busy day to talk to us about React Native AI and about AI models in general — how they're actually getting used to build the new stuff, because the new stuff is what we're all interested in here. It was great to have you on. Thank you.

Yeah, totally. It was really awesome chatting about this today. So thanks for having me.