Paul: Hi there and welcome to PodRocket. I'm your host Paul, and today we're joined by Hassan El Mghari. Hassan is from Philly, and he founded a company called UltraShock, which has a strong community on Steam right now, over 500,000 members. It's my understanding you sold the company, and you're now working over at Vercel. We're going to go into a bunch of details today on this podcast. Excited to have you on. You work at Vercel right now, and we're going to get into some of your doings and interesting explorations in ChatGPT. So welcome to the podcast, Hassan.

Hassan El Mghari: Thanks so much, Paul. Appreciate you having me.

Paul: So when you founded UltraShock, how long ago did you begin on this journey?

Hassan El Mghari: So this was back in 2014, back in high school, when I just started this company.

Paul: Almost 10 years.

Hassan El Mghari: Oh my God. It's been a while. Wow. Yeah, almost 10 years ago, I guess, at this point. It honestly started kind of accidentally. I used to be a big gamer back in the day, and I actually had some friends who were game developers, so I watched them put hundreds of hours into making these games that honestly a lot of the time were crap, and that's fine, but a good amount of them were pretty good, and they just couldn't get them on these big PC platforms like Steam. So it honestly just started with trying to help some friends get their games on these big PC platforms so they could actually sell them and make some money, and it kind of snowballed from there.

Paul: So you helped your friends sell their games, and then you ended up going on to selling UltraShock.

Hassan El Mghari: Yes.

Paul: And how many years ago did that happen?

Hassan El Mghari: That actually happened relatively recently, I think three years ago. It's a bit of a sad story. I ran it for a while, I got a pretty good offer to sell it, and then I ended up just holding onto it for far too long, until I completely lost interest and eventually just liquidated the assets a few years ago.

Paul: I feel like that's not the most uncommon story with a genuine passion project as the first thing that people run with. If it's your passion project, your first entrepreneurship run, it's hard to let go of the first thing that made you wake up, so to speak.

Hassan El Mghari: Absolutely. Yeah, I'd put so much into it.

Paul: So don't beat yourself up.

Hassan El Mghari: Yeah, I know. I'm not, honestly. I'm over it at this point. But there was something like a 10x multiplier between what I could have gotten for it and what I ended up selling it for at the end, because I made that mistake. But like you said, it was my first entrepreneurial thing, and it was a great lesson learned for the future.

Paul: You're at Vercel right now. We could say DevRel, is that an appropriate title for what you're doing? How's that going?

Hassan El Mghari: Yeah, DevRel is the right word for it. Developer advocate, some people say that as well. It's going great, honestly. I love how much flexibility I get in this kind of role and how many different teams I get to interact with. Some weeks I might be working on content, recording YouTube videos or writing blog posts; other weeks I'm planning a conference in San Francisco; and another week I'm writing a conference talk and going off to deliver it somewhere. It's so diverse, I honestly just really enjoy it.
Paul: It's funny that this is even a role in today's day and age, because it's such a fast-changing field that we need people like you, Hassan, to glue the slow speed that we move at as a body to the technology being developed by these niche teams such as Vercel. So you mentioned, "I'll make conferences, I'll make videos"... making videos, that's fun. I guess it could be, as long as it doesn't get too tedious. But that's a total switch from what you were doing, because UltraShock is you're writing code, you're making platforms, you're putting a community together, half a million people strong on Steam, and now it's less technical. I would like to say it's less technical. Okay, so on camera you're like, maybe. So I'd love to dig into what are some of those technical things you're working on at Vercel right now? What are some exciting things, that maybe you can share with us in public, that are changing at Vercel that you're excited about and have your fingers in?

Hassan El Mghari: Yeah, for sure. And just a point of clarification, I didn't really write much code when I did UltraShock Gaming. That was mostly running a community and doing game giveaways. It was definitely stronger on the marketing side. So in this role, I actually take some of the marketing knowledge that I learned with my last startup and some of my technical skills, and I get to apply both. I do write quite a bit of code at Vercel. To give you some examples: every year we have a really big conference called Next.js Conf that happens in October. For the conference last year, I ended up building our registration site for the in-person conference so people could go and register. I built the check-in system that we used on site. I built a site for our schedule. I built a dashboard that tracked all of the invites and stuff like that. So there's quite a bit of engineering work there. It's not core product work, I don't work on Vercel itself, but I do these kinds of supplemental things. If we need a website for a conference that we're doing, I'll go and do that, or collaborate with the engineering team to do that. I built this image gallery to showcase images from the conference, and right now, honestly, I've just been building a lot of example applications, especially with the new AI wave that's coming and the fact that we now have access to GPT-3 and all these really cool APIs. Yeah, there are so many things you could be building, and it's incredibly exciting.

Paul: We have to get into that next. You have a blog post out, and if you're curious about how to use GPT-3 or ChatGPT, if you're interested in either of those keywords and you want to figure out how to start using them, Hassan has an excellent post. He walks you through... what is it, Hassan, I think it's creating a Twitter bio, right?

Hassan El Mghari: That's right. Yeah.

Paul: Yeah. So when did you start using GPT-3? Because it's been out for a little bit now. I think the invite-only beta ran for like two years, and then ChatGPT dropped and everybody was like, wait, this is a thing, oh my God. So when did you step onto the platform, into the scene, start to integrate?

Hassan El Mghari: So my first exposure to it was actually about a year and a half ago. I was working at another seed-stage startup, and we got access through the beta, and so we were playing around with it a little bit. Even then it was pretty mind-blowing.
Then I kind of forgot about it for a while, and then ChatGPT came out and I saw people making all these cool apps, and I picked it back up honestly just last month, at the end of December. So about two months ago, and I've just been building little applications using it.

Paul: In your blog post, you walk us through how to generate a Twitter bio using GPT-3. Do I have to have messed with AI to read your blog post?

Hassan El Mghari: No.

Paul: No. Okay, I could just hop right in?

Hassan El Mghari: Exactly. And that's the beauty of this stuff. At the end of the day, I'm just a web developer who has used some AI APIs here and there, and that's what I love teaching. I love just talking to these front-end or full-stack web developers and showing them how easy it is to integrate AI into your application. Again, you can go really deep into this, and you can go and train your own model and deploy it to the cloud and do all these really fancy things. But a lot of these platforms will just give you access to the API, and it's just a simple API call most of the time.

Paul: So in your application, we must be using this API somehow, we're connecting to the API. So what happens between me deploying a normal old Next.js app, where I have a /interesting-ai page, and I go to that page and hit the API route in my Next.js app? Where does that get glued into GPT-3? Where does that step happen?

Hassan El Mghari: Yeah, so it's honestly just an external API that you're calling from within your Next.js API route. The big provider right now is OpenAI if you want to use GPT-3 specifically. And the way that operates is with just a prompt. So you can think of it as just making an API request to OpenAI with whatever prompt you want, getting the output, and sending it back to your front-end to display to your user. In that blog post specifically, I think the novelty is in using edge functions, because you can stream data back. One of the problems with these AI APIs is that they do take a while sometimes, especially if you're generating a lot of text. So think, if you're trying to generate maybe four or five paragraphs of text, it may even take up to 15 or 20 seconds to get a response back from OpenAI. And that's just bad user experience. If you're trying to generate a blog post and you click "generate blog post" and you just sit there looking at a loading spinner for 20 seconds, that's not great. But what's really cool is OpenAI lets you use this streaming functionality, which you can use with Vercel Edge Functions, which are very similar to serverless functions, but just smaller and faster. And you can use those technologies in tandem to build something that has a great user experience, where you can actually show the user data as it's coming in from OpenAI. So the user clicks submit or "create blog post," and within not even a second, they'll start seeing the blog post coming in. Very similar to how ChatGPT works, right?
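For concreteness, here's a minimal sketch of the kind of edge route Hassan is describing. It's an illustration, not the blog post's exact code: the model name and parameters are assumptions from that era, and a real implementation should use a proper server-sent-events parser, since chunks can split events mid-message.

```ts
// pages/api/generate.ts — a minimal sketch of a streaming edge route.
export const config = { runtime: 'edge' };

export default async function handler(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  // Call OpenAI's completions endpoint with stream: true, the flag Hassan mentions.
  const openaiRes = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-davinci-003', // illustrative model name
      prompt,
      max_tokens: 200,
      stream: true, // tokens come back incrementally as server-sent events
    }),
  });

  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  // Re-emit just the text of each SSE "data:" event as a plain text stream.
  const stream = new ReadableStream({
    async start(controller) {
      const reader = openaiRes.body!.getReader();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        for (const line of decoder.decode(value, { stream: true }).split('\n')) {
          const data = line.replace(/^data: /, '').trim();
          if (!data || data === '[DONE]') continue;
          try {
            const text = JSON.parse(data).choices?.[0]?.text ?? '';
            controller.enqueue(encoder.encode(text));
          } catch {
            // A real implementation would buffer partial JSON split across chunks.
          }
        }
      }
      controller.close();
    },
  });

  return new Response(stream);
}
```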
Paul: It's like the stream of text coming in. So there are two key technologies or product offerings that you noted that make this experience possible. We talked about serverless functions and Vercel Edge Functions. What's the difference between them?

Hassan El Mghari: Yeah, so serverless functions are basically Lambda functions, something that a lot of developers are familiar with and have used. Edge functions essentially run with a more limited runtime, and that causes them to be a lot smaller and a lot faster, so they don't suffer from the same cold start problem that Lambda functions suffer from, where sometimes, due to cold starts, your function may be delayed by one or two seconds. With edge functions, cold starts are virtually non-existent, with single-digit or double-digit millisecond delays. And the way they can do that is because they run in this more limited runtime. But the trade-off there is that you can't use every single Node.js API in your edge functions. So they are significantly more limited. Something like a heavy Node.js library, like Prisma for example, you can't really run in edge functions right now. So you have to choose when to use them. But for things like making simple fetch calls, and in this case all we're doing is making a fetch call to OpenAI and getting data back, edge functions are perfect for that use case.

Paul: When you say a more limited Node environment, it kind of reminds me of Temporal's technology, where you can define a workflow, it can run forever, and when I call on that workflow, it starts up instantly, because they essentially serialized a started V8 machine. Is that similar to what's going on in the background with Vercel Edge Functions, and why we don't have a cold start time?

Hassan El Mghari: It's a little bit different. Temporal uses servers to enable that technology, and I don't know enough about Temporal to dive into the similarities and differences, but edge functions under the hood, if you're familiar with Cloudflare Workers, that's actually what they use.

Paul: Cloudflare Workers, I know, have been out for quite a bit, but they've had some really compelling iterations in the past year or two that have made them much more usable. That's an interesting little tidbit for anybody who wants to peel back the hood a little bit. So coming back to your example, how again does the Vercel Edge Functions implementation of this piece of backend infrastructure enable the illusion of text streaming in as OpenAI is returning its bits and bytes over the wire?

Hassan El Mghari: So edge functions honestly just use standard web APIs, and that's the beauty of them. We're not using proprietary technology that you have to go and learn the ins and outs of. It's just using a web stream with the standard web API, and that, in combination with the fact that OpenAI does let you specify whether you want to stream certain responses, just makes it work really nicely. So you just define a new stream in your edge function. In the payload to OpenAI, you specify, "I want you to stream this back to me," so you set the stream variable to true, and then it just works.

Paul: And on my side, on the Next.js side, I don't have to worry about any crazy HTTP guts or anything.

Hassan El Mghari: Exactly, yep. On the Next.js side, all I do is define a piece of React state, and then, as the data is being streamed in, I continuously update that piece of React state. So in the JSX that I return, all I do is render that piece of state right there, and you can just see it updating as data comes back in.
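And here's the front-end half Hassan just described, sketched the same way: one piece of React state, appended to as chunks arrive. The /api/generate endpoint name simply matches the sketch above.

```tsx
// A minimal client-side sketch: read the streamed response chunk by chunk
// and keep updating one piece of React state as data comes in.
import { useState } from 'react';

export default function BioGenerator() {
  const [bio, setBio] = useState('');

  async function generate(prompt: string) {
    setBio('');
    const res = await fetch('/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });
    if (!res.body) return;

    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      // Each chunk triggers a re-render — the ChatGPT-style "typing" effect.
      setBio((prev) => prev + decoder.decode(value, { stream: true }));
    }
  }

  return (
    <div>
      <button onClick={() => generate('Generate a Twitter bio for a web developer.')}>
        Generate bio
      </button>
      <p>{bio}</p>
    </div>
  );
}
```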
Paul: In this example, you're having GPT-3 sort of complete a cookie-cutter prompt: create a Twitter bio. Do you see this sort of interaction with GPT-3 becoming individualized on a personal level? So on Facebook, there's not one global timeline, there's my timeline. I wonder, is there going to be a mini version of GPT-3 out there that is going to be making little Twitter bios? It knows me, it's my own fine-tuned model. Do you see that coming onto the playing field very soon?

Hassan El Mghari: For sure. Yeah. A lot of people have started enabling you to do this, where you can take these base models, like GPT-3, or like Stable Diffusion, which is a machine learning model for generating images. You can take these models and actually train them on something specific and then use that in your website. And that's actually how a lot of these AI startups differentiate: they use these base models, but they train them on really good data. I think this is what CopyAI does, for example. They use GPT-3, but not GPT-3 directly. They train it with tons of data from essays and specifically from the type of writing that they want to implement within their service. So the end result is actually better than the generalized GPT-3 model, because they're training it with very specific data. In the same vein, with something like Stable Diffusion for image generation, you can train it on very specific images. And a really good example of this is generating AI avatars of yourself, where you can take a Stable Diffusion model and feed it 20 pictures of yourself. Then you can say, "generate a picture of Paul riding a bicycle," and it'll do that with your face, which is kind of crazy.

Paul: That is crazy, because it needs to understand Paul, it needs to understand me as a human being, that I have legs and feet, and it needs to paint that onto the bicycle. It's a little creepy, but it's so exciting. And in the new Stable Diffusion 2 with the replace feature: oh, we see Hassan put a hat on Hassan, there's going to be a hat on Hassan, and everything else remains the same. It's not like it garbled the rest of the image, because it's able to tokenize in such a great way. It's really amazing. There's a podcast with the founder of Stable Diffusion that I saw earlier where he was touting how they went from 30-second generation to one-second generation. Now they're at 900-millisecond generation. And when Stable Diffusion 3 comes out, they're targeting 200-millisecond generation. We're talking about simulating a real world. And one thing he said was, imagine the capabilities this is going to have for the serverless world and for user interfaces. And I wish they'd gone more into that on the podcast, because I was just like, whoa, wait a minute, I can have a user interface that is my user interface. It's different for everybody. I don't know how that would translate. So Hassan, as somebody who has built a community and seen how people say, "I want this, I want that," and you needed to change and pivot your product and all your ideas throughout the past 10 years on this journey, how do you see AI, this technology, manifesting itself in the front-end? Your example covers the back end. I'm curious if you have any thoughts on the front-end.

Hassan El Mghari: Yeah, honestly, I think the innovations that are going to happen in the front-end are mostly based on this backend and what you said about making generations faster, because that's a really big thing.
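As an aside, to ground the fine-tuning idea Hassan described a moment ago: at the time of this conversation, "training GPT-3 on your own data" meant OpenAI's fine-tunes API, where you upload a JSONL file of prompt/completion pairs and then create a fine-tune against a base model. A rough sketch, with an illustrative file ID:

```ts
// A rough sketch of creating a GPT-3 fine-tune with the API OpenAI offered
// at the time. The file ID is hypothetical — you'd first upload a JSONL file
// of prompt/completion pairs via the /v1/files endpoint.
async function createFineTune() {
  const res = await fetch('https://api.openai.com/v1/fine-tunes', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      training_file: 'file-abc123', // hypothetical ID from the file upload
      model: 'davinci', // base model to specialize
    }),
  });
  const job = await res.json();
  console.log(job.id, job.status); // poll the job until the tuned model is ready
}
```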
Right now with training, for example, if I wanted to go and train a Stable Diffusion model on my face, it actually takes a while. It will maybe take 20 to 30 minutes for the model to train. But like you said, they're working on new versions where it won't take as long. And right now, if I wanted to just build a website where people can get AI avatars of themselves, I'm kind of restricted on the front-end a little bit by the fact that it takes 20 to 30 minutes. So you'd have to actually go to my site, you'd have to upload the images, and then you'd have to click generate, and then you'd have to wait 30 minutes, which is kind of awkward. And so I'd probably ask for your email and then send it to you later. That's a very different flow than if this process takes under a minute, let's say, where I can just show it to you on the site right after it's done. And it's just going to be very different in that sense. And I think as these machine learning models progress, they just give more options to front-end developers to build really cool things with them. I think it started with GPT-3 coming out, and then, as more and more of these machine learning models come out, there are services like replicate.com and Hugging Face, where some really smart AI researchers will go and work on a problem for a while and publish a paper and publish a Python model. And then people will take that Python model, deploy it in the cloud, and just give you an API to access it, which is insane. You're taking advantage of all of this research from these astronomically smart people directly in your application with an API call. And it's like there's no limit to what you can build with that.

Paul: If you go on RapidAPI, I see some of these APIs and I'm just thinking, I definitely saw this in a GitHub project somewhere, but nice. Cool little market opportunities that pop up.

Hassan El Mghari: Yeah, and people use them. I mean, just to give you an example, I made this website called restorePhotos.io, and it's just using one of Replicate's APIs that helps restore old blurry photos. This API, I send it a blurry photo and it gives me back a clearer photo. And so I built this side project in a day, over a weekend, and I launched it and it kind of went viral. And I think I have 130,000 users right now for this site, for this just little side thing that I published, which is crazy, because anybody can go and build that really quickly, right? The API is there on Replicate, and my project is even open source, so people have cloned it and built a lot of similar-ish apps. But yeah, it's still the number of people that I've gotten messages from who have been, "oh my God, this is so impressive. How are you doing this?" That just never fails to amaze me, because-

Paul: I'm just Googling, my friend.

Hassan El Mghari: Yeah, exactly, man.

Paul: Just Googling. For the audience, and anybody who's watching who's still getting into AI: Hassan, what is Hugging Face?

Hassan El Mghari: Yeah. So I actually haven't used Hugging Face very often, so you might have a better definition than me, but as far as I understand it, it's this kind of AI community where they publish a lot of these models and a lot of these data sets that you can just use, and a lot of them will have APIs. But I've seen that a lot of machine learning engineers will use something like Hugging Face, whereas web developers may use something like Replicate, which doesn't really get into the nitty-gritty. They just give you a very easy API endpoint that you can hit and get data back.
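Since the conversation doesn't show it, here's a rough sketch of what "hitting a Replicate endpoint" looks like: create a prediction, then poll it until the output is ready. The model version hash and input field name are placeholders; check the model's page on Replicate for the real values.

```ts
// A rough sketch of the Replicate flow Hassan describes: send an image URL
// to a photo-restoration model, then poll until the restored photo comes back.
async function restorePhoto(imageUrl: string) {
  const headers = {
    'Content-Type': 'application/json',
    Authorization: `Token ${process.env.REPLICATE_API_KEY}`,
  };

  const start = await fetch('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers,
    body: JSON.stringify({
      version: 'MODEL_VERSION_HASH', // placeholder for the model version hash
      input: { img: imageUrl }, // input field name varies per model
    }),
  });
  let prediction = await start.json();

  // Poll until Replicate finishes running the model.
  while (prediction.status !== 'succeeded' && prediction.status !== 'failed') {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const res = await fetch(
      `https://api.replicate.com/v1/predictions/${prediction.id}`,
      { headers }
    );
    prediction = await res.json();
  }
  return prediction.output; // the restored photo (output shape varies per model)
}
```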
Paul: One thing I like about OpenAI is they still provide a very easy-to-hit API like that if you want it. And they let you peel back the layers as you go. If you are an AI researcher and you want to do weights and biases and God knows what else, you can; they have support for that. And I think it's one of the most compelling things about their product. And I love that it's integrated right into your blog post. One more time, Hassan, the name of your blog post is Building a GPT-3 app with Next.js and Vercel Edge Functions.

Hassan El Mghari: That's correct.

Paul: If people want to Google it, that's what they can go look up.

Hassan El Mghari: Yep.

Paul: Right now within Vercel, are you guys looking for any new employees? I know the market's confusing, so people are firing and people are hiring. What's the state of things over there?

Hassan El Mghari: Yeah. So no layoffs here as of yet. We're tracking pretty well. I think we're in a really good place compared to a lot of other companies, because we very fortunately raised a very large amount of money right before everything went to crap.

Paul: Right in the nick of time.

Hassan El Mghari: Right in the nick of time. So we did our Series C and our Series D a few months apart from each other, right before everything went to crap.

Paul: Wow. It's almost like they had the hair stand up on their necks and they were like, hmm.

Hassan El Mghari: A lot of the employees at Vercel were kind of confused about why we raised so much money in such a short amount of time, because we were diluting shares by doing that. And we had these questions for the executive team of, "hey, why are we raising this much money? We don't need it." And the response we got back, I remember this very clearly, was that we're predicting an economic downturn in the future, so we want to make sure we have money in the bank. So we're in this very fortunate place where we have raised, I don't know exactly how much it was, I think it was around 250 million total between our two rounds at the end of 2021. So we're in a pretty good place. We did, I think, slow down hiring a little bit. So that's a thing. But we definitely are still hiring in general on our careers page.

Paul: Just to close things out, because I could chat with you about ChatGPT and GPT-3 all day, because the possibilities are endless, but we do have limited time here. So focusing back on Vercel specifically, with your involvement as a developer relations person: are there any things you might want the general public to be attuned to, looking out for, in the next quarter?

Hassan El Mghari: I'd honestly say just look out for more cool AI templates and blog posts. Yeah, that's primarily what I'm working on. I'm working on some secret stuff that I unfortunately can't share right now, but yeah.

Paul: Okay. Secret stuff. So can people follow you on Twitter if they want to know from Hassan specifically when the secret stuff is coming?

Hassan El Mghari: 100%.

Paul: What's your handle? Where can people find you?

Hassan El Mghari: It's Nutlope, N-U-T-L-O-P-E.

Paul: All right. Thanks again, Hassan, for your time coming on. It was a pleasure.

Hassan El Mghari: Yeah, thanks for having me, Paul. This was fun.