December 20, 2022

Episode 140: Rijul Gupta, CEO & Founder of DeepMedia

Rijul Gupta is a Synthetic-Media expert and thought leader with a degree in Machine Learning from Yale University. Rijul has spent the past five years developing patent-pending synthetic media algorithms while pursuing various ethical-only consumer, enterprise, and dual-use applications of the technology. He has been featured in Forbes Magazine for his work as a pioneer in the burgeoning Synthetic-Media landscape. As CEO of DeepMedia, Rijul has won pitch competitions at Harvard and Yale and maintains relationships with Professor Zucker (Machine Learning) and Professor Staib (Signals Processing) at Yale University. Rijul is a Thiel Fellow Finalist and a patented inventor whose entrepreneurial career has revolved around the study of DeepFakes and the technology behind them. He possesses rare and unique technical knowledge that is necessary to program and create a solution in this rapidly evolving field of technology.

Julian: Hey everyone. Thank you for joining the Behind Company Lines podcast. Today we have Rijul Gupta, DeepMedia CEO and Founder. DeepMedia is a revolutionary AI platform company that is setting the standard for responsible synthetic media use. Rijul, it is so exciting to chat with you and, and get to know your expertise and your knowledge within this space because I think a lot of people are, are hearing so much news about AI coming out.

We've seen ChatGPT come out. Um, I've played with the tool and, and tools alike, uh, Midjourney, which are building, um, you know, more pictures, and, and now with media and all these deep fakes, I think there's a lot of questions about AI technology, but I think there's a lot of excitement. Um, and, and you know, some people are affected by it, some aren't, and, and it's becoming more and more sophisticated.

So it's gonna be exciting to chat about that. But um, before we get into all that, uh, what's the most impressive deep fake you've seen, with your experience in AI?

Rijul: Hmm. Well, the most impressive deep fake I've seen to date was actually just released, uh, a week ago. Yeah. There was a video of, uh, someone who deep faked his own face onto Mark Zuckerberg's.

The audio and the video sounded great. I think it's that match, which hasn't really been seen until very, very recently, where you can simultaneously synthetically manipulate a face and a voice and put that together. Um, and that is really exciting, but also very, uh, frightening to someone like me.

Julian: Yeah. How does it impact the audience that's, um, you know, receiving the information in your, in your opinion?  

Rijul: I think the combination of having a synthetically manipulated face and voice adds another level of realism and believability that wasn't there. Right? Yeah. Like, a lot of the deep fakes that have come out previous to that, um, were very, uh, much considered gimmicky, right?

Yeah. Uh, still not entirely taken seriously, either as a threat or an opportunity, because it was clearly fake, because the voice didn't match, or the face wasn't exactly perfect, or it wasn't high enough quality. There were these edge cases. But what we've begun seeing for the past couple of weeks, and we'll continue to see in the first few months of next year, is the combination of face and voice technology getting to a place where all people are watching this and they cannot tell with their eyes and ears what's real and what's fake.

And that's, you know, essentially getting us to a place where we've become post-truth, post-reality on digital media platforms. Yeah. And you know, that's one of the reasons why we work so hard on detection AI and detection software: making sure people are safe from that type of content.

Julian: Yeah. It's, it's incredible, the sophistication that, that's becoming more and more fine-tuned to, to be so real.

Um, but back, back to, back to you and, and your experience with AI. Uh, for, for the audience who doesn't know, describe your relationship with AI and, and how you got started, and, and what essentially led to the inspiration behind DeepMedia?

Rijul: Yeah. You know, the company DeepMedia was founded in 2017. Um, it was founded when I saw a deep fake video for the first time. Uh, it was one of the old ones with Barack Obama and Jordan Peele, where Jordan Peele's face and his voice were being matched perfectly to Obama. And I saw that, and I knew that this technology was going to change the world, that in five years from that date, we would see hundreds of thousands of deep fake videos.

There'd be a lot of scams, a lot of fraud that people would need protection from, but also that the opportunity, when used ethically, could be huge. And you know, now, five years later, DeepMedia has partnered with the United Nations and the, uh, United States DoD. We're in discussions and POCs with some of the world's largest movie studios and YouTubers to make sure that the ethical applications of synthetic media, which in our eyes are universal translations, are advanced, while simultaneously, yeah, we can make sure that the unethical applications, which are face and voice swaps, um, can be stopped in their tracks.

Julian: Yeah, describe what the technology does, you know, whether you can get as detailed as possible with it or not. Um, but describe how the technology works and, and how it continues to self-improve.

Or is there, um, is there any, I guess, limitation to the technology based on the information that it has?  

Rijul: Yeah, so, right, all generative AI, or at least the vast percentage of generative AI, is based on either a transformer model or a generative adversarial network model. And they're both very similar.

They have, uh, encoder-decoder structures. What that means is that it takes as input either text, face, or voice. Mm-hmm. You can develop an AI that systematically reduces the dimensionality of that data and can then, um, operate in what's called the latent space, this low-dimensional space where you can edit parts of a face or a voice just with sliders.

Right. And then when you decode the edited information, the output looks like a face, it sounds like a voice. It feels and looks very real, but it has been synthetically manipulated. Yeah. So that is the underlying AI structure that is used, again, to generate most, if not all, synthetic face, voice, and text manipulations.
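To make the latent-space idea concrete, here is a minimal, hypothetical sketch in PyTorch. This is not DeepMedia's model; the input size, latent size, and the particular "slider" dimension are illustrative assumptions.

```python
# A minimal encoder-decoder sketch of latent-space editing (illustrative only).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=1024, latent_dim=8):
        super().__init__()
        # Encoder: systematically reduces the dimensionality of the input.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: maps the (possibly edited) latent back to data space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

model = AutoEncoder()
with torch.no_grad():
    x = torch.randn(1, 1024)   # stand-in for a flattened face crop or audio frame
    z = model.encoder(x)       # operate in the low-dimensional latent space
    z[0, 3] += 2.0             # the "slider": nudge one latent attribute
    edited = model.decoder(z)  # decoded output stays in the data space of the input
print(edited.shape)            # torch.Size([1, 1024])
```

In a trained model, each slider direction would correspond to a learned attribute (say, mouth shape or pitch); here the weights are random, so the edit is only structural.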

And then there's also, um, a large amount of post-processing work that goes into making these final outputs look and sound right. Now, most other corporations do that post-processing work by hand, in tools like After Effects and Flame. DeepMedia internalizes that post-production work through more classic computer vision and audio processing.

So, you know, working in OpenCV and Pillow and Librosa, doing the type of, um, lower-level, closer-to-the-machine engineering that all of these other platforms are built on top of, right? Mm-hmm. So instead of having to use something like After Effects... well, After Effects just uses classic computer vision that you could do in OpenCV or with raw C.

So we do that abstraction layer beneath it, which is much faster if you do it right, much easier, and significantly more cost-effective and scalable.
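As one concrete example of the "classic computer vision beneath After Effects" idea, here is a small, hypothetical OpenCV sketch that blends a synthetic patch into a frame with Poisson (seamless) cloning, the kind of lighting-and-color matching a compositor would otherwise do by hand. The arrays, sizes, and placement are stand-ins, not DeepMedia's pipeline.

```python
# Illustrative compositing step in OpenCV: blend a generated patch into a frame.
import cv2
import numpy as np

frame = np.full((720, 1280, 3), 90, dtype=np.uint8)    # stand-in for a video frame
patch = np.full((80, 120, 3), 160, dtype=np.uint8)     # stand-in for a generated mouth region
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)  # blend the whole patch

center = (640, 470)  # roughly where a mouth would sit in this frame (assumed)
# Poisson cloning matches the patch's lighting and color to the frame,
# so the synthetic region doesn't show a visible seam.
composited = cv2.seamlessClone(patch, frame, mask, center, cv2.NORMAL_CLONE)
print(composited.shape)  # (720, 1280, 3)
```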

Julian: Yeah. Are, are you working within just, uh, detection or, or are you also uh, uh, developing media for companies as well?  

Rijul: Yeah, so, you know, generation and detection are two sides of the same coin.

Our goal here is to make sure that we have the best detectors in the world, ones that are six to 12 months ahead of anyone else trying to generate synthetically manipulated threats, right? Yeah. The only way to do that is to simultaneously pioneer generation, so we have generative networks that can synthesize face and voice in 48 kilohertz audio and in 4K video.

The highest quality face and voice manipulation networks to date exist at DeepMedia, and that allows us to build detection software that can detect fake videos the moment they come out. We don't need to train on any, you know, on any videos out there in the public, because we have our own algorithms, our own data that let us operate at a very high level, right? Detection is always a cat and mouse game, and the way that we're better than everyone else at detecting, and better than the bad actors are at generating, is the fact that we do have our generation product, which is the universal translator.
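For illustration of the detection side, here is a toy sketch of a binary real-versus-fake classifier trained on self-generated fakes. The random feature vectors are stand-ins for face and voice embeddings; DeepMedia's actual detectors are not public.

```python
# Toy real-vs-fake detector trained on self-generated examples (illustrative only).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Linear(1024, 128), nn.ReLU(),
    nn.Linear(128, 1),  # logit: > 0 means "predicted fake"
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, 1024)        # stand-in features from real media
    fake = torch.randn(32, 1024) + 0.5  # stand-in features from in-house fakes
    x = torch.cat([real, fake])
    y = torch.cat([torch.zeros(32, 1), torch.ones(32, 1)])  # 0 = real, 1 = fake
    loss = loss_fn(detector(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The cat-and-mouse point maps to rerunning a loop like this whenever the in-house generator improves, so the detector always trains on the newest fakes first.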

Julian: Yeah. And, and in regards to, um, you know, when companies are on kind of the innovative forefront, how do you maintain that momentum?

It must, it must come at an extreme price in terms of the velocity that you have to go at, and, and how do you manage that level of speed? Um, when, honestly, there's limitations to any company's ability to kind of have all the information or all the technology at their disposal, because now it's so dispersed and, and it's so widely available.

Um, and people are just getting better and smarter at using technology like this. Um, yeah. How do you, how do you manage the, the velocity that you have to, uh, to move at?  

Rijul: I love that question, because I think it very clearly describes what's so unique and different about DeepMedia. Yeah, so at DeepMedia, our challenge was to figure out an application, a generative application of synthetic media, fake faces and voices, that was ethical, right?

Not voice swaps, not face swaps, not giving consumers the ability to swap their face with celebrities. None of that really fits our brand. We consider that to be contributing to misinformation. And so after a few years in the space, we came up with AI-enabled localization. What that means is automatically subbing, dubbing, and lip-reanimating content in 20 different languages.

That product and the development of that product lets us do a couple of things. It lets us raise capital and dedicate resources to generative AI technology. Right? Yeah. It allows us to build a product that can scale through product-led growth when we generate these output videos and deliver them to our customers.

They deliver it to end users and other people in the industry who see it. The content we're creating is the advertisement for our service and product in and of itself, right? So basically, it sells itself. And then finally, it lets us develop data sets. So when we're working with some of these creators, by perfecting our technology and making it better for the actual end-user customer, that allows us to develop the AI in a way that generates profit and revenue from day one.

And then that development is simultaneously fed into our detection AI, which means that we are getting better at detecting while generating revenue on the generation side.

Julian: Incredible. It, it's so fascinating. And I guess my follow-up question is, how do you define ethics, um, as a company? And, and is, is there some kind of, you know, standard? Or, um, you know, with, with like doctors, for instance, or even, you know, if you're going through research, there's always a committee that then kind of, um, you know, challenges your ideas or challenges your, your methodology, your thinking, to make sure you stay within the bounds that you want to continue in, and, um, you know, things don't get outta control in a way where you control too much of, of the, um, direction of something. Um, how do you, how do you manage ethics, um, in a company like yours?

Rijul: It's something that's very important to us to set up properly. Right now, we're thinking five to 10 years down the road, when, you know, DeepMedia is the pipe between content creators and content consumers, and we have the ability to completely synthetically manipulate any words or faces in any type of video that we control, right? Mm-hmm. At that moment, it's going to be critical that we would have set up ethical boundaries.

Right now, essentially what we're talking about, and what synthetic media and deep fakes are, is the battle for truth in digital media, right? Yeah. And so with that in mind, we have set up an AI ethics committee that includes both members of DeepMedia and external third-party people who aren't associated with the company, to set up these ethical boundaries and to make sure we're doing it correctly. That AI ethics committee has come up with two ethical pillars that we have set up, written down, and defined as core to our company.

The first one is about consent. Right. I think a big problem comes from people generating synthetically manipulated faces and voices without the consent of that person. Yeah. If that person grants their own consent, then a lot of those issues melt away. Right. That's a big one. Uh, making sure people just grant their acceptance for this.

Right. But then the second one that's equally important to us is about misinformation. Yeah. When we generate content, we are only doing it with the universal translator. We, just like everyone else in the space, you know, other people are doing face swaps or voice swaps, or giving that technology to consumers or enterprises.

We can do that. We have that ability, to generate these products and deliver them as software or consumer services, but we have specifically not done that, because even with the consent of someone who owns that face and voice, that technology can be used to contribute to misinformation. Right. Yeah. We will never build, and have never built, products that could ever be used to contribute to misinformation.

The only thing that we do is universal translation. Yeah. We take content in English, generate versions in Spanish that look like that person, that sound like that person, that are fully accurate. But the trick there is, they said those words in English; the content, the message, the meaning is accurate. It is not misinformation to translate that

into Spanish or Arabic, Hindi, French. Sure, they don't speak French, and if you're watching this in France, you might think that you can see Tom Cruise speaking French. Right. And I get that that might be a little bit of a gray area, but we have decided that that, uh, meets our two pillars of consent and, and, you know, not contributing to misinformation.

Julian: What, what gets you the most excited about sharing the information across different, you know, cultures and being able to communicate the original message? I mean, I hear all the time, I mean, I'm, uh, my family's from Mexico, and so Spanish to English, there's a, there's a difference when I'm speaking to, you know, my grandmother, Mya, um, in how things are communicated, and it can be slightly, you know, skewed. Um, and same with my partner who, um, speaks Mandarin. And so there, there's all these different subtleties in languages. So, one, what gets you excited for translating the message accurately? And two, how do you make sure it's accurate, with the different subtleties in languages being so, um, so I guess obvious?

Rijul: Yeah. So for that first part, what gets me most excited is the idea that we can share cultural experiences that exist across language barriers, right? Like, the internet and digital media were supposed to connect us together, right? Like Facebook and video chats and Twitter. It was supposed to bring the world together, and it didn't do that, right?

Like, we are still so isolated. Like, when I get on Facebook or, or TikTok, I see content that's either produced in the United States or the United Kingdom, and that's it. And the only reason for that is because the content is still within its own language barrier. Yeah. Now, there's been a lot of work in translation over the past 20, 30 years, but all of that translation work was based on text.

Right. But nowadays, more than ever, people communicate and consume media in video. That is how we talk to each other. That's how we watch movies and TV. Right? Right. And so if we actually want to build a world, a society, where we can collaborate, communicate, and exist as global citizens first, where we can solve global problems, communicate and collaborate globally, we have to fix that language barrier in media and communication, and it has to be done in video.

So I'm most excited about bringing people together, right? So people in America and India and China can hop on a video call, can chat about the most recent Game of Thrones episode or YouTube video that they like, and we suddenly have some type of cultural bridge across which we can communicate and connect,

right? Yeah. Now, to your second point: in order to make that work, the translation not only has to be accurate, it has to be more than accurate, right? Like, Google Translate right now will technically accurately translate something from English to Spanish. But the accurate, literal translation is not always the correct one, right?

You have emotion, and when you say something in audio and video, that emotion doesn't come through in text. Things like sarcasm and humor on a text-to-text based translation model just effectively don't work, right? And so for us specifically, we have built and pioneered and patented AI algorithms that take as input not just the text.

Our translation takes text and audio, and, when available, video, and translates that entire chunk of information into foreign languages. And so that is how we're able to get these translations where, a, the text is more correct than just a one-to-one translation, but also, b, the vocal synthesis has that emotion to it.

So when I say something in English, the way I'm saying it is translated as well. And that also flows to, to the video when it's there. So we manipulate the face, so the emotions in the Spanish, French, German output are translated in the audio and the video as well as the text.
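One way to picture the multimodal pipeline being described is as a single function that carries text, audio, and optional video through translation together, so prosody and expression survive the language change. The sketch below is purely structural; every stage is a labeled placeholder, and none of these names are a real DeepMedia API.

```python
# Structural sketch of multimodal translation (all stages are placeholders).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    text: str                      # transcript of what was said
    audio: bytes                   # source-language speech, carrying emotion/prosody
    video: Optional[bytes] = None  # face footage, when available

def translate_utterance(u: Utterance, target_lang: str) -> Utterance:
    # 1. Translate the text *conditioned on* audio cues (sarcasm, emphasis),
    #    rather than doing a literal one-to-one text mapping.
    text_out = f"[{target_lang} rendering of: {u.text}]"  # placeholder
    # 2. Synthesize speech in the speaker's own voice, transferring emotion.
    audio_out = u.audio  # placeholder
    # 3. Reanimate lips and expression so the face matches the new audio.
    video_out = u.video  # placeholder
    return Utterance(text_out, audio_out, video_out)

dub = translate_utterance(Utterance("Oh, great.", b"\x00"), "es")
print(dub.text)  # [es rendering of: Oh, great.]
```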

Julian: Yeah, it's in, it's incredible to, to think about the sophistication of that because, you know, with the subtleties of sarcasm, you know, in, in so many different languages, it's hard, it's hard to relay that sarcasm, uh, in a way that, uh, is effective.

Um, but is there gonna be a time where, uh, whether it's DeepMedia or, or someone else, um, where there's kind of this efficient, um, generation of content, going back to my previous point about Midjourney and, and you know, ChatGPT, where it's like generating content that's, you know, uh, I guess unique and, and also extremely efficient?

Um, is this gonna happen with video and, uh, audio? Say I'm Tom Cruise and I wanna make a, um, a, a commercial in five different languages. Can I use a company like DeepMedia essentially to, to push that content, to develop it and then distribute it? Or, or maybe not DeepMedia, but is there gonna be a future where there's gonna be an increase in the efficiency of, of, um, creation of content?

Rijul: Yeah. You know, when I look at the generative AI space as it relates to content creation, I think it's important to realize that the technology being developed right now is a tool, and like any other tool, it will be used by human beings. Not to replace content creation, but rather to augment content creation.

If you look at something like the invention of the camera, right? The camera, whether it was a, or sorry, a video camera or an image-based one, let people capture images in near realism for the first time. Yeah. That didn't replace painters. Painters didn't go away. It created a new type of art form, photography, and then videography.

So we are going to see, with the creation of these new tools, a new type of art form take place. It's not going to necessarily replace old types of art, but it will augment that art. Mm-hmm. What that means is that, yes, content creation will become more efficient, it will look and act and feel different, but it will still have and still require the same things that make art great, which is emotion and humanity and a story, saying something with that art, right?

Yeah. And so that's where we're going to see content creation go in the next five to 10 years. Now, the challenge is generative AI is not there yet, right? If you want to create content, um, from scratch, generative AI can't do that yet. GPT-3, GPT in general, can't write a novel, right? And so that's why DeepMedia focused on content augmentation.

What synthetic media can do, when applied properly and you have the right engineers, is take a commercial in one language, clone that person's face and voice, and spit it out in 20 languages. That's something we can do today. That's something we're actively doing today with the world's largest movie studios, the United States Federal Government, the United Nations, and the world's biggest YouTubers. That's happening right now.

Right? And so over the next two to five years, we will grow and expand and develop new capabilities, new augmentations, new tools, but for right now, we're 100% focused on universal translation.

Julian: Incredible. Tell us a little bit more about the traction. You said you're working with a, with a couple governments, some, some creators, um, some large studios.

Um, what's been exciting, not only about the growth of this year, and if you can attach numbers to it, that'd be awesome, um, but also for the future? What kind of partnerships are you, uh, working to develop that are gonna be, you know, even, even that much more successful in, in broadening the impact of the technology?

Rijul: Yeah. You know, this year for us was largely about that product market fit, which is often elusive, and it's one of those things where it's important to find customers that match the level of quality that you can, you know, attain right now. Yeah. Like, you know, especially for a company like DeepMedia, we are on the forefront, the cutting edge of generative AI, so we are figuring out things that have never been done.

This is not, uh, a typical tech company where, you know, there's a clear problem, there's a clear lack of efficiency, and you can use an AI to solve that. We are at the place, like, we are inventing the car, right? We are inventing markets. This is something that has never been done. And so if you look at something like just universal translation, for example, there are markets for that that already exist: dubbing, right?

And movie studios have entrenched relationships with dubbing platforms, dubbing studios. And those dubbers know how to do it. They're efficient, effective. You know, the audio comes through in 96 kilohertz, which is a hard benchmark to hit. Right? And so what we did when we were trying to find product market fit, instead of saying there's this entrenched customer base that has deep relationships with large companies that have existed in the space for 50-plus years.

They have massive, huge, high-quality benchmarks. We tried to look at, how do we approach this in an intelligent way? How do we create a market that doesn't exist? Right. And that's why we began focusing on the creator economy. Last year, in 2021, I think YouTubers made something like $18 billion in revenue from the YouTube platform alone.

Right. And if you look at how many creators actually make more than a million, it's like 20 to 30,000. Wow. So there are 20 to 30,000 customers making between 500K and a million dollars every single year, in just their original language. Our technology benefits greatly from scale, right? It is insanely scalable.

We can process hundreds of videos a week. That's something that traditional dubbing studios can't do, right? It is really effective in terms of price. The price to generate these dubs, I don't want to get into specifics here, but it's at least one one-hundredth the cost of a traditional dubbing studio. So if we looked at the benefits we have against our other competitors in this space, which are not AI companies but rather human-focused dubbing studios, we saw that, again, we are much more scalable, we're much more cost-effective.

But if we looked at some of the negatives for us, we don't operate in 96 kilohertz audio quality, right? We're at 48K. And it's things like that. So finding the product market fit isn't always about product development. It is about taking the product and the AI you have and finding customers that can exist and live and be happy with that product right now.

And so that was what 2022 was about for us: finding that product market fit, working with customers in the creator space that allow us to refine the technology, take that refinement, and then give that to some of our studio partners. Who, again, we're not saying we're gonna take the next Marvel movie and get it out in 50 languages.

What we can do is take nature documentaries, right? Yeah. Take content that has lower-level, lower-quality benchmarks, something where people aren't yelling and going crazy. It's relatively standard and straightforward. It's about getting a product up that meets a minimum viable level of quality, and then finding customers for that product, right?

Yeah. It's that combination of things that really makes it work. Now, for us, 2023 is about product-led growth. It's about taking our existing product, uh, and we're gonna be on platform with 20 creators starting January 1st, which we're very, very excited about, scaling that engagement, optimizing those channels, getting really good revenue for our creators, and then letting that sell itself, right?

Developing this phenomenon where other creators see their friends, their peers, 10x their revenue overnight by doing nothing. And then they come to us and say, hey, can you do that for me? And of course we can. And so then we scale up that engagement, um, in a lean, you know, way that's easy for us. It's a growth that we can handle and achieve.

It's basically, instead of hiring a bunch of SDRs and salespeople, we hire a bunch of engineers, make the product kick ass, and then people come to us.

Julian: I love that. Yeah. It's, uh, obviously it's taken probably, you know, so many hours of dedication to develop this technology, but I'm, I'm sure you're extremely excited about once, once it is so well developed that it does sell itself.

Because, you know, that's a, that's one of the biggest things for founders, is, is getting people to adopt a technology. Um, especially if it's something that's so new and, and there's, um, you know, ways to communicate the story behind it, but I'm sure once it starts, you know, once the ball starts rolling, it's, it's hard to stop it.

Um, what are some of the biggest risks that DeepMedia faces today?  

Rijul: I think our biggest risk is having too much customer interest, right? Mm-hmm. We're on the precipice of being able to completely change the world. I mean, we're talking about real-time video calls in any single language across the entire globe, right? We're talking about movies, films, radio, advertisements, corporations, in any single language, automatically.

Instantly, right? That type of growth requires a lot of scalable management, right? And being able to scale to meet that demand is going to be our biggest challenge in 2023. Um, it's killed companies before. Suffering from success is definitely a very real thing, and it's something I think about almost constantly.

Yeah. Like that is on the top of my mind every single night as I go to bed. So it's definitely the biggest, um, definitely the biggest risk in my mind, but it's something I think we are very well positioned to handle.  

Julian: Yeah. How, how do you mitigate that risk? Is it saying no to certain customers and kind of keeping a very tight, um, uh, loop in, in, in the customers that you do kind of assist now?

Is it focusing more and more on the technology and the product to, to get really, uh, sustainable and, and scalable? And what, what in particular are your strategies to mitigate that scaling risk? Because, yeah, like you said, companies have had, you know, times where they scale too fast and, and they implode.

Rijul: Yeah, right, exactly. So for us, it's threefold. It is effective hiring. That's number one. Getting the right people on the bus is the most important thing. This, uh, means engineers, hiring engineers that are very, very high quality, that meet our culture, which is definitely hardworking, but also very empathetic to our customers and to our coworkers and colleagues, right?

But it also means hiring, um, support staff, designers, people who can, um, manage YouTube channels, right? And people who not only know how to do that, but know how to, or at least are very excited about, taking their knowledge and internalizing it, with our engineers, into a scalable web platform. Right. So hiring's number one.

And then the second, which I alluded to just then, is relying on the tech, relying on the automation. Right. The most important thing for us is making sure we can develop, we can deliver, really high-quality content to as many customers as possible. Yeah. And that is not gonna come from scaling up to a 10,000-person organization; that will come from building the best, world-changing AI possible.

Right. So it's those two things in combination. And then finally, like you mentioned, it's about, yeah, saying no. It's about only delivering content to people that we are very, very confident can take this content and make something with it. Right? Yeah. You know, if there's a YouTuber out there who, um, you know, has extreme content where they yell a lot or cry a lot, being very honest and open with ourselves that with that type of content, uh, we cannot commit to that level of quality across all of our languages.

So saying no to that and focusing on the YouTubers who are looking into the camera, you know, making jokes. Maybe they're being angry sometimes, but it's relatively normalized emotions, right? Yeah. Not a lot of edge cases. It's being honest with ourselves and being committed and disciplined to sticking to product market fit and not going beyond that.

Julian: Yeah, that's incredible. I, I love the, the thought and the, the strategy behind that. Um, if, if everything goes well, what's the long-term vision for DeepMedia?

Rijul: So our goal in 2023 is to scale up in the creator economy, ending the year with at least a hundred creators generating, um, between 500K and a million dollars annual revenue per creator.

Right, by the end of 2023, we plan to develop real-time universal translation, uh, which means that in broadcasts, uh, sports, or in Twitch streams, or in video chats, we'll be able to have a real-time universal translator where people can speak 50 languages in their own voice with their lips synced up. Once that's live, by the end of 2023, uh, we will be scaling up across multiple, uh, revenue streams.

Uh, continuing the creator economy, but really developing and engaging with the professional film and television studios, automating dubs across the board, but then also getting into video, uh, communication platforms like this. Um, and then scaling up universal translation across all forms of digital communications and digital media.

Once that's done in 2024, 2025. Uh, and simultaneously, of course, we develop and release for free our deep fake detection to make sure that people are safe from this technology. By 2025, the synthetic media space will have been more developed. It will be more palatable to everyday users, both consumer and enterprise.

And then we will start scaling up across different product verticals, such as, um, whatever those may be, right? More like the generative AI content, which we talked about earlier, right? Once the tech and the space develop a little bit more.

Julian: Yeah. That's incredible. Um, I like to ask this next question, one for my audience to research, but also for myself, for selfish purposes.

Um, whether it was early in your career or now, what books or people have influenced you the most?

Rijul: Hmm. Good to Great, by, uh, Jim Collins. Yeah. One of my favorite, uh, books I've ever read. Um, that's where the idea of getting the right people on the bus before you know where you're going comes from, right? Yeah. So I take a lot of my hiring strategies from that. The Lean Startup, a classic to me.

Um, uh, Competing Against Luck is another big one that I've read. Those are probably the three that matter the most in terms of business. Um, outside of business, uh, The Prophet is a book that I read. I also read a significant amount of religious texts: Old Testament, New Testament, Quran, Gita, um, Vedas. Those are things that I read on a regular basis.

And then, um, I also read a lot of, this is just a hobby of mine, but, uh, physics. So I'm very interested in quantum mechanics, and so I'll often read a lot of papers that come out, um, with published research that people are doing.

Julian: Incredible. I love the, the diversity in the information you're taking in. I'm sure it helps kind of, uh, create a holistic approach as a leader.

And, and I and a lot of founders, you know, have that where, um, you know, they, they really like to entrench themselves in different types of knowledge. I think one, it's therapeutic, but two, I, I think it helps kind of broaden the, um, you know, the different perspectives that you might have. Um, amazing to hear the story and the journey and, and where you are.

Uh, I like to ask this question. I've been troubleshooting it and I, and I think I might keep it. Um, if you weren't working on this, what would you be working on?  

Rijul: Hmm. It's a really interesting question. It's hard for me to think, because there's nothing else in the world I'd rather be doing than this. Like, the idea, I'm, I'm a really big Star Trek fan.

Like, that show means the world to me. And the idea that we can take a piece of technology out of that show, like the universal translator, and make it real is just a dream come true for me. So if it wasn't, if I wasn't working on this, it would be something else from Star Trek. I don't know what that is. Maybe,

you know, a teleporter or something like that.

Julian: I love that. I love that. Um, well, I know we're at the end of the show here, and then I'm sure we could talk for hours and hours on end about different, you know, pieces of technology and, and different directions you can go with it. But, um, last little bit is I'd like to give my guests a chance to give us your plugs.

Uh, give us your LinkedIns, your website, your Twitters. Where can we be a fan and support DeepMedia, and, and maybe even get involved if we're someone interested in, in developing content in different languages?

Rijul: Yeah, definitely. We are deepmedia_ai on pretty much every platform. YouTube, Twitter, Facebook, LinkedIn.

Um, you can email me at rijul@deepmedia.ai or just go to our website. We are very interested in anyone who wants to take their content and make it accessible to people across the globe, or anyone who feels threatened by synthetic media manipulations. Yeah, please reach out. Um, we're happy to work together. In general, you know, a lot of our business model is revenue-share based, so it doesn't matter what size you are; I'm sure we'd be happy to work together.

Julian: Incredible. Well, I hope you enjoyed yourself, Rijul, and I'm excited to see the future of DeepMedia, and thank you again for joining the show.

Rijul: Yeah, definitely. Thanks for having me and thanks for doing this podcast. I think it definitely gives a really unique insight into, uh, what it's like to run a company that most people don't get.

So I appreciate it. I know, as a founder, I appreciate it. And I'll be, uh, you know, on the lookout for other people's as well.

Julian: Incredible. Thank you again, Rijul, and I'm excited to, to share this with the audience.
