TECH006: OPEN-SOURCE AI THAT PROTECTS
YOUR PRIVACY W/ MARK SUMAN
14 October 2025
IN THIS EPISODE, YOU’LL LEARN
- What Maple AI does that other AI tools don’t—end-to-end encrypted, verifiable privacy.
- How Mark’s time at Apple shaped his vision for secure, user-first AI.
- The threat models Maple addresses and how enclaves + attestation work.
- Why inference speed and efficiency—not open weights—are the new AI battleground.
- Where decentralized AI fits into today’s landscape.
- Which professions benefit most from private AI.
- How users change behavior when they trust the AI system.
- The risks and critiques of TEEs—and how Maple answers them.
- A step-by-step guide to getting started with Maple today.
- Mark’s vision for verifiable, private AI over the next decade.
TRANSCRIPT
Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.
[00:00:00] Intro: You are listening to TIP.
[00:00:03] Preston Pysh: Hey everyone. Welcome to this Wednesday’s release of Infinite Tech. Just like Bitcoin separated money from the state, decentralized inference is now separating AI from big tech. It’s a quiet revolution, shifting control of intelligence itself from centralized data centers to individuals and small developers who can run powerful models privately, securely, and anywhere in the world.
[00:00:24] Preston Pysh: Today, I am joined by Mark Suman, founder of Maple AI, to unpack how this is being made possible through trusted execution environments, secure hardware that protects both data and computation. It’s a glimpse into the foundation of a truly open AI ecosystem. And so without further delay, let’s jump right into the interview.
[00:00:46] Intro: You are listening to Infinite Tech by The Investor’s Podcast Network, hosted by Preston Pysh.
[00:00:52] Intro: We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond empowering you to harness the future today. And now here’s your host, Preston Pysh.
[00:01:20] Preston Pysh: Hey everyone. Welcome to the show. I’m here with Mark Suman and I’m really excited to have this conversation, sir, because this is such an important topic, like crazy important, and I think it’s only getting started. But I think everybody’s going to come to the realization of how important this topic is in the coming five to 10 years.
[00:01:40] Preston Pysh: So welcome to the show. Excited to have you here, and really excited to get into this.
[00:01:45] Mark Suman: Thank you. Yeah, I’m excited to be on here. I’ve listened to your show quite a bit, so it’s cool to be on here and chatting with you.
[00:02:02] Preston Pysh: On the privacy front, I think this is something that is super relevant to where we’re going to go with open source, decentralized AI, which is what you’re building here with Maple AI. But what did you see while you were there at Apple that encouraged you or gave you the motivation to go out and start what you’re doing right now?
[00:02:21] Mark Suman: Yeah, sure. So privacy has been part of my career from the beginning. I started off doing online backup software for people back in like the early, I don’t know the 2000s, right? And it was all about how do we save your computer into this new cloud thing that everybody’s talking about.
[00:02:36] Mark Suman: But we wanted to offer people a private way to do it, because you could back up all your photos to someone’s computer, and the person who runs that computer can see everything. So we would provide people with this private key that they could use on their computer and encrypt everything before they sent it to the cloud.
[00:02:51] Mark Suman: That’s kind of where I got my start. And so privacy was always kind of part of who I was. Fast forward to when I joined Apple, and on day one, my new manager sits down with me and says, I want you to build this thing. That we’re going to use in the retail stores, but we have to do it in a way that’s totally private because Apple cares about privacy, right? It’s one of the core things.
[00:03:10] Mark Suman: What I can say, like it truly is one of the core tenets of Apple, seeing it on the inside. So from like probably the third week of my project, I was engaged with a privacy lawyer and they were kind of part of the journey throughout the whole thing. And it’s like, okay, how do we build this thing?
[00:03:25] Mark Suman: Normal companies would just capture someone’s face and capture their identity and look at their banking transactions and all these things, right? Normal companies would do that. Apple doesn’t do it that way. We have to separate all the stuff. We have to find ways to do it that are totally privacy preserving, so it made things difficult.
[00:03:41] Mark Suman: We had to innovate and invent new things nobody was doing. We had to invent totally new tools for tagging and annotating AI and machine learning training data, in ways that were totally privacy preserving. So it’s some really cool stuff, and I’ll just say, like, it’s truly part of who they are.
[00:03:59] Preston Pysh: So when I look at this whole AI race and you see who’s emerging as the dominant players, you got Google, you got OpenAI, you got xAI, which is really coming from behind. And then you have Apple. It was almost like an ongoing joke amongst friends, with Apple Intelligence and how slow it’s been, and how they just don’t seem to be playing in this space.
[00:04:25] Preston Pysh: You look at all the GPUs that these companies that I just named are standing up and buying and all the hardware infrastructure that they’re building out, and Apple just kind of seems to be standing on the sidelines.
[00:04:35] Preston Pysh: Do you think that this is because of their focus on privacy and why they haven’t been able to play in that space? Or is it just the bad leadership? What, is this like? Help me understand what’s going on over there.
[00:04:49] Mark Suman: Yeah, I’ll definitely put out the plug there that I’m not speaking on behalf of Apple. I’m not a representative of Apple. But from my own personal view, and things that I can talk about that have been publicly mentioned, there’s definitely a privacy angle to it.
[00:05:02] Mark Suman: Because everything moves slower. Apple wants to do it in the way that’s most private. You know, they announced their Apple private cloud compute, what, two years ago now? And so they’re using secure enclaves. They’re trying to do it in a way that’s verifiable. They don’t open source their code, and so you’re trusting third party auditors who have maybe seen images of the server code.
[00:05:22] Mark Suman: So there is still a layer of trust there. They’re trying to do it in a way, from what I can tell that is responsible. I don’t want to use the word responsible because in the AI space, responsible tends to mean censorship, but they are trying to do it in a way that cares about the user. So I think that’s an element.
[00:05:38] Mark Suman: And then also, Apple’s just a giant company, right? They have over a hundred thousand employees. And so you’re going to have different organizations. You’re going to have the typical turf wars that go on in any large organization, where someone wants to build out their headcount, and maybe there’s five competing products being built internally and one gets funding and the other one doesn’t.
[00:05:55] Mark Suman: So there’s probably a lot of that going on too. I’m not going to dismiss that aspect of it. So I think it’s both of those at play here.
[00:06:01] Preston Pysh: Yeah. Help educate us just on, let’s start here. So when you’re just looking at this from the viewer standpoint of open source, decentralized, whatever terminology, help us understand what you think the best terminology for this is.
[00:06:16] Preston Pysh: Preserving your data, everything that you put into these AI models. Is it possible? I’m assuming the answer is yes, but give us your 60-second overview of how you see the world as it comes to AI moving forward and how people can preserve and protect their data that they’re feeding into these models.
[00:06:35] Mark Suman: Sure. The word that I would use is verifiable, which coming from the Bitcoin space, it’s always don’t trust, verify.
[00:06:41] Mark Suman: And I apply that exact same thing to AI. And verifiable can mean different things, right? It’s not prescribing a specific technology, it’s prescribing an ideology of being able to understand, inspect, look at what’s going on with your data. So verifiable could mean open source code. It could mean running the LLM locally on your computer.
[00:07:02] Mark Suman: If you want to take advantage of really powerful cloud compute, then run it in the cloud, but use something like secure enclaves and trusted execution environments that can give you mathematical proof that, you know, the server matches the open source code that’s on GitHub, that kind of thing.
[00:07:18] Mark Suman: So really it’s being able to inspect, it’s being able to verify everything that you’re running, whether it’s the LLM, whether it’s your data in your AI memory that you’re hosting. You want to be able to look at everything so that nothing is kind of hidden in there that you don’t know about.
[00:07:32] Preston Pysh: Help us understand the threat here. Like how bad could this get five, 10 years from now? Is this something that people should be very concerned with or just moderately concerned with? Because I can tell you I have an OpenAI account.
[00:07:46] Preston Pysh: I find myself feeding stuff into this that three or four years ago, I’d be like, ah, no, I wouldn’t be using it that much. But I find myself using these models all day long, asking them questions. I can fully understand how it just knows everything about me at this point based on the things that I’ve fed it, and I’m not proud of that.
[00:08:08] Preston Pysh: It’s a thing of convenience that I find myself, and I suspect many people that are listening to this, also participating in. And I’m concerned that you’re going to get to a point where you get so addicted to using these types of models that it’s really hard to wean yourself off of it. So what is this threat?
[00:08:27] Preston Pysh: What does this look like? And is there a point in the future where everybody comes to a realization of, like, how terribly dangerous this really is? Or do we just keep going down this path where we just keep feeding it more and more?
[00:08:39] Mark Suman: Yeah, I mean, I don’t fault anybody for using these tools. Convenience is an amazing thing, right? It’s why technology exists. Technology comes around to make things more convenient for people. It adds value into their life, and so they grasp onto it. And then there’s always trade-offs. And so I have an OpenAI account, I’ve got a Grok account, I use them. And I use them in a way that I’m trying to minimize my exposure. My privacy exposure.
[00:09:03] Mark Suman: I also obviously build Maple, me and my co-founder, and so I use that for different purposes. But the threat that I see, you talk about five to 10 years down the road, the difficult thing, as I heard it described recently chatting with a friend, is that as you kind of give away your thought process to a proprietary AI service, it takes that, and there’s really no getting it back, right?
[00:09:24] Mark Suman: They have it now forever, and they can make as many copies as they want to, and then they can choose to put that into their model. They can choose to manipulate it if they want to. They can do whatever they want to with that data, and you’re just not getting it back.
[00:09:36] Mark Suman: So five, 10 years down the road, if you look at what’s unique about you as a human, it’s really the way that you think. Your face is unique for sure, but you could probably find a pretty good doppelganger out there that looks similar to you, but your memories, your thought process, the way that you perceive the world is probably the most unique thing about you.
[00:09:56] Mark Suman: Yeah. And if you’ve kind of given up that thinking process to another machine that has now captured it and can train off of that, we might be giving up the thing that makes us uniquely human. I think the threat could be viewed in that lens of are we turning over some of our humanity to a proprietary system?
[00:10:14] Mark Suman: And I’m working on a long-form article about this right now. It’s a phrase I’m calling subconscious censorship, and we can dive into that if you want to. But it’s really this notion that these proprietary systems capture your memories and capture your thought process, and then they can be instructed, given directives, to alter your memory to be more mainstream or less mainstream.
[00:10:33] Mark Suman: You know, they can guide you and direct you how they want. And that’s really the threat, right?
[00:10:37] Preston Pysh: The model could, if the desire’s there. Let’s say it’s somebody very influential that’s using one of these accounts, and you want to start shaping and transforming what they think. You can go in there and exploit it, because you kind of know what their desires are and what their interests are, and then you just kind of slowly correct it by putting it into a rut and into the direction that you want it to be shaped.
[00:11:03] Preston Pysh: Is that really the deep concern or the risk for some people that are using these over time?
[00:11:10] Mark Suman: And we’ve seen those methods already used with social media feeds, right? We all talk about the algorithms and how we don’t like them. We don’t like the For You tab on X. We’d rather have the chronological timeline.
[00:11:22] Mark Suman: Same thing with Instagram. People got very upset when Instagram flipped over to an algorithmic timeline, but at the same time, we just keep using it. And so we’ve seen how, just by the way that they order the posts, they can affect your emotional state, right? So maybe there’s a piece of good news that you’re going to see on your timeline, but for whatever reason, they’re motivated to keep you in an angry state.
[00:11:43] Mark Suman: So they’re going to show you something that’s really maddening right before that good news. So now you’re in a totally different head space when you receive that good news, and maybe it downplays it. If you take some of those tools of persuasion that they use and apply them to AI, well, now AI knows you intimately, and so it knows where to, like, place an anchor of a false fact in the output that it gives you.
[00:12:04] Mark Suman: So it’s guiding you and now it’s emotionally triggering you. And it can do all these things in a very subtle way, and it can repeat them and change the permutation of it in thousands of iterations over the course of weeks, months, years that we’re working with it, until the point where we don’t realize that we’ve been kind of guided into this rut, as you call it, and led a different direction.
[00:12:25] Preston Pysh: Today, I don’t suspect that, you know, these large models that we’re using, call it xAI or even OpenAI, I don’t think they’re being used in a way that’s trying to psychologically manipulate people. But I think it can go there really fast. And I think that’s the concern, right? When does the government get their hands potentially in something like that, and then start using it for exploitation as opposed to just an everyday tool that’s being tailored to help you out or make your life better?
[00:12:54] Mark Suman: Yeah. I’ll jump in there. I don’t want to be like super doom and gloom.
[00:12:57] Preston Pysh: Yeah. Neither do I.
[00:12:58] Mark Suman: Yeah. Right. Yeah. I tend to view technology as this amazing gift that we have, and this thing that we’re building. And so I love it. And obviously I love using AI, because I’m working on that daily in my life.
[00:13:10] Mark Suman: And so I want to just call out this vulnerability that I see. And I think that the mitigation is really verifiable AI. So if we use models that have been trained on open data sets, and we can see the weights, we can see the biases that went into training, and then we can see the code that we’re running and we can run it locally on our own data, or we’re running it in the cloud with something like trusted execution environments. These kinds of things allow us to see what’s going on.
[00:13:36] Mark Suman: Now we can kind of have our cake and eat it too. We can use these powerful models and this powerful technology without having this risk that we are going to be led away. Because I think you’re right. I think right now we’re not seeing that kind of manipulation going on.
[00:13:49] Mark Suman: We have seen them introduce things like advertising and shopping experiences, and a few weeks ago it was, hey, you know, ChatGPT is going to work while you sleep, and when you wake up you’re going to have recommendations in front of you. Which basically means we’re going to put advertisements in front of you in the morning of all these cool things that you want to get, and you can see how that can quickly turn into something more.
[00:14:11] Mark Suman: They’re building this tool that could be used in other ways, and I would prefer to basically not give myself to that system and instead go down a path of openness and verifiability.
[00:14:21] Preston Pysh: Yeah. I think, with the example that you provided earlier, as far as social media, it seems that AI has been used in almost like a dark way when you start talking about social media and what’s coming into the feed. But as far as the chat context windows and the interaction, it seems like we’re still very early in building that all out, so it hasn’t necessarily entered into that form yet. I think anybody listening to this is saying, I like the fact that it kind of knows me and can give me tailored, custom-made responses.
[00:14:57] Preston Pysh: I love that. I just wish that I knew it was partitioned or on my own server, and that nobody else could see that data, and it was something that was inside of my control. I think everybody listening to this would agree with that statement. So walk us through, because that’s what you’re trying to do, right? Walk us through how you’re trying to do this.
[00:15:17] Mark Suman: Okay. Yeah. And I think what you just described is what most people want, right? This is the tale as old as time with privacy technology and freedom technology: we know that we should probably be better about our data and better about the information that we share, but it’s just so useful and so convenient, and we see so much productivity gain from using some new technology, that we’re willing to close our eyes and plug our nose as we use it. And I’m just as guilty as everybody else of doing that, because we have to make trade-offs in our personal lives.
[00:15:48] Mark Suman: And I mean, we haven’t really talked about the potential for data leaks, right? Maybe I’ll just drop this in here really quick and then I’ll answer your question directly. But we’ve seen with ChatGPT, and with Grok most recently, that both of them had bugs in their software where chats were being indexed in Google search results.
[00:16:06] Mark Suman: And so, I don’t know if you saw, this was like a month or two ago. People were searching for stuff and finding personal chats that others had made. These were specifically around the share button. So in ChatGPT, you can click share, it gives you a private link that you can send to somebody, and now somebody else can read that chat.
[00:16:21] Mark Suman: And so it was still meant to be private between you and someone else, but Google was picking up on those, and now people could search, and it would be things like somebody chatting about their marriage, you know, and marriage difficulties they’re having. And then they send it to their spouse and say, okay, here’s what our AI therapist on ChatGPT told us, and now their marriage details are spilled out onto the internet.
[00:16:40] Mark Suman: So when you give somebody else your data, that’s a risk you’re taking on: having stuff like that happen. What we’re trying to build is verifiable AI that people can use, so they can see everything through the process. We build everything in the open. All of our code is online before we push anything to the servers; before we push anything as an update that you can download, you can see the code first, so people can inspect it.
[00:17:04] Mark Suman: Then we also know that local AI is really the most private AI, something you can run on your phone, you can run on your computer. It’s never going to get more private than that. Turn off the internet, talk directly to it. You can inspect it before you use it. Like, that’s the utopia right there. But not everybody has a powerful enough device yet to do that.
[00:17:22] Mark Suman: It costs tens of thousands of dollars to run these biggest open source models. So we’re trying to give people an in-between. We run secure enclaves in the cloud, we push our code there, and then what it does is it gives you what’s called an attestation, which is really just a mathematical proof, and it’s a way to match.
[00:17:39] Mark Suman: So it’s a way to say, okay, you have this code that’s open source on GitHub, but how do I know that you’re actually running that exact same code on your servers? There’s a lot of other private AI out there that says, hey, here’s our open source code, we’re private, we’re not tracking what you do, but you can’t actually check. You can’t verify that.
[00:17:55] Mark Suman: So we’re trying to be as transparent as possible, and so we provide that mathematical check. So when you go into Maple, you get this little green verified check mark. Yeah. You can click on that. You can see all the details. It’s really similar to that lock icon when you go in your web browser and log into your bank.
[00:18:10] Mark Suman: You get that secure socket layer lock icon, HTTPS. I view this as, like, the next iteration. The internet started open with HTTP, and everybody was just going around websites, viewing them. And then when we started having usernames and passwords, we had to come up with something better, so we did HTTPS.
[00:18:26] Mark Suman: Now I view this as the third iteration. I’m calling it HTTPS-E, for Secure Enclaves, which is my own fun little moniker. But it’s really this way that we can now verify the code running on the servers, because we’re trusting the cloud with so much more every day. So we need to provide a way for people to verify that.
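The attestation check described here, matching what the enclave reports against a hash of the public source, can be sketched in a few lines. This is an illustrative sketch only: the single `measurement` field and the function names are invented, not Maple’s actual schema, and a real client would also verify the hardware vendor’s signature chain over the attestation document.

```python
import hashlib
import json

def measure_build(artifact_bytes: bytes) -> str:
    """Reproduce the code measurement locally from a reproducible build artifact."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def verify_attestation(attestation_json: str, expected_measurement: str) -> bool:
    """Compare the enclave's reported measurement against the hash of the
    open-source build we trust. A real flow would first check the hardware
    vendor's signature over this document; that step is omitted here."""
    doc = json.loads(attestation_json)
    reported = doc["measurement"]  # hash the enclave reports for its loaded code
    return reported == expected_measurement

# A client accepts the server only when the measurements match.
artifact = b"enclave server binary built from the public GitHub source"
expected = measure_build(artifact)
attestation = json.dumps({"measurement": expected})
print(verify_attestation(attestation, expected))  # True when code matches
```

The point of the green check mark is exactly this comparison: if the server were running modified code, its measurement would differ and the check would fail.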
[00:18:42] Mark Suman: So that’s effectively what we’ve built with Maple. But I’ll say one more thing and then, you know, I’ll let you respond if you want to. What we’re trying to do is, we know that people don’t want to give up their convenience just for the sake of privacy. It’s a really tough sell to make.
[00:18:57] Mark Suman: Really tough. So we are going to build, effectively, ChatGPT, but it’s going to be better, right? We’re building ChatGPT, but it’s going to have privacy at the core. And so we are going to give people all of those core amazing features that they get out of ChatGPT and Grok, but they’re also going to have privacy built into it, rather than someone harvesting their data for other business purposes.
[00:19:20] Preston Pysh: The challenge on that last part, Mark, is people hear that, and they know that these large models, especially the newer ones, call it GPT-5 or Grok 4 Heavy, they’re training even more powerful ones. Getting access to that in a way that you can leverage it in an open source kind of way is the challenge.
[00:19:40] Preston Pysh: And people are just looking at the performance of these models. I know I’ve played around with these models and I’m saying, you know, the open source versions or the open weight versions are just not even close to what these newest models are doing. Is that something temporary that’s going to persist for the next two or three years, and then all of a sudden the open source or open weight versions are going to be up to speed with where some of these larger models are? Or are the larger models always going to kind of be outpacing where the open weight models are at? I think that’s the concern that people have before switching over to something like this.
[00:20:15] Mark Suman: Yeah, it’s a valid concern. Especially in the early days, the open models were significantly worse than the proprietary ones. But we’ve seen an acceleration. I mean, ChatGPT came on the scene two and a half years ago, maybe three years ago is when it really caught on, and we’ve seen the open models catch up a ton in that timeframe, you know?
[00:20:33] Mark Suman: They were like 50% as good, then 75%. Now they’re like in the 90% range. You have a coding model, Qwen3 Coder, which is scoring just as well as some of the proprietary models on programming, in some areas, not all areas. And we’re seeing a point where, like, the benchmarks, however you want to define that, are really getting similar.
[00:20:52] Mark Suman: And then it comes down to just using it and seeing how it behaves for you. Really, most people don’t need to have that extra, like, 3% in their model to get a lot of value out of it. And then the other thing we’re seeing is, you bring up GPT-5, and arguably GPT-5 is an incremental increase over GPT-4, and a lot of people complained and wanted them to go back to GPT-4 and make it available again.
[00:21:17] Mark Suman: Some people have done, like, introspection into the routing technology behind GPT-5 and think that there’s actually still a lot of GPT-4 just under the hood that’s helping to power version five. And so you look at that and you say, okay, maybe their progress has slowed down just a little bit. And then also they open sourced GPT-OSS, which is really just kind of GPT-4o under the hood.
[00:21:38] Mark Suman: And so you’re seeing that open source, from their standpoint, is starting to catch up. And then you have this whole market dynamic of the Chinese models. You have DeepSeek, you have Qwen, you have these other ones, I’m blanking on the other one right now. But you have these other ones that are coming up.
[00:21:52] Mark Suman: And in order for them to compete with these big proprietary models, they’re going open first. And they’re trying to be as good so that they can compete. So I think we are seeing this world where the open source catches up just enough that it becomes just as valuable to a regular, mainstream person as these other models will be.
[00:22:10] Preston Pysh: What’s the business model for them to go open source like that? That’s the challenge that I’m continuing to struggle with. What’s the incentive for them to go that route? Like, how are they going to make money doing that?
[00:22:20] Mark Suman: That’s a good question, one that I’m still trying to figure out. I think that if you are China and you don’t have access to chips, or you don’t have access to some of these things, then you know, maybe you’re trying to build this model that you just want to spread around the world.
[00:22:35] Mark Suman: Really, I guess one thing that I come down to is, if ideologically you want your view of the world to be out there and used by everyone, and you know that the American models are not going to have your viewpoint, then you’re going to build that, and the best way to get it out there is to just give it away for free.
[00:22:51] Mark Suman: So that could be one thing. The difference with the open source and running the open models is you can see exactly what’s going on, and so you can kind of build around that and build around that propaganda. So I don’t have a good answer for you on what’s the business model other than it seems like if you can’t compete from a proprietary standpoint, then you go open and you spread your message far and wide and try to catch up that way.
[00:23:16] Preston Pysh: Do we get to a point where these large language models kind of start peaking out, and it just doesn’t make sense to train them with even more? You know, they’re plowing so much energy into these latest models. Is there a point where they get, like, peaked out, call it two years from now, and there’s no longer this race to just build an even larger compression of all the information on the internet? Because that’s basically what we’re doing, right?
[00:23:42] Mark Suman: Yeah, it’s hard to guess five, 10 years from now, even two years from now, because AI’s moving so fast. Yeah. But I think what we start to see is a slowdown of the general models, and then we start to see more specific models. The first thing we’re seeing is with coding, right?
[00:23:55] Mark Suman: People want to program, and very quickly we’ve seen that a model that is fine-tuned at just software engineering is generally performing better. And so we’re seeing a divergence there. And I think we start to see the medical field and legal field and all these other different industries come up with their specific models.
[00:24:14] Mark Suman: I know there are people working on therapy ones, and then we have these general models that act as routers in the front. So you come up to it and you start talking to it, and it pulls in the specialists, and we can really go deep and dive deep, you know, when we have those specific models. So I think that might be the next thing: how do we have specialization, and then how do we have these general models kind of wheel and deal and be the general contractor, if you will.
[00:24:38] Preston Pysh: How do you think through stitching all these different models together? So when a person creates an account, walk us through the process of creating an account at trymaple.ai, which is the website that you’ve built in order for people to run their own AI privately.
[00:24:54] Preston Pysh: Walk us through what that is, and then more importantly, how are you stitching the different models together to provide the experience that the user has? Help us understand that.
[00:25:03] Mark Suman: Sure. Yeah. So first off, with Maple, right now we’re only in the cloud. We want to provide local stuff as well. It’s like a hybrid local-cloud where all your data is encrypted locally first on your device, and we use a private key. Coming from the Bitcoin world, we understand the power of a private key.
[00:25:18] Mark Suman: Same with Nostr, right? And so we apply that here. So you’re chatting with AI locally on your phone; it encrypts it and then sends it to the cloud for processing. In the cloud, a secure enclave uses your private key, decrypts it, gives it to the AI, it comes back with a response, re-encrypts it with your private key, and sends it back to you.
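The round trip described here, encrypt on the device, decrypt only inside the enclave, re-encrypt the response, can be illustrated with a toy cipher. This is a sketch only: the XOR one-time pad below stands in for a real authenticated cipher such as AES-GCM, and the key exchange between the client and the enclave is omitted entirely.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy one-time-pad stand-in for the authenticated cipher a real client would use.
    return bytes(k ^ d for k, d in zip(key, data))

# 1. The client encrypts the prompt locally, before it ever leaves the device.
prompt = b"my private question"
key = secrets.token_bytes(len(prompt))      # client-held key material
ciphertext = xor_cipher(key, prompt)

# 2. Only the enclave (which holds the key after a key exchange, not shown)
#    can decrypt the prompt and run inference on it.
inside_enclave = xor_cipher(key, ciphertext)
response = b"model answer to: " + inside_enclave

# 3. The enclave re-encrypts the response for the trip back.
resp_key = secrets.token_bytes(len(response))
wire = xor_cipher(resp_key, response)

# 4. The client decrypts; the operator in the middle only ever saw ciphertext.
print(xor_cipher(resp_key, wire).decode())  # model answer to: my private question
```

The design point is that plaintext exists in exactly two places: on the user’s device and inside the enclave, never on the operator’s infrastructure in between.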
[00:25:37] Mark Suman: So we are not in the middle. We can’t see anything going across the wire. Only the secure enclave sees the personal data, but you can go look at the source code and see that there’s nothing going on there. So how do we tailor that experience? Right now, we just give you a model picker and you’re having to choose, and we have a lot of users telling us they get, like, low-key anxiety from trying to pick which is the best model to use right now.
[00:25:59] Mark Suman: So part of our big 2.0 push that we have coming up is we want to build something that helps guide the user and say, like, I am in big-brain thinking mode right now, so I’m going to click on this thing and it’s going to drop me into a model that helps me do that. Or I’m just in quick trivia mode, I want to look up something, so it’s going to drop me in there.
[00:26:17] Mark Suman: In the beginning it’ll be just an easy picker for the user, but then we’ll switch over to an auto mode where they just chat and it knows what to do. You just put a simple classifier in front of it, so it looks at your prompt and it can quickly determine itself what should be used, just like you would, you know, say, I’m in this mode.
[00:26:33] Mark Suman: Here’s what I’m thinking, and you would pick that. Well, it’ll do that for you. So we want to get more automatic with that, but always provide these advanced features that people can turn back on and be more selective if they want to.
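A minimal sketch of the "simple classifier in front" idea, assuming a keyword-overlap heuristic and hypothetical mode names (Maple's real router and model catalog are not specified here; a production version would more likely use a small model as the classifier):

```python
# Hypothetical routing modes and trigger words; purely for illustration.
ROUTES = {
    "deep_reasoning": {"prove", "analyze", "strategy", "design", "compare"},
    "quick_trivia":   {"who", "what", "when", "capital", "define"},
}

def route(prompt: str, default: str = "general") -> str:
    """Pick the mode whose keyword set best overlaps the prompt."""
    words = set(prompt.lower().split())
    scores = {mode: len(words & kws) for mode, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route("Who was the first president?"))    # → quick_trivia
print(route("Analyze this contract strategy"))  # → deep_reasoning
print(route("Tell me a story"))                 # → general
```

The "advanced" path Mark mentions is then just bypassing `route` and letting the user pick the mode directly.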
[00:26:44] Preston Pysh: Love that. The next thing that I personally want to see: with OpenAI, you know, I’m having a chat and I’ll say, this is really important, I want you to remember this for other context windows and other discussions that I have with you, right? It then compresses whatever that is and puts it into this “my understanding of Preston” memory bank. Are you guys working on something like this?
[00:27:06] Preston Pysh: Because I think this is something that people really want to have, but they want to have it in a private way. I know I personally want to have this in a private way. I hate it every time I do this. But I also find myself going back to similar conversations where that context is really important, and I hate that I have to type it in every time I’m in a new context window.
[00:27:24] Preston Pysh: So is this something you guys are working on and where do you see the roadmap for something like this, if ever?
[00:27:30] Mark Suman: Yeah, definitely. What you just described, there are two different implementations that kind of help out with the same thing. So one way that ChatGPT does it is they have these custom GPTs where you can set up this thing and it has a lot of the context pre-built into it.
[00:27:45] Mark Suman: It’s basically like you typed in a system prompt with all the stuff that you want. And for people listening who maybe aren’t super into AI, a system prompt is basically the instructions that you give to the model. You give it to the AI to say, hey, when I talk to you, I want you to kind of be in this personality or this frame of mind, or as this character as I’m talking to you. So you can get silly and you can say, like, I want you to talk like a pirate to me.
[00:28:08] Mark Suman: So every time you chat with it, it’s going to talk like a pirate. That’s like the extreme example, but more nuanced. You can be like, Hey, I am going to have a legal discussion with you right now, so I want you to be a lawyer. I want you to be a contract lawyer, and I want you to have these qualities about you.
[00:28:22] Mark Suman: So to me that’s one part of what you just described, and we definitely do want to do that. We have a system prompt you can edit. We show you what the system prompt is. We want to have multiple in the future, where you can customize those, and maybe a dropdown or something to say, hey, I’m in legal mode right now.
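Mark’s description maps directly onto the message format most chat APIs use: a system entry carries the persona or instructions, then user and assistant turns follow. A minimal sketch (the role names follow the common OpenAI-style convention; the prompt text is made up):

```python
# Common chat-message shape: a "system" entry sets the persona/instructions,
# then "user" and "assistant" entries alternate for the conversation itself.
def build_conversation(system_prompt: str, user_message: str) -> list[dict]:
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

legal_mode = (
    "You are a contract lawyer. Answer precisely, cite the relevant "
    "clause when possible, and flag anything you are unsure about."
)
messages = build_conversation(legal_mode, "Review this NDA for red flags.")
assert messages[0]["role"] == "system"   # the persona travels with every request
```

An editable system prompt, as Mark describes, is just letting the user rewrite that first entry; a "legal mode" dropdown would swap in a different `system_prompt` string.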
[00:28:36] Mark Suman: Switch into that mode. The other aspect you just described is the memory side of it, and we are definitely working on that. We’re going to have, you know, an open source memory component to it. We don’t know exactly which direction we’re going to go yet with it, but it’s going to be something where you will see, okay, here’s everything that the AI has learned about you. And AI memory is really fascinating because I like to view it as you’re sitting down with a biographer, right?
[00:29:02] Mark Suman: Say you’re Steve Jobs and you want everybody to know about your life, so you get the best person out there to write biographies. That’s what’s happening with you every day as you’re using ChatGPT or any AI product. It is sitting down and trying to learn everything it can about you. Here’s how you think.
[00:29:15] Mark Suman: Here are your childhood memories, you know, yada, yada. The difference is, in a proprietary system, you don’t get to read that biography. You don’t really get to see what’s in there. They will show you an interface that says, oh, here’s the things we know about you. Yeah, and we’ll even let you delete it, but there’s no guarantee that’s actually happening, right?
[00:29:31] Mark Suman: If you delete that thing out of there, it probably still remembers it, but it’s just like, oh, we’ll tell them that we’re not going to use it, but we could use it if we want to. What we want to build is a truly sovereign AI memory where you can go in and see what we remember about you, not we, what the system remembers about you, and then you can edit it, you can add to it.
[00:29:49] Mark Suman: And then that will get pulled into future chats. And so with those two combined, the system prompt personality thing, that’s more proactive. You can say, I want you in this mode, whereas the memory side is more passive. It’s like, Hey, this is my context about me. So use it selectively as you see fit.
[00:30:07] Preston Pysh: For me, the latter there, where it’s really kind of understanding your past and just understanding the essence of who you are and what it is that you’re wanting, is super powerful and useful as I continue to interact with this.
[00:30:21] Preston Pysh: I’m curious what the challenges are from an engineering standpoint to put something like that together, because it seems like, as you go into a new context window talking about a brand new topic, when it has this memory that it’s pulling from to give customized responses, it could potentially dominate the next conversation that has nothing to do with the memories that you’re asking it to hold.
[00:30:46] Preston Pysh: How do you think through the problem, and I’m just kind of curious. You know, is that going to be a really difficult feature to kind of roll out as you continue to build this out?
[00:30:54] Mark Suman: Yeah, absolutely. That is one of the biggest challenges with AI memory right now, is that it overweights and overemphasizes something that you give it, because you think about your brain, you’ve been storing up memories for decades in your brain.
[00:31:07] Mark Suman: And you know, without realizing it, you know when to selectively pull on something and when to not pull on something and use it in the decision that you’re trying to make. Whereas AI, right now, maybe it has like two pages of information about you. And so that one memory you tell it is suddenly one of the most important things it knows about you.
[00:31:26] Mark Suman: So it’s going to overly push it into the things that you’re using. So we’re trying to figure out how do we down-weight that and how do we not have it influence everything. I think a lot of that comes down to: you have to get really good at annotating information when it goes in and say, okay, this little memory you have, this memory is really focused around finances, or this is focused around health, or, you know, being outside mountain biking or something, right?
[00:31:50] Mark Suman: And you want to classify that so when you’re having a chat, it knows, hey, I can totally ignore that right now because that has nothing to do with this topic. Rather than, oh, you’re mountain biking today, hey, you should bring along your financial advisor and have a good chat about your, you know, your IRA or something. That would be really dumb.
[00:32:04] Mark Suman: So we do have to figure that out. And I think it goes back to that verifiability thing: I want to know what the AI is doing and which memories it’s suppressing and why, rather than a closed, opaque system that is going to make that decision for me. Because maybe that memory is really important and relevant right now to something we’re talking about, but it is choosing to not make it relevant for whatever reason. Could be accidental, could be profit driven, could be nefarious.
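The annotate-then-filter idea Mark describes can be sketched as a tagged memory store: every memory carries user-visible topic labels at write time, and a chat only pulls in memories whose labels overlap the current topic. This is a toy sketch under those assumptions, not Maple’s actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    topics: set[str]          # annotated when stored; user-visible and editable

@dataclass
class MemoryStore:
    items: list[Memory] = field(default_factory=list)

    def remember(self, text: str, topics: set[str]) -> None:
        self.items.append(Memory(text, topics))

    def relevant(self, chat_topics: set[str]) -> list[str]:
        """Pull only memories whose annotations overlap the current chat."""
        return [m.text for m in self.items if m.topics & chat_topics]

store = MemoryStore()
store.remember("Prefers low-fee index funds", {"finance"})
store.remember("Rides trails every Saturday", {"outdoors", "biking"})

# A mountain-biking chat should suppress the finance memory entirely.
assert store.relevant({"biking"}) == ["Rides trails every Saturday"]
```

Because the store and its labels are inspectable, the user can see exactly which memories were pulled in or suppressed for a given chat, which is the verifiability Mark is after.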
[00:32:31] Preston Pysh: One of the things that I’ve heard more recently is that inference is where the competitive moat is going to be in the coming five to 10 years. So for people listening, and correct me if I’m wrong here, Mark, if I’m not describing this correctly, but when you think about AI, you’ve got the training of the model itself, which we were talking about earlier with some large language models.
[00:32:52] Preston Pysh: And then you have the inference, which is using the trained model that you spent all this energy and all these GPUs on to compress everything into a single, you know, large language model. The inference is using that trained model to generate the answers that you’re getting back. Every time you prompt, you know, you ask a question and it goes into this model.
[00:33:13] Preston Pysh: This happens every time you prompt it. It requires GPU memory, and you have to run it through that entire model, and then it pops out the answer. And this is the inference process. And what I’ve heard is that in the future, the speed of that inference, and the cost to do it as efficiently as possible while still giving you a high quality answer, is where
[00:33:33] Preston Pysh: the competitive edge is going to separate the winners and the losers. I’m curious if you agree with this, and then if you do, how do you think about that inference piece with your company, and being able to provide an efficient, quality, fast response to people as they’re doing this in an open-weight, decentralized kind of way?
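Preston’s description of inference — the prompt goes through the whole trained model, one generated token at a time — can be made concrete with a toy autoregressive loop. The lookup table stands in for a real network’s weights; the structure of the loop is the point:

```python
# Toy "model": maps a context tuple to the next token. A real LLM runs its
# full network of weights for every generated token, which is why inference
# cost scales with both model size and output length.
TOY_MODEL = {
    ("the",): "answer",
    ("the", "answer"): "is",
    ("the", "answer", "is"): "42",
}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):          # one full "forward pass" per token
        nxt = TOY_MODEL.get(tuple(tokens))
        if nxt is None:                  # no continuation: stop generating
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"]))  # → ['the', 'answer', 'is', '42']
```

Since the whole model is consulted once per output token, faster and cheaper per-token passes (better chips, smaller models, smarter routing) translate directly into the speed and cost edge the question is about.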
[00:33:54] Mark Suman: Yeah, so inference versus training, the costs involved and the power involved are very different. Right? So the training part costs, I don’t know what the exact numbers are. Let’s just say 10x. It might be bigger, it might be smaller, but yeah, it’s like it takes 10x the amount of resources to train something.
[00:34:09] Mark Suman: Just like as a human, it takes all this time to train you over decades for you to live life and learn all these things. And then eventually you can sit down and, have a fruitful conversation like we are right now, right. And so it’s easier for us to have this conversation than it was for us to learn everything we learned up to this point so that we are capable of having this conversation.
[00:34:27] Mark Suman: So inference should be viewed that way. Now you’re ready to have a chat. And as I see it, what’s the moat? What’s the unique thing that’s going to be competitive? And that is just the user experience. And so these apps that we’re building on top of the inference are going to be the competitive moat, and what different qualities they have.
[00:34:45] Mark Suman: And we’re already seeing that with ChatGPT and some of these others. They’re trying to build apps on top of their inference layer now that really pull people in. The latest is the Sora video app that’s pulling people in and trying to make it more engaging. Right. As far as inference goes, I think that just comes down in cost over time, and even though we’re going to get bigger models, we’re going to build chips that are more efficient for processing those models.
[00:35:07] Mark Suman: Apple, even though they don’t have the right AI solution yet according to the market, have built these chips into every single device that are just highly specialized at processing these models. So one thing that we’re looking at with Maple is doing a hybrid approach where you actually have smaller local models that run incredibly fast and are extremely cheap to run, because they’re just running on spare cycles on your device.
[00:35:30] Mark Suman: And they will do a bunch of the initial processing on some of the most sensitive information, and they will come up with the most efficient prompt to give to the cloud model. So you might go in and bang out this massive prompt, paste in a whole PDF of information, and then the local model will crunch all that and say, okay, this is all good and dandy, but really what I need to pass on is a smaller chunk.
[00:35:53] Preston Pysh: Yeah.
[00:35:53] Mark Suman: It’ll pass it on to the cloud, that’ll get processed on the more expensive servers and then come back to you. And I think in a model like that, inference price just continues to drive into the ground, and it gets faster as well. And so we end up with a better user experience, and the people that can solve that kind of user experience are going to have a better moat, a better competitive advantage.
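The hybrid pipeline Mark sketches — local model condenses first, cloud model does the heavy lifting — can be outlined like this. Both functions are hypothetical stand-ins (the local step here just normalizes and truncates; a real on-device model would summarize and redact sensitive detail before anything leaves the device):

```python
def local_condense(raw_prompt: str, budget: int = 200) -> str:
    """Stand-in for a small on-device model: reduce the prompt to the part
    that actually needs the expensive cloud model. Here it is naive
    whitespace cleanup plus truncation, purely to show the pipeline shape."""
    cleaned = " ".join(raw_prompt.split())
    return cleaned[:budget]

def cloud_infer(prompt: str) -> str:
    """Stand-in for the encrypted call to the large cloud model."""
    return f"[cloud model response to {len(prompt)} chars]"

huge = "Please review this document. " + "filler text " * 500
condensed = local_condense(huge)
print(len(huge), "->", len(condensed))   # far less data sent to the cloud
print(cloud_infer(condensed))
```

The win is twofold: the expensive servers process fewer tokens (cheaper, faster), and the most sensitive raw material can stay on the device.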
[00:36:13] Preston Pysh: Yeah. I recently read that xAI has custom ASICs that they’re building just to improve the speed of the inference, and I guess it’s 10 to 20 times faster than some of the best GPUs on the market right now, just because they custom-made it for that specific task, which is fascinating. And it also, I guess, leans into this idea that it is challenging to compete with some of these larger players. You know, like xAI, it’s going to be a bloodbath of competition to compete against, because they’re going to be able to provide such quick, efficient, quality responses, because they’re going out and doing things like this that are very capital intensive.
[00:36:55] Preston Pysh: Like, I’m just looking at this whole space and I’m curious your thoughts on just how expensive some of this stuff is. And, you know, we saw the thing with OpenAI and Nvidia, and help me out with the other one. Oracle. Yep. Like this piece, and Broadcom is in the mix too. Yeah. The numbers are crazy, Mark.
[00:37:15] Preston Pysh: Absolutely nuts. How do you see that kind of resolving itself and like, where is that going? Is it just going to keep going more?
[00:37:24] Mark Suman: Yeah. It’s crazy. I mean, I saw somebody comment over the last 24 hours about the whole Broadcom and OpenAI thing. Yeah. Where it’s like OpenAI saying, hey, Broadcom, we want to buy all these chips from you, but we don’t have the money to pay for them.
[00:37:37] Mark Suman: And so they basically say, let’s do a press release together. Broadcom stock goes up, now their market cap has gone up $150 billion, and it’s like, boom, there’s your money that you needed. We’ll loan you our market cap, basically, to help you out. So it’s kind of this crazy thing. A lot of money being tossed around.
[00:37:52] Mark Suman: Where does this resolve? Oh man. I wish I had a crystal ball to understand, but I think that we are going to have big players out there. We’re going to have these people who are building these big, massive, mainstream solutions. And I also think that you look at the government contracts that have come to these major models, right?
[00:38:10] Mark Suman: They gave $200 million to xAI, $200 million to OpenAI, to Anthropic. I think Meta got that too. I can’t remember. So there are bigger things at play here with the Department of Defense and other governments around the world. So I think that there is going to be a need for large scale systems like that.
[00:38:26] Mark Suman: There’s also a need for other people to build out systems, and I think there’s a world where they all exist together. I don’t think this is going to be a race where there’s going to be one winner take all, because really there are so many different ways to approach intelligence in this life. There are so many different avenues and so many different needs that Grok’s not going to be able to solve them all.
[00:38:47] Mark Suman: ChatGPT is not going to solve them all, and so I don’t know where all the money resolves. I think we’re definitely going to have a bubble at some point that’s going to pop. It’s very similar to the internet. And so we’re going to have all these companies that overinvest, and then there’s going to be a retraction and a retracement back, and the winners are going to remain.
[00:39:06] Mark Suman: So I don’t have a perfect answer for you on that, but I just think that there will be some overinvestment, but I don’t think it’s going to pop and go away. There’s too many benefits. People are seeing too much productivity from AI, too much value coming out of it, that it’s going to survive. It’s just, which people and which companies will remain standing.
[00:39:23] Preston Pysh: It’s hilarious, because it’s almost similar to a meme coin pump, them raising capital and then transmuting the common stock pump into capital that then invests into the hardware.
[00:39:35] Mark Suman: And it’s like I’m sure you’ve seen the charts of like OpenAI goes over to Nvidia and then they give money to Oracle and it’s all circular. It’s this weird thing going on.
[00:39:44] Preston Pysh: It’s crazy. Yeah. What’s one of the most challenging things for you right now, building out this business?
[00:39:51] Mark Suman: We’re trying to keep up and we’re trying to get feature parity with arguably one of the biggest companies in the world right now. And so OpenAI and others, they have billions of dollars to throw at designers, at engineers, in trying to build the best user experience possible.
[00:40:07] Mark Suman: So in order for us to get people to care about privacy, we have to give them a tool that’s as convenient, as usable as ChatGPT as a baseline. So that’s really the biggest challenge right now. That’s where we’re racing: we’re trying to figure out how do we pick off the most important features, because we can’t build them all.
[00:40:22] Mark Suman: Right now, it’s just me and another person. We can’t build them all right now. And so let’s selectively pick the most important ones, get those to feature parity, and then keep building from there and keep growing. So, you know, we’re going to be raising some money soon, hopefully, and that’ll help us hire a few more people.
[00:40:36] Mark Suman: But even then, we’re never going to match these bigger companies as far as team size, but we’re using AI against them. We’re using AI to help us build faster than we could. And we only launched back in January and we’re getting ready to do our 2.0 launch probably next month. And it’s come a long way just in those last nine months. And so the next nine months next year are going to see drastic changes to what we’re building in the positive direction.
[00:41:01] Preston Pysh: That’s the crazy part. The reflexivity of the AI itself. As it’s getting better, you just get more and more powerful and in a way having maybe a smaller team, you’re able to just kind of focus in on those features that are most important.
[00:41:16] Preston Pysh: The things that I’m seeing on the programming front, especially with the Google model, seem to be almost unfathomable, what it’s one-shotting with a prompt. Can you just kind of help us understand what’s transpired just in the past year with respect to your ability to program and write code?
[00:41:40] Mark Suman: So there is a lot of salesmanship going on when it comes to the vibe coding space. A lot of progress has been made, and there are definitely stories where people go on and say, oh, I wrote a few sentences and it gave me an entire app that I can use. And those are great, I think, for proof of concept. We’ve definitely seen a lot of great things in that space. But getting an app that you wrote from one paragraph of text into production, that millions of users can use, that has covered all the edge cases and stuff, that’s a totally different story.
[00:42:09] Mark Suman: So I definitely see a lot of memes where it’s like, oh, software engineers are so cooked. But really, I think the great power is software engineers using AI and accelerating their abilities. So that’s what we’re seeing. I don’t want to throw shade at the companies that are doing vibe coding for people who are not software engineers, because I see a huge value add right there. Being a software engineer myself for decades, I get approached all the time. Someone’s like, I’ve got a great idea for an app.
[00:42:35] Mark Suman: I want to build this, you know, can you please go build this? And they’ll draw on a little piece of paper, they’ll maybe build a PowerPoint presentation, but they try to give me requirements. And I can see very quickly that the requirements haven’t thought of everything, or the idea is just kind of off base.
[00:42:49] Mark Suman: So where vibe coding comes in now is they can take that and instead of coming to someone like me, they can give it to an ai. It can build in the proof of concept. They can play with it and they can say, oh, this is a piece of garbage, or, this is a great idea. Let me iterate a bit on this. And so now when you go to approach someone to build it for you, you’ve got this really refined proof of concept that conveys your idea and has thought through a lot of the initial things.
[00:43:12] Mark Suman: And then for us, we’ve taken AI and built it into all sorts of parts of our process. So we’re using tools locally, where we are running coding environments, you know, IDEs locally, with something like Claude Code. We’ve tried out Codex from ChatGPT. We’re using Factory right now also, as kind of the new hotness that’s come on.
[00:43:30] Mark Suman: So we’re using all of these. We also use Maple. We’ve got Qwen 3 Coder, where we’ve got that plugged into the IDE. So we’re using all these tools, and then when we check in code to GitHub and do a pull request, we have two other AIs that hop in there as code review agents. And so they’re both reviewing the code, and they come from two different models, two different companies.
[00:43:48] Mark Suman: And so they give a different perspective, and so they drop in their comments and say, hey, this line of code, maybe, you know, you should think through this more, there’s a potential bug here, that kind of stuff. And then we go in and we say, hey, Claude, respond to these comments from the pull request. And so you have these agents that are really helping out, but ultimately, in the end, we are the ones reviewing and giving the final say on the code.
[00:44:08] Mark Suman: And we might say, hey, we don’t like the approach they all took, so let’s get in there and bang things out a little bit differently. But truth be told, I think probably 90% of our code, maybe 95% of our code, is written by AI, with the human in there directing it, guiding it, inspecting it, and making sure that it comes out correctly.
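The two-reviewer setup Mark describes is a simple fan-out pattern: the same diff goes to independent agents and their comments are merged for a human to adjudicate. A toy sketch with rule-based stand-ins for the two hosted models (the real setup would call each provider’s API on the pull-request diff):

```python
# Hypothetical review agents standing in for two different model families.
def reviewer_a(diff: str) -> list[str]:
    notes = []
    if "TODO" in diff:
        notes.append("reviewer-a: unresolved TODO left in the change")
    return notes

def reviewer_b(diff: str) -> list[str]:
    notes = []
    if "except:" in diff:
        notes.append("reviewer-b: bare except swallows errors; catch explicitly")
    return notes

def review_pull_request(diff: str) -> list[str]:
    """Fan the same diff out to independent agents and merge their comments,
    so two perspectives land on the PR before a human makes the final call."""
    comments = []
    for agent in (reviewer_a, reviewer_b):
        comments.extend(agent(diff))
    return comments

diff = "+ try:\n+     save()\n+ except:\n+     pass  # TODO handle"
for c in review_pull_request(diff):
    print(c)
```

Using models from different companies, as Mark does, reduces the chance that both reviewers share the same blind spot.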
[00:44:25] Preston Pysh: Would you say that your time has been multiplied 10x? Like what used to take you 10 hours? You can do in one hour now?
[00:44:32] Mark Suman: Yeah, I haven’t measured it specifically, but that’s definitely the vibe that I feel. Wow. I look at what we’ve built in the last nine months and launched, and we have a sizable number of users now, a sizable amount of revenue coming in, and I think about trying to do that before. I’ve been part of multiple startups.
[00:44:48] Mark Suman: I look at another startup that I was with, and it took a lot longer just to get the product to market. It took almost an entire year of just understanding and writing initial versions and then throwing those away and doing different versions. Whereas now we’re just, yeah, we’re so accelerated. Like I said, it’s just two of us, and so if we were doing this prior to AI, we probably would’ve had to have two more people, three more people to get to this point.
[00:45:12] Mark Suman: So we’re definitely seeing an acceleration.
[00:45:14] Preston Pysh: Yeah. On the home hardware, running this locally. Like, right now you’re saying you’re doing it in the cloud, you send the encrypted data over to the cloud, it then processes it, it sends it back encrypted. But for people, let’s say 10 years from now, I’m curious how you see the home market for running your own server.
[00:45:32] Preston Pysh: I know as a coiner I run my own node. There seems to be some synergy there for people that are do-it-yourselfers, that like to do these types of things. But is this going to become something that’s almost like a heater in your home, or any type of appliance that’s in your house? Are people going to have their own data
[00:45:53] Preston Pysh: that’s stored locally, that’s run with, you know, some type of ASIC or GPU, a specialized piece of hardware, so that we don’t give up our data? Is that where this is kind of moving? Do you see that trend going there, or do you think that’s still a very hard ask to ever expect out of people that might not have the technical competence or desire for privacy? Like, what are your thoughts?
[00:46:17] Mark Suman: Yeah, I would love a world where everybody had their own home server plugged in and it’s doing all this for them. So, you know, 10 years from now I could see the technology catching up to where we can make really easy plug and play. You just, you get this little box, you plug it in, you connect it to your wifi or plug it in via ethernet.
[00:46:35] Mark Suman: And now everything you do on your phone, everything you do on your computer, everything you do on your watch, or if you’re wearing glasses, whatever device is the input device and the output, it’s talking to your home server and doing everything locally. I think we will definitely be there from a hardware perspective, and from a user experience perspective, that’ll be possible.
[00:46:53] Mark Suman: The difference is will people do it? I don’t know. I mean, there’s definitely incentive to keep the cloud model running. And if you want to look at a parallel to this, you can look at email where we all have the capability to run an email server at home. We can host our own server, we can be totally sovereign and have total control over it, but we don’t.
[00:47:12] Mark Suman: We still just go onto Google Workspaces, give it our domain name, and now our email is run by Google, because it’s just so inconvenient, and they handle all of the DevOps and IT headaches that would come along with running our own email server at home. So I would love to think that the future is home AI stuff.
[00:47:30] Mark Suman: And maybe it could be, maybe this is finally the line in the sand where it’s like you can have our emails, but you can’t have our brains. Our brains need to live at home. And I hope that’s the delineation that people are not willing to give up that aspect of them. So I just don’t know. But we will definitely technically be there.
[00:47:47] Mark Suman: It’ll be possible.
[00:47:48] Preston Pysh: Last question I have for you, you mentioned Nostr earlier. There’s a lot of opinions as to what in the world Nostr even is, but one of the talking points that’s constantly shared is just, it’s an identity layer. And so you’re talking about having a private key, public key pair to ensure that the encryption is actually being conducted between the cloud and your request coming from your phone or your computer.
[00:48:14] Preston Pysh: Do you see Nostr as playing an important role? Because that’s an inherent feature of the Nostr protocol, which is this public key, private key relationship that you can sign anything with.
[00:48:27] Mark Suman: So I see it all coming back to that word verifiable. And that’s really the power of Nostr: verifying that this communication truly came from me, right? And so that’s the private key, public key promise that we have. And whether or not it’s Nostr as an open source brand that ends up being the thing, I think that concept is what’s important. And so being able to say, hey, this little piece of memory that went into my AI, that’s signed with my private key.
[00:48:55] Mark Suman: And so I know that I’m the one who put it there, and that it came from me. Right. And so I think that’s where verifiability takes us. And so that could be online communications. I post something, you know, I want to sign our Maple builds and have those signed with a private key. So I want to kind of integrate this throughout the entire process.
[00:49:11] Mark Suman: And I think that’s really where it shines. Whether or not it becomes a replacement for Twitter, that remains to be seen. Right now, it’s a niche protocol, but I think the power is beyond that. People who are looking at it as a Twitter clone are not seeing far enough into the future, where it’s really all about verifying that this communication came from the person they say they are.
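The sign-your-memories idea can be demonstrated end to end with a real public-key signature scheme that needs only the standard library: a Lamport one-time signature. To be clear, this is an illustration of the concept, not what Nostr uses (Nostr signs events with Schnorr signatures over secp256k1), and Lamport keys are one-time-use only:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Private key: 256 pairs of random secrets; public key: their hashes.
    priv = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pub = [(H(a), H(b)) for a, b in priv]
    return priv, pub

def sign(priv, msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    # Reveal one secret from each pair, chosen by the message-hash bits.
    return [priv[i][(digest >> i) & 1] for i in range(256)]

def verify(pub, msg: bytes, sig) -> bool:
    digest = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pub[i][(digest >> i) & 1] for i in range(256))

priv, pub = keygen()
memory = b"User prefers metric units"
sig = sign(priv, memory)
assert verify(pub, memory, sig)           # memory provably came from the key holder
assert not verify(pub, b"tampered", sig)  # any edit breaks the signature
```

Anyone holding only the public key can check a memory entry, a post, or a software build, which is exactly the "verifiable" property Mark keeps returning to.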
[00:49:32] Preston Pysh: Wow. Any other parting comments or things that you think are important for the audience to know about what you’re doing?
[00:49:39] Mark Suman: Sure. I think you should view AI as a toolbox, right? So you have a toolbox, you have different tools you use for different things. So I would say I’m not asking people to throw away ChatGPT.
[00:49:49] Mark Suman: I’m not asking them to throw away these other services. Instead I’m asking them to add Maple into their toolbox. And that way when you are talking with ChatGPT and you’re like, I don’t really like the fact that I’m sharing my children’s name and personal information about them.
[00:50:04] Mark Suman: You can switch over to Maple and you can have that exact same conversation with models that are as powerful, or like 95% of the way there, powerful enough for you. And it’s very refreshing. You get this refreshing feeling knowing that this is just a private room with you and an AI, and nobody else is listening. Nobody else is recording that information, it’s not being sold to anybody, and you’re not being, you know, influenced in any way. So I would just say, go to trymaple.ai, get the free account.
[00:50:30] Mark Suman: You can upgrade if you want to support us, whatever. But grab it and just have that extra tool in your toolbox and play around with it and start to see, you know, where it takes you and what you gain from that.
[00:50:40] Preston Pysh: Mark, I think you’re working on one of the most important things in the world right now. Truly, I wish you all the best and I can’t wait to try it out myself. We’ll have a link in the show notes for people, but it’s trymaple.ai if you want to go to the website and try it out. And this stuff is so important. I think it’s only going to get more important and, hats off to you for what you’re building and I really appreciate you taking the time to come on the show for the conversation.
[00:51:08] Mark Suman: Definitely. Thank you Preston. Appreciate it.
[00:51:10] Outro: Thank you for listening to TIP. Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episodes. To access our show notes and courses, go to theinvestorspodcast.com. This show is for entertainment purposes only. Before making any decisions, consult a professional.
[00:51:29] Outro: This show is copyrighted by The Investor’s Podcast Network. Written permission must be granted before syndication or rebroadcasting.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Spotify! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
BOOKS AND RESOURCES
- X Account: Mark Suman.
- Website: Maple AI.
- Related books mentioned in the podcast.
- Ad-free episodes on our Premium Feed.
NEW TO THE SHOW?
- Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
- Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
- Check out our Bitcoin Fundamentals Starter Packs.
- Browse through all our episodes (complete with transcripts) here.
- Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
- Enjoy exclusive perks from our favorite Apps and Services.
- Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value.
- Learn how to better start, manage, and grow your business with the best business podcasts.