We Live to Build
    33:04 · 2025-10-21

    Never Show Customers Your Underpants (A Guide to AI Pricing)

    Are you making the #1 mistake in AI pricing? Many founders are using their old SaaS pricing model, but for AI, that's a huge error. As pricing expert Dan Balcauski explains, it's like showing customers your "underpants", and it's killing your sales. In this guide to AI pricing, Dan breaks down why traditional SaaS pricing models fail for AI products, the psychology behind value-based pricing, and the two critical axes of a smart AI pricing strategy. He reveals why AI companies are hardware constrained, the problems with capped "unlimited" plans, and how to leverage long-tail distributions in your pricing model. Whether you're building AI applications or foundation models, this conversation offers essential insights into pricing strategies that actually work in the AI era.

    AI Pricing · SaaS Pricing · Business Strategy

    Guest

    Dan Balcauski

    Pricing Expert & Consultant, Product Tranquility

    Chapters

    00:00 - Why Your SaaS Pricing Model is Obsolete
    02:05 - The Value vs. Cost Dilemma of AI
    03:20 - Foundation Models vs. AI Applications
    07:15 - Why AI Product Prices Are So Inconsistent
    11:15 - The Power of Long-Tail Distributions in Pricing
    17:05 - The Problem with Capped "Unlimited" Plans
    19:10 - Why AI Companies Are Hardware Constrained
    24:30 - The 2 Axes of a Smart AI Pricing Strategy
    28:00 - The #1 AI Pricing Mistake: Showing Your "Underpants"
    31:30 - The Psychology of Value-Based Pricing

    Full Transcript

    Sean Weisbrot: Why is pricing AI so much harder than SaaS? Because the rules have changed, and a lot of founders have not caught up yet. In this interview, I'm joined by Dan Balcauski, founder and Chief Product Officer of Product Tranquility, who has helped dozens of B2B SaaS startups rethink how they price their products. We dig into why traditional SaaS models fail with AI, the psychology behind value, and how to avoid pricing yourself into a corner. Let's get to it. Why is AI product pricing different from SaaS product pricing, or any other pricing model we've really seen?

    Dan: It's interesting, because I think a lot of software companies are now having to deal with business realities that every other industry has always had to deal with. For the longest time, the incremental cost of serving an additional customer has been net negligible, effectively zero, for AWS storage, networking, compute. Now, as companies build AI-enabled capabilities into their platforms, they're seeing significant variable cost in serving those capabilities, and that money's going out the door to foundation model providers, et cetera. I don't think it's a net new pricing problem. If I run an e-commerce store and the hammers I'm selling on my website cost me $2 landed, I can't just sell them to you for a dollar fifty and expect to have a business for very long. So this is a pricing challenge that pretty much every other industry has had to keep an eye on, but most software companies have been able to skate along for a very long time without having to worry about this incremental cost. And so it's causing significant consternation for folks on the cost and margin side. The other thing, and maybe this is slightly different from what other industries have to deal with, is that this is almost certainly a general purpose technology, much like electricity or telephone and communication networks. It would have been very difficult upfront to understand everything in your house that would run on electricity, all the applications, and where people would get value out of having voltage running through a particular appliance, if you were just inventing electricity and running the infrastructure. We're in a little bit of that moment of uncertainty. So you have uncertainty and exploration on the value side while you have very concrete variable costs on the margin side.
    And so it's put software companies in an interesting position where they're trying to figure out both sides at the same time.
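    Dan's point about unit economics can be made concrete with a short sketch. Every number here is an illustrative assumption, not a benchmark: the idea is just that a near-zero serving cost keeps margin flat as usage grows, while a per-request AI cost lets a heavy user push margin negative.

```python
# Sketch: how AI features change SaaS unit economics.
# All numbers below are illustrative assumptions, not real benchmarks.

subscription = 20.00        # monthly price per user (assumed)
infra_cost = 0.30           # classic SaaS serving cost per user, near zero
ai_cost_per_request = 0.02  # assumed model cost per AI request

def gross_margin(ai_requests_per_month: int) -> float:
    """Gross margin fraction for one user at a given AI usage level."""
    cogs = infra_cost + ai_requests_per_month * ai_cost_per_request
    return (subscription - cogs) / subscription

print(f"no AI usage: {gross_margin(0):.0%}")
print(f"light user:  {gross_margin(100):.0%}")
print(f"heavy user:  {gross_margin(1200):.0%}")  # heavy usage turns margin negative
```

    With a flat subscription and variable model costs, the margin depends on each customer's usage, which is exactly the consternation Dan describes.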

    Sean Weisbrot: What are some of those variable costs you were talking about?

    Dan: For the purposes of the audience, let's separate two different classes of companies, just to make sure we're not leaving people behind. First there are what I call the AI foundation companies: OpenAI, Anthropic, Google with Gemini. These companies are very much like the electricity providers. They're selling intelligence as a service, and their pricing models look very much like other infrastructure-as-a-service companies, like AWS Compute or Azure, et cetera. The rest of the world consumes those foundation models in order to add AI capabilities. So say, for example, you're selling a CRM or customer service platform, and you want to use one of these foundation models to summarize all of the history your company has had with a particular account or on a particular ticket, so that your salesperson or customer service rep can get up to speed on what the customer's dealt with, maybe over multiple reps, multiple years, multiple product cycles. Your customer is gonna click the button to say, hey, I wanna summarize this history, and that's gonna have a cost that goes to, say, OpenAI for some amount of processing. And the way the AI foundation model companies charge, at least traditionally through their APIs, has been on a per-token basis. It's not necessarily clear what summarizing an article means in terms of tokens, so it's a little bit unknown until you run enough of those, and those costs scale with the capabilities of the models. The most powerful, intelligent models as we're recording this, that'd be Gemini 2.5 Pro, OpenAI's o3 models, or Claude 4, are incredibly intelligent, but also the most expensive on a per-token basis.
    Or you might use a cheaper, less capable model that, depending on the task you're asking it to do, may give equivalent, or to the human eye basically equivalent, responses. So as companies roll out these capabilities, there's uncertainty: okay, we're going to experiment, we're gonna put this new capability in our platform, and we're not really sure if it solves a problem or if it works. These systems have this other property where they're probabilistic, not deterministic. You may have had this experience where one day you use ChatGPT and you're like, oh my God, this is the smartest thing ever, and then you ask it, the old one was, how many Rs are there in strawberry, and it couldn't answer. You're like, look, any five-year-old can tell you how many Rs are in strawberry. So you may get different performance day to day, or case to case, and you can't just say, deterministically, this is how much it's going to cost us. The companies are in this area of exploration right now, and that's causing them not to have the certainty they might otherwise have in some of their costing models.
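    The per-token cost dynamics Dan describes can be sketched in a few lines. The model names and per-million-token rates below are hypothetical placeholders, not actual provider pricing; the point is only that the same request can differ in cost by orders of magnitude depending on model choice, and that the cost of any one call is unknown until you see its token counts.

```python
# Sketch: estimating the variable cost of an AI feature priced per token.
# Model names and per-million-token rates are hypothetical placeholders,
# not actual provider pricing.

PRICE_PER_1M_TOKENS = {  # (input_usd, output_usd) per million tokens
    "frontier-model": (10.00, 40.00),
    "mid-tier-model": (1.00, 4.00),
    "small-model": (0.10, 0.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one API call at the assumed rates."""
    in_rate, out_rate = PRICE_PER_1M_TOKENS[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A "summarize this account history" call: token counts vary per ticket,
# so the cost is only known statistically, not deterministically.
print(f"frontier: ${request_cost('frontier-model', 50_000, 2_000):.4f}")
print(f"small:    ${request_cost('small-model', 50_000, 2_000):.4f}")
```

    The same summary might be fine on the small model for one ticket and need the frontier model for another, which is the cost-uncertainty Dan is pointing at.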

    Sean Weisbrot: Yeah. I experienced this when I was using Lovable a long time ago. It would get really close to what I wanted, but then I would need to spend like 20 prompts to fix something that took one prompt to make, because it hallucinated something. I realized later on that it was because Lovable doesn't have chat threads like Cursor and these other applications do, so if the context gets destroyed by overuse, you have to start again with another thread. Okay, so we've talked about variable costs, and it's important to understand pricing, and I've experienced different pricing from different companies. Lovable had like a $20 a month plan, and then they were charging up to a hundred dollars a month for like 500 prompts. But then you go to Cursor, and Cursor for $20 would give you 400 or 500 prompts. Each of these applications is doing similar things, but their prices are wildly different from each other. Normally in an industry the prices may be similar to one another, and yet between Bolt, Lovable, Cursor, and these are just examples of a class of AI products, why are the prices so wildly different?

    Dan: One is a characteristic of a new market. I think what you're seeing a lot right now is heavy experimentation and iteration. You're also seeing a lot of copying. So, as you mentioned: I've got a $20 per month per user Lovable subscription, I can go to ChatGPT and it's also $20 per month, I can go to Anthropic and use Claude at $20 per month, or I could go to Bolt or any number of these. We're seeing folks who, as a first pass, say, well, okay, Cursor has this, so Windsurf comes along behind and says, we're just gonna use what they have, since it seems to be working well enough for them. But you also see folks who are saying, hey, there's an opportunity here to maybe rethink how we're providing value given our use case, and to take some risks. One of the most prominent examples I've seen is with OpenAI. They were first out the door with ChatGPT; most people now, when you talk about AI, think you're talking about ChatGPT. They came out of the gate with a $20 per month seat license. About six, nine months ago now, they introduced their Pro tier at $200 a month. This set off a lot of waves, at least in my world, because going from 20 to 200 is a 10x increase, and folks kind of looked at OpenAI like, what are these folks doing? Who in their right mind is gonna pay $200 per seat? I don't know that they did a huge amount of research; there was some gossip that Sam Altman just sort of pulled it out of thin air: hey, we want to do 200 bucks a month. I don't know if that's true, and I don't have any inside information. But what did we see?
    We not only saw that there was a cohort of folks who were willing to pay $200 a month, we then saw the same thing I said before, which is a bunch of copycats. Google and Anthropic both came out with their pro tiers in the same order. I think Anthropic's is like a hundred bucks a month and Gemini's is around $250 a month, but it's basically the same concept. And not only were there folks willing to buy that tier, but, and I probably spend way too much time on AI Twitter myself, there are folks who are paying all of those providers for their highest tiers. Not only are they paying OpenAI $200 a month, they're paying Anthropic a hundred dollars a month, they're paying Google $250 a month, et cetera, because these are folks who want access to the latest and greatest intelligence. What I think is particularly useful about that example is that, one, they were able to take a really close look at what they were seeing in their usage data. Going back to the costing concern from earlier in our conversation: these are not normal distributions. In school we got the bell curve, the Gaussian distribution, drilled into our heads, but when you're talking about usage and willingness to pay, especially in the pricing world, you see really long-tail, skewed distributions. They were seeing folks who were using it 10x, a hundred x more than the average user, and said, hey, these are folks we're negative margin on, but we think they would also get a lot of value out of having the latest and greatest, so we should price to that. I also like that as a lesson learned, because of what I often see with companies I work with, even outside the AI space.
    Say you have a product or an offer for the SMB or mid-market, and now you're thinking about the enterprise. Put us back in the CRM world: well, we charge 10 bucks a seat, we're introducing a higher tier, so, I don't know, maybe we could charge 20 or 25 a seat. Those companies are often anchored to what they have in market today, and very rarely are they willing to even consider, oh, maybe someone out there would pay a hundred bucks per seat. But that's very much what can happen when you start to look at usage and customer value by these different types of segments who get different value out of your product.
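    The long-tail distributions Dan contrasts with the bell curve can be simulated in a few lines. This is an illustrative sketch: a lognormal draw is a simple stand-in for skewed usage, and the parameters are made up, not real usage data.

```python
import random

# Illustrative only: simulate long-tail (lognormal) monthly usage for
# 100,000 users. The mu/sigma parameters are made up, not real data.
random.seed(0)
usage = sorted(random.lognormvariate(mu=3.0, sigma=1.5) for _ in range(100_000))

median = usage[len(usage) // 2]          # the "typical" user
mean = sum(usage) / len(usage)           # dragged up by the tail
p99 = usage[int(len(usage) * 0.99)]      # the heavy-usage cohort

print(f"median: {median:.0f}  mean: {mean:.0f}  p99: {p99:.0f}")
# In a skewed distribution the heaviest users sit far above the mean:
# that tail is the cohort a $200/month "pro" tier prices to.
```

    In a Gaussian world, pricing to the average would be fine; in a long-tail world, the 99th-percentile users are a segment of their own, which is the opening OpenAI's Pro tier exploited.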

    Sean Weisbrot: So what exactly are they offering in these higher tiers that people justify paying for?

    Dan: Well, in total transparency, I have not been convinced to buy the higher tiers of these products. I'm happy paying all three of them for the lower tiers, so I definitely have a higher AI bill than the norm, but not quite to that level. What you're getting is access to the most powerful models, and I hesitate to even say the model names because they'll probably all be different next week, as well as unlimited usage of those capabilities. All of the companies, starting with Google, released capabilities like deep research, which you can think of like a really capable intern you can tell, hey, go off and do web research on a couple hundred websites to answer something. I did it the other day when I was buying a weighted workout vest. I could go spend a couple hours on Amazon reading reviews, or shop around fitness websites, for what I need to know,

    Sean Weisbrot: yeah.

    Dan: So you could just fire off one of these agents. But they're incredibly expensive; they're searching hundreds of websites and summarizing data. So those are capped for most users, but on these pro plans you can use them on an unlimited basis. And if your business is relying on it, that can be a no-brainer, because you're like, hey, I'm using this to write code, and that code I'm able to sell for 10, 20x whatever I'm paying OpenAI, and this is effectively replacing a junior software developer for me. It's just a no-brainer.

    Sean Weisbrot: Yeah, I've hit these limits a few times on ChatGPT's free version, because I had the paid version before and didn't see a reason to keep it, but I like it enough that I want to use it; I just don't want to pay. And I've found that I could very easily upload two documents or two images and hit the limit. Like within five minutes, I can hit the limit for the day, and then I have to wait like 10 hours. So I said, screw it, and went to Gemini. My friend had an offer for me to get three months of AI Pro for free, but I already had Google One on the account, so now I can access all of those things, and I'm using the hell out of Gemini 2.5 Pro for marketing and sales and pricing and LinkedIn strategy and all of these things, and I feel like it's doing a much better job than ChatGPT. I'm even asking ChatGPT to do the same thing I'm asking Gemini to do, and asking them to check each other's work. Sometimes ChatGPT agrees with Gemini; it's like, oh yeah, actually that's really good, but what if we also did this? And then I'm like, oh, actually, that's quite a good idea. So sometimes ChatGPT is adding to what Gemini suggests and saying, yeah, we can also do this and it'll make it better. So I agree with the idea of having multiple AIs to work with. But I feel like paying for Gemini is valuable, and I don't feel like paying for ChatGPT is valuable for me. And I do find that Cursor is valuable, because it has access to Claude, and I'm using that for coding a bunch of different websites and different things I'm working on. I wish their pricing was more transparent, though. Yes, you say, okay, $20 for this, but then when you hit those limits, you're like, well, now I have to what? Wait until the limit resets? This happened to me on Cursor last month.
    I hit the limit of 500 like 10 days before the month was over, and I had to wait to be able to use it again; it wasn't something I could just top up. That's more like a SaaS model, where you have a fixed fee, but with SaaS you don't typically hit a limit. You just pay to be able to use the thing, and you're not limited in what you can use once you've paid. So, you were saying before that there's a higher tier for people to have unlimited use. Why is it that AI products are priced this way, with a capped version and an unlimited version, rather than a "here, use it as you see fit"?

    Dan: There's a whole wealth of different topics within what you just said. I think there are a few things going on. One is a really interesting set of competitive dynamics when you look at the companies involved, their fundamental business models, what they are ultimately trying to achieve, and how they're trying to stay alive. What I mean by that: you have Google with, and I'm not the first one to say this, probably the best business model ever invented, one that just throws off incredible amounts of cash. They could lose money on their AI basically indefinitely. They could run out the clock on everyone else if they really wanted to, keep pricing pressure on the market, and run those other companies out of business, because they can fund that type of innovation. We've seen them do that with things like Waymo, which is finally becoming commercial but was in their labs for like 15 years, with probably untold billions poured into developing it. Similarly, we probably see that going on right now with Mark Zuckerberg and Meta, who's poaching all the OpenAI employees with ungodly pay packages. He doesn't necessarily need to win; he just needs to make sure those other guys don't. So he's willing to put his thumb on the scale with the assets he has available. The other dynamic happening right now is that all of these companies are incredibly hardware constrained. There was a meme, I think on OpenAI's Twitter account, probably back when they launched image generation: sir, our GPUs are on fire.
    And Sam Altman of OpenAI has talked about this publicly, saying, I spend probably 50% of my day going around begging people for additional GPUs. That is a fundamental supply constraint that most software companies have never had to deal with. We could make an analogy to the early days of the internet, the late nineties, when all of us had dial-up connections or DSL and we didn't have fiber and broadband. Eventually the world laid enough cable, and now we have 4K streaming Netflix. I get offers from Google Fiber to my house so I could run five HD downloads simultaneously; I don't need that, but I guess that's cool. So I think we're in a bit of a transition there on a capacity constraint. There's also a risk in what you're pointing out, which is: is there clarity about what you are selling a customer, so that they understand what they're buying going in? We've seen a little bit of this recently, where folks have the sense that they bought some sort of unlimited plan, but then hit some hard limit. What that does is make very angry customers, and that's not a good thing. Some of these companies haven't existed very long. A lot of them are founded by teams where maybe the oldest person is 23 years old. You have all these other dynamics of a very competitive industry, super rocket-ship growth, trying to make sure they can stay in business, while also acquiring more customers in a month than most companies ever acquire in their entire lifetimes, and keeping the lights on.
    Their ability to communicate clearly in this shifting landscape is being put under pressure. In terms of what I see likely happening, and predictions are always hard, especially about the future: something like a Cursor or a Lovable is an intermediary between you and these models. Likely what will happen is some sort of baseline platform fee, and then what looks like an added-value cost per token on top of that, fluctuating. You could think of it like the way, back before WhatsApp destroyed that part of the telecom business, carriers used to charge you based on the number of SMSs you sent over your monthly limit. You didn't just stop being able to send texts, but say you happened to get a significant other that month and your texts went up a bit more than average, all of a sudden you got a giant bill at the end of the month because you went over your plan limit. That will probably be the evolution these companies adopt, so that they don't have hard cutoffs. I think what's getting in the way of that right now, and you even saw this a little bit with Windsurf (the deal's not closed, but Windsurf is in an agreement to be acquired by OpenAI, and OpenAI and Anthropic are direct competitors): Anthropic is struggling to make sure all their customers can get access to their models, and so they deprioritized Windsurf from getting those capabilities. Again, I don't have any inside information, but if you're in a situation where you're basically running out of stock, your GPUs are on fire, you have to allocate appropriately. So you're seeing that happen even at the foundation company level.
    And so I think the application provider companies are sort of at the mercy of what's happening lower in the stack right now.
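    The SMS-style model Dan predicts, a base platform fee with an included allowance plus metered overage instead of a hard cutoff, can be sketched as follows. All numbers are hypothetical defaults, not any vendor's actual plan.

```python
# Sketch of the predicted billing model: a flat base fee with an included
# usage allowance, plus a metered overage rate instead of a hard cutoff.
# All numbers are hypothetical, not any real vendor's pricing.

def monthly_bill(tokens_used: int,
                 base_fee: float = 20.0,
                 included_tokens: int = 1_000_000,
                 overage_per_1m: float = 15.0) -> float:
    """Total charge in USD: flat fee, then pay-as-you-go past the allowance."""
    overage_tokens = max(0, tokens_used - included_tokens)
    return base_fee + overage_tokens / 1e6 * overage_per_1m

print(monthly_bill(800_000))    # under the allowance: just the base fee
print(monthly_bill(3_000_000))  # 2M tokens over: base fee plus overage
```

    The design choice mirrors the old SMS plans: the customer never hits a wall mid-month, but a heavy month shows up on the bill instead of in a rate limit.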

    Sean Weisbrot: So what if you are building a traditional SaaS that's going to have AI features inside of it, especially like an assistant? How should they price for that?

    Dan: In any pricing exercise, it's really important to be clear upfront about what your goals are, because depending on your goals you can end up in very different places, and both can be totally okay. Think about two different axes; they take your consulting license away if you can't put anything in a two-by-two matrix. If we're in the world of AI goals, maybe the y axis is monetization, low priority versus high priority, and the x axis is something like adoption. Am I more concerned about just trying to get adoption and usage? We think about this a lot from a product management perspective: hey, I put out a new feature, and I need to make it a habit of usage, right? If I put it out there and nobody uses it, it doesn't matter that I put it out. So we need to think: okay, these are net new capabilities, and there's a lot of uncertainty about why someone would even use them. Just saying "it's AI" doesn't help. At this point everyone's sort of been AI-washed; I think everyone's probably tired of even hearing the term, let alone what it means as applied to your particular feature. People are like, I don't get it. So you want to drive that adoption to build a habit, and maybe monetization is a lagging factor: hey, we're in this period of investing in what people find valuable, and therefore we're not gonna gate it. Or you might go the opposite direction. I think Satya Nadella at Microsoft made a very bold move, maybe not the best in retrospect, when they came out with their Copilot for Office 365, which is Word and Docs and PowerPoint, et cetera. I think Office 365 is about $10 per user per month.
    For the Copilot, they said, look, if you want the AI assistant for all of those, it's actually an additional $25 per month. So they ascribed more value: before, you could write and edit documents, but now the AI can write them for you. It turns out the capabilities weren't quite there, so the promise of the value was greater than the realization, but they made a very strong statement as a market leader: hey, we're going to set a high bar for the rest of the market. I can't remember the timing, but I think OpenAI's ChatGPT had already launched, right? You saw ChatGPT come out at $20 a month, and the rest of the market just kind of followed: oh, I guess $20 per month for these copilots is about right. So that was a strong leadership move, to set that price. What I think is dangerous: one thing I talk about a lot when you're thinking about monetizing capabilities is picking the right pricing metric. A pricing metric is the unit of value you charge a customer for. If I'm buying a CRM, I'm usually paying per seat, because there's a very strong value linkage between quota-carrying reps and the amount of revenue the company's generating. If the CRM is meant to help increase sales efficiency, deals closed, pipeline, et cetera, then that's a very natural pricing metric: larger companies with larger revenue will buy more seats. Where people can go off the rails when they introduce AI capabilities is to say, okay, let's use that summary example from before. I'm going to add an option in the CRM so a salesperson can say, hey, summarize all the history with this account, and that's going to use some amount of tokens on the back end.
    So now, besides charging per seat, I'm also going to charge you per token. This is a no-no. I call this showing your customers your underpants. You don't wanna show your customers your underpants. The underlying cost model is your problem, not your customer's problem. What went from a very easy conversation with that CRO or VP of sales (hey, how many sales folks you got? I got a hundred. Great, it's 10 bucks a seat, so that's a thousand bucks a month for your company) turned into: how many salespeople do you have, and how many tokens are you gonna use per month? Now this CRO or VP of sales is gonna be like, I don't know how many tokens I need. What does that even mean? There are sales folks out there who are incredibly talented and could probably navigate that conversation, but even if they do, when that prospect hangs up, they're gonna be like, I thought I understood it, and I don't, and I have a busy day, so I'm just not gonna think about it anymore, because it seems way too complex. Let alone all the operational complexity of rolling that out to your entire sales team, with differing capabilities and talent levels. What you've basically done is pour sand in your go-to-market engine, where now you have to train all of your salespeople to be AI experts. So what is the antidote? One: if you believe that capability is actually driving underlying value and you're putting it into a tier, can you rationally increase the price on your existing pricing metric? If I was $10 a seat, can this now be equivalent to $12 per seat? This is actually the move Google just made with Workspace. They decided they weren't going to compete head-to-head with OpenAI and Anthropic, trying to sell everyone Gemini subscriptions.
    On the consumer side, they rolled it into Google One. On the Workspace side, they said, hey, we're now rolling all the capabilities of Gemini into Workspace, and we're raising the price of Workspace per user, which is an example of doing that. The other way is putting the more advanced capabilities into your higher-tier offers. If you've got a standard good-better-best offer, perhaps you start by putting the AI capabilities in your best tier, creating an incentive for customers to upgrade and increasing your share of wallet as they adopt your most expensive plan. So that's another way to drive it. There are others, but I'll stop there and see where you wanna go.

    Sean Weisbrot: thing you've learned from focusing on pricing?

    Dan: Pricing is this really interesting blend of economics, statistics, and psychology, and I think that's what makes it super interesting. That last bit mostly gets left out of all the Econ 101 and 102 lectures everyone's sat through, where they teach you about supply and demand curves; there's no semblance of psychology in there. Especially when it comes to something like SaaS pricing: what a demand curve shows you is price's relationship to volume. The more fundamental problem in the areas I work in, in software, is price's relationship to value, and value does not exist in the bits of the product you're selling. Value, like beauty, is in the eye of the beholder; value exists in the mind of the customer. That has been a continuous touchstone for me. I don't really care what product you're selling. If we're talking about pricing, I care about what your customer thinks that product can do for them, because that's really where value lies, and therefore that's where we need to focus our attention on pricing.
