We Live to Build
    31:07 · 2025-08-01

    The Real Danger of AI: We're Losing Control of Ourselves

    What is the real danger of AI? It's not that AI will become our overlords; it's that we will become so reliant on it that we lose control of ourselves. I sat down with William Welser IV, CEO of Lotic.ai, to discuss this critical issue. He argues that by offloading our cognitive responsibilities to AI, we risk losing the very skills that make us human.

    AI Ethics · Human Cognition · Technology Dependence

    Guest

    William Welser IV

    CEO, Lotic.ai

    Chapters

    00:00-Introduction: AI is Easy, But It's Also Dangerous
    02:15-The Problem with "Sycophantic" AI
    03:20-How "Artificial Relational Intelligence" is Different
    06:00-The "Last Mile" of Human Interaction AI Can't Cross
    08:00-My Tattoo's Advice: "Consider You Might Be Wrong"
    09:20-Offloading Our "Cognitive Responsibility" to AI
    12:20-How to Use AI to Teach, Not Just Get Answers
    15:20-The Real Danger: Are We Becoming Too Dumb?
    18:15-AI is Just a Tool, Like The First Rock
    20:45-How to Educate Your Team and Kids on AI Use
    25:15-Why We Built a "Bottom-Up" AI Company
    29:15-The #1 Lesson from 5 Years in AI: Ride the Commodity Wave

    Full Transcript

    Sean Weisbrot: What if the real risk with AI isn't that it's too smart, but that it's too agreeable? In this interview, I sit down with William Welser IV, the co-founder and CEO of Lotic.ai, to explore how AI mirrors our cognitive patterns, sometimes in ways that stunt our personal growth. We dive into behavioral science, feedback loops, and why cognitive responsibility may be the most important skill of the AI age. If you're using AI to make faster decisions, this conversation might change how you think about your relationship with AI. How can AI take advantage of what we know about behavioral science, as well as data and psychology, to help us be smarter about the way we manage ourselves?

    William Welser IV: That's a really good question. The concept of AI started in the 1950s, but for the past 20 years we've really co-mingled the idea of AI with just making things faster. Maybe there's a little bit of learning, but it really was just robotic process automation. Since 2022, with the explosion of large language models, it's become accessible for everyone to use something that's a lot more powerful. So how we can use it now is, you can query it about very detailed things about yourself and get back feedback that is very particular and individualized to you. And it's been made by the Anthropics and the OpenAIs of the world to be easy to interact with. It's basically like you're working in a browser: type in your question and it shoots back a ton of answers. So that's, first order, how we can use it. One of the things I hope we're gonna get into is the fact that that's the wrong way to use it. It is, in fact, easy, but it's also quite dangerous.

    Sean Weisbrot: Okay, let's go there. I love that you pointed out something that you think is wrong with how something works. So why do you think it's wrong?

    William Welser IV: So right now, these systems are sycophantic. They're based on our prompts: they're going to take what we say and give us back what we've asked for, and then we're going to iterate. What I would argue, and actually what I've spent the past five years of my life doing, is working with behavioral scientists. I'm a chemical engineer, not a psychologist or anything like that, but I have dived deeply into the basic behavioral science side, because humans are the biggest challenge. And what we've developed is what I call artificial relational intelligence: a way for the AI to actually come back and prompt you in ways you wouldn't even know how to ask. You wouldn't even know the question needed to be asked, but it's taking all of the information that is personal about you, that you've fed into, in our case, a very secure system that has only your data in it, and it's actually conversing with you like you and I are conversing right now. I have no idea what you are going to say next. If you were an LLM and I asked you a question, I'd have a pretty good idea that the answer would have something to do with my question.

    Sean Weisbrot: Hmm. So let's say, for example, this is something that I do. I have recently been feeding transcripts from interviews and sales calls into Gemini and asking it to analyze the transcript and tell me what I've done well and how I can improve. And sure, it's positive, right? You said it's sycophantic, and it says, oh, well, you were really good at this, this, this, and this, and here's why, and here's how you can improve. It's never, you're so bad at this thing; it's always positive. And I've seen people prompt AI to say, hey, don't give me any nonsense, right? I don't want any of that crap. Just be honest with me. What's the raw reality? Personally, I find that it's actually working quite well to help me understand how I can improve. In the recent month or two, I feel like I've been able to get a lot better at being an interview host, even though I am constantly trying to improve. With the help of the AI, it's telling me very specific things about what I could be doing, definitely including talking less and giving the guests more time to speak. So while I agree with you that people are not using it correctly, I also think there's tremendous value in doing it that way.

    William Welser IV: Without a doubt there's value. I mean, these tools are extremely valuable; we just have to know their limitations. So for your example, it's giving you back feedback, not necessarily as it relates to you as a human. It is taking what you as a human have done, running it against best practices that are known in the literature, that have been written about, that are basically out there in this bolus of information the human species has created, and it's coming back and saying, okay, based on that, here are things you could improve. Those are very valuable. Very valuable. It is not going to come back and say: based on this, when you have this type of person, with this type of experience, whom you've asked these sorts of questions before, you need to be aware that this could be the case, and thus you need to re-ask the question this way. All of that last mile of interaction isn't there, because there's no rule set for it. There's too much emergence that happens between two individuals when you're talking together, and then there's emergence happening within you while you're talking, right? That part of it is not rule-based, and thus the LLMs can't tackle that last mile. So what they've given you? Heck yeah, that's super valuable. I'm sure you took the comment about talking less, and now you're consciously talking less. That's great. But if, in thinking about talking less, you're so focused on talking less that you miss the opportunity to jump in when you should be waxing poetic on something, it doesn't have that nuance.

    Sean Weisbrot: Right? Yeah, I don't follow it blindly, because I do know there are some things I could say and some things I shouldn't say. The other reason I've been thinking about how I can improve is that I see some of these guys with millions of subscribers who are getting these incredible guests and having these incredible conversations, and all I've ever wanted is to feel like I can interview at that level. I feel like I do a good job, but I also know, from my own ego, that if I think I'm doing well enough, then I'm probably actually sucking, because humility is something I struggle with. So the only way I can feel like I'm making progress is if I constantly allow myself the opportunity to be wrong, and the AI is helping me find those places I may not see.

    William Welser IV: Yeah. My children think that I'm silly. My daughter challenged me when she turned 18; she's 20 now. It turns out, I live in Texas, and in Texas, as a parent, you cannot approve of your child getting a tattoo prior to the age of 18, because it's considered child abuse. Even if the child wants it and you want it, you cannot sign off on it. So on my daughter's 18th birthday, she's like, I want to get a tattoo, but you need to get a tattoo too. And I was like, okay, I love you, let's do this. And I couldn't get just one; over time I've gotten a few. But one of them is actually right here on my arm, and it's to the point you just made: it's "consider that you might be wrong." Consider that you might not know everything. It actually does say "consider you might be wrong." It's a reminder to me that we don't know everything, and that there is nuance to all the things we tackle on a day-to-day basis. We have something to learn from everyone. We have something to learn from every situation, and we shouldn't take things at face value. So while I believe what you've just said is true, that you take it with a grain of salt and use it as you see fit, I am worried. I have seen lots of examples, just in my personal space, of people offloading their intellectual curiosity, offloading their cognitive responsibility, onto these LLMs. And they're not taking it with a grain of salt. They're saying, well, a computer must be smarter than me, so I'm gonna roll with that. So

    Sean Weisbrot: maybe I'm at fault for this, maybe I'm not. Maybe you can help me to decide. So one of the other things I do is the same with my sales calls or like if I send an email and I get a question back and I don't know how to respond in a way that's gonna get them to hopefully wanna continue the conversation. So I have been sending the this information along with context to Gemini 2.5 Pro, and I'll say, how can I handle this? Or in this situation, what should I be charging this person? 'cause they already paid for something else. I don't wanna charge 'em the full price that like somebody who hasn't paid for anything before would, or, oh, this PR firm isn't really sure because they think of this and so what should I say? So I'm constantly throughout the day saying, Hey, what should I do? Here's context, and then it'll tell me what to do. Now, on one hand you could say, yes, I'm, I am having the LLM do my thinking. But on the other hand, I don't know the answers to those things. And in the past, if I just answered it the way I would wanna answer it, I probably wouldn't get favorable results. So the LLM is helping me to understand what is favorable, what is going to psychologically get them to behave the way I want, which is to continue to the conversation and, and it's working. So. Is it wrong for me to trust the LLM to tell me how to handle something that I don't feel comfortable handling in a way that allows me to grow my business in a moral and ethical way?

    William Welser IV: I don't think it's any more wrong than it is for you to ask your mentor for advice. I see the AI as a mentor, right? I don't think it's any more wrong than calling up your best friend and saying, hey, I'm struggling with this, I know you're in sales, what would you do? Again, you're asking for advice. The way you're describing it, you are maintaining the cognitive responsibility of deciding what to do in the end. And I think that's the key. It sounds like it's in the minutiae, like, well, where does that start and where does that end? But it is such a big deal that you're asking it, how could I do this better? It's giving you answers, and then you are going to run them through your internal algorithms and then do something. What I would posit, from what I've seen through the people who have used our system versus LLMs and given us feedback, is that when they take that approach, and we force that approach, the approach of, we're advice, we're in a conversation with you, it's a conversational, relational system, they get more out of it than that kind of binary, here's the answer of how you could do it better. It might not actually be how you can do it best. And here's a simple example related to what you just stated. One of my sons is taking a summer class, and he asked me to review his essays. And I was like, the tense is crazy, right? I do not want to have to go through this multi-page essay and fix all of the problems with verb tense. Because he's writing about historical events, but then how they affect today, and that's a tricky verb-tense situation. That sounds awesome, by the way.

    Sean Weisbrot: Right? I like that topic.

    William Welser IV: Yeah, it's fun, right? So what I did was, I walked him through how to use it. We used Gemini as well, 2.5 Pro. I put all of his text in and I said, I don't want any adjustment to his words. I don't want you to summarize things. I don't want you to make it more this or that. I don't want to get rid of his voice. But I need you to help him understand examples of where the tense is right and where the tense needs to be adjusted, and why. And so I was able to give him a cheat sheet that could describe it better than I could, for sure, because I'm not an English teacher. And that cheat sheet he was able to take and apply to the entire document. And it teaches him how to think and remember, exactly, based on his mistakes. Exactly. So then when we went through it, there were far fewer mistakes, and there were some that were like, oh man, is this right? Is this wrong? I don't know. And yes, we did go back to Gemini and say, hey, what do you think? Is this the proper tense? And you give it context and blah, blah, blah. But it is that source of maintaining, and I'll go back to it again, maintaining the cognitive responsibility of learning and then applying that learning. If we don't do that, then I'm not really worried about AI becoming so smart that we have our AI overlords. I'm worried about us becoming so dumb that we are reliant upon something to make any decision, reliant upon something to just tell us the answer. And it's an interesting way to change that narrative: instead of, are we gonna lose control of AI, what if we lose control of ourselves?

    Sean Weisbrot: I hate to say it, but I think it's inevitable, because look at smartphones, and look at Google as a search engine in general, before AI came out. People stopped remembering the time. They stopped using watches, and they just look at their phone's time. They don't look at the sun. They don't look at what direction they're going in; Google Maps tells them where to go. People who spent decades driving cars without needing Google Maps are suddenly using Google Maps to know where they're going, even though they know where they're going. They don't need it, and they use it anyway. I'm kind of at fault for that as well. But generally, I can look at the sun and I know what direction I'm walking in. I can go to new cities and instantly know what direction I'm walking in, and I can walk down roads in random cities I don't know, and I don't get lost, because I know what direction I'm going in, and if I need to go back, I can turn around and I know where I'm going. Those are skills that you learn, and I have chosen to prevent myself from losing them. Not everybody thinks that way, and I think that's why AI is going to doom us, because a lot of people are outsourcing their knowledge. It's like their smartphone is their brain, right? They've outsourced their brain to their phone, and now AI is becoming the next iteration of that, especially because AI can have voice conversations with you. So I don't know how there's any other way forward.

    William Welser IV: Well, I do think there's a lot of education that has to go on. Look, I agree with you that humans, just like bugs to a light, are gonna flock to something that gives them a ton of convenience. And by the way, from a psychological standpoint, that's kind of problematic, because we require friction in our lives. It's why people will say, oh my gosh, that person has everything. They have all the money, they've got the house, they've got the boats and the this and the that. Why aren't they happy? It's because they've found friction. We find friction. But we also get attracted to convenience, so it's kind of a weird double-edged sword. You've described, again, your commitment to maintaining your cognitive flexibility, responsibility, accountability, your intellectual curiosity, all of that. We need to educate people that this supercomputer we have in our pockets, and sometimes in our glasses, and in other cases on our wrists, these really, really powerful devices, they are just tools, right? I often remind my team, when we're thinking about how we present something to a user or a customer, that the first tool was a rock. It really hurt my fist to try to knock out game when I was hungry. It was a lot easier to knock it out with a rock. But I didn't cede all of my livelihood to the rock making the decisions. And I know that seems silly, because of course a rock can't; a rock's not a supercomputer. But if we think about it as a tool, a tool that is for our use, but not there to dictate to us how to live, then I think we're better off.
And that starts with education. I could go really deep into how the education starts, who's responsible, and the policies that have to be involved. There's a lot that goes into that. But the education piece is super, super important, and people just aren't getting it.

    Sean Weisbrot: So in the absence of a responsible education system, and I think education should start at home, not in a school: how does one, as a parent, or as a leader of a team if we're talking about a professional environment, how does that person who's responsible for other people educate them, whether it's a child or an employee, to use these tools without losing themselves in the process?

    William Welser IV: Yeah, it's a great question. You hit on something at the very beginning of your question, which is that when most people think about the word education, they think about school. It's synonymous with school. For me, I'm learning something all the time. I'm being educated all the time, in long-term, long-tail sorts of things: hey, what do you do with your money? Oh my gosh, watch the markets; I'm educating myself based on that. All the way to, I'm driving down the road, that person is acting in a very strange way with their car, they're going to cause an accident, and I'm learning that I need to go around them and get them behind me. So we're educating ourselves all the time. How do you best do this? It's multifaceted. If you're dealing with someone who is looking to you to model, like my kids, my kids are looking at me to model behaviors. If I blindly say to my son, the one I was just referencing, hey, it turns out there's a lot of work you need to do on that essay, let's just throw it through Gemini and have it fix it, right? That's easy. I've got the Gemini Pro subscription; I can get it to do anything. But if I model that there's an easy way out, and that easy way out, by the way, strips your voice, it oftentimes can cause errors, which I can give a lot of examples of and I'm sure we've all experienced, and it shows a lack of, again, the phrase, the title of this should be "cognitive responsibility." If I'm not modeling that, then I'm not teaching. It's the same thing within a job, right? I'm not gonna sit here and say that my developers can't use Claude to help them along the way while developing. There's just so much value in those tools.
But surely, when I go through a quality control check, I wanna know where they used it, how they used it, why they used it that way. What did they find helpful, and where did they have to adjust? I want them to be thinking about the fact that it's not as simple as, well, you and I chatted the first time we talked about vibe coding, right? That's really cool, I can build lots of things. But if I'm going to scale something to an enterprise level, I'm not gonna do it vibe coding, because there are a lot of other things required in different parts of that system to make it robust, to make it so it doesn't break, and to make it ethical, and all those sorts of things. So I can, again, model to my team and say, hey, use the tools. That's awesome. Use what we have. We'd be dumb not to. But let's walk through how. How did you use it? Not because I'm looking to say, gotcha, you're wrong, but because along the way we might figure out, heck, we can use it for something else. We can learn along the way while also maintaining control and cognizance over how it's being used. Does that make sense? Yeah, for sure.

    Sean Weisbrot: So what is your company actually seeking to do with behavioral science and change and all of that?

    William Welser IV: Yeah, so my company is named Lotic, L-O-T-I-C, and I started it in 2020. I actually put in the paperwork in January of 2020, and Delaware and the state of Texas and all these wonderful states got back to me in April. And a lot had changed between January and April of 2020. I set out with the following goals. One was that I thought technology was upside down, and I say that in kind of a flippant way. Large technology companies were clearly making a ton of money based on our data, and I wasn't necessarily seeing that individuals benefited from that. They might benefit from a convenience, but it wasn't, it was good for me as a 77th-percentile man of my age with three kids living in Austin, Texas, blah, blah, blah. But it wasn't good for me, Bill. So how could we make these tools far more personalized? How could we democratize them down to the individual level? That was point number one: instead of top-down use of technology, let's use it bottom-up. And it turns out, I was running with the conceit that there's more than enough data at the individual level to run these very, very powerful algorithms. I don't need to commingle my data with yours just to run those algorithms. Point number two was that I didn't see a ton of ethics. I didn't see a ton of privacy. I didn't see a ton of security. And if you want to enable somebody to learn a lot more about themselves, you're gonna have to engage them in ways that draw out far more personal information than they might normally be comfortable sharing. Our system is voice-first, so we're asking people to speak vulnerably into the system, and then we're analyzing that, and I can explain all of that, but it's wonderful.
But without privacy provisioning, provable security, and an ethical approach that can be shown to the individual, you're never gonna get them to speak that way. You're just not. So we went about those two things, and the idea was: if you can collect someone's data, treat it, and inform that person at the individual level, you can make them a better decision maker. And if you're a better decision maker, you can be a better consumer. And we're all consumers. We're consumers of healthcare, consumers of financial advisory, consumers of durable goods, consumers of consumable goods, et cetera. The world benefits if I'm a better consumer of healthcare, if I know when to go to the doctor to stave off the trip to the emergency room. But right now there's not a heck of a lot of information out there that's personalized to me, that's like, hey, you need to do this now because of your information. So that was the goal of our company: democratize the self. Take all these tools, focus them on the individual, help the individual understand themselves better so they can make better decisions, and they'll be better consumers. And once you have really, really informed consumers, there's a ton of opportunity from a market standpoint for companies, businesses, municipalities, those sorts of things.

    Sean Weisbrot: And so in these five years of doing this, what's the most important thing you've learned?

    William Welser IV: The most important thing I've learned is that it is extremely difficult, and over the past five years it's become even more so. We've had this boom in expecting that you are going to disrupt the large players. When I started this company, we were building LLMs, because we needed something to base things on, and GPT hadn't been released yet, Gemini hadn't been released yet; those systems didn't really come onto the scene until 2022. When they showed up, I could have continued down that path and said, well, we can just do this better for our own purposes, et cetera. Instead, we stepped back and said, no, we need to build a layer that sits on top of those things; we need to let them become commodities. So a huge learning for me was that with a boom like this, a race, and there's really been an AI race between these large companies, comes commoditization of these tools. And if you take the approach of, I'm not going to fight them, I'm going to learn how to ride on all of the rails, I'm gonna look at them as commodities, then you stand a far better chance of succeeding. And the learnings were all about that: pivoting, paying attention to the market, being able to anticipate what to do next once something new hits, not overreacting, because there's a ton of stuff out there to overreact on, and also really paying attention to what's gonna be commoditized. And if it is, turn your attention away from trying to rebuild it or make it your own way; ride on top of it.
