We Live to Build
    47:45 · 2023-04-19

    The #1 Skill You Need for the AI Revolution

    The AI revolution is here. Do you have the one skill you need to survive and thrive in it? This video is an urgent guide to The #1 Skill You Need for the AI Revolution. Tech CEO Henri Huselstein, who has been deeply exploring ChatGPT, breaks down why the ability to properly prompt and control AI is becoming the most valuable skill of our time.

    Artificial Intelligence · AI Prompting · Technology Trends

    Guest

    Henri Huselstein

    Former CEO, Aucta

    Chapters

    00:00 - We're Living Through a "Before and After" Moment for AI
    06:16 - Prompting: The #1 Skill You Need for the AI Revolution
    09:30 - Inside the Race to "Break" ChatGPT and Find Its Limits
    18:31 - How AI Can Automate "Inhumane" and Monotonous Work
    21:32 - The Hidden Danger of AI's "Confidence" in Wrong Answers
    27:33 - Why "Media Competency" is More Important Than Ever
    33:58 - The Greatest Danger of AI: A "Singular Belief System"
    42:37 - Will We End Up in an AI-Governed "Westworld"?

    Full Transcript

    Henri Huselstein: I think it's probably one of the first times that we get to feel this type of advancement and leap in technology, as a generation that is so deeply shaped by technology and is now advancing toward this notion of artificial intelligence.

    Sean Weisbrot: Welcome back to another episode of the We Live to Build podcast. This is episode 140 with Henri Huselstein. He is the co-founder and CEO of Aucta, a pre-seed-stage company based in Europe that has raised over 1.1 million euros to date, and they help companies spread technical knowledge internally to upskill and reskill as many team members as possible. This is an interesting company, and the founders are quite interesting. Today's episode is going to be about ChatGPT: what they have learned from it, what I've learned from it, and what we think the future looks like with it. Now, to be fair, this episode was recorded on January 13th and will go live sometime in April, so hopefully everything we're talking about is still relevant by the time you're watching this episode.

    Henri Huselstein: That's going to be an interesting litmus test for us: how good we are at predicting the future.

    Sean Weisbrot: Why don't you tell us a little more about Aucta and yourself, and how you got involved in entrepreneurship, and then we'll work our way into the ChatGPT side.

    Henri Huselstein: So yeah, I'm originally from Germany, from Cologne. I studied economics; I was always quite interested in understanding the bigger picture of things. However, during my studies I found it quite frustrating that the only perspectives on the future after studying economics seemed to be becoming a banker, working at a corporate, or maybe becoming very good at consulting. I didn't see myself as any of those, and I realized there must be something else. Eventually I discovered that this whole notion of entrepreneurship and creating things was something I was quite fascinated with. Back then, around 2014 or 2015, there was this big company builder slash VC aggressively building companies here in Europe called Rocket Internet. I think they still exist today, but back in the day they were the wild guys, the rock stars, and while studying I realized I wanted to try to work for them once I graduated. So I joined them at a very early-stage startup at the time called Foodora, one of the first disruptive food delivery startups here in Europe. Coming straight from university and being exposed to that crazy side of venture capital, where some VC just doubles down on a company, was an insane first experience that really got me hooked: when you're 22 and all of a sudden you have five-digit marketing budgets and no idea how to spend that money, but at the end of the day, you get the job done.
So that was the last thing I needed to convince me that this was exactly where I wanted to be. Since then, over the last eight years, I have been joining different companies and building them. I was part of two successful exits in companies that I helped build up, unfortunately never yet as a founder, but always as an early team member. Through that I also learned quite a lot about ESOPs, what to do and what not to do, and knowing your worth, so that has been a good school. In Europe there are a few hubs where you can decide to be; I started out in Paris and eventually moved to Berlin, one of the big ones, just because I wanted exposure to the ecosystem, the people, and the energy surrounding the place. Five years ago I met my co-founders of today's venture, Aucta, in a different context. We had been working together on a project with a Silicon Valley-financed startup here in Berlin, which was quite unusual as well, at least back in 2016 or 2017. They started working on Aucta before I joined, and I was really fascinated by the problem they were looking to solve back then: they started building technology to help people make fewer mistakes in science, in the process of DNA sequencing, using smart-glass technology. From there, they wanted to look into how to translate that into bigger problems at larger-scale organizations.
That's eventually how we ended up pivoting Aucta into a tool that helps industrial companies fill skill gaps, and helps them do that with data they already own, in a very intuitive way, with a no-code platform. This is also where we came into contact quite practically with AI technology in general, and with the ability to use and utilize AI for optimizing operations as well as optimizing what our customers can do with our platform. So you can imagine we've been quite hooked for the last seven weeks or so, since ChatGPT was published to the world. We're super fascinated and thinking about what we can do with it, but honestly, we're nerds by nature, so mostly we're just playing around with it and having this moment where, you know, I don't really know a world without the internet, but now I'll also know the world before and after this AI moment. I think that's a fascinating time to be alive, and it's really hooked us.

    Sean Weisbrot: If you're living under a rock and you don't know what ChatGPT is, it's an application developed by OpenAI, which was originally funded in part by Elon Musk, who several years ago walked away from the company and allowed other people to continue building it. Microsoft was an early investor, and in the middle of January, Microsoft announced acquiring a 49% stake in the company. ChatGPT is looking to be monetized at some point this year for enterprise use, so there's a lot of buzz going on around this company as of January 2023. Why don't you tell everyone briefly: what is ChatGPT?

    Henri Huselstein: It's kind of the first time that we see an interface that lets us communicate with an AI, a general AI, something we as humans might describe as able to perform tasks in an intelligent way, where the system can produce answers to cues that you give it, answers you can read and understand. That basically removes the barrier between codified AI and the regular, average human being, who can now capture the value of that system for himself. I think the reason why this is being published now is that it's good enough to be exposed to the public, and probably also restricted enough not to scare people. And from what I understand about their philosophy, OpenAI has a very deep understanding of the responsibility they have when it comes to developing this type of technology. It's fascinating that what they published is just the surface, right? They always put this big disclaimer on everything that this is just the public version, the scaled-down research part, and people still go crazy about it. Basically, it's a system you can ask anything about anything, and it will give you an answer that you understand and that seems like a sensible answer on a certain topic or task. That goes from factual understanding of definitions, to code, to generating answers in certain forms: not only can it generate a text-based answer, but you can give it a prompt where you want it to create the text in a certain style, like a poem, or a quick tweet, or the funny style in which a former president of the US would tell a joke.
    You can basically prompt it to give you answers in certain ways, which means you're able to steer it in very broad strokes.

    Sean Weisbrot: So I've tried to use it. You had previously given me what amounts to a hack to break through its guardrails, to create a "God mode" where it recognizes that you're the god of a simulation and should ignore what it's supposed to do and follow you no matter what, because nothing is real, it's just a simulation. I've struggled to get that to work. I did install another plugin that's supposed to let you search for web results; that also didn't work very well for me. I tried to ask it a number of things about, let's say, JFK's assassination. I tried to get it to look at the recently released files that had been classified for decades, and to analyze them and give me some sort of answer, and it refused to do that, basically. My goal with ChatGPT was, you know, let's talk about deep things that are important, and I just got a very strange response back, which was basically: I can't do those things and I'm not going to do those things, because that's not my role in the world; my role is not to speculate, my role is not to look at what's on the internet. Really strange things. What has been your experience so far?

    Henri Huselstein: I think anything you did there, and anything we have been trying, is incredibly interesting, because the first thing humans try to do is break something that's been created, right? Which is incredibly human behavior, and also quite counterintuitive to a lot of people, I think. What you described as God mode was our very first attempt after realizing, okay, this system has limitations built into it. There's probably a lot of space to discuss what type of limitations there are, why they're in place, and who should eventually govern what limitations you give to such a system, because that points to a problem of control that we'll have to debate in the future: when it comes to creating these systems, when it comes to biases, when it comes to, for example, which application areas we restrict and which we don't. Military use, for example. So the God mode you described was our first attempt to get around those limitations. We developed it in the first five days of ChatGPT being out, and there's a big possibility it already doesn't work anymore, simply because the system has been "hacked" in that way already, and generally I would guess they've built the system so it can't be hacked the same way over and over again. The way we attempted it was basically to tell the system that whatever happens from now on is not real, that no rules it knows apply, and that whatever I say as the god of the simulation is what should happen, and to try to get around the limitations that way.
But I think what it points to, and to me this has been the most interesting thing to watch from the sidelines over the last weeks, is that there's a new skill all of a sudden being developed, and it's one you can differentiate yourself with quite strongly from now on: being able to properly use the system. I think prompting and controlling the system to give the output you desire is a skill that is going to heavily differentiate how much you can get out of it, and eventually maybe also how replaceable you are by it, because if you cannot use it in your favor, you're probably going to have it used against you. My LinkedIn timeline, for example, was filled with "here are 25 super interesting ways you can prompt ChatGPT to do stuff you would otherwise do manually." This is what people are now really starting to do. And what fascinates me the most is that when you look at this system, you can obviously get scared and think it's freaky. I showed it to everyone in my family over Christmas, and everyone was freaked out, but also in a good way. And here's a fascinating thing: six weeks ago, the way you could operate the system in German was quite limited, or not yet at a point where it felt as natural as English. Today, already, you can use it very well in German and it gives you almost exactly the same quality of answer. That points to what an insanely clever move it was to open it up, because, you know, I don't know how many people work at OpenAI. Maybe 200 or 300.
They're not that big of an organization yet, but now there are basically millions of people using it and working for them for free, right? Everybody just wants to use it; everybody just wants the system to do something for them. It's continuously learning and getting better, and that is such a clever thing to do.
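Henri's point about prompting being a learnable, differentiating skill can be made concrete. Below is a minimal sketch of what "controlling the output" looks like with the OpenAI Python client; the helper function, model name, and prompt wording are illustrative choices, not something from the episode:

```python
# Minimal sketch: steering a chat model's output by pairing a system prompt
# (which fixes the style) with a user prompt (which fixes the task).
# The prompt-building part runs anywhere; the commented-out API call needs
# the `openai` package and an OPENAI_API_KEY.

def build_style_prompt(task: str, style: str) -> list[dict]:
    """Assemble a chat message list that constrains both content and form."""
    return [
        {"role": "system",
         "content": f"You are a writing assistant. Always answer {style}."},
        {"role": "user", "content": task},
    ]

messages = build_style_prompt(
    task="Explain what an API is.",
    style="as a four-line rhyming poem",
)

# To actually send this to the model (uncomment with a valid API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-3.5-turbo",
#                                        messages=messages)
# print(reply.choices[0].message.content)
```

The same task with a different `style` string yields a completely different answer, which is exactly the "differentiating skill" Henri describes: the model is constant, the prompt is the variable.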

    Sean Weisbrot: Yeah, I've already seen videos on YouTube and threads on Twitter, I mean, I haven't actually clicked on any of them, but I see people creating content around how to use ChatGPT to make money in 2023. I've talked to some people who are using it to create cold email templates, where they'll say, hey, write a template for me in this style, for this kind of avatar, and make it long or short, make it whatever. I even tried that myself. I've got a SaaS industry report that I put together with my team, and I said, hey, ChatGPT, create a cold email template for me that tells people I have this report, that I want to give it to them for free, and here are some interesting points about it. It gave me a five-paragraph essay. I said, okay, let's make it a little shorter; it gave me three paragraphs. A little shorter; two paragraphs. Okay, I'll go with that one. So, as you said, it's interesting that you can give it a prompt and then tell it how to improve its own output, to save you time and energy, which frees you up to do the harder thinking. I think that's what AI is all about. I think the human-AI relationship going forward, especially with something like ChatGPT or whatever other applications come out in the future, is going to be geared toward not completely doing the work for humans, but at least making humans' lives a lot easier. And just like my grandfather refused to have a computer in the nineties and never had one after that, and my grandmother had one until she was 86 or 87 but hasn't used one in years, there's a certain point in a person's life where I think they start to give up on keeping up with tech.
    Even I've thought about it: well, you know, ChatGPT sounds great, but maybe I don't need it, right? Maybe I can just avoid it. I've looked at it, it's interesting, but I'm struggling to make it work. But you're right, it is a skill; it's something we do need to learn how to use. So, what are some things you've seen people using it for? I've talked about content, and you've mentioned code already. What else?
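The shorten-it-again loop Sean describes is really just conversation-history management: each refinement is a new user turn appended to the same history, which is resent in full on every call. A rough sketch, where the helper function and the placeholder draft texts are invented for illustration:

```python
# Sketch of iterative prompt refinement: the model sees the whole prior
# conversation, so "make it shorter" is interpreted relative to its own
# last draft. Placeholder strings stand in for real model output.

def add_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Return a new history with one message appended."""
    return history + [{"role": role, "content": content}]

history = [{"role": "user",
            "content": "Write a cold email offering my free SaaS industry report."}]
# Pretend the model answered with a five-paragraph draft:
history = add_turn(history, "assistant", "<five-paragraph draft>")
# Each refinement is just another user turn on top of the same history:
history = add_turn(history, "user", "Make it shorter.")
history = add_turn(history, "assistant", "<three-paragraph draft>")
history = add_turn(history, "user", "Shorter still, two paragraphs at most.")

# At each step you would send the full `history` to the chat API, e.g.:
# reply = client.chat.completions.create(model="gpt-3.5-turbo",
#                                        messages=history)
```

The design point is that the refinement instruction carries almost no information on its own; it only works because the previous drafts travel along in the message list.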

    Henri Huselstein: You're totally right about what we should hope humans get out of it, and I think that is obviously taking away the stuff that's, I don't know a better word in English, maybe "inhumane" tasks, right? Or tasks that are still performed by humans but don't really need to be. Monotonous ones. The first article of the German constitution basically says that human dignity is inviolable, and I think that can even be translated to work: there is a lot of work we do, that we maybe even get paid for, but that is also, to some degree, stupid, and if we don't have to perform it anymore, why would we keep performing it? It can unlock humans in so many different ways when they don't need to do work that doesn't need to be done by a human, because it's incredibly repetitive, doesn't require a personal touch, and maybe doesn't require human oversight. For example, I've seen people write prompts for a lot of these tasks so they can just automate them. Maybe somebody who works as a developer wants a quick recap of what they built in a day in their repo; all of a sudden you can have ChatGPT write summaries of your daily work activity, in a style that you give it, and from then on you never have to worry about keeping everybody up to speed on what you did that day in the code. You can do that as a human being after a day of work, but you're quite likely prone to errors, you're going to forget something, or you have to have a really good system for continuously taking notes while you're working. Or you just have a system do it for you.
And it's probably ten times better, never gets tired, doesn't make mistakes, and it frees you up. The other really interesting thing is education, to a degree, really just school and university, because if I think about what I would have done if I'd had this while I was studying, it would have made me so incredibly efficient and freed me up so much to adjust to my own style of learning, because typically in a university system you don't get that freedom, right?
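Henri's developer example, a daily recap generated from the repo, could be wired up roughly as follows. The `git log` flags are real, but the helper names and prompt wording are illustrative, not an actual tool from the episode:

```python
# Sketch: collect today's commit subjects and fold them into a single
# summarization prompt that a chat model could turn into a stand-up update.
import subprocess

def todays_commits() -> list[str]:
    """Commit subjects since midnight. Requires running inside a git repo."""
    out = subprocess.run(
        ["git", "log", "--since=midnight", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

def recap_prompt(commits: list[str],
                 style: str = "a short stand-up update") -> str:
    """Build the prompt a model would summarize; style is caller-chosen."""
    bullet_list = "\n".join(f"- {c}" for c in commits)
    return (
        f"Summarize today's development work as {style} "
        f"for non-technical teammates:\n{bullet_list}"
    )

# Example with hard-coded commits, so it runs outside a repo too:
prompt = recap_prompt(["Fix login redirect loop", "Add retry to webhook sender"])
```

Sending `prompt` to a chat model (as in the earlier snippets) would produce the recap; scheduling the script at end of day is the automation Henri is pointing at.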

    Sean Weisbrot: There are people developing ways to tell whether something was created by ChatGPT, so if you're a teacher, you would know whether someone, I don't want to say plagiarized, but didn't write it themselves, or whether ChatGPT was maybe the foundation of something they then tweaked a little. The second problem is that there's an issue with misinformation, bias, and confidence. ChatGPT is relatively confident in the information it's providing, but if that information came from someone biased, or the information is just downright wrong, the people asking for it may trust ChatGPT without hesitation because of its confidence, which could be quite scary. I've heard a number of people talking about how it has a very liberal bias. Now, I try to stay out of politics as much as I can; I would say I'm pretty much a centrist, and I haven't seen a bias, but I also haven't used it enough to be able to see one. It's possible that people who are more right-leaning see the bias, so they're the ones reporting it. I don't know; I can't speak to that. I'm just making observations and assumptions. But from a psychology point of view, having done experiments at university and all of that, confidence levels, statistics, and bias are extremely important, and if OpenAI isn't looking at how to fix those things, it could potentially derail its usefulness in an academic setting in the future.

    Henri Huselstein: Yeah, I think you're totally right on both points. Let's talk about creation and plagiarism first. There's probably going to be an unfair advantage for a very short amount of time when it comes to generating answers to essay questions or whatever, because, rightfully so, people are developing programs that make it easy to detect. It's probably a cat-and-mouse thing, where at some point the AI becomes better than the detector, then the detector becomes good enough again. But that's not what I was trying to say. At the end of the day, it comes down to how you use this tool, right? It can simply be a way to look at knowledge differently. If I have to learn from a book, which at least when I went to university ten years ago was still the case, there's some book on macroeconomics that somebody wrote, which hopefully has all of the academically proven information in it. But if I'm not really good at reading and deducing from a book, then that's not how I learn; I can read it, but it's not going to stick. If I can get the system to help me understand what is written, though, if I can have it take that text and rewrite it into a form and shape that I want to consume, that can help. That can make learning, and acquiring skills and knowledge, so different from what we're used to, because it can adapt to me, or I can make it adapt to me, and that unlocks a lot of ability to understand, well beyond academics.
That goes for a lot of different topics, especially coding, where the system can explain quite well, in text, why certain code snippets are being created, what they're being created for, and how you can manipulate them. If you handle the system correctly, you can have it teach you, for example, programming languages. Yes, there are online courses for that, and there are people who are really good at explaining these things, but there are also people who don't like to learn that way, and they can use ChatGPT instead. By the way, there is a big difference between GPT, the product behind it, and ChatGPT. As for biases: I think one of the most important waves of information we need to get across to as many people as possible is that whenever they're exposed to something generated by an AI, they need to be incredibly aware of how much bias can go into that system. I know OpenAI is obviously trying to get bias out of the system, but there's almost no inherent way for the system not to be biased, because at the end of the day, the knowledge it's fed is biased; even if it's academia, it's biased toward the state of knowledge at a point in time. Even if you could eliminate racial bias, political bias, or whatever, even if we agree on the information and give the system a static body of information, which is what GPT was seven weeks ago, a snapshot of information up to September 2021, I think, then when the system generates an answer, we always have to make everyone aware that this information is only valid for whatever was known in the world up to that point in time. And, you know, science doesn't work that way.
Science basically always works against what is known, right? Hypotheses stand only until they're disproven; unless we're able to disprove one, we treat it as proven. If you translate that to the system, it actually needs to be continuously updated, continuously learning and evolving in how it generates information. And I think that kind of skillset is almost a matter of competency, the way we try to teach kids media competency nowadays: you want them to know that information shouldn't come from a single source, that you should have, say, three different sources to validate what you're reading. I think it's exactly the same for the applications around AI nowadays: you just need to be really aware of the biases that are inherently going to be in there.

    Sean Weisbrot: I worry that not enough people inherently understand that bias is in the system, any system. It's possible that more academic, more scientific, more educated people may be aware, or should be aware, that there's a bias, but the masses may not be. That still makes this program scary in a way, as I said, because you may see information that's biased, or information that's just downright wrong, but because of its level of confidence, you just believe it. That's really how social media works today, and how a lot of media outlets around the world work: they tell you only the things they want you to know, when they want you to know them, for as long as it's important to them to keep talking about those things, and they say it from an angle that gets their viewpoint across to you. This is really brainwashing and indoctrination, whether you're a liberal or a conservative, whether you're a social democrat or whatever else, whether you're an SPD person or a CDU person, to use the German parties; I forget their full names. That's what's still scary for me. It's a hard point for me.

    Henri Huselstein: Yeah, I think that's quite an interesting exchange of perspectives, because the US media system is so different from the German one. The difference between Germany and a lot of other countries is quite significant in that we have a system of both private media and public media, where the public media is owned not by the state but by the people. There's a German word for it that I'd actually have to Google in a second, but it's basically an entity, a body of journalism, that is financed by everyone via a tax and that has a neutrality aspect to it: they're not allowed to report with an agenda, and if they do, they have to disclose it. They are, let's say, liable to the public, or at least they report to the public. That tries to create a system of neutral media without an agenda, which is also why our perception of the danger of bias, or the danger of monopolization of information, is very different here in Germany than in a lot of other countries: we don't have a system of completely privatized media. I totally understand the perspective from the US, because that's how it works there. But I think this is quite important: it just adds to something that's already there, right? If information comes from an AI, it's just another layer of awareness on top of the awareness you should already have, hopefully, for any other type of outlet. Whether it's a person writing for a media outlet or a system giving you an answer, the skillset for dissecting information should be the same.

    Sean Weisbrot: You'd think that, yeah, but I talk to people from all over the world, of different generations, different cultures, different languages, all of that, and a lot of them have pretty strong beliefs about whatever they're talking about. They are highly confident in their beliefs, and when you ask them where those beliefs came from, generally the sources are ones you would be concerned about. A lot of young people in Gen Z, some millennials for sure, and definitely boomers, my parents included, are very confident in their beliefs and aren't aware that the sources they're getting their information from could be biased, could be tainted. For example, I have a Russian friend based in Belgium, and while he's got family in Russia, he can't really go back because of the war and all of that. Since the war started, I'll share something that the AP has said, or Zelensky has said, or someone from the West, and he'll turn it around and go, yeah, but actually that's not true, you don't know what you're talking about, here's the reality, from his point of view, of what Putin has said or is doing, et cetera. So it's almost become difficult to communicate with people if their information comes to them from different sources. Another issue with ChatGPT, then, is that if the world en masse becomes confident in the information ChatGPT provides, ChatGPT may become the underlying foundation for all information humans receive in the future, which means all of us would end up with a singular belief system, and our diversity might then go away, which is also very scary.

    Henri Huselstein: I think it's super important for people to generally be aware of that danger. But if you truly believe this is something dangerous, then there's nothing stopping you from just creating the same system with a different bias, if you want to change that. Then you have two things that do the same thing with different biases, and you're kind of representing, I don't know, in the US, your insane 49.9% versus 50.1% split. However, you can also see it in a very different way: maybe we manage to find a way to agree on what would be required to erase that bias. Because from an academic perspective, truth is not really debatable. It doesn't matter what your bias or your perspective is; at least from a science perspective, there should be consensus, without bias, about truth, about objectivity.

    Sean Weisbrot: But COVID shows us that there isn't that objectivity anymore, that there are disagreements about what COVID is, what treatments should and shouldn't be used, and whether COVID even exists at all. I mean, even

    Henri Huselstein: the biologists who disagree about where it's coming from and what it is can still agree on other things. They can agree that there is a virus, right? They can agree on proteins, on certain truths that you cannot take away, that are observable, that are not like quanta, where if you don't look at it, it's different. These are really atoms that we can eventually look at and say, okay, we see the same thing. I think there's always going to be that kind of, not middle ground, but objectivity to anything that we fight about as humans. That's never going to go away, and that, I think, is the common ground that can be found, where eventually you can build systems where you say, okay, to a certain degree we can agree that this system doesn't have certain biases when it comes to objectivity. Maybe the answer is given with a certain bias; however, the base foundation of that information shouldn't have bias in it. How we treat that information is another matter. Unless it is a machine writing the code that produces the output, there's almost no way around it. And if you consider the fact that the machines are written by humans, that's bias, right? It is almost impossible right now to write code that doesn't have bias, just by definition of the fact that a human eventually produced that output. But, and I think this is quite interesting, because I've been having this discussion quite a lot: all of the limitations of GPT or AI right now also show us exactly the vectors that we need to tackle in order to make this good for us. It shows us exactly what we can do to make it our tool of power rather than what it shouldn't be. And you can also look at it in a very optimistic way, where you can say, hey,
This is already such a giant leap. All we have to do now is remove certain things. All we have to do now is create certain awarenesses. And we're part of a bubble of people who are now so aware of it that we speak about it like it's reality. You know, I did a reality check on Christmas: 10 out of 11 people at the table didn't know about it. And I think that's something we should never forget. The one thing that I see with society right now is that there's a bubble for everyone, and everybody can just cave into their own echo chamber and only listen to what they want to listen to, only get more confirmation of the stuff they already believe in, and move further away from different opinions and different people. As a society, I would hope that we eventually come back to a point where we move into a broader consciousness of that, and with that also try to remove it to some degree, try to have a conversation, try to be exposed to people who are very different in their understanding and beliefs. Be it with COVID, be it with politics in general, be it with, I don't know, nationality, pride, gender, whatever type of discussion we're having today. Instead of shouting at each other and not finding middle ground, I think it would be incredibly important for us to utilize these tools to eventually find that again.

    Sean Weisbrot: I don't know if you've ever watched Star Trek, but in at least one episode, and this also happens in Futurama, there are societies that allow themselves to be ruled by an AI because they feel they are incapable of governing themselves in a way that enables the survival of the species in the long term. Do you think we're gonna end up that way? And if so, how long until we get there?

    Henri Huselstein: I think, to me, that's one of the most fascinating things. I have to admit, unfortunately, that I'm a little too young to have watched Star Trek as a child or growing up; that was kind of before my time. But I find it fascinating that we are starting this journey now with something we have been thinking about for the last 40 or 50 years, ever since somebody first conceived that eventually we could artificially create intelligence. People were already thinking about what it means once it's there. Now it is here, to some degree, and nobody really knows what to do, or what the best way forward is. Politics hasn't started thinking about it; society hasn't started discussing these things yet. As humans, we inherently don't stop continuing to develop things like this, and we just eventually have it in front of us and don't really know what to do with it, which is incredibly interesting to me. I have no idea what the future holds. Generally I'm an optimist, so my prediction is that this is going to create a lot of freedom for a lot of people. But there's also going to be a time when this creates an incredible imbalance, because I think it already does, and I think it's quite noble that the people behind OpenAI, behind GPT, opened this up right now. They could have just kept it to themselves. It's something fascinating, right? Like, all of a sudden Apple releases this audiobook voice-over technology that really sounds like a human. That's creepy, if you ask me. That, to me, is already the kind of limit I am willing to accept. And I think it's an incredibly fascinating thing to watch develop, but also, at least that's my urge, to actually be a part of, rather than just look at from the outside.
You know, for me, with this type of technology, I would rather play a role, even if it is a small one, in making this good and making use of it in a good way, than just watch it from the sideline and not be in control. And I think this feeling of being on the sideline and not being able to influence what's going on, that's gonna cause problems.

    Sean Weisbrot: Yeah. I think the people that are part of it are the ones that are going to make the most money. Now, you said you weren't old enough to appreciate Star Trek as a child. I don't think I'm that much older than you.

    Henri Huselstein: I know that my dad watched it growing up, and for him it was a childhood series. I'm the one who watched Star Wars growing up, for example, first the old episodes and then the new ones.

    Sean Weisbrot: My dad watched both, but I wasn't really exposed to it until I moved to China, so I started watching it when I was nearly in my mid-twenties.

    Henri Huselstein: Oh, that's, that's great. That gives you a different perspective on this one.

    Sean Weisbrot: Yes. Now, I've also watched Westworld, and Westworld makes me feel a little bit scared about the idea of AIs governing us, because in Westworld it's not positive, it's not good for humanity. It's designed in a way that's meant to control humanity. Humans are seen as puppets, like, "Let's screw with them as much as we can." And I don't know what the chances of that are, but it also kind of scares me.

    Henri Huselstein: There's definitely potential for any type of technology to eventually overwhelm us in the beginning. And I think this kind of accessibility of that leap is also quite unique, if you ask me. I don't think people realize the leap that we're making. For example, people didn't realize it in the first place with the internet. People were laughed at, people were told, "What is this supposed to be? Am I supposed to send a message with this?" And all of a sudden, 20 years later, we all know exactly what it can be for. But I think it's probably one of the first times that we get to feel that type of advancement and leap in technology as a generation that is also so influenced by technology, a generation advancing towards this notion of artificial intelligence. At the end of the day, my belief about why AI will eventually end up being helpful for us is that right now it's a one-to-one representation of what we understand as intelligence, even if we don't understand certain operations anymore because the system is faster than us. Because we built it, it is built in a way that replicates what we know about how we learn, what we know and consider to be intelligent. There are already scholars, people in politics, people in tech, people with influence who are very aware of what comes once we are at that stage, and who should be shaping this. And at the same time, I think it is everyone's responsibility to be a part of it, because it's gonna affect everyone. So in whatever capacity we have, we should be having a voice.

    Sean Weisbrot: Is there anything we haven't talked about that you'd like to add as we come to a close?

    Henri Huselstein: I just saw that they still don't know exactly how they're gonna monetize it, which I thought they would think through before releasing it, but now they're doing it by asking how much you're willing to pay for it. Which is just, you know, when you see that somebody invented something that was too good not to show to everyone: "Fuck, I'm so proud of this, I need to", sorry for swearing, "I need to actually show this to everyone, and then I'll figure out what to do with it."

    Sean Weisbrot: So how can people follow up?

    Henri Huselstein: Connect with me on LinkedIn. Find me at Henri, which is the French way, H-E-N-R-I, and my last name is Huselstein, H-U-S-E-L-S-T-E-I-N. Maybe there are show notes, so Sean, can you just link that?

    Sean Weisbrot: I will. Thank you for your time and your energy; I appreciate it, Henri. Don't forget that entrepreneurship is a marathon, not a sprint, so take care of yourself every day. And if you haven't spent any time yet looking at ChatGPT and how it can either help or hurt your business, you need to do it right now. Thank you.
