Author Jeremy Kahn helps us think about the future.
Jeremy Kahn is the AI editor at Fortune Magazine and the author of the new book Mastering AI: A Survival Guide to Our Superpowered Future. In this podcast, Motley Fool employee Alex Friedman caught up with Kahn to talk about the current AI landscape.
They also discuss:
- Bill Gates’ initial hesitancy to invest in OpenAI.
- Where LLMs go from here.
- Developments in biotech.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our beginner’s guide to investing in stocks. A full transcript follows the video.
This video was recorded on August 31, 2024.
Jeremy Kahn: What is it that the human does best, and what is it that the machine can do best? Let’s each be pre-eminent in its own realm and pair the two together. If we think about it more like that, then we are able to master AI, and we will be able to reap the rewards of the technology while minimizing a lot of the downside risks.
Mary Long: I’m Mary Long, and that’s Jeremy Kahn. He’s the AI editor at Fortune Magazine and the author of the new book Mastering AI: A Survival Guide to Our Superpowered Future. My colleague Alex Friedman caught up with Kahn earlier this week to discuss the current state of the AI arms race and to take a look at the future. They also talk about what convinced Bill Gates to move forward with Microsoft’s initial OpenAI investment, how AI is speeding up drug development, and the changing relationship between man and machine.
Alex Friedman: You are the Fortune Magazine AI editor, and you were a tech reporter before this. At what point did you first hear the term artificial intelligence, and when did you really start taking it seriously?
Jeremy Kahn: I guess I first heard the term probably sometime in 2015. Before I became a tech reporter at Bloomberg, I was doing some finance coverage for a magazine Bloomberg had, and I was working on a story about London’s emerging tech hub. At the time, people said the most successful, but in some ways the most disappointing, exit from the London tech scene was this company called DeepMind, which I knew very little about. It had been acquired a couple of years before by [Alphabet‘s] Google for $650 million, which was the best exit the London tech hub had had at the time. But people were upset because they thought it could have become a huge company, and that maybe it sold out too early. I didn’t know anything about DeepMind, but I started to look into it, and that’s when I first heard about artificial intelligence. A few months after writing that story, I got a chance to move over to the tech reporting team at Bloomberg, and I started covering AI at that point. That was basically the beginning of 2016.
Alex Friedman: You’ve now been covering AI for years. I’m curious: after ChatGPT was released, were you surprised by the reaction to and adoption of the technology, or had you been waiting for this for a long time?
Jeremy Kahn: Well, yeah, I think all of us who had been following this for a while were wondering when it would break through into the general public consciousness. But I was surprised that ChatGPT was the thing that did it, and I was surprised by the reaction to ChatGPT. In retrospect, I probably shouldn’t have been, but because I’d been following it for so long, it seemed like the technology was making fairly constant progress. OpenAI, which I’d also been following for years, had, months prior to ChatGPT being released, created a model called GPT-3 Instruct, which was a version of their GPT-3 large language model, which itself had been out even earlier. But it was one that was much easier to control. One of the things you could do with the Instruct model was have it function as a chatbot, have it engage in dialogue.
But OpenAI had not released this as a consumer-facing product. Instead, they’d made it available to developers in a little thing they called the Playground, a sandbox where developers could use their technology. They let some reporters play around with it, and I had played around with it a little bit and thought, this is interesting, but I didn’t think it was going to be a huge thing. Then when ChatGPT initially came out, it looked like the same thing; I thought it was just an updated version of the GPT-3 Instruct model. But actually, I think the simplicity of the interface and the fact that they made it freely available for anyone to play around with made the thing go viral. It was the first time people realized that they could actually interact with this AI model and that you could do almost anything with it. I think the fact that it was designed for dialogue, through this very simple interface that looked like a Google Search bar, made all the difference. When the GPT-3 Instruct model was out, it was actually much harder to use: it had all these dials for controlling the output, which were great for developers but made it much more confusing for the average person.
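To make those “dials” concrete, here is a minimal sketch using the current OpenAI Python SDK. The model names, prompt, and parameter values are illustrative stand-ins (the original GPT-3 Instruct models are no longer offered), not what OpenAI shipped in 2022.

```python
# Hypothetical comparison of the two interfaces; assumes the `openai` package
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Playground-style completion call: every sampling "dial" exposed.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # stand-in for the old Instruct models
    prompt="Explain photosynthesis in two sentences.",
    temperature=0.7,        # randomness of word choice
    top_p=0.9,              # nucleus-sampling cutoff
    max_tokens=120,         # cap on response length
    frequency_penalty=0.5,  # discourage repetition
    presence_penalty=0.2,   # nudge toward new topics
)
print(completion.choices[0].text)

# ChatGPT-style call: just send a message; defaults handle the rest.
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain photosynthesis in two sentences."}],
)
print(chat.choices[0].message.content)
```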
Alex Friedman: You tell a great story in Mastering AI about Bill Gates’ skepticism about Microsoft’s huge investment in OpenAI. Why was he so skeptical, and how did Satya Nadella get Gates to change his mind?
Jeremy Kahn: Gates had been a big skeptic of these large language models. He thought they were never going to work, that they were not the path forward to superpowerful AI. They seemed too fragile; they didn’t get things right. He had played around with some earlier versions of OpenAI’s technology. OpenAI had created a system called GPT-2, which was the first system that could write a bit like a person, but if you asked it to write more than a few sentences, it went off in strange directions and stopped making sense. He played around with GPT-3 and thought it was slightly better, but it still had some of the same problems. In particular, Gates thought the real test of a system would be whether it could solve hard questions from the AP (Advanced Placement) Biology exam. He had tried GPT-3 on this, and it had failed on those AP Biology test questions, and as a result, he just really didn’t think this was going to go anywhere. But Satya Nadella knew this, and he let the OpenAI guys know that Gates was skeptical and that Gates in particular had this interest in AP Biology. Then, when OpenAI created its even more powerful model, GPT-4, which is now out and is the most powerful model currently available, one of the things it did before release was go to Khan Academy, the nonprofit online tutoring organization.
They had asked if they could partner with Khan Academy, and it turned out one of the reasons they wanted to do this is that Khan Academy had really good data on AP Biology test questions. It had lots of examples of those questions and lots of examples walking you through how to solve and answer them successfully. They made sure that GPT-4 was trained on those questions and answers from Khan Academy. As a result, GPT-4 was able to totally ace the AP Biology questions. When they brought that system back in to try out with Bill Gates and he tried his AP Biology questions on GPT-4, it completely aced them, and Gates was blown away. That’s what really convinced Gates that large language models might be a path toward superpowerful artificial intelligence. Since then, Gates has rowed back a little bit: he has said he thinks this is a big step in that direction but probably won’t take us all the way to systems that can really reason as well as humans can across a whole range of tasks. But it definitely impressed him, and it convinced him to allow Satya Nadella to continue to invest in OpenAI.
Alex Friedman: How do you think Microsoft’s $1 billion initial investment in OpenAI impacted the development of generative AI and the overall AI business landscape?
Jeremy Kahn: It was hugely important, because it allowed OpenAI to go ahead and train first GPT-3 and later GPT-4. It was really those models that helped create the landscape of generative AI systems that have since come out from competitors and from researchers. Without that investment, it’s not clear what would have happened. There were other people working on large language models, but the progress was much slower, and no one had placed as much emphasis on them as OpenAI. Without that billion-dollar investment from Microsoft, I think it would have been difficult for that to happen as quickly as it did.
Alex Friedman: We’re recording this interview at the end of August 2024. I’d love to hear your current analysis of the Big Tech AI arms race that’s been taking place over the last decade and where you think it’s headed.
Jeremy Kahn: It’s fascinating. There’s definitely a race on, and it’s not over yet, and it’s unclear who’s going to win. But the competitors are familiar ones: mostly the really big tech companies that have been around for the last two decades and dominated the internet and mobile eras. For the most part, it’s Microsoft, Google, and Meta, those three in particular, and then, maybe trying to catch up, Apple and Amazon. Those companies really are the ones at the forefront of this, and then you have one new entrant, OpenAI, but even OpenAI is very closely partnered with Microsoft. That’s basically the constellation you have. All of these companies are racing toward ever more powerful AI models built on basically the same architecture, which is based on something called a neural network, software loosely modeled on how the human brain works. Within neural networks, they are all using something called transformers, an architecture that Google actually invented in 2017. Google started implementing it behind the scenes in Google Search, where it helped clarify users’ intent when they searched for things, because it could understand natural language much better. But Google did not scale these systems up as much as OpenAI did, at least initially, and did not try to create systems that could generate content and write the way OpenAI did. Of course, once ChatGPT came out, Google was very quickly under all this pressure to catch up. I think at this point they’ve shown that they can catch up and have caught up. Gemini, which is Google’s most powerful model, is very close to, if not completely competitive with, OpenAI’s GPT-4; on some metrics, it may even be ahead.
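For readers curious what a transformer actually computes, here is a toy sketch of its core operation, scaled dot-product attention, from the 2017 Google paper. The dimensions and random inputs are invented for illustration; real models stack many such layers with learned weights.

```python
# Toy self-attention: each token's output is a weighted blend of all tokens'
# values, with weights given by query-key similarity.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention-weighted values

rng = np.random.default_rng(0)
tokens, d_model = 4, 8                              # 4 tokens, 8-dim embeddings
x = rng.normal(size=(tokens, d_model))
# In a real transformer, Q, K, and V are learned projections of the input.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one context-aware vector per token
```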
There are some other players in this race. There’s a smaller company called Anthropic, founded by people who broke away from OpenAI, that’s closely aligned with Amazon at this point and is very much part of Amazon’s effort to catch up in this race. They have a model called Claude that’s very competitive and powerful. Meta has jumped in with both feet, and it’s taken the approach that it wants these models to be open source and wants everyone building on its technology. It thought the best way to do that was to give the models away for free. It doesn’t have a big cloud computing business that it’s trying to support by offering proprietary models; instead, it thinks it’s going to benefit the most by open-sourcing these models. It’s created a model called Llama that’s very powerful and equally competitive. It’s just interesting to see where this is going to go. The models keep getting larger, and they are multimodal now, meaning they can take in audio and video and output audio, video, and still images as well.
They can reason about what they’re seeing in imagery and in videos. They can engage in very natural conversation over a mobile phone or through audio. The models are very interesting, but it’s not clear that they’re going to overcome some of their fundamental limitations. You may have heard about things called hallucinations, where models make up information that seems plausible but is not accurate. It turns out that as the models have gotten more powerful, they haven’t necessarily been hallucinating that much less, and some people think that’s a fundamental problem we’re going to need some other technique to solve before we actually get to the Holy Grail of the AI field, called artificial general intelligence. Again, that’s AI that could think and reason like a person across almost any cognitive task. It’s not clear how close we are to that, but we’re clearly a lot closer than we were before ChatGPT came out in late 2022.
Alex Friedman: In your book, you talk about how Apple was slower than Microsoft or Google in rolling out AI. Since you sent Mastering AI to print, Apple has released its version of AI, creatively called Apple Intelligence, driven in large part by a partnership between Apple and OpenAI. I’m curious: what do you think about Apple’s rollout of its own AI platform?
Jeremy Kahn: Apple was behind, and I think they needed to catch up. Apple’s instinct is always to try to do everything in-house. They had been trying for years to build advanced AI models of their own. They were not as successful, in part because I don’t think they ever devoted quite the computing resources to it, and also because they had a problem hiring some of the best talent, even though Apple has a very good reputation. Among AI researchers in particular, they were not seen as being at the cutting edge, and that became a self-reinforcing problem in a game where they really needed to get ahead. They ultimately decided to partner with OpenAI, which in some ways was an admission that they were behind. That has allowed them to get back in the game, though. They have so many devices out there; they have a huge distribution channel. Distribution channels do matter, and that’s an advantage they know they have and are trying to leverage. We’ll see what happens.
I think there’s a chance that people will want to use whatever Apple is offering just because they like Apple products and are already embedded in the Apple ecosystem. It’s a pain, as everyone knows, to switch your phone or switch to a different operating system for your laptop. I think most people don’t want to do that. If they can have a product that’s pretty good, or very close to the top of the market, without having to switch devices, that’s what they’re going to go for. Apple’s been smart by partnering with OpenAI, which does have the leading models in the market. Apple’s also taking an approach very much in keeping with its own strategic position around user privacy and data privacy: it’s going to try to keep as much as possible of the data you feed an AI chatbot or AI system on your device, rather than transmitting it over Wi-Fi or your phone network to the cloud, because that introduces all sorts of security and data privacy concerns. They’ve said they’re only going to hand off the hardest queries to OpenAI’s technology. Ultimately, they may try to have something that runs completely on-device.
The way AI is developing, the most powerful models tend to be very large and have to be run in a data center, so you have to use them over the cloud. But people are very quickly figuring out, often within six months, how to shrink those models down considerably and, in some cases, mimic some of the capabilities of the largest models with models small enough to fit on your phone. I think Apple is betting that that trend is going to continue, and that for what most users will want a digital assistant for, what they can put on the phone is going to be sufficient.
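One common shrinking trick is quantization: storing weights at lower numerical precision. The toy sketch below shows why 8-bit storage cuts memory roughly 4x versus 32-bit floats; real on-device stacks combine this with other techniques, such as distillation and pruning, and are far more sophisticated.

```python
# Toy 8-bit post-training quantization: keep int8 weights plus one fp32 scale.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0               # map widest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale           # approximate original weights

w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(f"bytes: {w.nbytes} -> {q.nbytes}")         # 262144 -> 65536
print(f"max round-trip error: {np.abs(w - w_hat).max():.4f}")
```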
Alex Friedman: What do you think about the partnership between Apple and OpenAI, and what does it mean for the space, especially considering the large stake that Microsoft has in OpenAI?
Jeremy Kahn: I don’t know how stable a partnership it is. I can’t imagine Microsoft’s thrilled about it, given its rivalry with Apple, but it’s a funny world in Silicon Valley; there are a lot of frenemy relationships. There’s already quite a lot of tension in the Microsoft-OpenAI relationship, because OpenAI sells services directly to some of the same corporate customers that Microsoft is also trying to sell to. Microsoft wants those people to use OpenAI’s services, but on its own Azure cloud; it doesn’t necessarily want them buying those services directly from OpenAI. You already had that tension, and the Apple relationship just adds to it. But it’s also not clear how long-lasting the Apple-OpenAI relationship will be. I don’t think Apple necessarily wants to be in a position where it’s dependent on OpenAI for what may be the most important piece of software on your device. While Apple is primarily a device company, it has always known that software helps sell those devices and helps cement people to them. If that glue or that cement is being provided by a third party, that’s going to be problematic for Apple strategically in the longer run. Apple is still trying very hard to develop its own models that will be competitive in the marketplace; it just hasn’t managed to do so yet. That’s why I think it had to partner with OpenAI. But how long-lasting that partnership will be, we’ll see.
Alex Friedman: Most people know OpenAI and ChatGPT. What comes next after ChatGPT? Where are we headed?
Jeremy Kahn: Well, I think the next thing we’re going to see in the very near term is what they call AI agents. It’ll probably be an interface that looks a lot like ChatGPT, but instead of just producing content for you, you can prompt the system to go out and take action for you, using other software or across the internet. It will become the main interface, I think, for most people with the digital world. Right now you can ask ChatGPT to suggest an itinerary for a vacation, but you still have to go and book the vacation yourself. What these new systems will do is suggest the itinerary, and then you can say, that sounds great, go make all those bookings, and it will do that for you. It may go out and research things for you and then take actions that you want it to take. It might go out and negotiate on your behalf; there are already some systems doing insurance negotiations on behalf of doctors to get preapprovals for patients. I think that’s an example of where this is all heading. Then within corporations, you’re going to have these systems perform lots of tasks across different pieces of software, tasks that now have to be performed manually by people, often by cutting and pasting things between applications and doing something with the thing they create. That’s all going to be streamlined by these new AI agents.
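The basic pattern behind such agents is a plan-act-observe loop. The sketch below is a deliberately stubbed illustration: the “LLM” decision function and the booking tools are invented placeholders, where a real agent would call a language model API and live travel or search services.

```python
# Minimal agent loop: decide on a tool, run it, feed the observation back.
def llm_decide(goal, history):
    """Stub standing in for an LLM call that picks the next tool (or 'done')."""
    if not history:
        return ("search_flights", "NYC -> Lisbon, Oct 12-19")
    if len(history) == 1:
        return ("book", history[-1])
    return ("done", None)

TOOLS = {  # placeholder tools; real ones would wrap live APIs
    "search_flights": lambda q: f"best option for {q}: fictional flight #123",
    "book": lambda item: f"confirmed booking: {item}",
}

def run_agent(goal):
    history = []
    while True:
        tool, arg = llm_decide(goal, history)   # plan the next action
        if tool == "done":
            return history
        history.append(TOOLS[tool](arg))        # act, then record observation

for step in run_agent("Plan and book a week in Lisbon"):
    print(step)
```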
Alex Friedman: Other than these agents, what are some of the other trends in AI that get you the most excited?
Jeremy Kahn: Agents are interesting, but I think in order to have agents that are really effective, we’re going to have to have AI that is more reliable and has better reasoning abilities. There are certainly some hints that that is coming. You hear tantalizing rumors and stories suggesting we’re getting closer to agents that really will be able to reason much better than today’s large language models can. We’ll see where that goes. There are some AI researchers who really doubt this will be possible with the current types of architectures and algorithms we have, and who think we’re going to need new algorithms to achieve that reasoning ability. We’ll see. But I think that’s really interesting. I’m also very excited about what AI in general is going to do for certain big fields of human endeavor. One is science and medicine. I’m very excited about AI being used to discover new drugs to treat conditions. I think we’re going to make tremendous progress in curing and treating diseases through AI in the next couple of years. There are already systems today, a bit like large language models, that you can prompt in natural language to give you the recipe for a protein that will do a particular thing: it will bind to a particular site, it will have a certain toxicity profile. That’s going to tremendously speed up drug discovery. And across the sciences, you see people using AI to make new discoveries.
I think there’s potential to discover new chemical compounds, which may have big implications for sustainability and our fight against climate change. I think we’re going to see big breakthroughs in science. Then in medicine more generally, I think coupling AI with more wearable devices will give us many more opportunities for personalized medicine. That’s one of the areas I’m most excited about. The other one I’m really excited about is the use of AI in education, despite the panic among a lot of teachers when ChatGPT came out that everyone was just going to use it to cheat. If we go a few years ahead and look back, I think we’re going to see a tremendous transformation of education, where every student has a personal tutor that can walk them through how to solve problems and, designed the right way, not give away the answer but use a Socratic method to lead the student to the answer and really teach the student.
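That “designed the right way” constraint often comes down to the instructions given to the model. Here is a minimal sketch of the idea using the OpenAI Python SDK; the prompt wording and model name are illustrative assumptions, not Khan Academy’s actual tutor.

```python
# Hypothetical Socratic-tutor setup: the system prompt forbids giving answers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_PROMPT = (
    "You are a patient tutor. Never state the final answer. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and offer a small hint only after two unsuccessful attempts."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": "Why does 1/3 + 1/6 equal 1/2?"},
    ],
)
print(reply.choices[0].message.content)  # a guiding question, not the answer
```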
Alex Friedman: You mentioned biotech companies really being able to do some cutting-edge research to develop new treatments. Are there any biotech companies in your mind right now that are leading the way?
Jeremy Kahn: Yeah, some of the ones I really like are small, private ones. I talked about a company called Pfluen in the book, and there’s another company called LabGenius that’s very good, but those are smaller companies. If you look at the bigger ones that are publicly traded, there’s BioNTech, which is famous for its work on the COVID vaccine but has also invested very heavily in these AI models and has done some really amazing stuff. I’ve been very impressed; I heard one of their lead scientists give a talk at a conference just a couple of months ago, and what they’re doing using these same large language model-based systems to discover new drugs is very impressive. I definitely think they’re one to watch. But the whole industry is moving in this direction. Recursion Pharmaceuticals is another one doing lots of interesting stuff, and they’re also publicly traded. I would just watch the whole space in general.
Alex Friedman: What is it in particular that those companies are doing that you find so interesting in terms of how they use AI?
Jeremy Kahn: Well, I think it’s that they are using these large language model-based approaches to discover new compounds and to accelerate all the preclinical work needed to bring a drug to the clinical trial stage. You can’t really shorten the clinical trial stage that much. There are places in clinical trials where AI can help as well: it can help select the best sites for trials, and it can potentially help run trials slightly more efficiently. But you can’t really shortcut the clinical trial process, because it’s absolutely necessary for human safety and for making sure things work. There’s a lot that happens before a compound can even make it to a clinical trial, though, and most of that can be accelerated or shortcut through the use of this new genre of AI models. I think looking at companies that have invested heavily in those approaches is interesting. The pharmaceutical industry has actually been very slow to adopt AI. If you look at the big pharma companies, they’ve been very slow: a lot of their data is very siloed, and they’ve been very wedded to traditional drug discovery techniques, which are more human- and intuition-led. They’re now playing catch-up, I think, mostly through partnerships with these smaller, venture-backed private companies.
Alex Friedman: Switching gears, what aspects of AI keep you up at night?
Jeremy Kahn: There are lots of risks I’m worried about, and they’re probably not the ones that get the most attention. When I go on podcasts like this, I almost always get asked about mass unemployment, which is a risk I’m not really worried about. I don’t think we’re going to see mass unemployment from AI. There’s going to be disruption, and some people may lose their jobs, but on a net basis, I think we will see, as we have with every other technology, more jobs created in the long term than are lost. The other one I get asked about a lot is, of course, the existential risk of AI somehow becoming sentient and killing us all. I think that’s a very remote possibility, not within the capacity of the systems we’re going to see in the next five years, and we’re starting to take some sensible steps to take that risk off the table. At least I hope we take those steps. Those are not the ones I’m most worried about. I really worry about our overuse of this technology in our daily lives and how that may strip us of some of our most important human cognitive abilities, including critical thinking. It’s just too easy, when you get a very pat capsule answer from a chatbot or a generative AI search engine that gives you a whole summarized answer, to accept that answer as the truth and not think too hard about the source of the information.
That’s even more true than with a Google Search, where you still have links and you still have the idea that the information has some provenance, so you have to think a little bit about where it’s coming from. When you get these capsule summary answers from an AI chatbot, the tendency is not to think too hard about it, and I worry about us losing some critical thinking skills. Then I worry about the loss of writing ability, because one of the dangerous things about generative AI is that it creates a world where it’s easy to imagine that writing is somehow separable from thinking. I don’t think the two are separable at all. It’s through writing that we actually refine our thinking and refine our arguments, and if we end up in a world where people don’t write anymore, where they just jot off some bullet points to give to the chatbot and have it write the document for them, then I think our arguments will get weaker, and we’re going to lose a lot of our writing and thinking ability. I also worry about people using AI chatbots as social companions. There’s already a significant subpopulation of people who do this and become very reliant on AI companion bots. I worry about that because, again, it’s not a real relationship with a real person, although these chatbots are pretty good at simulating a real conversation.
They actually have no real wants or desires or needs. They’re generally trained to be very pleasing to people and not to challenge us too much. That’s very unlike a relationship with a real person, who does have needs and desires, isn’t always pleasant, is sometimes in a bad mood, and certainly isn’t always trying to please us. I think some people are going to ask, why should I bother with real human relationships? They’re so much messier and more complicated and harder than a relationship with a chatbot. The chatbot gives me everything I need in terms of being able to offload my feelings onto it; it gives me affirmation, and that’s what I want. I worry that we’re going to have a generation of people who increasingly do not seek out human contact. I think we’re going to have to guard against that danger, and I think we may actually need time limits on how long you can use an AI system as a companion chatbot, particularly for children and teenagers. I worry about those risks. I also worry, to some extent, about the consolidation of power in the hands of just a very few companies. I do think that’s a concern. In general, there’s a tendency with this technology to create winner-take-all economics. For the most part, that means the biggest firms out there right now, the ones with the most data, which they can use to refine AI systems and create systems that are more capable than others’, will accrue more and more power. I think we need to be worried about that a bit. Those are some of the risks I worry about most.
Alex Friedman: One last question, given all of those challenges, what does it mean to truly master AI?
Jeremy Kahn: I think mastering AI is all about putting the human at the center of this and thinking very hard about what we want humans to do in our organizations and in our society. What processes should really be reserved exclusively for humans because they require human empathy? I talk a lot in the book about how one of the challenges with AI is that we will put it into places where it really doesn’t belong, because the decisions are so dependent on human empathy. In the judicial system, for example, you want to be able to appeal to a human judge; you do not want the judge simply blindly following some algorithm. I worry that increasingly we’re going to be in a world where we put AI systems in places where they’re acting as judges and arbiters on human matters where empathy is required, and these systems don’t have any empathy. I also worry that we’re going to look at these systems as a direct substitute for humans in lots of places within businesses and companies when, actually, we get the most from them when we use them as complements to human labor: when they’re assistants, and when we ask what is it that the human does best and what is it that the machine can do best. Let’s each be pre-eminent in its own realm and pair the two together. I think if we think about it more like that, then we are able to master AI, and we will be able to reap the rewards of the technology while minimizing a lot of the downside risks.
Mary Long: As always, people on the program may have interest in the stocks they talk about, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell stocks based solely on what you hear. I’m Mary Long. Thanks for listening. We’ll see you tomorrow.