Trade or Treason?

Listen to this episode

Speaker A: Hello, and welcome to Slate Money, your guide to the business and finance news of the week. I’m Felix Salmon of Bloomberg. I’m here with Elizabeth Spiers of the New York Times.

Speaker B: Hello.

Speaker A: With Emily Peck of Axios.

Speaker C: Hi.

Speaker A: We are going to talk about insider trading in the oil futures market. We are going to talk about whether lawsuits against Meta can change its ways. We’re going to talk about OpenAI deprecating Sora. We thought it was a big thing. We thought it was a game changer. Tyler, the Creator, stopped an $800 million project because he saw Sora, and now it is no longer. It is a dead parrot. We’re going to talk about that. We have a Slate Plus segment about Marco Yield, the sexy firefighter in Cedar Falls, which you just need to listen to in order to understand. It’s a fun one this week, so stay tuned. It’s all coming up on Slate Money. So, Emily Peck, as the markets reporter for Axios.com?

Speaker C: That’s me.

Speaker A: Tell me about the big markets mystery of the $580 million bet that oil was going to move.

Speaker C: I’m going to just set the scene from my vantage point, which is that last week it was Sunday night, and we were like, no, last week it was the weekend. And we were like, let’s relax, it’s the weekend. But then our president posted something along the lines of, Iran has 48 hours and if they don’t surrender, I’m gonna blow up their power plants. And everyone was like, oh, no. And I was like, ugh, oh, no. Like, he’s ruining my market’s morning on Monday.

Speaker B: And also, obviously, it’s bad to buy with these powers of war crimes, right?

Speaker A: Donald Trump ruining my day. More respect for people who have Monday morning newsletters and don’t want to work at the weekends.

Speaker C: Yes. So I was like, you know what? I’m just going to wake up really early Monday and deal with whatever happens in the markets then, which is what I do now. I wake up very early and I check the futures, the oil futures. What’s oil trading at? What are stock futures trading at? And I see up or down, whatever, check it out.

Speaker A: Remember when we used to wake up in the morning and check the TED spread? Now we wake up in the morning and check Brent crude.

Speaker C: I mean, I used to just check to see if I had enough coffee. Anyway, so I wake up and I’m like, oh, the markets, it looks bad. They are barfing. Everything’s down, down, down, except the price of oil, which was going up, up, up. And all the people who are awake with me are also, oh, my God, the market’s blah, blah. So I write that all up, get the newsletter ready. 7am, I go downstairs to get another coffee. And then all of a sudden, it’s like something’s changed. What’s different? The President has posted again, it’s 7:05am on Truth Social. He says, let’s pause now. Let’s give Iran five more days. Five more days until the season finale or whatever the next episode is of the war. And then all the lines on the charts go in the other direction. Up go the stocks, down goes the oil price. Big deal. Then we learn, I think the same day, maybe the next day, that mysteriously, traders, we don’t know how many, one or many, placed bets in the oil futures market and the stock futures market before the President posted his little post. And so, in other words, they bought a bunch of stuff and sold a bunch of stuff before the stuff went down or up, depending on which way they wanted to benefit.

Speaker A: There were big trades. We should explain here, which is blindingly obvious but worth making the point anyway, that in any trade, there’s a buyer and a seller. So like when you see a big trade at 6:45 in the oil market, one person on one side of that trade was selling and the other person on the other side of that trade was buying. So you can always say that people were betting in anticipation of what the President was doing. But then, conversely, there was always someone on the other side who had taken the opposite position.

Speaker C: Correct. Then this was reported in. I’m not sure if the Financial Times reported it first or Bloomberg, so apologies to whoever got the scoop. But they reported on it and then it got picked up everywhere because in other moments, most recently in Venezuela, there were bets placed, I think on Polymarket, a more obvious bet that was like, will Donald Trump invade Venezuela, yes or no? And someone said yes. And then it happened and it wasn’t.

Speaker A: And then it didn’t pay out because technically it didn’t count as an invasion. I love that whole story, but yeah.

Speaker C: So there have been a lot of stories now about this, like, oil market conspiracy theory and insider trading. There have been insinuations that, you know, someone in Trump’s inner circle has traded on the inside information. No one knows anything.

Speaker A: And I’ve been curious about it, which has not stopped Paul Krugman from using the word treason.

Speaker C: Paul Krugman has said it was treason, so I’ll just pause here for comment.

Speaker A: So the first thing I want to say is that this is a little bit of a sort of boy who cried wolf situation, because Donald Trump tried the same thing at the end of the week where he’s like, actually, I want to give Iran another 10 days rather than, like, bombing them even more tomorrow or whatever.

Speaker C: Wait, this week?

Speaker B: Right? Yes.

Speaker A: Yeah. And markets basically didn’t move. Like, these tweets that he sends out, which I insist on continuing to call tweets, even though they’re posts on Truth Social. These tweets he sends out, like, they’re not always going to have massive effects on the market. I think often he wants them to, but sometimes they do and sometimes they don’t. So if you knew this tweet was coming out, it’s not like you had a 100% chance of making money when Trump tweets, but there is a pretty good chance. And the other thing I want to say is that when people say there was a $580 million bet in the oil futures market, that’s $580 million notional. It’s not that you need to take $580 million of cash and open up your Robinhood account and say, sell oil. It’s all margin. And the amount of actual cash you need to place that is not remotely that big. But the number of people who have access, the number of individual humans who have the ability to place a $580 million notional bet at 6:45 on a Monday morning in the oil futures market, especially in America, is tiny. This is not something you can do from your Robinhood account. Right. You need a pretty high degree of financial access and sophistication.

Speaker B: That’s true, but I don’t think that that necessarily contradicts the idea that it might have been inside information, because you don’t necessarily need somebody in the administration to be trading themselves. They just need to leak it to somebody. But I do think that Krugman is maybe going out on a limb too far on this when he’s saying it’s de facto treason.

Speaker A: So the first thing is that my initial reaction to this was, well, obviously it was MBS, right? It was the Crown Prince of Saudi Arabia, basically, who talks to Trump on a regular basis, and they chat away on the phone. And as everyone knows, when Trump is talking on the phone, he’s just thinking out loud. He is constitutionally incapable of keeping a secret. And so if he’s about to send out a tweet saying he’s giving Iran a few more days, then he will say that on his phone call to the Crown Prince.

Speaker B: That’s probably true.

Speaker A: And if the Crown Prince listens to that and is like, oh, this is an easy way for me to make a quick billion dollars, there’s literally no rule against that, right? There’s no Saudi Arabian rule against insider trading. So he can do that. And I think that is probably maybe the most economical, sort of Occam’s razor kind of explanation of what happened. Because otherwise it is a little bit difficult to work out why you would have this big out-of-hours spike in futures trades. There are conceivable reasons. It was someone waking up on Monday morning and suddenly remembering that Friday was triple witching or something. A lot of people are kind of contorting themselves into trying to come up with an innocent explanation for this. And this is definitely not, you know, a cut-and-dried case. But by the same token, it does look a little bit suspicious.

Speaker B: I find it weird that sophisticated markets people listen to anything that Trump tweets or says because he’s, you know, been proven to just say things, whatever is convenient in the moment. And whenever he suggested that there had been, you know, talks last week, the Iranian spokesperson came back and said, we don’t know what he’s talking about. Nothing has happened. And now I think we’re in a place where the Iranian spokesperson is more reliable than Trump in terms of delivering, you know, what’s actually happening.

Speaker A: Well, I don’t think what’s actually happening in terms of talks matters. I think what matters is the imaginary talks in Trump’s mind. And if the imaginary talks in Trump’s mind go really well and the imaginary Iranians that he’s talking to put an imaginary deal on the table that he agrees to and then he stops bombing Iran, that’s all we need. Right? You don’t actually need any real Iranians to agree to anything at all. All you need is for Trump to say that Iran has agreed to a deal and now the war is over and then everyone gets really happy and the price of oil comes back down and, you know, so on and so forth.

Speaker B: This seems suboptimal.

Speaker A: It is a bit suboptimal because at some level what you really need is real Iranians to stop mining the Strait of Hormuz, right? And so, yeah, you do kind of need someone in Iran to have some knowledge of the so-called deal. But at some point, if the Americans and the Israelis stop bombing Iran, it is reasonable to assume that at some point the Strait of Hormuz will reopen.

Speaker C: Well, it seems like what’s been happening, it’s not that investors or markets believe or don’t believe whatever the president posts. It’s just they take it as a signal. So when he posted his threat, I think the markets were like, oh, d***. Like, he’s escalating. He doesn’t want to stop what he’s doing. And then when he posted his retraction, they were like, oh, he actually does want to stop doing this, you know. And anytime he posts, like, we’re working out a deal, it’s not that people are like, oh, he’s working out a deal. I think at this point, no one really understands what’s happening to that level. I think it’s just a signal that he doesn’t want to keep doing this. So if he’s less willing, if the US is less willing, then the bet is this will end. Although I will say that what you were talking about, Felix, how the new deadline was up on Friday, supposedly, and then he came out again and said something about, we need more time. I look at FactSet all the time now, and there was like a big breaking headline: President says more time for negotiations. But the markets didn’t really respond. And then when I woke up on Friday morning to see how the futures were doing, they were tanking. It didn’t seem to work as well. Like, there are diminishing returns to the strategy.

Speaker A: Yeah, exactly. This game is not a good repeat game. When Trump just keeps on pushing the deadline back over and over again, people realize that there is no deadline and the whole thing is just gonna keep on going on. And, like, he’s like, I think we should have probably two weeks of more bombing, and then it’ll be over. And he’s been saying that for how many weeks now? And, like, it’s always two weeks in the future. Just stop the bombing already. I do have this idea that he has two little voices on his shoulder, you know, the angel and the devil. And, like, on one shoulder you have, like, Bibi Netanyahu and Pete Hegseth going, bomb, bomb, bomb. And on the other one, you have, like, reasonable people and Howard Lutnick going, markets, markets, markets. And he’s just, like, depending on which one he talked to most recently, he’s either very hawkish or kind of conciliatory.

Speaker C: I mean, we saw this. It’s the same playbook he used with tariffs. He was like, really strong tariffs. Then he quote unquote tacoed. And then he was like, no, still strong tariffs. And the markets were like, oh. And then, you know, by the time you get to the third taco, it just doesn’t taste as good.

Speaker B: I think also he just, you know, catastrophically misunderstood what it means to kind of go in and bomb Iran and then extract yourself. I don’t think that he can get out of it that easily. And I think he was anticipating we would just show up for a few days and then leave and, you know.

Speaker A: Well, I mean, that was the thing that surprised me. We start this bombing, we kill the ayatollah, and then I’m like, okay, mission accomplished, game over. This thing should all be over within, like, 48 hours. And then it wasn’t, right? And the bombs kept on falling. And at that point, everyone was kind of just looking at each other going, no one has a clue what the objectives are here. No one has a clue why we’re dropping all of these bombs. No one has a clue what we’re trying to achieve. So at that point, the only thing you can look to, to get a feel for when and whether the war is going to end, is Trump tweets. Because it’s all just in his head. And that is why they, you know, do move markets until, as Emily says, they don’t. Because we have learned that they have no predictive power. Right? Whatever he says on Truth Social doesn’t seem to have a huge amount of effect on what is actually happening on the ground.

Speaker B: So where do we think, in the absence of an actual functioning, trustworthy head of government, people are getting their information, primarily? How do they actually react to what Trump is saying? Or does it just not even matter?

Speaker A: Well, I think that’s what we’ve said, right? We have now kind of reached the end of the point at which we take his tweets as major announcements. And we’re going to have to see an actual ceasefire or something like that in order for markets to take anything seriously from this point onwards. I think this trade, if it was an inside trade, is a kind of one-off deal. Like, you can get inside information about a tweet tomorrow, and that inside information is not going to be able to make you any money.

Speaker C: But oil’s still not. I mean, there are people talking about how the oil price is still optimistic that this will end. If it was really pricing in a long war with, like, 20% of oil offline because of the Strait of Hormuz, it would be higher. That was my understanding from reading all the analysis.

Speaker A: Yeah, yeah. The price of oil can definitely move, like. Right now there’s this sort of probability distribution of the amount of time this war will go on.

Speaker C: Yeah.

Speaker A: And that probability distribution does not have, like, 100% of this is going to go on for years. Right. People are still assuming. And we got a little bit of pushback last week when I’m like, he’s definitely not idiotic enough to try and invade Iran, with its 40 million people and, like, mandatory conscription, kind of a regional superpower. And then everyone kind of started writing in to me going, well, actually, he is. Still, I think if you look at the sort of implied probabilities that are in the oil price, the consensus is he probably won’t. Right now. I’m not saying that means he won’t. I’m just saying that’s what the oil price is saying. And so, yeah, if the consensus moves towards he probably will, then the oil price will go up.

Speaker B: Well, there was one piece of reporting that said there was an administration staffer who said anonymously that part of the reason Trump was doing this was that he was thinking about George W’s approval ratings during the Gulf War, and they were very high. And he thought, well, my approval ratings are tanking. Maybe we’ll just go have a war and they’ll go back up. And the opposite has happened. His approval ratings are tanking even with his own people and among independents. He has a 25% approval rating, which is a historic low for him.

Speaker A: And this, by the way, just to be clear, is the exact same thing that’s happening with Netanyahu in Israel. Right. He clearly was hoping that, hey, having a war is going to keep me in power and, like, improve my popularity. No. Nope. Like, no one likes this war.

Speaker C: This guy at Deutsche Bank came up with a pressure index that tracks the pressure on Donald Trump using his approval rating, inflation expectations. And what were the other things? Oh, stock market performance. And he charts it, and it’s mad high right now. It’s real high, the pressure index. It’s higher than it was on Liberation Day or the Greenland dust-up, like, super high pressure index, which I guess is another way Wall Street is sort of thinking that he’s going to back down. And the question, I guess, is even if he does fully back down, there’s another side. It’s not just one person doing war and the other side has no say. That’s the whole thing with a war.

Speaker A: Right, exactly. So, yeah, the sort of Iranian reaction function is something that the markets have really no ability to model. Right, yeah, that’s a good point. Which is, like, if Trump just stops bombing tomorrow and claims that he has a deal, how do the Iranians react to that? When and how do they start to allow tankers to move through the Strait of Hormuz again? All of these kinds of things are just genuine unknowns, and the only way we can find out is to wait and see what happens.

Speaker C: Yes. And the Deutsche Bank pressure index has a flaw, which is they don’t take the Iranian side into account at all. There’s no Iranian pressure. They’d need a separate index for them or something.

Speaker A: Yeah, because that’s not, like, direct pressure on Trump. Right. That’s just a market thing.

Speaker C: Right.

Speaker A: But it’s not a Trump thing. The pressure index is just a question of, like, how likely is Trump to taco because he has so much pressure. And the idea, the sort of intuition behind the pressure index, is that the higher the amount of pressure on Trump, the more likely he is to chicken out. And in this case, chicken out just means stop bombing these people, because no one knows why we’re bombing them in the first place. But obviously chickening out in and of itself doesn’t necessarily. Well, certainly does not bring us back to the status quo ante. Right. The world has changed in very profound ways over the past few weeks, and it is not going back to where it was.

Speaker C: I mean, chickening out in war is called losing. I mean, right? Like, it’s not like chickening out with tariffs, where you’re like, never mind. And everyone’s like, great. It’s different. It’s like, but.

Speaker A: But tariffs are the same, right?

Speaker C: Flag, like, totally different.

Speaker A: We are living in a post-Liberation Day world of way higher tariffs than we ever had in the 50 years beforehand. Right. So even in the wake of the Supreme Court ruling, tariffs now are multiples, many times higher than they were before Liberation Day. And no matter what happens, even if we get a neoliberal Democrat president, which is highly unlikely because I don’t think there are any neoliberal Democrats anymore, but even if we did, we would still not get tariffs back down to pre-Liberation Day levels, let alone tariffs back down to pre-first-Trump-administration levels. That kind of glorious Obama era utopia is not coming back.

Speaker B: Well, one worrying thing is that when Trump is told that he’s wrong, he tends to double down. And he’s done that with tariffs. But he could do it here too, regardless of what his poll numbers are. He’s already been very reticent to admit that maybe this was not a good idea or to articulate a plan for withdrawal.

Speaker A: So yeah, I don’t think he did do that with tariffs, Elizabeth. Like if you look at what happened on Liberation Day and those tariffs that he announced in April of last year, he announced the very high tariffs, he was told that he was wrong. Then there was a massive amount of spinning and changing and negotiating and announcing and all the rest of it. And at the end of all of that crazy period where tariffs changed every day, they were not double where they were, they were lower than where they were.

Speaker B: They’re still changing them though. And when the Supreme Court said that the tariffs were illegal, he was like, well, I’m just going to go find another way to do them.

Speaker A: I’m not saying he gives up entirely, but there’s a difference between giving up entirely and doubling down. The idea behind doubling down is you do it even more than you initially said. And I don’t think that he’s done tariffs even more than he initially said. Like, max tariffs was still kind of circa April 2025. We haven’t seen him impose greater tariffs than that initial attempt.

Speaker C: He isn’t doubling down, he’s just down.

Speaker A: Yeah, he’s just down. He’s singling. But let’s move on and talk about social media, because we haven’t actually talked about social media in a while. Remember, all of this AI hashtag discourse has kind of crowded out the social media hashtag discourse. And isn’t that sad? Don’t we miss the social media? No, actually I don’t miss the social media discourse at all. But the courts move slowly, and so we are still litigating, quite literally, social media. And we had a couple of important court judgments, one in New Mexico and one in. Where was the other one?

Speaker B: California.

Speaker A: California. Basically finding Google and Meta, or specifically Meta, but also Google, liable in financial terms for creating products that were, and are, harmful to children. And the metaphor here is that this is similar to tobacco, in that, you know, if you get a bunch of court cases finding companies responsible and fining them millions of dollars, and each one of those has one plaintiff, and then you multiply the number of plaintiffs by the number of people on the planet, pretty soon the companies are insolvent and they’re going to need to come to some major global settlement just to save their shareholders.

Speaker B: I think this is not a great metaphor, because the lawsuit says that the problems are about product design. The ways that Meta and YouTube have designed their products so that, between endless scroll and the way the algorithm works, it becomes sort of addictive for the user. But that’s a problem you can fix. You can’t exactly go make cigarettes healthy and not addictive. Meta can make a decision to, you know, change some of these design elements so that they’re no longer as harmful.

Speaker A: Yeah, but wouldn’t they still be liable for a decade or more of harm that has already happened? There are still a billion people who have been harmed.

Speaker B: Normally, whenever companies go and remedy whatever the problem was, their liability goes down because they’ve made some good faith effort to fix it. I don’t know what the statutes of limitations are on suits like this, so you may be right, but generally remedy makes a difference in terms of whether that happens or not.

Speaker C: Well, regardless of whether it’s a perfect-fit metaphor, I was really kind of blown away by the results. Like, I guess in the California jury trial, you know, this was the first time this theory has been really tested, that social media should be treated like a product and these features should be treated like product liability cases versus being treated like free speech cases. And it seems like, just as a test run, that theory is holding up. So that’s really interesting. And if the only thing that happens is Meta has to sort of revamp the way the algorithm works or something, that could be a really powerful change.

Speaker B: You know, it may actually have some implications beyond just social media, because what the courts are ruling on is these product designs that are built into a lot of applications, not just social media. There was a great Wired story about the post office site where you go and, you know, change your address whenever you move, and you can do voter registration through that. And there’s one company that controls that site, a private company. And so if you’ve ever used that site, especially in recent history, it will, you know, direct you to pages that you didn’t want to go to. You’ll sign up for things without really realizing that you’re doing it. You can’t click out of ads because the X’s are, you know, too small or whatever. And these are just hostile user design elements that are in a lot of crappy apps. And so the argument that the court is making is that in Meta’s case those addictive features caused personal harm. In the California case, the plaintiff was a young woman named Kaylee who said she’d been using social media since she was 6 and that it gave her body dysmorphia. And she’s had serious health problems. And I think there are plenty of other, you know, applications where you can draw a sort of similar comparison to the sort of harms they do. I think social media is so ubiquitous that maybe that’s, you know, why these cases are going to be more high profile.

Speaker B: But thinking about it from a design perspective, if these things are sort of widely acknowledged as harmful, that puts a lot more companies than just these social media companies at risk.

Speaker A: I don’t really buy that one, I have to say. Gaming in particular, well, gaming perhaps. But I think it’s hard. I think just because something has dark patterns does not get you to a multimillion-dollar award. Right. So take your example of the post office website, which is a terrible website. Basically, when you move, there’s a million people who are like, oh, when people move, they spend a whole bunch of money on, like, getting new Internet service, or they want to buy new furniture or something. And so there’s a whole bunch of people who will pay a lot of money to get your information so they can sell you stuff. And this website is s***, but it’s not going to give you body dysmorphia. It is going to be very unlikely that anyone is going to be able to take this company to court and say, they served me a bunch of ads that I didn’t want to see and that screwed up my entire life and they owe me millions of dollars.

Speaker B: We’re talking about a range of harm here. And they’ve already been fined by the FTC for this kind of behavior. You know, they’ve already lost part of that in court.

Speaker A: So this is the big question that we’re talking about here, right? Insofar as there is pressure on social media companies to change their ways and make their products less harmful to children, is that pressure going to come from governments, and specifically the European regulators, because the US regulators seem to be out to lunch right now? Or is that pressure going to come from, like, the tort system in the United States, and individual plaintiffs bringing individual lawsuits against the companies, as we saw in this case? Historically, all of the maneuvering has happened at the government regulation level, the FTC level, the European level, and so on and so forth. And that has been where the companies have been negotiating and doing all of their deals. Now the arena is moving over to the courts, or it looks like it might start moving over to the courts, in the case of social media companies. I just don’t think that individual civil lawsuits are similarly going to make a huge amount of difference on something like crappy ad-filled websites.

Speaker B: I agree with you. Like, you know, they’re not going to turn around on one lawsuit. But I do think that when they evaluate risk and liability, especially if they’re public companies and have shareholders, they will have pressure to change the way that they do business if they consider this an ongoing liability, the cost of which would be untenable for them.

Speaker C: We’re already seeing the stocks of these companies move lower on the verdicts. I don’t know if the stocks are the best indicator, but it seems to be an indicator that there’s worry that there’ll be more issues like this going forward. And I think even AI companies are going to have a similar issue. There are already a bunch of suits against OpenAI over encouraging teens to do harm to themselves and things like that.

Speaker A: I would say OpenAI in particular rather than AI companies in general. Like there are a lot of AI companies and OpenAI seems to be the one that has been most aggressive about optimizing for engagement. Although this is actually the perfect segue.

Speaker C: We’ll talk about this next.

Speaker A: Yeah. We should absolutely talk about OpenAI saying, actually, we spent a huge amount of money creating a product that was designed to optimize for engagement and to get millions of people to engage with our product and create videos. We signed a $1 billion deal with Disney so that they could create videos using Disney characters. We talked about this on Slate Money at the time. And then they woke up one morning and said, never mind. Disney was completely blindsided.

Speaker B: I think this is an example of the tort system kind of, you know, doing its job, because they are seeing this influx of people who are bringing cases against OpenAI for stuff like what Emily was talking about. And they have explicitly stated, I think, that part of the reason why they’re shutting it down, or at least this is the thinking, is that it uses too much compute, but also it gives them way more legal exposure because of deepfakes and IP theft and stuff like that.

Speaker C: What I was going to say from the last conversation is just that, I think, like. Have we already talked about this on other episodes? But social media is on the decline, in my opinion. Like, AI is moving in. People like talking to chatbots more than they like talking to people. It just seems like the companies themselves, Meta in particular, are more focused on AI now than on social media, which, I don’t know, given how their time with the metaverse went, I wonder about that effort too, personally.

Speaker A: But okay, yeah, I mean, how good is Mark Zuckerberg at seeing where the puck is moving? I don’t know.

Speaker C: Not that good. But yeah, I think it’s really interesting what OpenAI is doing. It’s walking away from Sora. It’s not walking away from social media. I mean, they still have this very popular chatbot that billions of people are using. So it’s not like they’re done with consumer products. But, you know, people were like, they’re walking away from Sora, they’re walking away from consumers, and they’re walking towards enterprise customers, because that’s where, like, the real money is, and they need the real money to afford the compute. And, this is just my initial impression, it’s a more serious business. It’s not like social media in the 2010s, where it was really cheap to scale up and get more customers and push the dark patterns at them, and that was the recipe. This is a new era and a new recipe. It’s more about business customers, it’s more about enterprise. And we’re not going to do, like, the goofy video generation thing. But I think it’s probably a little more complicated than that.

Speaker A: The big picture here is that there are, broadly speaking, two models of making lots of money in software. One is you sell software to companies for large amounts of money, and they pay you billions of dollars, and you become Oracle or Salesforce or someone like that, and you just become massively wealthy. The other one is you give a product away for free to billions of consumers, and then you make money by selling access to those consumers to advertisers. Right. And so that's the free-to-consume model that has turned Facebook and Google into these behemoths. OpenAI has dipped its toe very gently into the water of monetizing the consumer product via advertising, but they already seem to be pulling back from that a little bit. And it is very clear, if you look at the sort of revenue run rate and the stuff that Wall Street seems to care about, especially as these IPOs start coming onto the horizon, that what Wall Street wants to see and what investors want to see is actual revenue. And the way to get actual revenue, and there is a lot of revenue to be had in this, is by selling subscriptions to enterprises. What does not work, and we have seen them try and we have seen them fail, is the model of selling subscriptions to consumers. There is one company in the world that is good at that, which is Microsoft, and it has hundreds of millions of people paying it every month for access to Microsoft Office. And I think for a long time, Sam Altman was like, well, you know, if people will pay for Microsoft Office, they'll pay for ChatGPT. And I think we have seen empirically that they won't, that people will happily use the free versions of Claude and ChatGPT, but if you ask them to pay in their millions for these products, they won't. I personally can attest that while I do get a lot of value out of these products, the value I get out of them is professional value.
And I fully intend and expect for my employer to pay for that, because it is me doing work that I, you know, I’m not going to do it just for s**** and giggles, you know.

Speaker B: Yeah, I think for consumers it really is, you know, if it's a work tool, people think, is it mission-critical for me or not? But people do pay for Google storage and iCloud storage and stuff like that, because it is kind of necessary for a lot of people.

Speaker A: Right, but at a fraction of the cost of what OpenAI is charging. OpenAI wants like $20 a month; Google storage is just like $2.

Speaker B: My Google Workspace is more than that.

Speaker A: Yeah, but that's because you're Elizabeth Spires; you're basically a business, you know, I think.

Speaker B: But, you know, it's interesting, because I think if OpenAI had gone straight to an enterprise product, our AI environment would look totally different. Because I think part of what they did when they released ChatGPT was educate consumers about what chatbots are, how they work, that sort of thing. And everybody else kind of benefited from that, because now even people who are not very technologically literate know how to use a chatbot.

Speaker A: Yeah. And this is also like the classic Slack model of selling enterprise software. There are two ways of selling enterprise software. One is that you go to the CTO of the enterprise and you're like, I have this amazing software; you should buy my software and then install it and roll it out to all of your employees, because it will make them more productive and you will make more money. The second model is you make the software free to everyone on the planet. They love it, they adopt it, they become really into it, and then the employees of the company basically wind up pressuring the CTO. Like, the CTO looks at the amount that the employees are using this and is like, well, now I have to buy it, because everyone's using it and everyone loves it so much. And that worked for Slack, and that worked for OpenAI. And I think what OpenAI has realized is that the thing the CTO looks at her employees using and says, well, now I need to encourage them to do this, and I'm willing to pay OpenAI for a corporate license so that they can continue to do it, is absolutely not going to be Sora.

Speaker C: What's interesting is there's this company called Ramp. They're like a Concur; they do expense reporting for some startups and things. You know, you use them to file your expenses, and they track what employees at companies spend their money on, using their data, anonymized, whatever. And they've been tracking spending on AI apps and AI APIs and whatnot. And starting last year, businesses were spending more on OpenAI. But over the past, I think it's six months, I'd have to look at the chart, I didn't prep it, Anthropic has been completely winning. Like, the lines cross, and now more businesses are spending on Anthropic's Claude stuff than on OpenAI's ChatGPT stuff.

Speaker B: Do you know if these are technical users? Like, I feel like Claude has become the go-to model for engineers, way more so than OpenAI. So when people are switching, is it mostly technical staff switching, or everybody?

Speaker C: I think the data skews more towards software engineers and those types, because it's, like, tech-forward. A startup or a tech company is more likely to switch to this new expense system versus, like, a Concur or even Expensify, because it's new. So it kind of skews tech-forward. So I would sort of guess that it is a lot of software engineers switching because of Claude Code. And I think, if I'm remembering the chart right, because I charted it a month or two ago, it was after the vibe coding explosion that the line started moving up for Anthropic. And I think that's really freaked out OpenAI, and now there's this sort of enterprise software race happening. Which, I'll just say one more thing that's interesting to me: at the same time they're racing to get the enterprise software business, the actual business of enterprise software is, like, collapsing in this SaaS-pocalypse, as discussed.

Speaker A: And to Elizabeth's point, I think one of the big things happening in that chart is not so much that people are moving en masse away from ChatGPT and towards Claude, and much more that the people who are using Claude are using it much more and spending much more. Claude has this thing where you have to buy tokens, basically, in order to do anything with it. And those tokens are not cheap, and they add up pretty quickly, especially if you have agents using them up very quickly. I think what Anthropic has done is find a way to get way more revenue per user than OpenAI could ever dream of. And that ARPU number, you know, Wall Street loves that.

Speaker C: Crazy. The adoption, though, right? I mean, I was in a meeting recently and someone was like, when’s Axios going to pay for Claude? And, like, everyone in the meeting was like, oh, I just bought it and expensed it. Wow. Like, this is happening fast.

Speaker A: That’s always how it works. So you wind up seeing an individual line item on individual expenses for people using Claude, and eventually you’re like, fine, I’m just going to get a corporate license because it’s easier. Let’s have a numbers round. Emily, what’s your number?

Speaker C: My number is 49. That's 49%: chatbots will take your side 49% more often than a human will if you're having an interpersonal conflict. This is from a study that the New York Times just wrote about, where researchers looked at Reddit posts, where people come to Reddit and they're like, you know, I went to the park and there was no garbage can, so I just strung up my garbage in a tree. And then everyone on Reddit's like, you're an a******. And then they go to the chatbot and say the same thing, and the chatbot says, what were you going to do? It's fine. You had no other choice, and you really wanted to do the right thing. So the chatbots, as we all know, are affirming your behavior. The chatbots affirm behavior like revenge, like destroying an apartment, or cheating, or violence; the chatbot was like, it's fine, you're a good person, don't worry. The researchers think this might be a worrying trend.

Speaker B: Okay, so that was. My number was from the same story. Oh, no. And it was the 60% of the time AI models were willing to take the user's side. And this is from the Reddit boards, Am I the A******? The Reddit board would have some consensus about whether somebody was an a******, and then 60% of the time the AI model would say, you're not the a******, and give the users all these kinds of justifications for why they're not. But I thought one of the more interesting aspects of this, and maybe this is a confounding variable, is that Reddit users, because they don't know the person posting, are also potentially more likely to give people negative feedback. So that might be a little bit of a mitigating factor.

Speaker A: My chatbot really knows me. And when you really know me, you’ll understand that I’m not an a******.

Speaker B: Yeah, I actually worry about this, because there's a certain kind of person, like maybe the President of the United States, who doesn't like to admit that they're wrong and who really responds to yes-men. And I feel like chatbots are just going to make those people worse. You know, there is some data in that piece about how people lose critical thinking skills when they rely too much on chatbots to think things through.

Speaker A: Are we gonna have an epidemic of narcissism? Yeah.

Speaker B: I think the saving grace is that Trump is not very technologically literate, because if he was, I feel like he would be on ChatGPT listening to the bots tell him he's amazing, and we'd be in a much worse place.

Speaker A: A chat lickspittle.

Speaker C: I mean, at a certain point you’re surrounded by yes men and you don’t need a chatbot. Like you have the real version, right?

Speaker A: Yeah, everyone can be their own little mini-Trump. But Trump doesn't need it, cuz he already has it. I should mention here, because it was in the news this week, and this is a show about the business and finance news of the week, that Trump has now decided, or Scott Bessent has now decided, that next year US banknotes are going to have Donald Trump's signature on them rather than Scott Bessent's signature on them. Which is obviously a first: the President has never signed the banknotes before.

Speaker C: But now, to celebrate the 250th birthday of America, it's going to be the President's. I thought about pitching that for us to discuss, but, well, what do you think? I feel like he signed the stimulus checks, and everyone was like, it's brilliant, all the Americans will think that the money's coming from him. But, like, I don't think it's the same with cash.

Speaker B: I feel like this is probably, you know, a vanity thing, where Trump just threw it out one day. Like, wouldn't it be nice if my signature were on dollars? You know, why isn't my signature on dollars? And then somebody in the administration made it happen.

Speaker A: No, Elizabeth, like, you don't understand the mindset of the sycophant. He does not need to throw it out. Scott Bessent just needs to wake up one morning and go, wouldn't he like this? And then the obvious answer is, well, of course he would like it. Put his name on anything and he likes it; therefore it's going to happen. He doesn't even need to ask.

Speaker C: But I don't think it's to his benefit. Who even uses cash? We talk about it all the time.

Speaker A: Soon his signature is going to be on every single debit card. You know, I want to go back to this subscription revenue concept because I kind of love it. My number is 2.8 billion, which is really just a reminder of the amount of money that Netflix received as a windfall for not buying Warner Brothers. Right? They agreed to buy Warner Brothers. They got outbid by Paramount at the last minute, and to sort of assuage their hurt feelings, they wound up getting a check for $2.8 billion from David Ellison, which, I mean, ultimately that’s a relatively good day when you wake up with 2.8 billion more dollars than you thought you were going to have.

Speaker B: Can I not buy Warner Brothers and somebody give me 2.8 billion?

Speaker C: Yeah, I won’t buy it either.

Speaker A: I mean, I feel like everyone should get 2.8 billion for not buying Warner Brothers. In any case, that's what Netflix did. And so what did they just announce this week?

Speaker B: What?

Speaker A: That the price of a Netflix subscription is going up by like $2.

Speaker B: What the h***? I mean, why do they need the money?

Speaker C: They just.

Speaker A: Why do they need the money? Why do they need the money? Because, like, they already have $2.8 billion. All the TV for everyone! Every single plan is going up by at least a dollar, but for most of us, it's going up by more.

Speaker B: That makes Netflix more expensive than ChatGPT.

Speaker C: I just want to point out, honestly, that that makes sense to me. I would definitely rather pay for Netflix than have someone tell me I made a great decision when I obviously made a bad decision.

Speaker A: But yeah, this is just a reminder of how capitalism works: you want to maximize shareholder value, and the price of the shares can always go up, no matter how high they are. And so it doesn't matter how much money you have. There's no such thing as enough money in capitalism.

Speaker B: Ew.

Speaker A: Ew. We are going to have a really fun Claude-generated Slate Plus segment. But for those of you who aren't Slate Plus members, boo. Become a Slate Plus member and learn about how the U.S. Treasury market is a little bit like Hot Frosty. Otherwise, thank you for listening. Thank you to Jessamyn Molly for producing. Thank you for emailing us at slatemoney@slate.com, and we will be back next week with more Slate Money, and specifically on Tuesday with Emily.

Speaker C: I am going to be talking to Bridget Armstrong about a very important part of the business and finance news of the past two decades, which is America’s Next Top Model, a reality TV show starring Tyra Banks and a lot of models. And it’s a really fun conversation and you should listen to it on Tuesday.

Speaker A: America's Next Top Model? I want to know, like, what's the first question? Why is this an interesting economic story?

Speaker C: Well, reality TV. We talk about the modeling industry, we talk about the business of reality TV, we talk about how both have changed and evolved over the past 20 years and have sort of intersected with each other to become completely new and different industries, and how America's Next Top Model was sort of at the beginning of all of that.

Speaker A: Do we know how the trajectory of the contestants' pay has changed over the past 20 years?

Speaker C: We get into it. Felix, it’s there.

Speaker A: All right, I’m tuning in on Tuesday.