Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)
By Robert Wiblin · Published March 7th, 2025
On this page:
- Introduction
- 1 Articles, books, and other media discussed in the show
- 2 Transcript
- 2.1 Bio [00:00:11]
- 2.2 More juicy OpenAI news [00:00:46]
- 2.3 The court order [00:02:11]
- 2.4 Elon has two hurdles to jump [00:05:17]
- 2.5 The judge's sympathy [00:08:00]
- 2.6 OpenAI's defence [00:11:45]
- 2.7 Alternative plans for OpenAI [00:13:41]
- 2.8 Should the foundation give up control? [00:16:38]
- 2.9 Alternative plaintiffs to Musk [00:21:13]
- 2.10 The 'special interest party' option [00:25:32]
- 2.11 How might this play out in the fall? [00:27:52]
- 2.12 The nonprofit board is in a bit of a bind [00:29:20]
- 2.13 Is it in the public interest to race? [00:32:23]
- 2.14 Could the board be personally negligent? [00:34:06]
- 3 Learn more
- 4 Related episodes
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.
As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before it’s likely to go ahead. (See Rob’s brief summary of developments in the case.)
And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company and benefiting founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”
But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.
And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.
This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.
And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.
Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.
This episode was originally recorded on March 6, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
Articles, books, and other media discussed in the show
- “People are sleeping on huge news in the Musk vs OpenAI case today” — Rob’s overview of the legal developments this week
- What the headlines miss about the latest decision in the Musk vs. OpenAI lawsuit by Garrison Lovely
- The California court order from March 3 denying Musk’s motion for a preliminary injunction
- Rose’s last appearances on the show:
Transcript
Bio [00:00:11]
Rob Wiblin: Welcome, Rose. It’s great to have you on again. I think people might be familiar now that whenever we’re speaking, there’s probably some pretty interesting, juicy news, and that’s definitely the case this time. How are you doing?
Rose Chan Loui: Well, I have kind of a funny voice. I have a respiratory thing going on, but I’m well otherwise.
Rob Wiblin: Yeah, you’re being a trooper for showing up — and I appreciate it, because I did really want to talk about this court order that came out.
Rose Chan Loui: I know! It doesn’t stop.
Rob Wiblin: But for people who’ve somehow missed our previous two instalments, you’re Rose Chan Loui. You’re a nonprofit legal expert at UCLA.
More juicy OpenAI news [00:00:46]
And the big news today is there’s this court order in the case of Musk versus OpenAI that might foreshadow pretty major headaches for OpenAI as it’s attempting to effectively convert from a nonprofit to a for-profit.
I wrote up a bit of my interpretation of it on Twitter, which clocked up a couple of million views, so people were definitely interested in it. Before we hit the details though, how big a deal do you think this court order is in the scheme of things?
Rose Chan Loui: I think as a legal matter, it is a big deal. We’re not surprised that they didn’t win their request for a preliminary injunction. As we’ve said before, it’s a very high standard. And the judge said they didn’t meet the requirements to show that they were likely to win.
But the judge did recognise quite explicitly that there’s an important public interest here to protect — so she came up with this idea of pulling out the charitable law issue for expedited trial. And that’s her attempt, I think, to balance between the public interest and the fact that it’s a fight between two billionaires. But anyway, I think it leaves the door open. And the problem is still, I think, as it always has been, whether Elon Musk has the standing to defend the public’s interest in this case.
The court order [00:02:11]
Rob Wiblin: OK, so let’s back up a little bit. It’s a case of Elon Musk versus OpenAI. Elon Musk, years ago, gave at least $44 million in charitable donations to the OpenAI foundation. And now it is, many years later, attempting to basically shed the nonprofit foundation’s control and become a public benefit corporation — which in practice would be basically just a normal for-profit corporation.
Elon Musk has thrown many different legal arguments at OpenAI in order to try to interfere with this. And the court has rejected many of them, but one that it has accepted is that they may be breaching the trust that they created with Elon Musk when he donated very early on in the history of OpenAI — giving this money, Elon Musk says, on the understanding that it would always remain a charitable mission and wouldn’t be used for the enrichment of any individuals. Is there anything else to add there?
Rose Chan Loui: Yes. Well, it gets kind of technical here, which I know is a little bit hard, but the court’s analysis does get somewhat confusing, if you understand the nonprofit law side of it. But, I mean, I’m not blaming anyone. It’s kind of a difficult area of law.
But let me try to simplify as much as I can. So she’s saying that if a trust was created, then the balance of equities “would certainly tip toward plaintiffs in the context of a breach.” And then again, she says if a trust was indeed created, then preventing or remedying breach would be in the public’s interest.
So what’s confusing is this concept of there’s an enforceable contract, then there’s a charitable trust. And in this case, OpenAI is not a charitable trust. But I think what she means is that under California law, there are charitable assets. And in California, such assets are said to be held in charitable trust.
And there’s really no question — and this is where I think it’s important — there’s really no question that there are assets in charitable trust in California, and that’s why OpenAI had to register in California. But these charitable assets were not created by Musk’s expectations when he donated that $41 million, but rather by OpenAI’s commitment to the public.
So that’s really important, because looking for contractual terms, et cetera, doesn’t affect the public’s interest here. There are charitable assets. But really what it comes down to is, the way she’s interpreting the law — which admittedly is quite confusing with regard to standing — is that he needs some kind of contractual standing in order to be the plaintiff here. Does that make some sense?
Rob Wiblin: Yes.
Rose Chan Loui: So what I don’t want is to lose the fact that we all agree that they’re charitable assets. It’s whether he’s the right person to pursue it.
Rob Wiblin: To bring it. Right.
Elon has two hurdles to jump [00:05:17]
Rob Wiblin: So there’s two hurdles that Musk has to jump. One is to prove that he has legal standing, which is to say that he is able to actually bring this case at all — that he, in particular, is authorised to actually object to what is going on. And he’s saying that he does have standing in the court in California law, because of the donation that he made and the representations that were made to him, and I guess the fact that his donation was restricted for the purposes of pursuing the charitable mission rather than the enrichment of any people.
So that’s actually going to be the tougher thing for him to demonstrate, because merely making a donation doesn’t in general cross that threshold. It’s only if it was truly restricted for some limited purpose that you then have a sort of contractual agreement that gives you standing to object to what the foundation does.
Rose Chan Loui: Right. It affects the standing part. There actually is a Corporations Code section that deals with it in California.
I think what is the relevant part here is the other things he doesn’t meet: You can have the corporation or a member of the corporation sue to remedy a breach of charitable trust; you can have an officer sue; you can have a director sue, but he’s no longer a director. And fourth — and I think this is the relevant one — a person with a reversionary contractual or property interest in the assets that are subject to the charitable trust. And fifth, of course, the attorney general.
Rob Wiblin: Right, yes.
Rose Chan Loui: So I think that’s where the judge is wrapped up — in the contractual part. And that’s what they’re arguing, that we’ve got enough here to make a contract. And they have all these emails; there’s something in writing, but it’s definitely not the typical grant agreement that’s very specific about what their expectations are. Or if you don’t do this, we get it back. There’s nothing in writing about a reversionary interest, but they’re arguing that there’s enough to make a contract.
Rob Wiblin: Yeah, I see. So on this question of whether he can bring the case at all, the judge actually says that it’s a toss-up. Now, his evidence that there really was a trust created — that he gave a truly restricted donation that gives him standing — has always seemed a little bit thin, and people were sceptical that it would get up.
But the judge says that it’s maybe 50/50 in her opinion — which is maybe a step up from what people expected. But that’s all the standing question. So he’s referring to all these emails where they discussed what the donation was about, what OpenAI is about — and the question is, does that rise to the level of a legally enforceable agreement about how the money will be used?
Rose Chan Loui: Right. That’s how they’re analysing it. Correct.
The judge’s sympathy [00:08:00]
Rob Wiblin: So in a sense, that’s a slightly boring technical legal question. Maybe the juicier, more interesting thing, and the even bigger update, is what happens once he’s shown he can bring the case: is OpenAI doing something objectionable? Is it violating the public’s trust? And is it violating the agreement, basically, with Musk to use his charitable giving to pursue the charitable mission? And there the judge has seemed very sympathetic to Musk, basically.
Rose Chan Loui: Yes. So I think that is the encouraging part. If we can get past this hurdle, the judge is indeed very sympathetic and concerned about what happens to these assets. And like we were talking about earlier, is this a signal to other parties, mainly to the attorneys general?
Rob Wiblin: Does she clarify what she thinks is probably objectionable about what OpenAI is trying to do?
Rose Chan Loui: Well, not really. I mean, there’s a part that to me doesn’t really make sense where they talk about the last argument with the federal tax benefits and kind of make a big deal of it. They’re following the plaintiff’s argument that Elon Musk got a deduction. But that’s kind of odd, because he’s making a claim about his own benefit. And I think really only the IRS can take a position on that. And then the potentially big benefit for getting tax exemption is that you don’t have to pay taxes on your income.
But I think there’s two things here where that probably doesn’t apply. One is, as far as we know, they’ve been cash strapped and they’re not really making any income yet. So there is no escaping anything.
And then the second is that, to the extent that they did make income from ChatGPT and whatever other commercial products they have, they would have to pay tax on that at the for-profit subsidiary level.
So the whole argument under federal income tax status doesn’t really do anything. This is all still a matter of state law.
Rob Wiblin: So the quote, I guess, that struck people from the court order was:
if a trust was created, the balance of equities would certainly tip towards plaintiffs in the context of a breach. As Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves, the Court finds no inequity in an injunction that seeks to preserve the status quo of OpenAI’s corporate form as long as the process proceeds in an expedited manner.
Rose Chan Loui: Right.
Rob Wiblin: Now, that second part there is about Musk was trying to preemptively block OpenAI from proceeding with its plans to convert from a nonprofit to a for-profit, which the judge denied because she said it’s not clear that he has standing. And because the standard for being able to preemptively block the conversion is that you have to be very likely to prevail at trial — and because he might not have standing, he’s not overwhelmingly likely to prevail at trial.
So she’s not preemptively blocking it, but she’s bringing forward the hearing on this matter to the fall of this year — a substantial advancement in time — because I guess she thinks it’s very important and very timely, and she doesn’t want the conversion to proceed without this having been heard out in proper detail.
Rose Chan Loui: Well, I mean, I think the issue with the preliminary injunction, or the argument for it, is that you don’t want to have to unwind this whole thing.
Rob Wiblin: Like, how could it even be done? It’s so difficult, so messy.
Rose Chan Loui: Yeah. So I think she’s come up with a creative way to deal with what she thinks is the best issue going forward and have it heard in the fall instead of next year.
OpenAI’s defence [00:11:45]
Rose Chan Loui: So I think all of that is encouraging. And it’s all the stuff that we’ve been saying, right? That there have been all these commitments made to the public, starting with the certificate of incorporation, which is the legal document — but then also all over their website, and still today, they say that their purpose is to benefit humanity.
And what I’m understanding is that while we think they’re changing their purpose, they think that they are continuing to fulfil their purpose — they just have a different method of fulfilling it. I think that they think it’s really important that they be first because there are bad actors out there.
Rob Wiblin: OK. So it’s looked to us as if they’re changing their purpose because it sounded in the announcement that they made around Christmas that they were basically removing AI from the foundation’s charitable purpose, and they were just talking about educational and science ventures, which is very vague.
But I guess the signs that you’ve gotten are that they’re going to argue that, in fact, this is the same purpose. Or maybe they’re not really going to change the language and perhaps they just messed up in suggesting that they would. But basically they think that they are pursuing the original mission. And the justification is that they have to be first, they need to invent AGI more quickly. That it’s in the public’s interest that they race there as quickly as possible and raise as much money as possible.
Rose Chan Loui: Right, exactly. And because they’re really seeing themselves as the good guys, in order to win the race, they need to move the nonprofit out of the control position so that they can raise the capital that they need to win the race. So in their view, those things come together.
Alternative plans for OpenAI [00:13:41]
Rob Wiblin: I see. So do you have any idea of whether they actually would change the stated charitable purpose of the foundation on paper as part of the conversion?
Rose Chan Loui: I think they have to if it’s going to be a foundation that just gives grants, because that’s not part of the stated purpose now. It doesn’t even include the general provision that you often see that any charitable purpose is included. So yeah, I think they have to.
I don’t know if they’re rethinking some of the things that they said in the blog post. They might be, because I think as we talked about before, we’re starting to hear things like that the foundation will have outsized voting rights. So it may be that public discussion of this is getting the board members to rethink how they really can balance this need for capital with the nonprofit’s interests.
Rob Wiblin: OK, so I haven’t followed this part of the story very well, but are you saying there’s been some signs that maybe they are going to get the nonprofit foundation to maintain some degree of control of the company? Maybe because people like us and many others have been saying it’s a bit crazy for them to totally give up control, like what’s the conceivable charitable justification for this?
Rose Chan Loui: It was reported in the Financial Times, but I haven’t seen more since then.
Rob Wiblin: I see.
Rose Chan Loui: But depending how broad those outsized voting rights are, it almost sounds like a reversal, doesn’t it? But it could be just very limited. I think that at the minimum, it’s addressing Elon Musk’s attempt to take over, but it was written a little bit more broadly.
So it mentioned that it would allow the nonprofit board to protect the company from hostile takeovers. But there was some language about “and other things,” so I think it really depends what the other things are that it can do with their outsized voting rights. Because in terms of economic interests, we’re assuming that it will have minority ownership, but we don’t know any more than that.
Rob Wiblin: Yeah, I see. OK. So there’s been maybe some signs that they’re softening or altering their position. It sounds like maybe this is a reaction to the Elon Musk hostile proposal where they’re feeling they have to give more justification for how it is in their charitable interest to go ahead with their plan.
And maybe that’s an easier sell to the attorneys general and the courts if they’re maintaining some control of the organisation rather than giving it up entirely, because that actually probably is more consistent with their charitable purpose. And they can argue that Musk might not do that, or he would do it badly.
Should the foundation give up control? [00:16:38]
Rose Chan Loui: Yeah. But I’m curious about what you think and maybe what others think is what is the best for the nonprofit. I think all along I’ve thought that their unique position within OpenAI is very valuable and is hard to replace with money. Actually, right now the blog post only talked about giving them equity, which I don’t know that they can do anything with unless it’s liquid.
But keeping that position only works for ensuring safe development of AI if the nonprofit board really takes that responsibility seriously and can act contrary to management when they disagree. If we don’t really believe that can happen, then is the better strategy to argue for a truly independent nonprofit that moves outside but is truly well resourced? Can they do that work better from the outside?
Rob Wiblin: So if I were in their shoes, I think my primary approach to fulfilling the charitable purpose of the foundation would be to retain control of OpenAI and to be heavily involved in the decisions that they make about how they develop AGI and so on. I think that was the original plan and I think it still does make a bunch of sense.
And another change that they could make, that I don’t object to per se, is: it is true that OpenAI needs to raise a bunch of capital in order to remain relevant. And that might involve the foundation basically being willing to give substantial financial returns to investors who get on board and are putting their money at risk and want some return for putting their money on the line. That seems totally reasonable. It’s something they’ve done in the past and they could continue to do things along those lines.
Rose Chan Loui: Yeah. Are you saying some kind of middle ground here, where we wouldn’t have to give up all our control? So we might get compensated less than if we were really giving up everything, but we would have control over the important things — which is really having a kind of a quality control role, but one that’s real.
Rob Wiblin: Yeah. Well, the thing I really want the foundation to be in a position to do is if OpenAI is careening ahead doing something that many of its own staff, much of the world, thinks is very dangerous, that they could say, “This actually isn’t in the interests of the general public. We need to go a bit slower and more methodically. We need to be doing more testing before we train or deploy these models.”
Now, there is a substantial tension there where, if that is the core function of the foundation, that is in some conflict at least with investors who don’t believe that there are any risks and just want the company to be careening forward, going at maximum pace in order to win the commercial race. But because I think, and many people at this point surely do think, that there are some downsides potentially from going too quickly here, I think it’s completely reasonable to say that the foundation should have some role in not stopping, not blocking the entire project, but in asking them to be more careful and maybe not to race ahead of other actors without due reason.
So I think a middle ground could make sense — where the foundation does give up some degree of control, is able to hand over more of the financial returns to other actors in exchange for the company being able to raise capital to stay relevant, and the foundation then having access to some cash that it maybe could use for a grantmaking programme that could do other useful work to assist with AGI going well.
Now, I guess I am a little bit sceptical that the foundation is in a position that it feels empowered to fully push back against Altman and OpenAI the company. I mean, these are very serious people. It wouldn’t surprise me if they did step up, if they did really feel that they have a mandate to take action here as they learn more about their role and more about the situation. But they may feel that it’s a difficult situation for them to act.
I guess that might also transfer over to the grantmaking, where it could be difficult for them to make grants that are controversial, that are kind of muscular and maybe not in the commercial interests of OpenAI the company. So it is a slightly challenging spot.
Rose Chan Loui: It is. And hard for us to police what happens.
Alternative plaintiffs to Musk [00:21:13]
Rose Chan Loui: But I could definitely see a path where the AGs step in and negotiate conditions to a restructure.
Rob Wiblin: Let’s talk about the attorneys general, and come back to the court order. The way the order was written — and the fact that she is bringing forward the hearing to the autumn, a substantial advancement in time — suggests that she thinks there is a serious issue at play here: a lot of legal uncertainty about whether the conversion is appropriate, or whether in some way the public or Musk is being wronged.
And some people have suggested that she may have written this reasonably forceful, perhaps a little bit surprising court order in order to get the attention of the attorneys general, and say, “I, as a judge looking at this, am unimpressed with what I see that might be about to happen. But I’m not sure whether Musk is the right plaintiff here. I’m not sure whether Musk does entirely have legal standing as I understand it. So I would really like the attorneys general, who definitely do have legal standing, to chime in here and say there have to be conditions, something has to be done to protect the public’s interest.”
Do you think that could be what’s going on?
Rose Chan Loui: Yeah, I think that’s a very plausible reading. I think on the attorney general side, the question is whether they have the resources to bring to bear on pursuing litigation. So they could be limited either in funding or staffing or both.
And it seems like there’s options that they could consider. One is to hire some experts. You know, they just might not have enough people on staff to pursue. But secondly, if they really just don’t have the money, I think Elon Musk has already requested relator status — that is revealed in one of the footnotes in the court order. And so far, the attorney general, I think, has not responded. So they just might not feel like he’s the right person to pursue this on their behalf.
Rob Wiblin: Sorry, can you explain what relator status is?
Rose Chan Loui: Relator status is giving a private party the right to pursue a cause of action for the attorney general, but the attorney general continues to oversee it. It’s very rarely granted. The instances where I’ve seen it granted in California have been mostly about disputes with holding government office. I haven’t seen it used really in the charitable context, although that option is open.
Rob Wiblin: I see. So the thing there is that the attorneys general in California and in Delaware definitely have standing to object to what is going on, but they might not feel resourced to do it themselves. Musk has said, “Empower me. Give me standing to object to this.”
And they haven’t replied yet — perhaps understandably, because I guess they’re Democrats and also just Musk in general is a very controversial figure; they may not want to deputise Musk to go out to bat for them. I think if they wanted to give someone else that authority, probably they would choose a different party.
Rose Chan Loui: Right, exactly. So the next option — again not very frequently used, but might really work here — is to name a special interest party. And that one is used if the attorney general is not exercising their authority for some reason or other, such as inadequacy of funding or staffing; the charitable assets at issue wouldn’t be otherwise protected; third, the alleged misconduct is serious or egregious; and then fourth, the relief sought is appropriate to enforce the purposes of the charity.
So it seems like that would be something like one of the foundations that is interested in protecting the public’s interest in safe development of AI. But basically it would bring to bear a private party’s resources in support of the interest of the charity here, the nonprofit OpenAI.
So that’s something I think that the court could think about. In that case, the court appoints the person.
The ‘special interest party’ option [00:25:32]
Rose Chan Loui: So with the relator status, the attorney general designates someone to sue on their behalf. With a special interest standing, that’s the court saying, “This is a really important interest. I’m not seeing the public’s interest sufficiently protected here. The attorney general doesn’t seem to be able to come in. Let’s appoint this other party who is willing and able to pursue this on behalf of the public.”
Rob Wiblin: Hold on. So there’s this relator status thing with the attorneys general. But if they declined to deputise anyone to represent them, are you saying that Yvonne Rogers, the district court judge in this case, could actually herself give standing to a public interest group? Wow.
Rose Chan Loui: Yeah. If she finds that the circumstances are appropriate for that. And not very frequently used, but if you go through those factors, it seems like we could meet this. I guess unless the attorney general objected. But even then, if she thinks that the attorney general is not acting for some political reason, and they really should be, she could appoint a special interest party.
Rob Wiblin: Wow. That’s a huge deal, because you can imagine they can lean on the attorney general, perhaps, or maybe they will successfully kind of persuade the attorney general to accept their plans with some modifications.
But if Yvonne Rogers, the district court judge, is unimpressed with that — from what she’s written, she’s sympathetic to objections to the conversion in general; she doesn’t think that this is necessarily in the public’s interest, and she thinks the public’s interest might be harmed — she could appoint a special interest party to pursue the case in Musk’s place and object. I think it’s very inconvenient for OpenAI, especially if she was going to hear the case herself.
Rose Chan Loui: Right, right. I guess Musk would stay in, but someone else would be the one more responsible for arguing on behalf of the nonprofit and its board, the charitable interests.
How might this play out in the fall? [00:27:52]
Rob Wiblin: OK. So there are quite a lot of different ways now that this conversion could get tripped up, or at least changes could be forced in order for it to be accepted.
This could happen in the fall. The judge could decide that Musk does have standing and that the conversion is objectionable, and so block it for that reason. Or she could decide Musk doesn’t have standing, but appoint a special interest party who then represents the public along with Musk or in Musk’s place — and then she might find that it’s objectionable and block it.
Or all of this fuss and what the judge says here might prompt the attorney general, either the California one or the Delaware one, to chime in and say, “We don’t think that this is up to snuff. We think that modifications have to be made or we’re not going to accept it.” So they could block it. And they definitely have standing.
So I guess I would be a little bit nervous if I was…
Rose Chan Loui: What’s going to ripen first? Because we haven’t heard anything from Delaware for a while. I know that California has asked for documents, but I don’t know where they are with that either. Again, I’m not sure where people should be weighing in — which venue we should be focusing on and prioritising. With the litigation, we have a little bit of space of time if people want to weigh in with amicus briefs, et cetera. But it’ll be heard in the fall sometime.
The nonprofit board is in a bit of a bind [00:29:20]
Rob Wiblin: It creates an interesting strategic situation for OpenAI and for the foundation board. They’ve been kind of put on notice a little bit that this conversion is in legal jeopardy, that maybe it’s not acceptable, that there are various ways that it could go wrong.
It’s not obvious to me that they necessarily want to barrel ahead and try to rush this through or make it happen without any modifications that might placate people who are objecting.
To begin with, these people are, as we’ve mentioned before, very serious people. They were put on the board because they had independently great reputations, because they were respected. And as far as anyone knows, they’re not lackeys for OpenAI’s corporate interests. They don’t have any direct financial interest in OpenAI, at least. I think they would want to believe that they are pursuing the foundation’s charitable interest and that they’re not doing anything that’s legally dicey. So they might well want to make some changes in order to make this more likely to go through.
And furthermore, it would be commercially questionable to have this as your dominant plan. This is basically the whole of their financial, fundraising, and governance plan. And if they just go ahead with it knowing there’s a chance it could be brought to a screeching halt later in the year, and that then happened, it would be very embarrassing. It’d be bad for staff morale, and it would be bad for their fundraising.
In the meantime, investors are going to be on edge because they’re going to be thinking, “All of this is at risk. Maybe none of this will actually happen as we plan.”
They’ve got to make some big decisions.
Rose Chan Loui: A bit of a bind, right? Yeah. I don’t know what they thought when they first announced this plan, whether they thought this will just sail through. Our first article was on that, that they should expect some obstacles, but I didn’t know if they realised how many obstacles might be thrown their way.
Rob Wiblin: So given that it seemed like their initial plan was quite aggressive, I suppose, in terms of how many objections people might have to it — it looked like they were really changing the charitable purpose, it looked like they were going to get the foundation to give up all control of the company, and it looked like they were maybe going to basically not give it as much money as was probably commercially deserved on the open market — so there’s many different margins on which they could try to reduce their ambition, so to speak, in order to make it more palatable.
Rose Chan Loui: Yeah. I do think that there’s a way to allow some kind of restructure to go through that is also fair to the nonprofit and its purpose and preserves its purpose. I mean, like I said, they think that they are still following purpose under this restructure. I’m not sure I totally understand how, because they’ve not really enlightened us as to how that would work if they remove the nonprofit from control. I think you and I agree that just being a grantmaking foundation is not the answer.
Rob Wiblin: That can’t be everything. There has to be more to it than that on the best strategy.
Is it in the public interest to race? [00:32:23]
Rob Wiblin: It’s very interesting if the case comes down to and hinges on this question of: to what extent is it in the public’s interest for OpenAI to race to be the first to produce, say, artificial general intelligence? Because judges and courts are not typically in the position of judging such thorny empirical questions about what is in the interests of humanity.
Rose Chan Loui: It requires so much trust on everybody’s part. Why would we necessarily agree that they’re the best actor? And I’m not the AI expert, but that’s clearly what I’m hearing is that’s how they interpret this restructure and consistency with purpose. They don’t see it as a change. And they’ve consistently continued to declare that that’s their purpose.
Rob Wiblin: Right, right. Yeah, I don’t know how the court would try to establish whether it’s a reasonable strategy or not. I guess they could give a survey to people in the industry — well, I guess it’s hard maybe to find people in the industry who don’t have a commercial incentive one way or the other — but try to survey people on is it actually in the public’s interest for OpenAI to rush to build AGI ASAP? I’d be interested to know what responses they would get. But I guess courts don’t tend to do polling operations.
Rose Chan Loui: Yeah, I don’t know. Anyway, that’s what I’m leaving you with. You can mull over whether or not it is achieving purpose to allow OpenAI, or to enable them, support them in their effort to be first, because they’re wearing the cape.
Could the board be personally negligent? [00:34:06]
Rob Wiblin: Yeah. A question I saw one or two legal analysts raise is: given that the board has sort of been put on notice about the legal diciness of the conversion in this court order, could they conceivably be personally negligent or personally liable in some way if they barrel forward with this conversion and try to do it before the court has its say?
You’ve got to go, so maybe that’s one that we can return to later. I’ll stick up a link to an article that raises this question mark.
Rose Chan Loui: They can certainly be sued for breach of fiduciary duty. I would assume that any smart board member would make sure they are protected with directors and officers liability insurance — under which they just have to show that they conducted themselves with ordinary business judgement and that they were diligent about reviewing all the information.
I think you might not be protected if you were reckless in your decision, but hopefully they are not in that territory. I think this is a complicated enough case that if they were found liable, they would be protected by their insurance.
Rob Wiblin: Yeah, I imagine the legal standard there is reasonably high. To prove that someone has truly been reckless and negligent, there would have to be pretty damning emails somewhere suggesting that they just didn’t care. And again, these are serious people who I think, especially in this context, they’re going to be reviewing things pretty carefully.
Rose Chan Loui: I mean, I think that they are facing strong headwinds in terms of the desire to make a profit. And so I don’t envy their position, but I hope that they’re listening to everyone who’s talking about the importance of the nonprofit and its original purpose.
Rob Wiblin: Brilliant. All right. Well, I know lots of other people want to talk to you. You’ve got a busy day ahead, so we’ll leave it there until the next shoe drops.
Rose Chan Loui: Well, I gave you my voice first for the day, Rob.
Rob Wiblin: And we appreciate it very much. We got great feedback on the last emergency podcast, so I think there’ll be more to come. Have a great day.
Rose Chan Loui: Thanks for having me. Bye.
Related episodes