Transcript
Cold open [00:00:00]
Rose Chan Loui: This is a very different nonprofit. It’s not a foundation whose importance is in the giving of philanthropic money out. The reason they’re so important is because they’re in the middle of a corporation that is doing this work — and only from that position can they really make sure that what’s being done is good and safe for humanity.
Once they’re spun out, they’ll be more like any typical corporate foundation: they’re giving grants out to whatever, presumably still in the scientific artificial intelligence research world. But they won’t have the inside track to guide the work that’s being done. And that is pretty hard to compensate. It’s not a numerical amount. It’s a position that is rare.
What’s coming up [00:00:50]
Rob Wiblin: Today’s episode is on a massive issue you might have heard mentioned, but not know many details about: OpenAI deciding it wants to shed its nonprofit structure permanently and become a normal for-profit company.
In the process, they might give birth to one of the biggest charitable foundations to ever exist: an 800-pound gorilla focused on safe and beneficial development of AGI, with perhaps $40 billion or even $80 billion in paper assets.
Or maybe the nonprofit board will be messed around and not get what they’re truly owed.
Legal expert Rose Chan Loui and I cover:
- How this can happen, and whether it’s a legal loophole or something more reasonable.
- How OpenAI the for-profit, its staff, and its investors, are in a direct financial conflict with their nonprofit owner.
- Why the nonprofit’s control of OpenAI could actually be priceless in pursuing its mission.
- Why it might nonetheless be understandable for the board to give up that control anyway.
- How the outgunned independent members of the OpenAI board can best play their hand to get the money they deserve.
- The active interest shown by the California and Delaware attorneys general, which might give the nonprofit board the support they need to get a good outcome for themselves.
- How you might go about valuing the nonprofit’s ownership stake in OpenAI — including profits, control, and the hypothetical of how much bidders might be willing to pay.
- Why it’s essential that the nonprofit get a big steady stream of cash — and not just equity in OpenAI that is locked up for years.
- How any of this can be regarded as an “arm’s-length transaction” as required by law.
- A weird separate conflict between OpenAI the for-profit and their investor Microsoft.
- Why Rose and I have some decent amount of optimism about all this.
Just so you know going in, the OpenAI nonprofit board has nine members — including Sam Altman, who is the CEO of OpenAI, three tech entrepreneurs, a traditional corporate leader, an economist, a military cybersecurity expert, a philanthropic foundation CEO, and an ML researcher. Seven of the nine were appointed in the last year.
This is an exciting episode, and we’ve turned it around quickly while these issues are all still completely live. So without further ado, I bring you Rose Chan Loui.
Who is Rose Chan Loui? [00:03:11]
Rob Wiblin: Today I’m speaking with Rose Chan Loui. Rose is the founding executive director for the Lowell Milken Center for Philanthropy and Nonprofits at UCLA Law. She received her JD from NYU School of Law, and spent decades practicing law with a particular focus on nonprofits and tax controversies.
Earlier in the year, she weighed in on the OpenAI Foundation board situation with two UCLA colleagues in a paper titled, “Board control of a charity’s subsidiaries: The saga of OpenAI.” That paper concluded with, “Whatever happens in OpenAI’s next chapter, protecting the charitable interests is likely to be a heroic task in the face of the overwhelming profit-making incentives.”
Since then, things have advanced a fair deal on the OpenAI front, with the for-profit now seemingly trying to shed the control of its nonprofit owner entirely — which is going to be the topic of today’s conversation.
Thanks so much for coming on the show, Rose.
Rose Chan Loui: Thanks so much, Rob, for inviting me onto the show and letting me talk about this topic that has been an obsession for the last few months.
Rob Wiblin: I’m glad you’ve been obsessed with that, because I’m super interested as well and I think listeners will be too. So let’s dive in.
How OpenAI carefully chose a complex nonprofit structure [00:04:17]
Rob Wiblin: Since its founding back in 2015 or 2016, OpenAI has kind of touted its nonprofit structure as one of the reasons it could be trusted and should be taken seriously. Its CEO Sam Altman said in 2017 that OpenAI is a nonprofit because “we don’t ever want to be making decisions that benefit shareholders. The only people we want to be accountable to is humanity as a whole.” And that was a pretty typical statement around that time, as I recall.
And he even famously said of the nonprofit board in June last year, “No one person should be trusted here. I don’t have super-voting shares. The board can fire me and I think that’s important.”
So its legal structure was very much not an accident or an oversight. It was pretty central, I think, to the organisation’s self-conception, and certainly its public presentation. Can you quickly explain to us what that legal structure has been, and what we think they’re trying to change it into?
Rose Chan Loui: Sure, Rob. As you said, it was very carefully structured. And in the beginning, in 2015, it was pretty straightforward: it was founded as a scientific research organisation. The specific purpose was to provide funding for research, development, and distribution of technology related to AI. Then they also made the promise that the resulting technology will benefit the public, and the corporation will seek to open source technology for the public benefit when applicable.
I just want to emphasise here that that is the legal purpose that is in the certificate of incorporation with the State of Delaware. Then, in its registration with the California Attorney General, it said that its goal is to engage in research activities that advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
So just a little distinction here: that sounds more aspirational. Now, by 2019, they had been able to raise $130 million of their initial $1 billion goal. They decided then that charitable donations were not going to be enough to achieve their charitable purpose of providing funding for research and development and distribution of technology related to AI.
So they set up this structure that would accommodate and attract investors. First step was to form an LP, a limited partnership, under OpenAI, which would be owned by the nonprofit as well as employees and some early investors. That LP would be governed by the nonprofit and operated in accordance with the nonprofit’s charitable purposes.
The LP then created a subsidiary, OpenAI LLC, which you could also call the operating company. It’s at this level that Microsoft invested. And interestingly, OpenAI on their website used to call Microsoft a “minority owner.” But they clarified on their website in December 2023 that no, Microsoft only has a profits interest in OpenAI LLC. We think that was in response to inquiries by various antitrust authorities.
So again, the operating agreement of the LLC, like you said before, broadcast that the LLC might never make a profit and is under no obligation to do so. One of my favourite quotes is, “It would be wise to view an investment in LLC in the spirit of a donation.” And, just like with previous investors and the employees, there was a cap on how much they could get on their investment. For Microsoft, we know it’s 100 times its investment. They also said that that cap could be lowered for later investors.
And now, both the LP and the LLC are controlled by the OpenAI general partner entity. And that is what we call a “disregarded entity” — basically, it’s just how the nonprofit controls the for-profit entities below it. We think that there are other subsidiaries through which the newer investors have participated, but we don’t have full transparency into that. But they formed a lot of other entities in Delaware, and some of them are registered in California.
Rob Wiblin: For the sake of the audience being able to picture this in their mind, the simplistic picture that I have is that currently there’s an OpenAI nonprofit foundation that basically owns most of, and controls, an OpenAI for-profit business. So there’s the for-profit business and there’s the nonprofit foundation — and in broad strokes, those are the two entities.
Rose Chan Loui: Yes. Yeah, it’s a lot more complicated if you draw out the whole thing. But yes, it’s a nonprofit at the very top. It’s the parent. Which is different from a lot of corporate foundations, where a corporation makes a lot of money and then they decide they would like to do good, so they form a corporate foundation and they control entirely the corporate foundation. Here, the nonprofit is the genesis of this organisation.
And then, if I could, there’s five features that I think were very carefully set up to protect purpose.
Rob Wiblin: Sure, go for it.
Rose Chan Loui: One is that, as we’ve said, the nonprofit has complete control of the LLC through this general partner and the nonprofit board.
Secondly, the nonprofit board is committed to the nonprofit’s purpose, defined publicly by OpenAI as “development of AGI that is broadly beneficial.”
Third, the nonprofit board was supposed to remain majority independent, but they define “independent” as not having any equity in OpenAI. So, as other people have criticised, Sam Altman, while he didn’t have equity in OpenAI, had a lot of other interests in OpenAI’s partners.
Rob Wiblin: Yeah. It’s hard to say that he’s independent of OpenAI in a commonsense sense.
Rose Chan Loui: Yeah, in the common sense of the word. So that’s why I say this is how they define it.
The fourth is that profit allocated to investors and employees is capped, as we’ve talked about. So all residual value created above those caps is to go to the nonprofit “for the benefit of humanity.” But it’s a pretty high threshold.
Then fifth, Microsoft and other investors do not participate in profits from or have any rights to IP once OpenAI has reached AGI — defined, broadly speaking, as “artificial intelligence that is smarter than human intelligence.” And the nonprofit board, under the current terms, determines when OpenAI has attained AGI.
So they put a lot of things in there to protect purpose.
Rob Wiblin: Yeah. A lot of thought went into it. Presumably they were thinking, “There’s a lot of money here, so Microsoft might try to push us around. How are we going to ensure that the nonprofit foundation can remain true to its nonprofit purpose of building AGI that benefits all of humanity and not be corrupted by profit incentives?” So that was clearly part of the goal. And I guess people can judge for themselves at the end of this conversation how well that has gone.
Rose Chan Loui: Correct.
OpenAI’s new plan to become a for-profit [00:11:47]
Rob Wiblin: What is it that they’re trying to change now?
Rose Chan Loui: So here we are in 2024. They make the announcement that OpenAI is going to restructure so that the nonprofit will no longer control the for-profit entities. What they’re talking about…
Well, first of all, they had another round of funding, raised another $6.6 billion. And these new investors were able to get a deal that OpenAI would have two years to complete this conversion or they would have to return the $6.6 billion of funding. The new investors are also seeking to remove the caps on investment returns — at least for them, and perhaps for old investors. I’m not sure where that is in negotiations.
Rob Wiblin: So they previously agreed that their investment would have at most a hundredfold return, and now they’re trying to get out of that basically, or change that so there isn’t a cap anymore?
Rose Chan Loui: Correct. And you know, I’m not the best at math, but I thought: if you have $10 billion from Microsoft, 100 times that means $1 trillion.
Rob Wiblin: A trillion, yeah. Substantial valuation.
Rose Chan Loui: Yes, yes. I don’t know if there’s anything comparable. So the proposed restructure: they’re saying the nonprofit will remain, but will not be in control. And then the for-profit company will become what’s called a Delaware public benefit corporation. Not to be confused with a California nonprofit public benefit corporation. The terms are so confusing.
But basically, a Delaware public benefit corporation is a for-profit that is allowed under corporate law not to 100% maximise profits. It will be allowed to take account of the public good in its profit-making operations. But just to be clear, that goal is not at all the same as a legally binding commitment under tax-exempt law or state nonprofit laws. Its goal to do good is aspirational — so very, very different.
Rob Wiblin: So I guess we’re going from the nonprofit foundation that owns and controls the business having an obligation to pursue its mission of benefiting humanity, to a situation where now merely they’ve given themselves permission to not maximise profits if they want to. They’ve said, “Well, we might do profits and we might do some combination of other stuff. TBD.”
Rose Chan Loui: Correct, yes. And I’m not an expert on that, but my understanding is you file a report every year and talk about all the good you’ve done. But it’s really, I think, seen more as a public relations move.
Rob Wiblin: OK. It’s not something that has a lot of teeth.
The nonprofit board is out-resourced and in a tough spot [00:14:38]
Rob Wiblin: So I’m going to have a fair number of sharp questions in this interview. But I don’t think there was anything wrong with OpenAI seeking for-profit investment in order to stay in the game and stay relevant. And I think the idea that they had — that they’ll have the investment, it’s kind of capped, and then it will spill over; and at least in theory, the nonprofit foundation should be able to control, to direct the business if they think it’s gone off the rails and is no longer pursuing its mission of benefiting humanity — in theory, this kind of all makes sense.
I don’t think that there was anything untoward, although you might think perhaps there was something a little bit naive about thinking that this would function as originally intended. But if they had just remained a nonprofit and only accepted charitable donations, I think it’s fair to say that they would have become irrelevant, because they just wouldn’t have been able to keep up with the costs involved in training AGI or training the frontier models.
Rose Chan Loui: I think that’s absolutely right. I mean, we can understand why they did what they did. And they’re not at all novel in setting up a for-profit subsidiary; nonprofits can do that.
I think what became challenging here was, first, that most of the nonprofit/for-profit relationships that I’ve seen, anyway, are wholly owned or mostly wholly owned. You know, you could operate a spaghetti factory: you pay tax on it because it has nothing to do with your charitable purpose, but the net goes straight up to the parent to use for a nonprofit purpose.
But I think what’s so different here is that the amount of external third-party investment is so huge. The sums are so huge and completely engulf the nonprofit.
Rob Wiblin: Yeah. Which is not really resourced. It’s a bunch of volunteer, part-time board members. I don’t know whether it really has any meaningful staff to represent its own interests seriously, independent of the business. And that’s a major weakness, I guess, with the structure that was set up.
Rose Chan Loui: Right, right. I mean, their 2022 filing shows only about $19 million in assets at the nonprofit level. And here you have a subsidiary that just keeps going up, with the latest number being a $157 billion valuation. So it becomes very hard. It definitely looks like the tail is wagging the dog.
Who could be cheated in a bad conversion to a for-profit? [00:17:11]
Rob Wiblin: So at this point, as OpenAI — both the nonprofit and the business — is considering making this transition, what are the legal obligations the individuals on the OpenAI Foundation’s board are bound to? Is it to maximise the achievement of the foundation’s mission, or something less than that?
Rose Chan Loui: No, the board is still bound by the original purpose. They’ve not changed it, so far. I’m assuming there might be some tweaking of that nonprofit statement of purpose in all of this.
What’s interesting though, Rob, is that when you focus on how they wrote it in the Delaware certificate, it is “to provide funding for research and development.” So going back to the founding of the for-profit subsidiary: like you said, they would probably have become irrelevant because it cost so much to do this R&D. So I think they were still complying with it. The question is, at what point do the for-profit interests of all the investors, the private parties, take over, such that the nonprofit goal has been subsumed? Their legal obligation hasn’t changed, at least not yet.
Rob Wiblin: Right, right. I think there are nine members on the foundation board, and I guess they’re in a tricky spot, because they’re entering this negotiation with this business that wants to become independent, to escape their control, to be freed of their control.
Of course, the CEO of the business is also on the board, is one of the members. He’s very rich and powerful, and known for being willing to be fairly aggressive when his interests are challenged.
And compared to the business, and I suppose compared to the investors, they’re very under-resourced, I think, in terms of their staffing. So they’re basically a bunch of volunteers who have stepped up to take this role. I don’t know whether they’re paid very much for this. Certainly they’re not paid to be doing anything like full-time work.
And they’re up against these other organisations, for whom each dollar they manage to squeeze out of this negotiation is a dollar they get rather than the nonprofit. It’s just pure gain for them: if OpenAI doesn’t have to compensate the foundation another billion dollars, that’s another billion dollars that the investors and the staff and so on can keep for themselves.
Rose Chan Loui: Right. What’s encouraging is that at the end of our article we said we hoped someone was going to look into this. And since we wrote it, it does look like even Delaware has written to them and said it needs to review this conversion, and OpenAI says they’re going to cooperate.
So this might be a little technical, but Delaware governs kind of the governance aspects of OpenAI Nonprofit. But California — where OpenAI is based and primarily has its assets — is interested in making sure that the charitable assets remain protected. And the attorney general of California, we hear that they also are looking at this and that OpenAI is in conversations with the attorney general. So at least that’s going on.
Rob Wiblin: Yeah, I was going to say they’ve got a tough task ahead of themselves standing up for the interests of the foundation, but I guess they are getting a little bit of assistance in the pressure that might come from the Delaware attorney general and the California attorney general. And I think the IRS, the US tax authority, has an interest here as well.
Rose Chan Loui: The IRS does have an interest, but I don’t know what we’ve lost, because they’ve not really had any profits yet, as far as we know. So the for-profit, as far as we know — again, we don’t get to see their tax returns — but based on how much that they are spending in order to do their R&D, they might not really have owed tax even if they were taxed.
Whether they are taxed or not depends on whether the operations, the activities of the for-profit are considered unrelated business income or still related to that original scientific research purpose, whether it’s still charitable. And when does it become research and change into more commercial? I would think ChatGPT was probably some kind of marker there, because that looks very commercial now to me. So I think it’s becoming more and more business.
Rob Wiblin: So in planning for this, I was trying to get conceptual clarity on what’s going on with this conversion. And I found it useful to think about who would be wronged, and what would have gone wrong, if OpenAI just became a for-profit overnight in exchange for nothing: if it didn’t actually provide any compensation to the nonprofit foundation.
I think the reason that would be legally and morally unacceptable is that it’s made all of these promises and commitments as part of being a nonprofit since it started. And it can’t just promise all of these things and then take them back once things start going well on the business side. Sometimes those were implicit promises, and other times, as we’ve mentioned, they were really explicit — because it very much wasn’t an oversight; it was very carefully thought out how this was going to be.
I think some of those commitments were made to society and the tax system. It’s promised that the foundation’s equity stake in the project — which is currently worth tens or maybe hundreds of billions of dollars — all has to go to this mission of building artificial general intelligence that is safe and benefits all of humanity. That’s what it was constituted to do.
I think what’s really important not to miss is that lots of staff joined and contributed to its success and its accumulation of key research breakthroughs and intellectual property over the years, and they were kind of promised that it would be guided by this goal of building safe AGI that benefits humanity with this nonprofit structure, rather than just being guided by the profit motive.
I think many of these people, many of these fantastic scientists, would have been unwilling to join and would have gone and taken their talents elsewhere if it had just been a for-profit company trying to maximise its revenue, because that’s not something that many of these people would have wanted to help with. So it then would have not ended up being the key player that it is.
A more minor point perhaps is that donors like Elon Musk — who I think gave or committed $100 million or something in the very early days — gave to the nonprofit. I don’t think Elon, if it was a for-profit entity, would have donated the $100 million. And I think there are a few other groups that gave smaller amounts of money. Maybe that’s a little bit by the by, given that it’s small in the scheme of all of the investment that they’ve received by this point. But still notable.
Rose Chan Loui: Well, it’s still a huge amount of contributions, right?
Rob Wiblin: Yeah. On the human scale, $100 million is a lot of money. An interesting thing that is possible to miss — and I’m just going to double-check that this is true after we finish the recording — is that I think early on, when it was just a nonprofit, they were able to sponsor foreign visas without being subject to the H-1B visa cap, which in principle allowed them to bring in a lot more foreign staff to come and work at what was then thought of as a nonprofit rather than a business. So that could have been a meaningful boost early on. I’m not sure how important it was actually in the scheme of things.
I guess another smaller thing, though I think it matters, is that by purporting to be a nonprofit that’s motivated by the benefit of all of humanity, I think that’s part of how they bought goodwill from a lot of different parties and got a lot of people to be on board and generally supportive, and maybe got a lower level of regulatory scrutiny and got more trust from the government in all of these hearings.
So I think society and a lot of different parties would stand cheated if the nonprofit foundation were not fully compensated so that it could pursue its mandate to the greatest extent possible, basically. Have I got that all right?
Rose Chan Loui: Yeah, I think you absolutely have. I think the action here is at the state level, because even though it has its tax exemption at the nonprofit parent level, the nonprofit hasn’t distributed anything yet, anything of significance.
And then, like I said, even the for-profit subsidiary seems to not have any net profits — again, we have no transparency into their tax returns. But they have made all kinds of representations to the public and to both Delaware and California. So it’s still a promise, it’s still a commitment. And I think that’s why the action is going to be with the attorneys general.
Rob Wiblin: To be clear, nobody is actually arguing that the nonprofit foundation doesn’t deserve compensation. I was just wanting to elaborate on all of the reasons why, so people appreciate the strength of the reasons.
Rose Chan Loui: Yeah, absolutely. I mean, it’s all over their website still. But I think that the most complicated part of this is how to compensate the nonprofit fairly. And one of the issues is, first, what things need to be valued? It seems like they own IP, they have a right to IP.
And then I think they own control right now. So generally, when you have these transactions, there’s a premium when you have a controlling interest. So how much is that valued?
Their interest in profits is kind of difficult for me to figure out how to assess, because like we said, the value needs to be over $1 trillion before they supposedly even get a profits interest — even though they own it. Even though they own it, the way that the deals are structured with Microsoft and other private investors is that nothing goes to OpenAI Nonprofit until the investors have recovered their investments, and up to 100 times their investments.
So does that mean they own nothing or does that mean they should be compensated for giving up their interest in the residual value? Because they have said that they think they will make gobs more than that. So, while it sounds like a high threshold, they’re expecting the nonprofit [to be paid a lot], or at least that they used to say that. So to me, I don’t know how to value that.
Rob Wiblin: We’ll come back to the valuation in a minute, because it’s quite a can of worms, and we can explain to people just how difficult it is to make sense of what the figure should be.
Is this a unique case? [00:27:24]
Rob Wiblin: But I haven’t heard of another nonprofit owning a business wanting to do this switch before. Is this a really innovative thing, or is there kind of a track record of nonprofit foundations making this kind of conversion already?
Rose Chan Loui: There’s kind of a parallel history with health organisations that went from being nonprofit to for-profit. Jill Horwitz, who is our faculty director, is an expert on all that. There is that example, and she does caution us that a couple of these private foundations that resulted are quite large, but in retrospect they should have been compensated a lot more than they were compensated.
So that’s probably our best example right now. But theoretically, this could potentially become the largest nonprofit there is, at least among US-based ones. It seems like there’s a very large one of over $100 billion based out of Denmark.
Rob Wiblin: I think it’s the group that made Ozempic and the other GLP-1 weight loss drugs. They have an enormous foundation.
Rose Chan Loui: Oh, they did that too?
Rob Wiblin: I believe that’s it.
Rose Chan Loui: Yeah, yeah. So they’re huge. And then there’s one in India. But certainly in the United States, I think our biggest one in terms of endowment is Gates, and they’re about $50 billion. Anyway, now I’m jumping again into valuation, so I will stop.
Is control of OpenAI ‘priceless’ to the nonprofit in pursuit of its mission? [00:28:58]
Rob Wiblin: Yeah, yeah. Just before we get to the valuation, I wanted to take a second to consider, is there any amount of compensation — any equity stake or any amount of money — that can really make the nonprofit foundation truly whole for giving up its strategic control of the organisation OpenAI in terms of pursuing its mission?
I think that the case against that is that OpenAI is one of the groups most likely to develop AGI, and this foundation is set up to make it go well. So by having a controlling stake in OpenAI, the nonprofit board gets maybe a 20% chance or something of staffing up; insisting on being in the room where all the decisions are being made, the room where it happens; and literally directing the major decisions about how this transition to an AGI-dominated world takes place — or at least, operating within the worldview of OpenAI, that this is going to happen, and this is how they could influence things.
So this is of enormous value to the pursuit of the organisation’s goal, perhaps a priceless one.
Now, it’s true that you could take some money and make grants to try to influence the development of AGI in a positive direction. But it’s kind of unclear that even trying to make hundreds of billions of dollars in grants would buy you as much ability to actually steer the direction of things in the way you want, as if you just actually retained control of the organisation that matters.
Because there have been various foundations that have tried to influence this sort of thing, but they tend to find it hard to give away more than some number of low billions of dollars. And even that takes years and is very difficult, and they’re not confident about the marginal grants that they’re making, because there just isn’t necessarily the ability to absorb that kind of capital on truly useful projects outside of the businesses that are doing the work. It’s hard to do this kind of stuff outside of the organisations that actually matter, which is the organisation that they control now.
Rose Chan Loui: Yeah. I totally agree, because the core of the purpose was not about making money: it was to raise money, but specifically so that they could guard against bad AI. So how do you compensate for that? No, I think you’re right.
I think the question really comes down to the facts as they are, which is that they’ve invited in so much external investment — can it go on this way? I think originally when it was structured, they were very careful to not have too much private benefit — but there’s an awful lot of private benefit going on right now, or at least it looks like that.
Rob Wiblin: Does the nonprofit foundation ever have to demonstrate that it is better to sell OpenAI? That that’s the best way to pursue its mission? Does it have to prove that to anyone?
Rose Chan Loui: I think that’s part of the analysis the nonprofit board has to do right now. Can they make the argument that this current structure, as carefully structured as it was, is not sustainable? And that the best thing that the nonprofit can do is just become independent, maybe? You know, I’m not sure they can act all that independently right now, or that they are, in fact, acting all that [independent]. I think they may try, but it’s really hard when you have $157 billion —
Rob Wiblin: Set against you.
Rose Chan Loui: Set against you, and you have only the $19 million sitting in your bank account. They do have good counsel, I can tell you that. I’m not sure who the investment banks are representing.
Rob Wiblin: I think Goldman Sachs might represent them.
Rose Chan Loui: Right. But they’re representing OpenAI as a whole. Not necessarily… I think because it’s more about OpenAI versus Microsoft.
Rob Wiblin: I see.
Rose Chan Loui: I can’t remember who’s representing who. I think they have Goldman and then Microsoft has Morgan Stanley. Is that right?
Rob Wiblin: So that’s bad news, I guess. Because what you really want is the nonprofit foundation to have its own totally independent legal counsel, and business analysts who are representing not the interests of the business, and not the interests of Microsoft, certainly.
Rose Chan Loui: They do have separate legal counsel. But I think it’d be nice if they also had their own valuation people. And maybe they do, but it’s not been published. It’s super complicated. Again, we keep ending up at valuation, however much we try to forestall that discussion.
Rob Wiblin: Yeah, that is what we’re going to talk about next. I do think that everyone on the board, I imagine, wants to do their job. They want to benefit the foundation. At least I see no reason to think that at least the seven members who’ve been added recently — Larry Summers and various others — that they’re acting in bad faith in any way.
It’s just that the deck is stacked a little bit against them. It’s going to take a lot of effort on their part to stick up for this organisation, given the intense attention that is going to be put on trying to drive down the valuation and get them to sell, even if maybe it’s not in the interests of the foundation’s mission in reality.
Rose Chan Loui: Right. And then I don’t know if this is jumping ahead also, but the other thing I keep coming back to is: What kind of cash do they even have? Because one advantage would be, “Just give us a lot of money and we’ll go our merry way.” And I don’t know what, become a watchdog organisation? But like you said, I think the difference was that they weren’t just giving out grants. They were giving out grants, but that’s not where they were having the most impact, or where they’re likely to have the most impact. I completely agree that I don’t know how you compensate for that.
Rob Wiblin: So if I was on the board, I think it would be very understandable to think, “Maybe in theory, in a different timeline, we could have maintained real control of OpenAI the business. But in reality, as things have panned out, the board isn’t really empowered to truly direct the operations of the business, because there’s just too many strong forces set against it. So maybe the best thing that we can do under this circumstance is to give up our stake in exchange for cash, and then use that cash in whatever way we think is best, operating independently of the business as a new entity that can pursue the mission of positive AGI using grants or whatever else.” It’s definitely an understandable take.
Rose Chan Loui: Or maybe some combination of cash and equity, so that you have money to do your work on a present basis and then still keep an interest in this future potential immense value.
Rob Wiblin: Yeah, yeah. We’ll come back to the cash and equity thing, because I think that’s a sleeper issue that I’ve heard almost nobody talk about, that actually could be absolutely central. It could be almost as important as all of the other things that we’re discussing. So I definitely want to bring that to people’s attention.
The crazy difficulty of valuing the profits OpenAI might make [00:35:21]
Rob Wiblin: But let’s turn to this valuation question: How does one figure out a fair valuation for the foundation’s control and equity stake or profit stake in OpenAI the business?
Rose Chan Loui: Again, I’m not an expert in this. I think the investment bankers have to figure it out. What I’m hearing is a number of $37 billion at the low end, to compensate them for their interest in the intellectual property, and then a high of about $80 billion, if you take $157 billion divided by two. And I don’t think we’re at $157 billion, because there are definitely other interests out there that have their stake. But I think it’s more a question of components: what are the components that need to be valued?
Rob Wiblin: Yeah, we can break down the components. As you were saying earlier, it’s a little bit difficult to visualise in audio, but basically there’s all of these other groups — including Microsoft and others — that have invested in the business, and they get all of the profits up to some point. And of course, the staff also have lots of equity, so they own a bunch of the profits, basically the early profits that the organisation would get.
So there’s all these different other interest groups that get paid first up to some level, beyond which they don’t get paid anymore. And it might be at a valuation of $1 trillion. I’ve heard different estimates. We really don’t know the breakdown. I’ve heard someone say $200 billion. Other people say $1 trillion. But there’s some threshold like that, which pays off all of the staff who have equity in it and all of the businesses that have invested in it. After that, the nonprofit gets all of the profit. I think basically they own 100% after that stage.
Now, how do you figure out how much that is worth? It’s so hard. You have to literally estimate what’s the probability that OpenAI at some point makes more than $200 billion in net present value of profits, or more than whatever it is, more than $1 trillion of net present value of profits.
And then you have to think, how much more is it going to be? And what timeline would it be? How much do we have to discount it? That’s probably not the main issue.
But the question is: What’s the probability that it’s more than $1 trillion in profits, and how much more? Is it going to be $10 trillion, $100 trillion? Because they have visions of changing everything.
Rose Chan Loui: Correct. That’s why I think that’s the hardest part, because if Microsoft gets its 100 times and they invest $10 [billion], but they’ve already invested more than that, and then that’s not counting anybody else’s interests. So it’s more than $1 trillion before [the foundation] gets [anything], if Microsoft actually gets its deal for its previous investments.
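To make the capped “waterfall” they’re describing concrete, here is a minimal sketch in Python. All the figures are illustrative assumptions drawn from the numbers floated in the conversation; the actual caps and thresholds are not public.

```python
# Toy sketch of the capped profit "waterfall" discussed above.
# All figures are illustrative assumptions, not OpenAI's actual terms.

def split_value(total_profits: float, investor_cap: float) -> tuple[float, float]:
    """Split a total-profits outcome between the capped claimants
    (investors and employees) and the nonprofit's residual claim."""
    to_investors = min(total_profits, investor_cap)
    to_nonprofit = max(total_profits - investor_cap, 0.0)
    return to_investors, to_nonprofit

# Hypothetical combined cap: somewhere between the $200 billion and
# $1 trillion figures mentioned in the conversation.
CAP = 1.0e12  # $1 trillion

# Three stylised outcomes for the net present value of OpenAI's profits.
for outcome in [200e9, 1e12, 10e12]:
    investors, nonprofit = split_value(outcome, CAP)
    print(f"outcome ${outcome/1e12:.1f}T -> "
          f"investors ${investors/1e12:.2f}T, nonprofit ${nonprofit/1e12:.2f}T")
```

Under a cap like this, the nonprofit’s claim is worth something only in the blowout scenarios, which is why the probability weight on those scenarios dominates the valuation.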
Rob Wiblin: Don’t they get wiped out if the nonprofit board decides they’ve achieved AGI? They lose something if that’s the case, right?
Rose Chan Loui: Yes. So the other question is that no one really agrees on AGI. I mean, they have this definition, but whether it has surpassed human intelligence is anyone’s guess. Which kind of gets back to your point, Rob: if they’re not in control anymore, they have even less transparency into that. And Sam Altman has said recently that he thinks AGI may be a moving target. You know, all of the incentive is to keep moving that point. I mean, certainly on the part of Microsoft, “We’ve not reached AGI yet.”
Rob Wiblin: As I understand it, Microsoft will want to say we haven’t achieved AGI, so that they can keep their access. I can’t remember exactly what the agreement is. Maybe you don’t know exactly either, but it’s something like they lose access to the IP.
Rose Chan Loui: Yes. They don’t have any more rights to the IP after OpenAI has reached AGI.
Rob Wiblin: So it’s in Microsoft’s interests to always say that they haven’t developed AGI, and I think in OpenAI’s interest to say that they have even before they have. And it’s so vague that who even knows, right?
Rose Chan Loui: Yeah. Certainly the nonprofit’s interest is to say, “Yes, you have” — because then all the other stuff is out the window, right? Then all of that belongs to the nonprofit.
Rob Wiblin: So we’ve got this kind of cascading profit thing which is difficult to value.
Rose Chan Loui: Based on a nebulous goal, based on this nebulous concept.
Rob Wiblin: So I think it is kind of the case that OpenAI probably either fizzles out and doesn’t make that much money — and the foundation probably would in fact receive basically nothing, very little in profits — or it does become one of the most important businesses of all time, probably the most important business of all time, in which case its valuation indeed definitely could be in the trillions, could be in the tens of trillions. It’s definitely not unimaginable if we see the kind of economic growth that, in the OpenAI worldview, we’re expecting over the coming decades or centuries.
So, because so much of the profit is concentrated in this minority of blowout, incredible profit scenarios, that’s actually good for the nonprofit foundation’s valuation. If OpenAI were guaranteed to have a mediocre outcome, a 100% probability of making $200 billion, then the foundation would be guaranteed to receive basically nothing. But if you say it’s got a 99% chance of nothing but a 1% chance of $100 trillion, then the nonprofit foundation is basically worth $1 trillion. So the fact that the outcomes are so high variance is definitely to the benefit of the nonprofit foundation.
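As a worked version of Rob’s toy numbers (purely illustrative, ignoring discounting, and assuming the whole $100 trillion in the blowout scenario lands above the investors’ caps):

$$
\mathbb{E}[V_{\text{nonprofit}}] \approx 0.99 \times \$0 \;+\; 0.01 \times \$100\text{T} \;=\; \$1\text{T}
$$

whereas a guaranteed $200 billion outcome, sitting entirely below the caps, leaves the nonprofit’s residual claim at $0.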
Rose Chan Loui: And I guess if I were the nonprofit and arguing on their behalf, I would say, “But look at all these investors who are coming in now!” They believe that this is a high-reward investment for them. And they think that cap is real, because they want to remove it.
Rob Wiblin: Oh, yeah. So they must believe that we’re going to receive a bunch of money, otherwise they wouldn’t be trying to get rid of us.
Rose Chan Loui: Yeah. If they thought that was enough, they wouldn’t bother to argue about it. So that’s kind of interesting: we think it’s a huge hurdle, but they’re all thinking, no, we want that removed. We don’t want to be limited to 100 times.
Control of OpenAI is independently incredibly valuable and requires compensation [00:41:22]
Rob Wiblin: So this is the valuation on the business side, of the future stream of profits that is hoped for. It’s a weird circumstance, but a somewhat familiar circumstance.
Another aspect that’s a bit weirder still is the fact that the nonprofit foundation has this in principle control of this potentially historical organisation. And that is something that they really value, or they should value, because it allows them to pursue their mission. It’s also something that I think other organisations, if it were up for auction, would really value an awful lot as well.
You know, if actual control of OpenAI as a business was put up for auction — all of the governments of the world could bid on it, and all of the businesses; Microsoft could bid on it, Google could bid on it — they would value this enormously. And we know that the controlling stake in any business usually gets a big premium, 20% or 40%. In this case, I could imagine it being a lot more, given how important people think it is. Like, this is more important than a furniture company or something like that.
Rose Chan Loui: Right, yeah. The number I’ve heard is also 40%. And I assume that’s 40% on top of the value of a share. So you add another 20% to 40%. But even then, is their share…? I mean, ownership-wise it’s 100%, but profits-interest-wise, it’s definitely not that. It’s 50-ish%. Right?
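For a rough sense of scale, taking the $37.5 billion equity figure that comes up later in this conversation as a purely hypothetical base value for the nonprofit’s stake, a 20% to 40% control premium would mean:

$$
\$37.5\text{B} \times 1.2 = \$45\text{B} \quad\text{to}\quad \$37.5\text{B} \times 1.4 = \$52.5\text{B}
$$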
Rob Wiblin: Does the nonprofit foundation have to insist on getting extra compensation for giving up its control of OpenAI?
Rose Chan Loui: My understanding is that the nonprofit’s counsel agrees that there should be a control premium paid to the nonprofit. So they’re not disagreeing at all in concept, and I think they’re doing what they can to get that fair compensation, including control premium. But what that is, how to value it, is kind of…
So I think people agree conceptually on the nonprofit side, including their counsel. So that is actually encouraging also.
Rob Wiblin: Yeah, definitely. And suggests they’re not being pushovers. They’re getting good legal advice, or decent legal advice.
So how do you value it? I was thinking about it in terms of putting it up for auction and saying, “What could we get on the open market for this?” That would be one way of conceptually trying to think how much this is worth. But it sounds like that’s maybe not the standard way of doing it?
Rose Chan Loui: When you say that, that would be for for-profit companies? They would just say, “Who would like to buy this?” Because it’s not for sale, right? So it’d kind of be a hypothetical?
Rob Wiblin: But isn’t it the case that in theory they could say, “We’re going to sell our controlling stake. We’re going to sell control of the organisation to Google, or we’re going to sell it to the US government, or we’re going to sell it to the UAE, or we’re going to sell it to the highest bidder — basically whoever’s willing to give us the most cash in exchange for it”?
Rose Chan Loui: Right. Oh, I see what you’re saying. That’s an interesting exercise. Now, whether or not Microsoft and the others would let that happen, that’s a whole other concern. Because I think Microsoft would not love it. But that is an interesting thing to see, because there are definitely investors coming around.
Actually, that leads to the question of: Is anyone going to end up with a controlling interest after Microsoft, after the nonprofit is spun out?
Rob Wiblin: Yeah. Because it seems like you’re saying ownership might be sufficiently distributed that there’ll be no one entity that has the 50% threshold to control it.
Rose Chan Loui: I mean, Microsoft has a big head start, at least in terms of profits interest.
Rob Wiblin: You definitely could ask: If we, the nonprofit foundation, went to Microsoft, and we negotiated really hard to sell control of OpenAI for the most amount of money that we could get, you’d at least think that they should try to get that amount of money, that that sort of compensation would be due.
But I guess you’re saying, because nobody might end up with control, maybe that’s the wrong hypothetical to be imagining. Because instead, you’re distributing the stakes among many different actors, and no one person or institution will have anything like the 50% threshold.
Rose Chan Loui: The other thing I’ve been wondering is whether the shares — the equity that the nonprofit gets out of this — will be voting or non-voting, or some combination. We know it’s not going to be majority voting, because that would give them control. But should the nonprofit board insist that they have some voting share so that they’re still in the room?
Rob Wiblin: So they can still speak up.
Rose Chan Loui: So they can still speak up. Even if they don’t drive it, they can still speak up on behalf of that original purpose. Because could there be some point where the for-profit becomes so profit-driven that, ethically — if the nonprofit sticks with its original purpose of protecting development of AI — it’s just not something they want to be involved in, despite the potential for a lot of profit? Sort of the same thing like when the universities are asked to divest: would they need to divest of their own baby because it’s gone so far astray?
Rob Wiblin: Yeah. I think that the nonprofit foundation should put some value on its ability to monitor what the business that it birthed is doing, which of course means maintaining at least some small number of voting shares.
Rose Chan Loui: Some number of voting shares.
Rob Wiblin: So we’ve heard this number of $37.5 billion in equity get thrown around. I guess we probably think the nonprofit board should do its best to bid that up, on the basis that it’s giving up control. That’s of enormous value.
Also, maybe that’s undervaluing the prospects of OpenAI as a business, that it has some chance of being this enormously valuable thing. And look at all these other businesses: look how desperate they are to get control and to get rid of this cap.
But I guess even if it’s $40 billion at the lower level, that would make them one of the biggest charitable foundations around. And if they could bid it up to more like $80 billion — which is a number that I’ve heard is perhaps a more fair amount, all things considered — then you’re saying they would be one of the biggest in the world, really.
Rose Chan Loui: Yes. And perhaps also most fair. Because, like you have pointed out, the nonprofit is probably not going to get cash in that amount, because OpenAI is so cash strapped. It’s interesting that there’s this gigantic valuation, but they’re so cash strapped. That’s why they keep having to fundraise.
So I think, just realistically speaking, it’s going to be hard for the nonprofit to get that much in cash. So what’s the best then? It seems like the best is to get some combination. Or maybe, since they haven’t had any distributions, maybe part of the deal is that they have to distribute cash in some amount every year.
But going back to your point, they are giving up a lot that really can’t be paid for. They no longer get to drive, they no longer get to say that the for-profit entities will follow the charitable purpose of developing AGI and AI safely for the benefit of humanity.
Rob Wiblin: And that’s a huge sacrifice to their mission.
Rose Chan Loui: That is a big sacrifice of mission. The nonprofit board would just have to get there by saying we just don’t have the ability to force that now, with so many external investors.
Rob Wiblin: So there’s two blades to the scissors here. One is: How much would other groups be willing to pay in order to get this stuff from us? What’s the market value of it?
And then there’s the other side, which is: What would we be willing to sell it for? How much do we value it as the nonprofit foundation? And it’s kind of unclear that any amount is worth it, or any amount that they’re likely to get. But they certainly shouldn’t be selling it for less than what they think is sufficient to make up for everything that they’re giving up in terms of pursuit of their mission.
They might think that $40 billion actually just isn’t enough; if that’s all that we’re being offered, then we should actually just retain control. So that’s another hurdle that you have to pass, is arguing that it’s a sufficient amount to actually be a good decision.
Rose Chan Loui: I guess the flip side of that — trying to think, sitting in their chairs — is that, because their purpose is to develop AGI, if you don’t get the additional investment, you can’t actually develop AGI. At least that’s what they’re saying.
Rob Wiblin: OK, so you could argue it down, saying if it’s controlled by the nonprofit foundation, then this company actually isn’t worth that much. It’s only worth that much if it can break free. And then which one is the nonprofit foundation owed? Is it the amount that it’s valued at if they control it or if they don’t? I think the latter.
Rose Chan Loui: Yeah. They can’t achieve purpose without the additional investment. I mean, that’s the whole reason they established the for-profit subsidiary in the first place, and the need for funding just doesn’t seem to go away.
But I think what’s so tricky is: how does the public know when AGI has been developed? Who’s going to tell us that, when all of the for-profit incentive is to say it’s not there yet?
Rob Wiblin: Yeah. Is there anything more to say on the dollar valuation aspect?
Rose Chan Loui: Just to remember that we do have the attorneys general involved now, so there is someone, I think, speaking up for the nonprofit other than the nonprofit itself. And I’m trying to think, Rob, if there are competing interests on the part of the two states? I think they’re going to want OpenAI to stay in California, because if it starts making money, then that’s a good thing.
Rob Wiblin: They’d like to tax it.
Rose Chan Loui: They’d like to tax it. But at the same time, I think at least California is very protective of charitable assets. So I think in the present case we’ll have that assistance with getting a fair deal for the nonprofit here.
Rob Wiblin: That’s great.
It’s very important the nonprofit get cash and not just equity (and few are talking about it) [00:51:37]
Rob Wiblin: Should we talk about this cash-versus-equity issue? Maybe we should explain to people why I think this is so central.
Rose Chan Loui: Yeah, go on.
Rob Wiblin: So you could imagine that they sell OpenAI, and all they get is equity — that is to say, they get shares, basically, in the future profits of the organisation. But very often in these situations, when the company is not yet mature, it’s not yet publicly traded, those shares can’t be sold. You have to continue to hold them, or you’re only allowed to sell them at this very incremental rate, until such time as the business decides that now we’re a mature business, now we’re going public, and everyone can sell their shares as they wish.
So if that is how things go, and the nonprofit foundation only receives equity, and it doesn’t have almost any cash on hand, then it’s not going to be able to make any grants now. It’s not going to be able to actually deploy the hypothetical resources that it has in the valuation to accomplish its mission — which is to guide the development of AGI in a positive direction.
But now is the critical time to be deploying resources to make that happen! If you wait until such time as OpenAI is already a mature business — it’s already making all of the profits, it’s already publicly traded — then we’re already in the AGI world. Probably by that stage, the technology has matured. It’s probably pretty clear what it looks like; there’s not so much room to guide it. And the amount of interest will have increased enormously, such that anything the foundation might hope to do is going to be a drop in the bucket.
So now is the crucial time to be funding governance work. Now is the crucial time to be funding technical AI safety work that might be relevant. And I think that’s the view of almost everyone who’s actually trying to pursue those missions seriously.
So they have to get cash soon; it would be totally irresponsible to only take equity and lock it up for decades. That would be completely inconsistent with their mission, to the point where it would almost seem negligent to me. I don’t know whether legally it’s negligent.
But anyway, I think this is one way that they could end up getting screwed, and not be able to actually do what they’re meant to do, that wouldn’t be immediately obvious. People could say, “But they got this huge amount of money!” — and yeah, but they can’t do anything with it for 10 years, so what really is the point?
Rose Chan Loui: Right. It’s like getting a bequest and you’re sitting around waiting for the other person to die. That’s why I think it probably will have to be, hopefully, some combination of cash and equity. But I think the equity, while not controlling, I would say that I would ask for some amount of it to be voting so that you have a real voice, even if you’re not controlling.
But you know, you make such a good point that I hadn’t really thought about, in terms of can they have impact independently? On the one hand they could just really be independent, so the nonprofit board really could protect purpose and safe development of AGI. But you’ve made the point that there’s all these other organisations out there doing that — and they don’t have, at least in your view, the same impact as OpenAI Nonprofit could by being inside the hen house.
Rob Wiblin: Yeah, yeah. I mean, people might have different views on that. To be clear, I’m not saying that the grants that they have made have been bad or haven’t been effective. But the question is, given that there’s already a lot of philanthropic interest in this area, does extra money make that much difference above and beyond the billions that are already being deployed in this area?
It’s hard. You can’t just deploy $100 billion or $10 billion all at once. These sectors, like AI governance, can only grow at a particular pace. And there’s lots of work that can only happen within the government itself; it can’t happen in nonprofits that are funded through grants.
So there’s a lot of limitations. People imagine that being a nonprofit foundation is just this fantastic position. And in some sense it is, but you also struggle to actually accomplish your mission. It’s not trivial to get the right people matched up with the projects and to grow everything really quickly.
Rose Chan Loui: I think where you’re having me conclude now is that this is a very different nonprofit. It’s not a foundation whose importance is in the giving of philanthropic money out. They do that, but really the reason they’re so important is because they’re in the middle of a corporation that is doing this work — and only from that position can they really make sure that what’s being done is good and safe for humanity.
Once they’re spun out, they’ll be more like any typical corporate foundation. They’re giving grants out to whatever, presumably still in the scientific artificial intelligence research world. And when I say control, I don’t mean just the voting: they won’t have the inside track to guide the work that’s being done. And that is pretty hard to compensate. It’s not a numerical amount. It’s a position that is rare.
Rob Wiblin: So I’m not sure that they should not sell it. I’m not sure that that actually is worse. But I think you could make a strong case. And I think if I was representing the nonprofit foundation’s interests, as legal counsel or as a valuation person, I would be making all these arguments that it’s of almost irreplaceable value.
So we’d need an enormous amount of compensation to be willing to give up this plum position in the ecosystem. And I would point out how difficult it is for the foundation to accomplish its mission just by making grants. Certainly if the money is locked up in equity, well, what use is that to our mission? You’ve got to give us something better. That’s what you would do in a tough negotiation if you were really backing that group’s corner. And I hope they get the advice that they need to do so.
Rose Chan Loui: Right. Yeah, I think that’s a really important point, Rob. I was looking at it very much more from dollars and cents, and how you get that. But there is a part of it that is irreplaceable.
Rob Wiblin: I mean, let’s say hypothetically that the valuation was super low, that somehow they got talked down to some ridiculous amount, like only $10 billion. That would kind of be negligent on their part. Perhaps it would be an accident. But how could that get challenged? I suppose you’re saying the attorneys general in California or Delaware could say, “This is crazy. This is inappropriate.” Would anyone else have standing to object or to say this foundation is corrupted or it’s not doing its job?
Rose Chan Loui: I think a couple things. Elon Musk is illustrating one: he’s a previous donor, and is saying that misrepresentations were made and so he has standing to bring suit.
The attorneys general could also start what’s called a special interest lawsuit or something like that. Let’s say they just don’t want to bring the litigation: they could appoint, say, one of these AI research organisations that really cares about this to bring a suit on their behalf. It’s rare, but that could be done.
And there are a couple of examples; the ones I know of are ones that Jill [Horwitz] has cited. There’s a case in Hawaii where the attorney general was actually on one side and the neighbours were on the opposite side, and the courts allowed the neighbourhood collective to bring suit to defend a trust and not allow a food concession to be on the property.
So if there is a group with a special interest in it, but not a direct economic interest, the AG could do that. But I don’t know; I think California would probably step in. I think the best result here, at least speaking as a former practitioner, is that they reach a deal that the AG can support and that the AG thinks protects the public’s interest. But you would have to get over the hurdle of showing that the current structure, and the influence of third-party investors, makes the nonprofit board’s position untenable in the long term.
I think you just have to say that the reality is that they can’t continue to do this within this structure, no matter how they tried to make the nonprofit / for-profit combination work.
Rob Wiblin: So you’re saying part of the argument would be that the current structure in practice isn’t allowing the nonprofit foundation to pursue its mission because it’s just outgunned?
Rose Chan Loui: It needs the money.
Rob Wiblin: Oh, I see. It needs the cash. It needs the money. There’s a little bit of an irony here: of course, they could give up 1%, get a whole lot of cash for that, and use it to skill up and staff up, and then they could try to back their corner. But I suppose the challenge is that OpenAI the business doesn’t want that to happen, and the investors don’t really want that to happen: they don’t want to see the nonprofit foundation — with control and empowered and with lots of its own independent staff — having its own thoughts and imposing itself.
Rose Chan Loui: I think it was one thing when it was vague what kind of profits OpenAI was going to make, and it was a startup. And now that they see… And now they’re so connected with Microsoft, for example, just operationally. So if your investors are, in essence, revolting, and they’re like, “We won’t put any more in unless the nonprofit control goes away,” then from that perspective, the nonprofit can’t continue to pursue its charitable purpose.
Rob Wiblin: I see. OK. So the argument would be not that it would be impossible in principle for the nonprofit foundation to insert itself and to be more assertive and to pursue its mission, but rather that it’s got itself into this tangle — where it’s now so dependent on Microsoft; it’s now so dependent on all these other interests that hate it and want to get rid of it — that it’s now made itself too vulnerable, and now it has to accept this kind of exit strategy that saves some ability to pursue its mission, even if ideally it could have gone down a different path five years ago.
Rose Chan Loui: Right.
Rob Wiblin: And that’s what they’ll say to the California attorney general?
Rose Chan Loui: That I don’t have ears on. I mean, not yet, not now. But I imagine, in answer to your question, that that’s what they would say. You know, it was like that when they formed the for-profit, and the for-profit and the investors were willing to agree to all those terms at the beginning. But if they’re not willing to agree to those terms anymore, and they won’t put in any more money unless those terms are lifted, then where do they go?
Rob Wiblin: OK. The idea is that the business might just fall apart because it needs a constant infusion of cash. It needs a constant infusion of investment.
Rose Chan Loui: Or they’ll be beaten to the finish line.
Rob Wiblin: I see. Anthropic or Google.
Rose Chan Loui: Right, right. We haven’t really talked about that. They’ve done a lot, but they’re not the only ones in this game.
Rob Wiblin: Yeah, right. So I can kind of see that case. It does rely on OpenAI the business not being able to get investment from elsewhere to continue fueling its work and its growth. I guess there are a lot of question marks about how true that really is. Could they really not get any investment from someone else? Wouldn’t SoftBank or some other group be willing to put in money at the 100x return multiple? Maybe not Microsoft, because Microsoft wants to stand its ground and get rid of the nonprofit foundation, but other groups might be willing to stump up some investment.
Rose Chan Loui: That’s interesting. Maybe you go outside of that network of tech companies and go to financial institutions or whatever.
Rob Wiblin: That’s the kind of thing that I guess the California attorney general might want to come back with and probe: How true is this argument, really?
Rose Chan Loui: Yes. Maybe we’ll have given them some other questions that they can ask.
Rob Wiblin: Fingers crossed. If you’re listening in.
Is it a farce to call this an “arm’s-length transaction”? [01:03:50]
Rob Wiblin: Another puzzle for me is: As I understand it, legally, this sale has to happen at arm’s length. Obviously all of these groups are kind of entangled with one another: you’ve got the business, you’ve got the nonprofit, you’ve got the investors and so on. But the sale of this one part of this broad entity to the other part of the entity has to happen in such a way that the interests of the nonprofit foundation aren’t corrupted. I guess the legal term for that is it has to happen “at arm’s length”: it has to be sort of independent in some way. Is that right?
Rose Chan Loui: Yes. Because that’s how you’re supposed to determine fair market value and all that: it should be an arm’s-length negotiation. So, going back to the earlier point, that should mean that the nonprofit has its own counsel, and I would say hopefully also its own valuation expert. Because the nonprofit and OpenAI the for-profit are in some ways the same, but in other ways they’re not: their interests are not completely aligned, right?
Rob Wiblin: It just seems so hard for this to be an arm’s-length transaction in reality. Because the CEO of the business who’s pursuing this is on the board. I guess he might have to recuse himself from the vote, but surely he’s part of the discussions. You would imagine that it’s the business that is proposing this thing.
Rose Chan Loui: Well, he’s definitely not, because now he wants equity.
Rob Wiblin: I see. So he can’t vote. Is that the idea?
Rose Chan Loui: Well, I just mean his interests are at odds also.
Rob Wiblin: Completely at odds.
Rose Chan Loui: Yeah, yeah. Even more now, because in the past he could say that he didn’t have equity, so he wasn’t making money from the for-profit, at least not directly, so he had no conflict with the nonprofit’s purpose and activities. But now, again, he becomes one of the other people wanting money out of this.
Rob Wiblin: He’s like another investor.
Rose Chan Loui: He’s like another investor, yeah.
Rob Wiblin: And I guess this is true to a greater or lesser extent for almost all of the staff at OpenAI who own equity in the company: that they’re all at odds with the nonprofit foundation in the sense that every dollar they manage to squeeze out of it, they get to keep for themselves.
Rose Chan Loui: Right. That’s why, when he was ousted, everyone said, “Look, all the employees want him back!” Well, yes, because they all have an interest in the for-profit. And really it’s just the nonprofit board at the nonprofit level. I mean, I don’t know, I’d have to look at the 990; they might have an employee or two — but all the employees who were at the nonprofit level initially got moved down to the for-profit, and now have an interest through the holding company. So there are not a lot of people standing up for that nonprofit.
Rob Wiblin: Yeah, it requires a heroic effort, as you’ve said.
Rose Chan Loui: Yes, that’s what we said. It’s a heroic effort.
Rob Wiblin: So in terms of establishing that this has truly happened at arm’s length, I would think that Altman would have to have nothing to do with it. He would have to not be part of the conversation almost at all. He certainly couldn’t propose it, because he’s completely conflicted.
And all of the valuation would have to be done by lawyers and banks that have a total fiduciary obligation to the nonprofit only, and not to the for-profit in any way.
And I guess you’d even have to find out whether any of the other people on the board are conflicted through financial relationships, or conceivably even personal relationships where pressure might be applied to them.
It also struck me that, as you pointed out earlier, the for-profit has taken $6.6 billion in investment from companies, and all of that has to be given back if this conversion to a for-profit doesn’t happen within two years. I feel like this is basically holding the business hostage and saying, “If you don’t do what we want, then we’re going to hold a gun to your head.”
How can this be at arm’s length? It feels so crazy.
Rose Chan Loui: Yeah. I think the only people not conflicted are the nonprofit board members who really don’t have any interest in the for-profit activities of OpenAI. Sam Altman can present his case, but he can’t be involved in the discussion and the eventual vote of the board. And the vote of the board will be required to approve this.
Rob Wiblin: Does it require just a simple majority? Or supermajority or unanimity?
Rose Chan Loui: I’d have to look at the bylaws. I think it’s probably a majority. Unless someone had the foresight to make it a supermajority.
Rob Wiblin: Probably just a majority.
Rose Chan Loui: Right. Yeah, it would have to be a majority of the board members who don’t have any conflicts of interest. And I think the new ones don’t have a conflict of interest.
Rob Wiblin: People like Larry Summers, I think, have been deliberately chosen with this in mind. And I think there’s also an ML researcher on there, and someone from national security. So these are people I’m putting my faith in to stick up for the interests of this organisation — they’ve been put on there, I think, because they don’t have any direct relationship, and the hope is that they can stick up for it. It’s just that they have to be heroes.
Rose Chan Loui: Yeah. I think there was an assumption, when the former ones were ousted and the new ones came in, that these were chosen because they were friendlier to Sam Altman. But hopefully friendlier doesn’t mean that they don’t exercise independent judgement about what’s best for the nonprofit.
How the nonprofit board can best play their hand [01:09:04]
Rob Wiblin: Yeah. Would you have any other advice for the folks on the board? I mean, I really do think that we should assume good faith and that they’re trying to do their best. What would you tell them if they called you in to give them advice, other than what you’ve said already?
Rose Chan Loui: Really, just to remember their fiduciary duties. And, despite what the public or the investors might want to see, to really think through what’s best for the nonprofit and what’s best for its purpose.
And to remember that they are really giving up a lot by stepping out of the control position — and even though that position is irreplaceable, they should make sure that sufficient compensation goes to the nonprofit to pay it back for that.
And then hopefully they figure out how the nonprofit best serves the community once it’s jettisoned from its controlling position here. Because there are options there, and I don’t know what the best option is for how they prioritise what they do.
Rob Wiblin: Yeah, that’s a whole other aspect of this that I guess we might end up coming to at some future time.
Rose Chan Loui: Potentially, with the size of the endowment they get, maybe they can have an impact that’s different from the watchdog organisations that exist now. I don’t know how well funded those organisations are.
Rob Wiblin: Yeah. An argument I can make on the other side is that in the past we haven’t really known what to fund. It’s all seemed quite speculative, a bit pie-in-the-sky. But now there are concrete projects that have huge compute requirements, that have huge infrastructure requirements. Some of the technical safety research is just getting quite expensive in absolute terms: we’re talking tens of millions, possibly hundreds of millions, of budget just to have all of the hardware that you need in order to do it.
So that’s a way you might be able to deploy serious resources, if you had them as cash rather than equity, that really could push forward the field and the science in a useful way. That’s an opportunity they have that people didn’t have so clearly five years ago.
Rose Chan Loui: Rob, I don’t know where this goes, but what if they decided that one of the other for-profit organisations, let’s say the one that Ilya has gone off and started, is in a better position to develop AGI safely? I suppose they could put their money behind that if they had cash. I hadn’t thought of that until now, but if they really were independent, they could decide which horse to back.
Rob Wiblin: Yeah. And choose a different horse if they want to.
Rose Chan Loui: And choose a different horse, potentially.
Rob Wiblin: It’s totally true. And they could choose to invest in it on a for-profit basis as well. They could try to influence things that way.
Rose Chan Loui: Right, right.
Rob Wiblin: I mean, being realistic, that probably won’t happen. And here’s something I hope doesn’t happen, but that you could imagine: them becoming quite conservative. Given their position and the scrutiny they’re under, they might not be willing to fund more speculative, hits-based giving — stuff that could backfire or make them look bad. Funding people with the kind of cutting-edge ideas that people don’t agree with early on is going to be tricky for a foundation with that kind of public exposure.
And that’s unfortunate, because just as investing in OpenAI early was a crazy idea but turned out to be massive, it’s the high-risk giving that usually ends up mattering the most in the end. So I hope that they don’t become too conservative.
Rose Chan Loui: Now, you know more about this industry than I do, the actual operations of it: when do you think they’ll start turning a profit?
Rob Wiblin: OpenAI? Well, I think probably not for a very long time, because they will just want to keep reinvesting all of the profits in more compute. That would be my guess. So when they would turn a profit…
Rose Chan Loui: If distributions to OpenAI the nonprofit depended on that, when would that happen?
Rob Wiblin: I guess within the OpenAI worldview, I think most of the staff are expecting economic growth to really pick up. So we’re used to like 2% or 3% economic growth, and they’re expecting that economic growth could hit 10%, 20%, 30%, more than that as a result of automation of the economy through artificial general intelligence. So you could see enormous economic growth in general. People will become way richer, at least if things go well. That’s the vision.
But there would presumably be just a lot of demand for building more computational infrastructure, building more robots, building more factories to construct all of these things. Their appetite for further investment and for further revenue to fund their growth could be very large.
So in terms of the number of years, I’m not sure. But in terms of how different the world might be by the time they actually start paying dividends to shareholders, I think their picture would be that we’d be in a very different world.
Rose Chan Loui: So it’s really hard to guess at that. Which goes back to your point that, even though it might be hard, they might want to insist on some upfront cash.
Rob Wiblin: Or at least the right to sell it at a reasonable pace. You don’t want it all to be locked up. Maybe you want to be able to sell 10% of it every year, basically.
Rose Chan Loui: OK, right. Because they get their shares and then they can… Oh, yeah.
Rob Wiblin: Because if you were disbursing 10% every year, I think that’s about as much as they could probably sensibly disburse anyway. And so that seems like a reasonable pace to go at, in my mind.
I suppose people who think that what really matters is the next three years would argue that you’ve got to spend it all almost immediately: 2028 could be the year that we develop much-smarter-than-human intelligence, and that upends so much. But I think you’d want to be diversified across different scenarios, and 10% every year is maybe a reasonable middle ground.
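To make the disbursement arithmetic here concrete, the following is a minimal sketch, not something from the conversation: it assumes a hypothetical $40 billion stake, a flat valuation, and exactly 10% of the remaining balance sold and granted each year.

```python
# Minimal sketch of a declining-balance disbursement schedule.
# Illustrative assumptions (not from the conversation): a $40B stake,
# a flat valuation, and exactly 10% of whatever remains sold and
# granted each year.

endowment = 40.0  # hypothetical starting stake, in $ billions
rate = 0.10       # fraction of the remaining balance disbursed annually

for year in range(1, 11):
    disbursed = endowment * rate
    endowment -= disbursed
    print(f"Year {year:2d}: disburse ${disbursed:4.1f}B, ${endowment:5.1f}B remaining")

# After 10 years, 1 - 0.9**10 ≈ 65% of the original stake has been
# disbursed: serious money moves early (covering short-timeline
# scenarios), but the endowment is never fully spent down (hedging
# the longer-timeline ones).
```

Under these assumptions the first year’s grants total $4 billion and the tenth year’s about $1.5 billion, which illustrates the middle ground Rob describes: front-loaded enough to matter if timelines are short, without exhausting the endowment if they are long.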
Rose Chan Loui: What’s your personal view of how much risk there is with development of AGI?
Rob Wiblin: It depends on the day. It depends on which side of the bed I got out of in the morning. I think there’s more than a 10% chance that we end up going extinct, basically, one way or another.
Rose Chan Loui: Maybe we don’t want to end there.
Rob Wiblin: No. Yeah, we don’t want to end there. Hopefully we can go a couple more minutes. But I also do buy the bull case. I do also think that there’s a greater chance that things go incredibly well, and that we get to live through a renaissance of brilliant technology being invented and the economy growing and lots of things being automated, like drudge work that we previously had to do. And lots of wonders coming up.
We just have to skate past all of these risks that are created by the big revolutionary changes that come with such an important technology. And if we can get past all of those hurdles, then we get to enjoy the fruits of our labour.
Rose Chan Loui: Right, right.
Who can mount a court challenge and how that would work [01:15:41]
Rob Wiblin: Maybe just a final question: I guess you think that the baseline scenario is that the for-profit, the nonprofit, and the California attorney general will negotiate something that they all think is acceptable and that passes legal muster, and then they’ll go ahead with that. And I guess that’s good, because two of the three groups in the room will have an interest in the nonprofit or in the charitable purpose.
If it doesn’t go down that path, what do you think is the chance that it could end up being reversed? Or what would be the remedy if the courts thought that this had been an unacceptable process in some way?
Rose Chan Loui: I think it’s all still going to come down to compensation. So if they got sued for not being fair to the nonprofit, I think the courts would redo the valuation process, the analysis, and say, “You need to give this much more. You weren’t fair here.” I can’t see them really saying, “You must stay with this structure.”
Rob Wiblin: Because that would be the courts imposing themselves. I guess it would require a court to think it knows how to pursue the nonprofit’s mission better than the board members do, which is a high bar. They’re not really going to feel like they’re in a position to do that.
Rose Chan Loui: Yeah. It’s not such a clear violation of fiduciary duty. You might question their analysis, but I think they’re in a tough enough position that you can’t just say, “You can’t restructure.”
Rob Wiblin: That makes sense. OK, so they would go back and redo the valuation and basically demand that they receive more. That makes sense. And the most likely way, I suppose, that that would happen is that the California attorney general isn’t pleased with how things went, and then they take it to court and they say, “No, you’ve got to give more”?
Rose Chan Loui: Yes. Or they’re just going to say, “We don’t approve.” And then maybe OpenAI has to sue in order to —
Rob Wiblin: Do they have to approve it?
Rose Chan Loui: They do have to approve it, because you’re required to give notice of conversion from a public benefit corporation to either a mutual benefit corporation (which is another type of nonprofit) or to a for-profit. You also have to give notice when you have a significant transaction affecting a significant amount of your assets.
Rob Wiblin: I didn’t realise that they had to affirmatively support it. That’s great.
Rose Chan Loui: Yeah, you have to give notice. And they’ve already gotten ahead of it, because they’re in conversations with the AG, which is the smart thing to do. You know, we’re all talking about courts and stuff, but in reality, if you have good counsel, they will try to settle.
Rob Wiblin: It will be sorted out ahead of time. It should never get to that point.
Rose Chan Loui: Yeah, it should not get to that point.
Rob Wiblin: And you know a bit about… I don’t even know who the California attorney general is. I guess I knew who it was many years ago, but…
Rose Chan Loui: It’s Rob Bonta. Another Rob.
Rob Wiblin: What should I think of Rob Bonta?
Rose Chan Loui: Well, he has said that he will protect the public’s interest in charitable assets. That’s where we think the action will likely be. It was interesting to see Delaware weigh in, though, because they’re known as pretty hands-off regulators. But I think this is big enough that they’ve decided that they want to look at it too.
Rob Wiblin: Remind me what they said?
Rose Chan Loui: They just issued a list of questions, inquiries, and OpenAI said they would comply. I think the list may include some of the things we’ve talked about.
Rob Wiblin: I guess I’m coming away from this conversation feeling a little bit more optimistic. I suppose I’ve tried to paint the picture of how this could be very difficult and how it’s a very interesting and exciting thing to watch. But fingers crossed, people step up and do their jobs, and actually we end up with a pretty good outcome in the end.
Rose Chan Loui: Yes, I definitely think that’s our hope now. When we first wrote the article, we wondered whether anyone was going to look at this. Because, you know, initially it was sort of like, “Who are these people? Why are they ousting Sam Altman? What do they know about AI, and who are they to think that they can do this?”
And we’re like, “Wait, they’re a nonprofit board! They have a specific purpose, and the reason they’re not involved is very intentional.” So I think from that point to now, there’s definitely been progress and attention to the fact that there’s a reason that that nonprofit was established in the first place, and the fact that it started it all.
Rob Wiblin: Yeah, that’s something that most people were missing. Most journalists were missing, for sure.
Rose Chan Loui: Yes, it’s like it all started with a nonprofit, so it needs to be taken care of. But hopefully they figure out how to remain relevant. I love that word you used: that they remain front and centre in terms of protecting the development of AI. And the optimistic way to look at it is that maybe they can look at it more globally, and they’ll have — probably more than most of the organisations trying to protect humanity — a much bigger chequebook.
Rob Wiblin: Absolutely, yeah. Fingers crossed.
Rose Chan Loui: We can end there. That’s the most optimistic ending I can come up with.
Rob Wiblin: Brilliant. I really appreciate you and your colleagues drawing attention to this early on. You were on the ball and you saw something important when I think a lot of people were missing it. So depending on how things go, maybe we can check in and see whether the optimistic story has panned out in a year or two.
Rose Chan Loui: Yes, sounds good. Thanks so much for inviting me on here.
Rob Wiblin: It’s been so much fun. Thanks for joining.
Rob’s outro [01:21:25]
Rob Wiblin: All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.
Video editing by Simon Monsour. Audio engineering by Simon Monsour, Ben Cordell, Milo McGuire, and Dominic Armstrong.
Full transcripts and an extensive collection of links to learn more are available on our site, and put together as always by Katy Moore.
Thanks for joining, talk to you again soon.