In a recent lawsuit filed in San Francisco, Elon Musk, one of the original founders of OpenAI who subsequently resigned from the organization, alleges that the maker of ChatGPT has violated its founding agreement as a nonprofit venture dedicated to developing AI for the benefit of humanity. Yet today, especially given its deep relationship with companies such as Microsoft, OpenAI is openly pursuing profit, shirking the humanitarian mission Musk claims he initially invested in. Worse, the failed ouster of Sam Altman in November 2023 suggested that the cautious side of OpenAI has been lost. For an organization whose purported initial mission was to advance society through artificial intelligence, it may now be placing society in much greater danger. This is a clear example of a project that went astray and failed to account for the values and interests of a major stakeholder, Elon Musk.
Organizations and large-scale projects can go awry for many reasons. At one extreme, perhaps OpenAI always harbored a hidden agenda: advancing a technology for profit while announcing an altruistic mission of advancing society. Founded in 2015, OpenAI’s publicly stated motive was pure – developing “safe and beneficial” artificial general intelligence (OpenAI, 2018). But in 2019, with its partnership with Microsoft and a huge cash infusion of $1 billion, OpenAI transitioned to a hybrid structure: a “capped-profit” venture in which investor returns are capped at 100 times the original investment. This structure allows the organization’s for-profit subsidiary, OpenAI Global, LLC, to legally attract outside investment. It also allows OpenAI to distribute equity to its employees, which, in a high-tech industry, is arguably essential to attracting the best talent.
At another extreme, perhaps OpenAI was largely innocent of these profit motives in 2015. Founded by a group of tech visionaries including Elon Musk, Greg Brockman, Sam Altman, Ilya Sutskever, John Schulman, and Wojciech Zaremba, the organization’s main motivation was to collectively create an environment for the ethical development of artificial intelligence. But along the journey, changes occurred. For example, Musk, who provided much of the original capital under the impression that the technology was being created for the betterment of society, left in 2018, citing a potential future conflict of interest with his role as CEO of Tesla, which was developing its own AI for self-driving cars. In addition, as any technology company can attest, talent is the biggest ingredient of success. OpenAI is no different, and when Microsoft invested $1 billion in 2019, the window to profit opened completely. Thus, as far as Musk was concerned, the original project went sideways. From OpenAI’s perspective, the 180-degree change in the company’s mission appears to have resulted largely from the everyday actions and reactions of its competitive environment.
The truth is most likely somewhere in between. As with any grand plan involving many major players, motivations vary. It is entirely plausible that all the founders were genuinely interested in the rapid development of AI and genuinely sought ways to benefit humanity. But once the journey started, there were infinite paths and possibilities. The conflict at the heart of this lawsuit – a nonprofit mission colliding with for-profit motives – is not unique to OpenAI. Goodwill, for example, has a for-profit retail operation that sells donated goods. The American Automobile Association (AAA) has numerous for-profit subsidiaries that sell insurance and travel services. AARP, the biggest membership organization in the U.S. with about 38 million members, operates mainly as a nonprofit but has many for-profit subsidiaries that offer insurance products and financial services. As CEO of the project management consulting and training firm PMO Advisory, I work closely with nonprofits whose employees have questioned the revenue motives of their development offices. But unlike OpenAI, none of these nonprofits has received a $1 billion infusion from a mammoth corporation like Microsoft, forming an intimate partnership in the process.
Projects can go astray for many reasons. In the case of OpenAI, and especially with the introduction of ChatGPT, the impact on our society has already been significant. Whether the future will be darker in the pursuit of profit, as Musk alleges, lighter as the organization adheres to its stated mission, or filled with more complex shades, only time will tell. One thing is certain, at least from the perspective of an ex-founder and early major stakeholder: Musk’s donations of over $44 million between 2016 and 2020 appear to have contributed to the creation of one of the greatest competitors in the field of AI.
The key lesson for project executives is the necessity of being open and transparent about motivations and of managing subsequent changes to the charter. This is why developing, and maintaining updates to, a project charter is vitally important for the organization: it helps align the interests and expectations of key stakeholders. When the guiding charter promises to act in the “…best interests of humanity throughout its development” (OpenAI, 2018), contradictory actions and activities are confusing to early loyal project stakeholders at best, and an invitation to lawsuits at worst.
Reference:
OpenAI. (2018, April 9). “OpenAI Charter: Our Charter describes the principles we use to execute on OpenAI’s mission.” OpenAI.
Dr. Te Wu is CEO and CPO of PMO Advisory, a project management training and consulting firm that establishes projects, programs, portfolios, and PMOs for companies, including Global 500 and nonprofit organizations.
He is an Associate Professor at Montclair State University and Chair of Project Management Institute’s Portfolio Management Standard Committee. Te is certified in Portfolio, Program, Project, and Risk Management.