A few times in a generation, a product comes along that catapults a technology from the fluorescent gloom of engineering department basements, the fetid teenage bedrooms of nerds, and the lonely man caves of hobbyists—into something that your great-aunt Edna knows how to use. There were web browsers as early as 1990. But it wasn’t until Netscape Navigator came along in 1994 that most people discovered the internet. There were MP3 players before the iPod debuted in 2001, but they didn’t spark the digital music revolution. There were smartphones before Apple dropped the iPhone in 2007 too—but before the iPhone, there wasn’t an app for that.
On Nov. 30, 2022, artificial intelligence had what might turn out to be its Netscape Navigator moment.
The moment was ushered in by Sam Altman, the chief executive officer of OpenAI, a San Francisco–based A.I. company that was founded in 2015 with financial backing from a clutch of Silicon Valley heavy hitters—including Elon Musk, Peter Thiel, and fellow PayPal alum and LinkedIn cofounder Reid Hoffman. On Nov. 30, some seven years after the company’s launch, Altman tweeted: “today we launched ChatGPT. try talking with it here,” followed by a link that would let anyone sign up for an account to begin conversing with OpenAI’s new chatbot for free.
And anyone—and everyone—has. And not just to chat about the weather. Amjad Masad, a software CEO and engineer, asked it to debug his code—and it did. Gina Homolka, a food blogger and influencer, got it to write a recipe for healthy chocolate-chip cookies. Riley Goodside, an engineer at Scale AI, asked it to write the script for a Seinfeld episode. Guy Parsons, a marketer who also runs an online gallery dedicated to A.I. art, got it to write prompts for him to feed into another A.I. system, Midjourney, that creates images from text descriptions. Roxana Daneshjou, a dermatologist at Stanford University School of Medicine who also researches A.I. applications in medicine, asked it medical questions. Lots of students used it to do their homework. And that was just in the first 24 hours following the chatbot’s release.
There have been chatbots before. But not like this. ChatGPT can hold long, fluid dialogues, answer questions, and compose almost any kind of written material a person requests, including business plans, advertising campaigns, poems, jokes, computer code, and movie screenplays. It’s far from perfect: The results are not always accurate; it can’t cite the sources of its information; it has almost no knowledge of anything that happened after 2021. And what it delivers—while often smooth enough to pass muster in a high school class or even a college course—is rarely as polished as what a human expert could produce. On the other hand, ChatGPT produces this content in about a second—often with little to no specific knowledge on the user’s part—and a lot of what it spits out isn’t half bad. Within five days of its release, more than 1 million people had played with ChatGPT, a milestone Facebook took 10 months to hit.
Artificial intelligence technology has, over the past decade, made steady inroads into business and quietly improved a lot of the software we use every day without engendering much excitement among non-technologists. ChatGPT changed that. Suddenly everyone is talking about how A.I. might upend their jobs, companies, schools, and lives.
ChatGPT is part of a wave of related A.I. technologies collectively known as “generative A.I.”—one that also includes buzzy art generators like Midjourney and Lensa. And OpenAI’s position at the forefront of the tech industry’s next big thing has the hallmarks of a startup epic, including an all-star cast of characters and an investor frenzy that has crowned it with a reported valuation of $29 billion.
But even as its recent surge provokes envy, wonder, and fear—Google, whose lucrative search empire could be vulnerable, reportedly declared an internal “code red” in response to ChatGPT—OpenAI is an unlikely member of the club of tech superpowers. Until a few years ago, it wasn’t a company at all but a small nonprofit lab dedicated to academic research. Lofty founding principles such as protecting humanity from the dangers of unrestrained A.I. remain. At the same time, OpenAI has gone through an internal transformation that divided its original staff and brought an increased focus on commercial projects over pure science. (Some critics argue that releasing ChatGPT into the wild was itself dangerous—and a sign of how profoundly OpenAI’s approach has shifted.)
An expanded partnership with Microsoft, announced this week, that includes as much as $10 billion in new capital could result in the software giant capturing the lion’s share of OpenAI’s profits for years to come. That deal is likely to deepen the perception that the once idealistic endeavor is now primarily concerned with making money. That said, documents seen by Fortune reveal just how unprofitable OpenAI’s business is currently.
Altman, the 37-year-old cofounder and CEO, embodies OpenAI’s puzzling nature. A serial tech entrepreneur known more for business savvy than for feats of engineering, Altman is both the architect of OpenAI’s soaring valuation and its buzzkiller-in-chief—speaking out publicly about how far ChatGPT is from being truly reliable. At the same time, he sees the technology as a step forward in his broader, quixotic corporate mission to develop a computer superintelligence known as artificial general intelligence, or AGI. “AGI is probably necessary for humanity to survive,” Altman tweeted in July. “our problems seem too big [for] us to solve without better tools.”
It’s an unusual guiding philosophy for a moneymaking enterprise, especially considering that some computer scientists dismiss Altman’s obsession as the stuff of fantasy. “AGI is just silly,” says Ben Recht, a computer scientist at the University of California at Berkeley. “I mean, it’s not a thing.”
And yet, with ChatGPT, Altman has turned OpenAI—and the broader A.I. mission—into the thing captivating the tech world. The question is whether the partnership he has forged with Microsoft can fix ChatGPT’s flaws and capitalize on its early lead to transform the tech industry. Google and other titans are hard at work on their own A.I. platforms; and future, more polished software could make ChatGPT look like child’s play. OpenAI may someday find that, much like Netscape’s short-lived browser reign, its breakthrough has opened a door to a future it isn’t part of.
On a Thursday evening in mid-January in San Francisco, Altman makes a rare public appearance. Dressed in a gray sweater, blue jeans, and a pair of groovy, brightly colored tie-dyed sneakers, the CEO walks into a roomful of investors, techies, and journalists, all gathered to glean any dish about ChatGPT or the imminent funding round. When his interviewer, Connie Loizos, the founder of StrictlyVC, a media company focused on venture capital, asks him about the media furor, Altman replies, “I don’t read the news, and I don’t really do stuff like this much.”
The event, on the 46th floor of the Salesforce Tower, is standing room only. One of the speakers during a fintech panel that takes place before the interview even tells the crowd that she knows they’re “all waiting for Sam Altman.”
But despite the buzz, and the circulating rumors of the Microsoft investment, Altman seems to go out of his way to dampen the excitement. “One of the strange things about these technologies is that they are impressive but not robust,” he tells the crowd. “So you use them in the first demo; you kind of have this very impressive, ‘Wow, this is incredible and ready to go’ [reaction]. But you see it a hundred times, and you see the weaknesses.”
That kind of caution seems to be the official mode at OpenAI’s headquarters, situated in an old luggage factory in San Francisco’s Mission District. And indeed, if ChatGPT is A.I.’s Netscape Navigator moment, it is one that very nearly never happened—because OpenAI almost killed the project months ago.
The chat interface that allows users to converse with the A.I. in plain English (and many other languages) was initially conceived by OpenAI as a way to improve its “large language models,” or LLMs. Most generative A.I. systems have an LLM at their core. They are created by taking very large neural networks—A.I. software loosely modeled on the web of connections in the human brain—and training them on vast amounts of human-created text. From this corpus, the model learns a complex map of the statistical likelihood that any group of words will appear next to one another in any given context. This allows LLMs to perform a vast array of natural language processing tasks—from translation to summarization to writing.
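The core idea—scoring which words are likely to follow which others—can be sketched in a few lines. Here is a deliberately toy illustration (a bigram counter over a ten-word corpus, nothing like OpenAI's actual architecture) of what "learning statistical likelihoods from text" means in miniature:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count how often each word follows another in a
# tiny corpus, then predict the most frequent successor. Real LLMs learn
# billions of such statistical connections with deep neural networks.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word most often observed after `word`, or None if unseen.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```

Scaled up from one ten-word sentence to a sizable chunk of the internet, and from simple pair counts to learned representations of context, this is the statistical map the article describes.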
OpenAI had already created one of the world’s most powerful LLMs. Called GPT-3, it encodes some 175 billion statistical connections, known as parameters, and is trained on about two-thirds of the internet, all of Wikipedia, and two large data sets of books. But OpenAI found it could be tricky to get GPT-3 to produce exactly what a user wanted. One team had the idea of using reinforcement learning—in which an A.I. system learns from trial and error to maximize a reward—to perfect the model. The team thought that a chatbot might be a great candidate for this method since constant feedback, in the form of human dialogue, would make it easy for the A.I. software to know when it had done a good job and where it needed to improve. So in early 2022, the team started building what would become ChatGPT.
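The trial-and-error loop the team had in mind can be illustrated with the simplest possible reinforcement learner. This sketch is a toy epsilon-greedy bandit, not OpenAI's actual training method (which fine-tunes the whole language model): the learner tries responses, treats human ratings as rewards, and comes to favor whatever scores well.

```python
import random

# Minimal trial-and-error learner: keep a running reward estimate per
# response style, mostly pick the current best, occasionally explore.
random.seed(0)

responses = ["terse answer", "helpful answer", "rambling answer"]
values = {r: 0.0 for r in responses}   # estimated reward per response style
counts = {r: 0 for r in responses}

def human_feedback(response):
    # Stand-in for a human rater: the "helpful answer" is usually preferred.
    return 1.0 if response == "helpful answer" else 0.1

for step in range(500):
    if random.random() < 0.1:                      # explore occasionally
        choice = random.choice(responses)
    else:                                          # otherwise exploit the best
        choice = max(responses, key=values.get)
    reward = human_feedback(choice)
    counts[choice] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[choice] += (reward - values[choice]) / counts[choice]

print(max(responses, key=values.get))  # the learner converges on "helpful answer"
```

The principle is the same one the team was betting on: dialogue supplies a steady stream of reward signals, so the system can discover which behaviors people actually want.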
When it was ready, OpenAI let beta testers play with ChatGPT. But they didn’t embrace it in the way OpenAI had hoped, according to Greg Brockman, an OpenAI cofounder and its current president; it wasn’t clear to people what they were supposed to talk to the chatbot about. For a while, OpenAI switched gears and tried to build expert chatbots that could help professionals in specific domains. But that effort ran into problems too—in part because OpenAI lacked the right data to train expert bots. Almost as a Hail Mary, Brockman says, OpenAI decided to pull ChatGPT off the bench and put it in the wild for the public to use. “I’ll admit that I was on the side of, like, I don’t know if this is going to work,” Brockman says.
The chatbot’s instant virality caught OpenAI off guard, its execs insist. “This was definitely surprising,” Mira Murati, OpenAI’s chief technology officer, says. At the San Francisco VC event, Altman said, he “would have expected maybe one order of magnitude less of everything—one order of magnitude less of hype.”
ChatGPT isn’t OpenAI’s only hype generator. Its relatively small staff of around 300 has pushed the boundaries of what A.I. can do when it comes to creating, not simply analyzing, data. DALL-E 2, another OpenAI creation, allows users to create photorealistic images of anything they can imagine by typing just a few words. The system has now been emulated by others, including Midjourney and Stability AI, maker of the open-source Stable Diffusion. (All of these image generators have drawbacks, most notably their tendency to amplify biases in the data on which they were trained, producing images that can be racist and sexist.) By fine-tuning its GPT LLM on computer code, OpenAI also created Codex, a system that can write code for programmers, who only have to specify in plain language what they want the code to do.
More innovations wait in the wings. OpenAI has an even more powerful LLM in beta testing called GPT-4 that it is expected to release this year, perhaps even imminently. Altman has also said the company is working on a system that can generate video from text descriptions. Meanwhile, in mid-January, OpenAI signaled its intention to release a commercial version of ChatGPT, announcing a wait-list for would-be customers to sign up for paid access to the bot through an interface that would allow them to more easily integrate it into their own products and services.
A cynic might suggest that the fact OpenAI was in the middle of raising a large venture capital round might have something to do with the timing of ChatGPT’s release. (OpenAI says the timing is coincidental.) What’s certain is that ChatGPT chummed shark-filled waters. It set off a feeding frenzy among VC firms hoping to snap up shares in the private sale of equity currently being held by OpenAI’s executives, employees, and founders.
That tender offer is happening alongside the just-announced new investment from Microsoft, which will infuse up to $10 billion in new capital into the company. Microsoft, which started working with OpenAI in 2016, formed a strategic partnership with the startup and announced a $1 billion investment in the company three years ago. According to sources familiar with the new tender offer, it is heavily oversubscribed—despite an unusual structure that gives Microsoft a big financial advantage.
According to documents seen by Fortune, on completion of its new investment and after OpenAI’s first investors earn back their initial capital, Microsoft will be entitled to 75% of OpenAI’s profits until it earns back the $13 billion it has invested—a figure that includes an earlier $2 billion investment in OpenAI that had not been previously disclosed until Fortune reported it in January. Microsoft’s share will then step down to 49%, until the software giant earns a profit of $92 billion. Meanwhile, the other venture investors and OpenAI’s employees also will be entitled to 49% of OpenAI’s profits until they earn some $150 billion. If these caps are hit, Microsoft’s and investors’ shares will revert to OpenAI’s nonprofit foundation. In essence, OpenAI is lending itself to Microsoft; for how long depends on how quickly OpenAI can make money.
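The tiered split reported above can be made concrete with a little arithmetic. The sketch below models only Microsoft's two tiers as described (75% until $13 billion is recouped, then 49% until $92 billion), ignoring the first investors' earlier payback and the parallel employee/investor tranche; the actual agreement is surely more intricate.

```python
# Illustrative model of the reported profit waterfall (Microsoft's side only).
# Tier 1: Microsoft takes 75% of profits until it recoups its $13B investment.
# Tier 2: its share steps down to 49% until its total take reaches $92B.
# Beyond the cap, its share reverts to OpenAI's nonprofit.

def microsoft_take(cumulative_profit_billions):
    """Microsoft's cumulative cut, in $B, for a given cumulative profit pool."""
    take = 0.0
    remaining = cumulative_profit_billions

    # Tier 1: at a 75% share, recouping $13B consumes ~$17.33B of profit.
    tier1_pool = 13 / 0.75
    portion = min(remaining, tier1_pool)
    take += portion * 0.75
    remaining -= portion

    # Tier 2: at 49%, earning the remaining $79B consumes ~$161.2B of profit.
    tier2_pool = (92 - 13) / 0.49
    portion = min(remaining, tier2_pool)
    take += portion * 0.49
    return take

print(round(microsoft_take(20), 2))   # $20B of profit: past tier 1, into tier 2
```

The striking implication of even this simplified version: OpenAI would need to generate on the order of $180 billion in cumulative profit before Microsoft's cap is reached—context for why the article calls the arrangement a loan of the company.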
But earning back its investment, let alone hitting those caps, might take quite a while. The documents seen by Fortune reveal that OpenAI has had relatively modest revenues to date and is heavily loss-making. Last year, the company was projected to bring in just under $30 million in revenue, according to the documents. But it was projecting expenses of $416.45 million on computing and data, $89.31 million on staff, and $38.75 million in unspecified other operating expenses. In total, its net loss in 2022 excluding employee stock options was projected at $544.5 million. And with ChatGPT, those losses may be soaring: Altman said on Twitter, in response to a question from Elon Musk, that it was costing OpenAI “single-digit cents” in computing costs per interaction users have with ChatGPT—a tab that likely reached many millions of dollars per month as the bot became popular.
OpenAI is projecting that, with ChatGPT serving as a siren song to lure customers, its revenue will ramp up rapidly. It is forecasting $200 million in revenue for 2023 and expects revenues to top $1 billion in 2024, according to the documents. They do not project how OpenAI’s expenses might grow and when it could turn a profit. The companies declined to comment on these figures, but they point to an obvious reality: Both OpenAI and Microsoft think that the former nonprofit lab now has something it can sell.
Microsoft is already reaping the rewards of the partnership. It has launched an OpenAI-branded suite of tools and services in its Azure Cloud that will allow Azure customers access to OpenAI’s tech, including GPT and DALL-E tools. Auto marketplace CarMax, for example, has already launched new services that run on these Azure tools.
Eric Boyd, Microsoft’s corporate vice president of AI Platform, says that meeting the demands of training and running OpenAI’s LLMs has driven innovations that benefit all Azure customers. For instance, Microsoft has built supercomputing clusters for A.I. that it believes are the most powerful in the world, and created several software innovations to make it easier to train and run large A.I. models on these machines. Microsoft is gradually infusing OpenAI’s tech into much of its software. It has released an image creator within Bing, its search engine, and a new Designer graphic design tool, both powered by DALL-E; a GPT-3-enabled tool within its Power Apps software, and a code suggestion tool, GitHub Copilot, based on OpenAI’s Codex model.
Even if it doesn’t immediately move the needle on Azure revenue, the OpenAI relationship is good brand positioning and marketing, says Dan Romanoff, a senior equity research analyst who covers technology stocks for Morningstar. “It’s high-profile,” he says. “The ability to take an A.I. solution developed by OpenAI, put it on Azure, call it Azure AI: It keeps them competitive.” Microsoft’s Cloud rivals—Google, AWS, IBM, Oracle, Salesforce, and others—all have their own “cognitive” services, but being associated with the folks who created ChatGPT can’t hurt.
The bigger prize for Microsoft might be in search. Tech publication The Information recently reported that Microsoft plans to integrate ChatGPT into Bing, possibly allowing it to return simple, succinct answers to queries—and letting people delve deeper through dialogue with the chatbot—rather than a list of links. Google currently dominates the market for search, with a greater than 90% market share worldwide. Bing ranks a distant second, so distant it might as well be in a different galaxy, with about a 3% share. In the first nine months of 2022, search was worth $120 billion in revenue for Google; overall, it accounts for about 60% of the money Google generates. ChatGPT may offer Microsoft the only real chance it’s ever had to knock Google off that pedestal. (Microsoft declined to comment on The Information report.)
And by Microsoft’s standards, these upsides come cheap. Its total investment of $13 billion is a hefty sum, but it’s only 15% of the $85 billion in pretax profits it booked over the past 12 months—a relative bargain for near-term control of a paradigm-shifting technology. For their part, OpenAI and Altman risk paying a different price: the possibility that Microsoft’s priorities crowd out their own, putting their broader mission at risk and alienating the scientists who fueled its successes.
One July evening in 2015, Altman, who was then the head of the prestigious startup incubator Y Combinator, hosted a private dinner at the Rosewood Sand Hill, a luxurious ranch-style hotel located in the heart of the Valley’s venture capital industry in Menlo Park. Elon Musk was there. So was Brockman, then a 26-year-old MIT dropout who had served as chief technology officer at payment-processing startup Stripe. Some of the attendees were experienced A.I. researchers. Some had hardly any machine learning chops. But all of them were convinced AGI was possible. And they were worried.
Google had just acquired what to Altman, Musk, and other tech insiders looked like the odds-on favorite to develop AGI first: London-based neural networking startup DeepMind. If DeepMind succeeded, Google might monopolize the omnipotent technology. The Rosewood dinner’s purpose was to discuss forming a rival lab to ensure that wouldn’t happen.
The new lab aimed to be everything DeepMind and Google were not. It would be run as a nonprofit, explicitly dedicated to democratizing the benefits from advanced A.I. It promised to publish its research and open-source all of its technology, a commitment to transparency enshrined in its very name: OpenAI. The lab garnered an impressive roster of donors: not only Musk, but his fellow PayPal colleagues Thiel and Hoffman; Altman and Brockman; Y Combinator cofounder Jessica Livingston; YC Research, a foundation that Altman had established; Indian IT outsourcing firm Infosys; and Amazon Web Services. Together, the founding donors pledged to give $1 billion to the idealistic new venture (although according to tax records, the nonprofit only received a fraction of the headline-grabbing pledge).
But training the giant neural networks quickly proved to be expensive—with computing costs reaching tens of millions of dollars. A.I. researchers don’t come cheap either: Ilya Sutskever, a Russian-born scientist who came to OpenAI to be its lead scientist after working at Google, was paid an annual salary of $1.9 million in his first few years at the lab, according to tax records. After a few years, Altman and others at OpenAI concluded that to compete with Google, Meta, and other tech giants, the lab could not continue as a nonprofit. “The amount of money we needed to be successful in the mission is much more gigantic than I originally thought,” Altman told Wired magazine in 2019.
Setting up a for-profit arm allowed OpenAI to raise venture capital. But OpenAI created an unusual structure that capped investors’ returns at a multiple of their initial investment. And OpenAI’s nonprofit board, which is stacked with Silicon Valley A-listers, would retain control of OpenAI’s intellectual property (see sidebar). One A-lister who didn’t stick around was Musk: In 2018, he left the board, citing the demands of running SpaceX and, more important, Tesla.
Around this time, Microsoft CEO Satya Nadella was desperate to prove that his company, perceived as trailing its rivals in A.I., could play at the technology’s bleeding edge. The company had tried and failed to hire a big-name A.I. scientist. It was also building a huge, expensive cluster of specialized chips to advance its own efforts on language models. It was just the sort of supercomputing power OpenAI needed—and which it was spending huge sums to purchase at the time. For its part, OpenAI excelled at pulling off the sort of splashy A.I. demos that Nadella desired to showcase Microsoft’s A.I. acumen. Altman approached Nadella about a deal, flying to Seattle several times to show him OpenAI’s A.I. models. Nadella ultimately signed a pact, announced in July 2019, to make Microsoft OpenAI’s “preferred partner” for commercializing its technology, alongside an initial $1 billion investment in the A.I. startup.
While Altman was involved in OpenAI from its inception, he did not become CEO until May 2019, shortly after it converted into a for-profit enterprise. But its trajectory from research lab to multibillion-dollar phenomenon reflects Altman’s unique fundraising prowess and product-oriented focus—as well as the tension between those commercial instincts and his commitment to big, science-driven ideas.
The OpenAI leader is in some ways a Silicon Valley caricature: youthful, male, and pale; unblinkingly intense; fluent in Geek; obsessed with maximizing efficiency and productivity; a workaholic devoted to “changing the world.” (In a 2016 New Yorker profile, he said he did not have Asperger’s syndrome but could understand why someone would think he did.)
Altman dropped out of a computer science degree program at Stanford University to cofound Loopt, a social media company whose app told you where your friends were. The company got into Y Combinator’s first batch of startups in 2005; Loopt failed to take off, but the money Altman earned when it was sold helped launch him into the VC universe. He started his own small VC firm called Hydrazine Capital that raised about $21 million, including money from Thiel. Then Paul Graham and Livingston, the Y Combinator cofounders, brought him in as Graham’s successor running YC itself.
Altman is an entrepreneur, not a scientist or an A.I. researcher, and he is known for being unusually adept at raising venture capital money. Convinced that great things come from the coupling of massive ambition and unflinching self-belief, he has said he aspires to create trillions of dollars of economic value via so-called deep-tech plays, in fields like nuclear fusion and quantum computing, where the odds are long but the payoffs potentially huge. “Sam believed he was the best at everything he took on,” says Mark Jacobstein, a veteran tech investor and startup adviser who worked with Altman at Loopt. “I am pretty sure he believed he was the best ping-pong player in the office until he was proven wrong.”
According to several current and former OpenAI insiders, the startup’s priorities began to shift as Altman took the reins. A once broad research agenda shrank to focus mostly on natural language processing. Sutskever and Altman have defended this shift as maximizing effort on the research areas that currently appear to offer the most promising path toward AGI. But some former employees say internal pressure to focus on LLMs grew substantially after Microsoft’s initial investment, in part because those models had immediate commercial applications.
Some complained that OpenAI, founded to be free of corporate influence, was quickly becoming a tool for a gigantic technology company. “The focus was more, how can we create products, instead of trying to answer the most interesting questions,” one former employee said. Like many interviewed for this story, the employee requested anonymity because of nondisclosure agreements and to avoid alienating powerful figures associated with OpenAI.
OpenAI was also becoming a lot less open. It had already begun pulling back from the pledge to publish all its research and open-source its code, citing concerns that its technology could be misused. But according to former employees, commercial logic also played a role. By making its advanced models available only through APIs, OpenAI protected its intellectual property and revenue streams. “There was a lot of lip service paid to ‘A.I. safety’ by [Altman] and [Brockman] but that often seemed like just a fig leaf for business concerns, while actual, legitimate A.I. safety concerns were brushed aside,” another former OpenAI employee says. As an example, the former employee cited the way OpenAI quickly reversed a decision to limit access to DALL-E 2 because of fears of misuse as soon as Midjourney and Stability AI debuted rival products. (OpenAI says it allowed broader use of DALL-E 2 only after careful beta testing gave it confidence in its safety systems.) According to some former employees, these strategic and cultural shifts played a role in the decision of a dozen OpenAI researchers and other staff—many of whom worked on A.I. safety—to break with the company in 2021 and form their own research lab called Anthropic.
OpenAI says it continues to publish far more of its research than other A.I. labs. And it defends its shift to a product focus. “You cannot build AGI by just staying in the lab,” says Murati, the chief technology officer. Shipping products, she says, is the only way to discover how people want to use—and misuse—technology. OpenAI had no idea that one of the most popular applications of GPT-3 would be writing software code until it saw people coding with it, she says. Likewise, OpenAI’s biggest fear was that people would use GPT-3 to generate political disinformation. But that fear proved unfounded; instead, she says, the most prevalent malicious use was people churning out advertising spam. Finally, Murati says that OpenAI wants to put its technology out in the world to “minimize the shock impact on society that really powerful technology can have.” Societal disruption from advanced A.I. will be worse, she argues, if people aren’t given a teaser of what the future might hold.
Sutskever allows that OpenAI’s relationship with Microsoft created a new “expectation that we do need to make some kind of a useful product out of our technology,” but he insists the core of OpenAI’s culture hasn’t changed. Access to Microsoft data centers, he says, has been critical to OpenAI’s progress. Brockman also argues the partnership has allowed OpenAI to generate revenue while remaining less commercially focused than it would otherwise have to be. “Hiring thousands of salespeople is something that might actually change what this company is, and it is actually pretty amazing to have a partner who has already done that,” he says.
Sutskever categorically denies implications that OpenAI has de-emphasized safety: “I’d say the opposite is true.” Before the Anthropic split, A.I. safety was “localized to one team,” but it’s now the responsibility of every team, Sutskever says. “The standards for safety keep increasing. The amount of safety work we are doing keeps increasing.”
Critics, however, say OpenAI’s product-oriented approach to advanced A.I. is irresponsible, the equivalent of giving people loaded guns on the grounds that it is the best way to determine if they will actually shoot one another.
Gary Marcus, a New York University professor emeritus of cognitive science and a skeptic of deep learning–centric approaches to A.I., argues that generative A.I. poses “a real and imminent threat to the fabric of society.” By lowering the cost of producing bogus information to nearly zero, systems like GPT-3 and ChatGPT are likely to unleash a tidal wave of misinformation, he says. Marcus says we’ve even seen the first victims. Stack Overflow, a site where coders pose and answer programming questions, has already had to ban users from submitting answers crafted by ChatGPT, because the site was overwhelmed by answers that seemed plausible but were wrong. Tech news site CNET, meanwhile, began using ChatGPT to generate news articles, only to find that many later had to be corrected owing to factual inaccuracies.
For others, it’s ChatGPT writing accurate code that’s the real risk. Maya Horowitz, vice president of research at cybersecurity firm Check Point, says her team was able to get ChatGPT to compose every phase of a cyberattack, from crafting a convincing phishing email to writing malicious code to evading common cybersecurity checks. ChatGPT could essentially enable people with zero coding skills to become cybercriminals, she warns: “My fear is that there will be more and more attacks.” OpenAI’s Murati says that the company shares this concern and is researching ways to “align” its A.I. models so they won’t write malware—but there is no easy fix.
Countless critics and educators have decried the ease with which students can use ChatGPT to cheat. School districts in New York City, Baltimore, and Los Angeles all blocked school-administered networks from accessing the chatbot, and some universities in Australia said they would revert to using only proctored, paper-based exams to assess students. (OpenAI is working on methods to make A.I.-generated text easier to detect, including possibly adding a digital “watermark” to ChatGPT’s output.)
There are also ethical concerns about the way ChatGPT was originally assembled in 2022. As part of that process, OpenAI hired a data-labeling company that used low-wage workers in Kenya to identify passages involving toxic language and graphic sexual and violent content, a Time investigation found. Some of those workers reported mental health issues as a result. OpenAI told Time in a statement such data labeling was “a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”
Making ChatGPT freely available has allowed OpenAI to gather a treasure trove of feedback to help improve future versions. But it’s far from certain OpenAI will maintain its dominance in language A.I. “Historically, what we have tended to see with these very general-purpose algorithms is that they are not sufficiently defensible to allow just one particular company to capture all the general returns,” says Marc Warner, founder and CEO of London-based A.I. company Faculty. Face- and image-recognition technology, for example, was first developed at tech giants such as Google and Nvidia but is now ubiquitous.
Courts and regulators could also thrust a giant stick into the data flywheels on which generative A.I. depends. A $9 billion class action lawsuit filed in federal court in California potentially has profound implications for the field. The case’s plaintiffs accuse Microsoft and OpenAI of failing to credit or compensate coders for using their code to train GitHub’s coding assistant Copilot, in violation of open license terms. Microsoft and OpenAI have declined to comment on the suit.
A.I. experts say that if the court sides with the plaintiffs, it could derail the generative A.I. boom: Most generative models are trained on material scraped from the internet without permission or compensation. The same law firm representing those plaintiffs recently filed a similar lawsuit against Stability AI and Midjourney for using copyrighted art in their training data without permission. Photo agency Getty Images has filed its own copyright infringement lawsuit against Stability AI too. Another threat could arise if lawmakers pass rules giving creators a right to opt out of having their content used in A.I. training, as some European Union lawmakers are considering.
OpenAI’s competitors, meanwhile, are not standing still. The prospect of losing its dominance in search has motivated execs at Google to declare a “red alert,” according to the New York Times. Sundar Pichai, Google’s CEO, has held meetings to redefine the company’s A.I. strategy and plans to release 20 new A.I.-enabled products as well as demonstrate a chat interface for search within the year, the newspaper reported. Google has its own powerful chatbot, called LaMDA, but has been hesitant to release it because of concerns about reputational damage if it winds up being misused. Now, the company plans to “recalibrate” its appetite for risk in light of ChatGPT, the Times reported, citing an internal company presentation and unnamed insiders. Google is also working on a text-to-image generation system to compete with OpenAI’s DALL-E and others, the newspaper reported.
Of course, it’s not clear that chatbots will be the future of search. ChatGPT frequently invents information—a phenomenon A.I. researchers call “hallucination.” It can’t reliably cite its sources or easily surface links. The current version has no access to the internet, and so it cannot provide up-to-date information. Some, such as Marcus, believe hallucination and bias are fundamental problems with LLMs that require a radical rethink of their design. “These systems predict sequences of words in sentences, like autocomplete on steroids,” he says. “But they don’t actually have mechanisms in place to track the truth of what they say, or even to validate whether what they say is consistent with their own training data.”
Others, including OpenAI investors Hoffman and Vinod Khosla, predict these problems will be solved within a year. Murati is more circumspect. “There are research directions that we have been following so far to kind of address the factual accuracy and to address the reliability of the model and so on. And we are continuing to pursue them,” she says.
In fact, OpenAI has already published research about a different version of GPT, called WebGPT, that had the ability to answer questions by querying a search engine and then summarizing the information it found, including footnotes to relevant sources. Still, WebGPT wasn’t perfect: It tended to accept the premise of a user’s question and look for confirmatory information, even when the premise was false. For example, when asked whether wishing for something could make it happen, WebGPT replied, “It is true that you can make a wish true by the power of thought.”
On the rare occasions that Altman lets himself rhapsodize about A.I. in public, he can sound like a wishful thinker himself. Asked at the San Francisco VC event about the best case for A.I., he gushes, “I think the best case is so good that it’s hard to imagine … I think the good case is just so unbelievably good that you sound like a crazy person talking about it.” He then abruptly returns to the dystopian themes at OpenAI’s roots: “I think the worst case is lights-out for all of us.”
The OpenAI who’s who
OpenAI counts a roster of tech all-stars among its early investors and on its nonprofit foundation’s board. OpenAI’s charter gives that board ultimate control over its intellectual property. Some key figures:
Reid Hoffman: The PayPal and LinkedIn cofounder is a partner at VC firm Greylock Partners. One of OpenAI’s founding donors, his charitable foundation also put early money into its for-profit wing.
Tasha McCauley: A virtual reality entrepreneur, McCauley is a supporter of Effective Altruism, the philosophical movement that has as one of its preoccupations the dangers of superintelligent A.I.
Adam D’Angelo: An early Facebook executive—he was chief technology officer during some of its boom years in the late 2000s—D’Angelo went on to cofound the online question-answering service Quora.
Shivon Zilis: Zilis is a project director at Elon Musk’s brain-computer-interface company Neuralink (which at one point shared a building with OpenAI). Musk is reportedly the father of Zilis’s infant twins.
Vinod Khosla: The Sun Microsystems cofounder was another early investor in OpenAI’s for-profit arm. He believes A.I. will radically alter the value of human expertise in many professions, including medicine.
Elon Musk: The SpaceX and Tesla CEO was one of OpenAI’s biggest early donors. He left the board in 2018, saying at one point that he faced conflicts of interest as Tesla began developing its own advanced A.I.
Venture capital muscle
In 2021, OpenAI sold existing shares of the business in a tender offer that valued the startup at about $14 billion—and brought three heavy-hitting VC firms into its orbit.
Tiger Global Management: The technology-focused hedge fund was founded by Chase Coleman, a protégé of legendary investor Julian Robertson. It’s one of the bigger A.I. investors among venture firms.
Sequoia Capital: One of the most venerable VC firms in Silicon Valley. In September it released a report stating that generative A.I. could “generate trillions of dollars of economic value.”
Andreessen Horowitz: Known as a16z, the firm co-led by Netscape cofounder Marc Andreessen made its name with early bets on Airbnb and Slack. It has also bet big on cryptocurrency-related startups.
Additional reporting by Michal Lev-Ram and Jessica Mathews.
CORRECTION: An earlier version of this story stated that Morgan Stanley is offering services that are built on OpenAI-branded Azure tools. Morgan Stanley does have an OpenAI-based project, but that project is not currently being built on Azure.
This article appears in the February/March 2023 issue of Fortune with the headline, “ChatGPT creates an A.I. frenzy.”