Thank heavens for former OpenAI engineers inspired to blog about their time at the famously secretive firm, for without them we would have no idea what a wild mess it is in there.
Calvin French-Owen, who spent a year at OpenAI working on Codex before leaving in June, didn’t speak ill of his former employer in the post, noting he might even want to return eventually. He nonetheless described some chaotic startup-like behavior at the company, which now has more than 3,000 employees.
Take, for example, day-to-day operations. According to French-Owen, the company’s strong bias for action means engineers “can just do things,” starting projects willy-nilly without any broader oversight or planning until efforts bump into each other.
“It wasn’t unusual for similar but unrelated teams to converge on various ideas,” French-Owen wrote in a July 15 blog post. “Efforts are usually taken by a small handful of individuals without asking permission. Teams tend to quickly form around them as they show promise.”
That structure, according to French-Owen, makes OpenAI resemble a government research operation such as Los Alamos, with people working on their own projects to see what sticks, rather than a monolithic company working toward a single profit-driven objective.
OpenAI is incredibly bottoms-up, especially in research
“OpenAI is incredibly bottoms-up, especially in research,” French-Owen said. “Rather than a grand ‘master plan’, progress is iterative and uncovered as new research bears fruit.”
Unfortunately, French-Owen also noted that OpenAI is a “frighteningly ambitious,” “very secretive” organization that “changes direction on a dime” and where “everything is measured in terms of ‘pro subs’” (paid Pro subscriptions) – not exactly the environment of a publicly funded research lab, and that’s before tossing in the attached for-profit arm.
“Even for a product like Codex, we thought of the onboarding primarily related to individual usage rather than teams,” French-Owen explained. If that reflects the general philosophy at OpenAI, it suggests the company is more concerned with user-facing growth than enterprise sales – in other words, that OpenAI expects its technology to take off from the bottom up within companies rather than be imposed via top-down mandates. OpenAI didn’t respond to our questions, and as French-Owen himself wrote, the company changes direction on a dime. So we’re not assuming anything.
All in all, French-Owen paints a picture of a company that’s grown so fast it’s made a bit of a mess.
Everything breaks when you scale that quickly
“When I joined, the company was a little over 1,000 people. One year later, it is over 3,000 and I was in the top 30 percent by tenure,” French-Owen wrote in his post. “Everything breaks when you scale that quickly.”
He noted that communication, reporting structures, hiring – all of it gets out of sync when there are so many teams growing in so many different directions.
“Rather than having some central architecture or planning committee, decisions are typically made by whichever team plans to do the work,” French-Owen said. “The result is that there’s a strong bias for action, and often a number of duplicate parts of the codebase.”
Other interesting insights
The boiling turmoil under the OpenAI hood is revealing enough on its own, but it’s not the only insight French-Owen shared.
One (ultimately unsurprising) fact the engineer shared: OpenAI runs “everything,” to quote the blog author, exclusively on Azure – an arrangement he doesn’t seem crazy about.
“There’s no true equivalents of Dynamo, Spanner, Bigtable, BigQuery, Kinesis or Aurora,” French-Owen said. “The [identity and access management] implementations tend to be way more limited than what you might get from an AWS, and there’s a strong bias to implement in-house.”
That might change in the near future, if recent bumps in the relationship between Microsoft and OpenAI are any indication – though any such shift would postdate French-Owen’s June departure.
Nearly everything is a rounding error compared to GPU cost
The engineer also noted that “everything, and I mean everything, runs on Slack,” and that he received only around 10 emails in his entire year at OpenAI. That may be an outgrowth of the lack of coordination among groups – Slack is historically great for communicating within teams, but starts to fall apart for cross-team coordination, with the endless proliferation of channels making it impossible to track every initiative. Also, Slack has long been an end-to-end encryption laggard, and there’s no indication that’s changed, meaning the company’s messages aren’t as secure as they could be – a potentially big deal when it comes to preserving trade secrets.
Finally, while French-Owen promised not to spill any OpenAI trade secrets, he did let slip one interesting tidbit about the company’s finances: “Nearly everything is a rounding error compared to GPU cost.”
No real surprise there, either – now if only French-Owen had been willing to talk about the energy footprint of a query. We reached out with a number of questions, but he declined to offer further comment. ®