The year is 1910. The Ford Motor Company’s Model T has been selling in unprecedented numbers since its debut two years ago. It is one of the most successful product launches in history. Henry Ford, the head of the company behind this amazing innovation, is considered an expert on faster-than-light travel. When he appears in public, reporters ask what political system will be best suited to the new society we’ll build near Proxima Centauri. His answers are eagerly telegraphed all around the world. Thus speaketh the oracle.
What a preposterous story. Yet is it so different from the current hype around AI? ChatGPT is two years old. It’s taken the world by storm. Although imperfect, it is genuinely useful. Artificial general intelligence—i.e. the thing people actually think of when they think of AI—is promised to be imminent. There are schisms at OpenAI over how to develop this terrifying new text generation technology without destroying humanity. Coalitions are lobbying the government for policy changes. Without those changes, they warn, we might be facing an extinction-level event.
AGI might be right around the corner. Or it might not.
The transformer is the architecture behind LLMs and the new chatbots. It was invented at Google in 2017. Because Google is a broken company, it failed to turn this innovation into a meaningful product. OpenAI made a genuine contribution in turning the transformer into ChatGPT, and, more importantly, releasing it as a consumer product. OpenAI and Sam Altman deserve credit for this.
But what role does the transformer have to play in the development of AGI? We don’t know. Perhaps the transformer is exactly the foundation we need to achieve meaningful AGI. Or perhaps it is just a local maximum for text generation, and with future billions of dollars invested we’ll have something that confabulates a little less, but is still essentially a chatbot.
Or maybe the transformer is the first step to AGI the same way jumping off the ground is the first step to traveling to the moon.
We simply do not know. And treating Sam Altman like an oracle, hanging on his every word about the prognosis of AGI and the future of humanity, is as preposterous as asking Henry Ford if a parliamentary system is the best choice for the first government off Terra.
OpenAI’s biggest product is based on a technology they didn’t invent. Their biggest market advantage has been their name, “Open” AI, which appeals to the vanity of academic researchers, allowing them to feel less guilty about taking exorbitant compensation packages to go and work for a “non-profit” that aspires to guide the future of humanity while being anything but open.
Altman, meanwhile, has not contributed to the core technology. He has no research expertise in AI or machine learning. His past is checkered: he was fired from Y Combinator for what can broadly be called a lack of integrity. He is, however, a wildly successful fundraiser and a capable executive.
I don’t know him in real life. Perhaps the few negative stories don’t tell the full picture. But it sure seems to me like this is exactly the type of person we should not feel good about empowering, let alone lionizing and treating like an oracle.
If there is a ship of progress, it is oared by researchers and academics. And then you have the Sam Altmans of the world—rats steering the ship.