TL;DR: Transformer models, and therefore ChatGPT, Bing/Sydney, and their ilk, still have fundamental limitations; I think these are important barriers to something we might call “AGI,” but I also think we might consider jettisoning the term and just saying what we mean. We should be impressed by what’s possible now, but we should also be aware of the limitations, think clearly about what those limitations mean, and better conceptualize our own flourishing.
Intro
On February 24, OpenAI CEO Sam Altman published a post titled “Planning for AGI and Beyond.” Given OpenAI’s charter to bring safe artificial general intelligence to the world, it’s not much of a surprise to see Altman write something like this. The post first lays out three principles the company cares about:
AGI should “empower humanity to maximally flourish in the universe”
Benefits of, access to, and governance of AGI should be widely shared
Successfully navigating massive risks will require iterative learning and adaptation by deploying less powerful versions of the technology [“the technology” here presumably means “AGI”]
Altman then discusses how we should prepare for AGI in the short term. This section reiterates principle 3 above, expanding on the idea that we should learn to navigate AI deployment challenges and that gradually releasing more powerful technologies will give people, policymakers, and institutions time to understand and adapt to new advances. First-hand experience with these technologies will let the broader public understand their downsides, “adapt our economy,” and put regulations in place.
In planning for the long term, Altman thinks that “the future of humanity should be determined by humanity” and that there should be great scrutiny of any effort attempting to build AGI; accordingly, major decisions should require public consultation.
In this article, I want to do a few things. Broadly, I want to answer the question “Should we be ‘planning for AGI’ right now?” Answering that question first requires two things:
Articulating what we mean by “AGI,” and, in turn,
Articulating what we mean by “intelligence.”
Along the way, we should realize that the precise nature of what we’re talking about matters a lot more than the terms we use. Overloading terms without much explanation lets us make vague statements that sound important but say little. I will center much of this discussion on terms like “AGI” because we use them so frequently, but we should remember that phrases like “achieving AGI” can mean very different things to different people.
Loaded Terms
TL;DR: This section attempts to clarify the terms “intelligence” and “AGI,” drawing on a motley of perspectives on what they might mean. I consider a notion of intelligence, and of AGI, that sets a threshold for intelligence based on the capacity to act and to perceive and associate contexts for action, and a gradation of intelligence based on the efficiency with which one learns new skills.