Why Are We So Obsessed with AGI and ASI?

Aug 6, 2025

I’ve been thinking about something lately. Why is almost everyone so eager for AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence)?

It’s strange when you really consider it. People often warn that these systems could take all our jobs, manipulate us, or even become an existential threat to humanity. Yet despite the risks, we continue chasing the dream of AGI, a machine that can think, learn, reason, and adapt like a human. A system capable of performing any intellectual task we can, across any domain.

Part of the reason might be how we imagine it. The popular image of AGI often looks like something from science fiction. A godlike intelligence. A creator. Something powerful, mysterious, and perhaps even dangerous. But maybe our obsession is not just about building better tools. Maybe it reflects a deeper curiosity about intelligence itself, and about who we are.

Science fiction has long shaped our collective imagination about advanced AI, serving as a fertile ground for exploring its possibilities and consequences. These cultural portrayals often fall into familiar archetypes:

  • The Benevolent Overlord: In some stories, AI evolves into a wise and compassionate guide, helping humanity move toward a better future.

  • The Rogue Servant: More often, AI turns against its creators, as seen in films like The Terminator and The Matrix. These narratives reflect our fear of losing control and being overtaken by our own inventions.

  • The Philosophical Companion: Some of the most powerful depictions explore AI grappling with consciousness, identity, and existence, prompting us to reflect on those same questions ourselves.

Personally, I am not waiting for a sci-fi version of AGI. I will be genuinely impressed the day OpenAI stops reminding us that ChatGPT can make mistakes. That alone would feel like a major milestone. ChatGPT is already more capable than humans in many ways, but accuracy is still crucial, especially in fields like finance, medicine, and law. While I deeply admire the progress made by OpenAI and other AI companies, building real trust requires models that make mistakes far less often than humans do. When AI reaches a point where its responses are consistently reliable, it can support better decision-making and have a more meaningful impact on the world.

So why do we still want it? AGI, or powerful AI, represents the idea of an entity capable of solving humanity’s most intractable problems, from disease and poverty to climate change. It taps into a deep-seated desire for salvation and a better future. But the pull goes beyond problem-solving. AGI, and especially ASI, are more than technological milestones. They raise profound questions: What is consciousness? Can a machine be creative? What does it truly mean to be human? The prospect of a non-biological intelligence that can think, reason, and perhaps even feel challenges our long-held beliefs about human exceptionalism.

Creating intelligence that mirrors or surpasses our own is more than a technical challenge. It is a journey into philosophy, into the unknown. And maybe that is why we are so drawn to it, even when we are afraid of what it might lead to.