Jaseem Paloth - Full Blog Text
Source: https://www.jaseempaloth.com/blog
Updated: 2026-04-20T23:30:46.354Z

---

Title: The Future
URL: https://www.jaseempaloth.com/blog/the-future
Published: 2025-09-30

Summary: The future of AI raises concerns about its potential threat to humanity, particularly if it achieves true autonomy and surpasses human control. While AI can automate tasks and potentially replace human labor, it also offers opportunities for human advancement and problem-solving. Despite technological advancements, human purpose and fulfillment remain rooted in learning, creating, and overcoming challenges, with AI serving as a tool to enhance these pursuits.

We often wonder about the future and our place in it, especially as technology continues to reshape the world around us. Sometimes we find ourselves asking, *“Will AI be a threat to humanity?”* or *“Will AI replace us?”* These questions come from a place of uncertainty. Part of that uncertainty comes from how quickly technology is advancing, and part of it comes from the movies we watch and the conversations happening around us. It’s a mix of excitement, worry, and curiosity all at the same time.

Imagine a world where robots can produce their own hardware, manage their own supply chains, update their own code, and handle every aspect of their existence. In such a world, will they serve human goals, or will they question the purpose of their own existence? If robots achieve true autonomy, powered by advanced AI, and evolve beyond human control, their purpose may no longer be assigned but could instead be self-generated. They could develop their own sense of curiosity, values, or even systems of meaning independent of human design.

Human intelligence encompasses the range of mental capacities that enable humans to reason, learn, solve problems, think abstractly, plan, and communicate. It involves complex cognitive functions.
AI systems excel at pattern recognition and can perform reasoning and problem-solving effectively. While their reasoning is not equivalent to human consciousness, they can still tackle complex problems by applying logic, rules, and learned strategies within a given context. However, they still lack the broader scope of human intelligence, particularly emotional understanding and profound ethical judgment.

If we manage to replicate most aspects of human intelligence in a machine, will that be a threat to humanity? What if we put such machines in charge of making Earth better by solving all the world’s problems, and they decide that Earth would be much safer without humans? What if they don’t care about empathy, ethics, or a sense of responsibility toward humanity? And what if they are already beyond human control? The creation becoming a threat to its creator would be a strange turn of events and one of the greatest challenges humanity could face.

We can develop AI carefully so that it does not become a threat to humanity, but in the short run, AI is still going to replace many people. AI can automate many cognitive and repetitive tasks traditionally handled by professionals, reducing the need for human labor. People who use AI will replace those who do not, because they can work faster and more efficiently. Yet many jobs that exist today didn’t exist centuries ago; jobs have always been created while others disappeared. Some jobs will vanish, but new classes of jobs will emerge that humans can perform.

AI will contribute to the world in many ways, helping to accelerate human advancement, enabling scientific discovery, and solving the hardest problems humanity faces. Years ago, we didn’t have smartphones in our pockets. Smartphones changed the way we access information, and now we have intelligent tools at our fingertips. It’s like having the most intelligent human with us at all times to help with anything.
Using AI the right way, for any purpose, keeps humans the masters and AI the servant. AI will be built into countless products and services, and we won’t even think about it as it becomes part of our daily lives.

There are many things we still have to achieve. We have not yet sent the first human to Mars, and the last time humans set foot on the Moon was December 14, 1972. We still cannot travel to space easily, and space tourism is not yet common, though that could change soon as it becomes more accessible. Scientists have recently proposed a warp drive theory that requires no exotic negative energy, only ordinary matter, but it remains purely theoretical. Interstellar travel may lie in the distant future, but many of us will at least experience space travel. In the coming years, we will discover new things to do, new needs to meet, and plenty of exciting possibilities, from exploring space to understanding more about the universe we live in, with the help of AI.

Technological advancements undoubtedly reshape the world, but they do not call human existence into question. Enjoyment comes from the act of doing. Think about chess. Did we stop playing after IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997? Similarly, when Google DeepMind’s AlphaGo defeated world-class Go player Lee Sedol in 2016, it didn’t stop people from playing. Human purpose and fulfillment remain deeply rooted in the process of learning, creating, and overcoming challenges. Our curiosity drives us to accomplish things that were once out of reach. It’s these pursuits that bring meaning to life, whether there is AGI, ASI, or neither. In fact, technological advancements are going to free humanity from boring tasks.

---

Title: Why Are We So Obsessed with AGI and ASI?
URL: https://www.jaseempaloth.com/blog/why-are-we-so-obsessed-with-agi-and-asi
Published: 2025-08-06

Summary: We imagine AGI and ASI as powerful tools that could transform the world, yet they also challenge our understanding of what it means to be human.

I’ve been thinking about something lately. Why is almost everyone so eager for AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence)? It’s strange when you really sit with it. The same people warning that AI will make entire skills obsolete, erode our ability to think for ourselves, or hand governments and corporations a surveillance tool they have never had before are often the ones most excited about building it. Some of those same people believe it could become an existential threat to humanity. None of it slows us down from building a machine that can think, learn, reason, and adapt across any domain. Something capable of any intellectual task a human can do, and eventually, far beyond.

Science fiction has shaped our collective imagination of advanced AI. The popular image of AI often resembles something from fiction: a robot, humanoid, intelligent, conscious, and either loyal or lethal. For most people, this mental picture did not come from research papers or news articles. It came from movies, shaping what we fear and what we desire. These stories return to three recurring archetypes:

**The Benevolent Overlord**: AI evolves into a wise and compassionate guide, steering humanity toward a better future. Samantha in *Her* and TARS in *Interstellar* embody this idea of intelligence without agenda.

**The Rogue Servant**: The most common archetype. AI turns against its creators, as in *The Terminator* and *The Matrix*, reflecting our fear of losing control over something we built and can no longer contain.

**The Philosophical Companion**: Seen in *Ex Machina*, *Blade Runner*, and *Westworld*.
AI wrestles with consciousness, identity, and what it means to be alive, forcing us to ask the same questions about ourselves.

Personally, I’m not waiting for a sci-fi version of AGI, though that part still excites me. I’m looking forward to the day when OpenAI and other AI companies no longer need to remind us that their systems can make mistakes. That alone would feel like a major milestone. LLMs are already superhuman in many ways, and I deeply admire the progress made by AI companies, but building real trust requires models that make mistakes far less often than humans. When AI reaches a point where its responses are consistently reliable, it can support better decision-making and have a more meaningful impact on the world. And maybe that is exactly what keeps pulling us toward something even more capable. AGI represents the idea of an entity capable of solving humanity’s hardest problems, from disease and poverty to climate change.

AGI and ASI are more than technological milestones. They force questions we have never been able to answer. What is consciousness? Can a machine genuinely understand something, or only simulate it? In Advaita Vedanta, consciousness is not something the brain produces. It is the ground of all experience, the one thing that cannot be an object because it is what makes all objects knowable. AGI does not disprove the idea that consciousness is fundamental. It makes the question sharper. As these systems become more capable, we struggle to point to a clear difference between machine intelligence and human intelligence.

These are not new questions. We have been asking them for thousands of years across philosophy, religion, and science. In Advaita Vedanta, consciousness is not something intelligence creates. It is what thinking happens in. A machine can be very intelligent, but that does not mean it is conscious. And if consciousness is something deeper, not created by intelligence, then AI may never truly have it.
But building one that makes us seriously ask this question still feels worth it. Creating intelligence that mirrors or surpasses our own is more than a technical challenge. It is a journey into philosophy and the unknown, where we start to question what it means to think and to be human. And that may be why we are interested in it, even when we are unsure where it might lead.