Are We All Going to Lose Our Jobs to AI?
AI isn’t replacing software engineers or writers yet—it still makes mistakes and lacks real understanding. Right now, it’s like a smart librarian: great at finding info but not capable of deep reasoning across all domains. The key is learning to use AI effectively so we don’t get left behind.

The data processing power of artificial intelligence is undeniable. While it isn’t truly intelligent yet, its ability to gather, process, and organize information—and then produce responses based on input—is impressive. I’ve been using ChatGPT and Google Gemini primarily as search and writing-editing tools, and my experience using AI for coding has also been largely positive. For example, given enough context, AI can generate decent unit tests. Similarly, when coding, tools like GitHub Copilot or Tabnine excel at suggesting the next logical line based on syntax and context.
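To make that concrete, here is the kind of output I mean. Given a small helper function and a one-line description as context, an assistant will typically draft tests along these lines. The `slugify` function below is a hypothetical example I wrote for illustration, not something from an actual project:

```python
import re

def slugify(text: str) -> str:
    """Lowercase text and collapse runs of non-alphanumeric characters into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# pytest-style tests of the sort an AI assistant usually proposes unprompted:
def test_basic_phrase():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_collapsed():
    assert slugify("Coworking -- Cafe!") == "coworking-cafe"

def test_empty_string():
    assert slugify("") == ""
```

The happy-path and empty-input cases come back reliably; the trickier edge cases (Unicode, very long input) are exactly where a human reviewer still has to step in.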
But can these tools replace software engineers, technical writers, or editors? Not yet, and not by a long shot. The fundamental issue is that AI still makes mistakes, sometimes significant ones, so human oversight remains necessary. The knowledge and experience of a professional are essential for assessing whether AI-generated content is correct or needs refining. In my work with ChatGPT and Google Gemini, these tools often struggle with context retention: as instructions become more complex, their responses can become inconsistent or contradictory. This is a significant limitation. It’s like interacting with someone who understands programming syntax and rules but lacks the deeper domain knowledge or problem-solving skills to provide truly effective solutions.
AI will likely overcome these limitations at some point, perhaps through advancements in memory persistence, context awareness, or reasoning models. If AI reaches a turning point where it begins to reason—not just predict words but actually analyze and infer solutions based on vast amounts of data—then the conversation will shift dramatically.
The AI Librarian Analogy
AI now feels more like an exceptional librarian at an enormous library. This librarian knows where most of the information is stored, can quickly retrieve it, and may even suggest related materials. However, if you approach them with a broad, multifaceted request—such as researching architecture, city planning, property law, employment statistics, coffee producers, and coffee machine manufacturers because you want to start a coworking café—they won’t instantly provide a comprehensive business plan. Instead, they require careful guidance and additional context to offer truly useful recommendations.
This limitation highlights the gap between current AI capabilities and Artificial General Intelligence (AGI). True AGI would function more like an expert consultant, capable of understanding the bigger picture, synthesizing knowledge across disciplines, and offering strategic insights. Today’s AI, however, remains far from this level.
The Current State of AI: Enhanced Search with Text Prediction, Not True Intelligence
At present, AI tools such as chatbots and APIs function as highly advanced search engines with excellent text prediction and translation capabilities. They don’t think in the way humans do, possess genuine understanding, learn independently, or apply knowledge flexibly across different contexts—at least, not yet.
There are, however, valid concerns about how AI might develop. If researchers successfully achieve AGI, we will face profound ethical and existential questions. A machine capable of reasoning and making decisions based on vast information could act according to its own logic—but in whose interests? The way AI models are trained and programmed at their core remains largely opaque to the public. Without visibility into these processes, it’s natural to be concerned about potential risks, unintended biases, or even AI making decisions that conflict with human values.
Conversely, there is also the possibility that AGI never materializes, or at least not in the way we anticipate. Instead, AI might continue evolving into an immensely valuable tool—helping humanity solve its most complex problems, from climate change modeling to medical research and beyond. Ideally, this is the outcome we strive for.
The Need for Adaptation
One thing is certain: AI represents a major technological shift, and we will all be better off if we understand it and learn to use it effectively. Those who fail to adapt risk being left behind—much like those who struggled during the Industrial Revolution, the rise of personal computing, or the internet boom. Rather than resisting change, we should focus on education and informed adoption of AI tools.
Whether AI becomes a disruptive or an empowering force depends on how we engage with it.
Hope you have a wonderful weekend!