
    From Mike Powell@1:2320/105 to All on Fri Mar 6 11:47:53 2026
    AI can write emails and summarize meetings, but here's what it still can't
    do in 2026

    Date:
    Thu, 05 Mar 2026 13:38:51 +0000

    Description:
    If we're going to use AI well, it's important we get clear on its
    limitations and how it works.

    FULL STORY

    Go on X or LinkedIn for five minutes and you'll find plenty of people
    talking about what AI can do. It can summarize meeting notes, write code,
    turn a photo of you into a caricature, or give your emails a more assertive
    vibe. Those are just a few examples I saw in LinkedIn posts earlier today.

    But for all the things AI can do, there are still plenty it can't. In fact,
    some limitations trip up the most popular AI tools time and time again. I'm
    not dunking on the technology here (I do sometimes, but that's not what
    this is). I think it's good to talk about what AI can't do so we're clear
    on its boundaries. When people are new to AI tools, or dazzled by the hype,
    they can easily misinterpret what these systems actually are and what
    they're capable of. That's how we end up with reports filled with made-up
    statistics. Of course, different AI tools have different strengths. But
    here are some common things your favorite AI tool might still struggle
    with in 2026 and, importantly, why those struggles still exist.

    1. Admit it doesn't know something

    This is the most important one: AI tools can hallucinate, which is the
    industry term for when they make things up.

    What's crucial to understand is that this isn't a bug that's going to be
    fixed in a future update. Instead, it's at the core of how a lot of LLMs
    (large language models), like ChatGPT and Claude, work.

    Despite how it may seem, they're not retrieving facts from a big store of
    information. Instead, they're predicting the next word based on patterns
    learned from churning over huge amounts of training data.
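
    To make that concrete, here is a toy sketch in Python of what "predicting
    the next word" means. The probability table below is invented for
    illustration; real models learn billions of weights rather than a lookup
    table, but the principle is the same: pick a plausible continuation, and
    never consult a fact store.

        import random

        # Toy "language model": learned probabilities for the next word,
        # given only the previous word. All numbers here are invented.
        NEXT_WORD_PROBS = {
            "the":     {"capital": 0.4, "answer": 0.3, "model": 0.3},
            "capital": {"of": 0.9, "city": 0.1},
            "of":      {"france": 0.5, "spain": 0.3, "italy": 0.2},
            "france":  {"is": 1.0},
            "is":      {"paris": 0.6, "lyon": 0.4},  # fluent, not fact-checked
        }

        def generate(prompt, steps=5):
            words = prompt.lower().split()
            for _ in range(steps):
                options = NEXT_WORD_PROBS.get(words[-1])
                if not options:
                    break
                # Sample the next word from the learned distribution.
                words.append(random.choices(list(options),
                                            weights=list(options.values()))[0])
            return " ".join(words)

        print(generate("the capital of"))
        # Might print "the capital of france is paris" -- or "is lyon".
        # The model produces plausible text; nothing checks whether it's true.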

    Hallucination can look like your favorite AI tool confidently stating
    incorrect information, inventing citations, or blending some real sources
    with made-up ones.

    What makes this worse is the confidence. These systems are designed to
    produce fluent language that sounds authoritative, and we're wired to trust
    authority. That makes it easy to overlook errors if we're not careful.

    That's why it's essential to fact-check anything an AI tool tells you. This
    is good practice for everyday use, but it's critical when the stakes are
    high, like they are with legal advice, medical information or financial
    decisions.

    We've already seen multiple cases of people being caught out after
    submitting documents that included fabricated citations or incorrect
    claims generated by AI.

    2. Counting

    Have you seen the viral videos of people asking ChatGPT or Grok how many
    "r"s are in strawberry? If not, I'll spoil them for you. AI often gets it
    wrong. ChatGPT has been known to confidently say there are only two, then
    after some pushing, concede that there are in fact three.

    I've tested this myself and had mixed results. Sometimes it answers
    correctly. Sometimes it doesn't. So what's going on?

    AI tools like ChatGPT, Claude, and Grok don't process text the way we do.
    They don't scan each letter in order. Instead, they break language into
    tokens, which are words or smaller chunks of words the LLM has learned
    from its training data.

    So when it sees strawberry, it isn't counting each letter. It's predicting
    a plausible answer based on patterns it has learned before.
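
    You can see the token view for yourself with OpenAI's open-source tiktoken
    library (assuming you have it installed; the exact split varies by
    encoding). Ordinary string code, which really does scan every character,
    gets the count right every time:

        # pip install tiktoken
        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding
        tokens = enc.encode("strawberry")

        # The model never receives letters, only these integer token IDs.
        print(tokens)                             # a short list of integers
        print([enc.decode([t]) for t in tokens])  # the chunks they stand for

        # Plain code scans each character, so it always counts correctly.
        print("strawberry".count("r"))            # 3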

    Once you understand how it happens, the simple mistake makes more sense.
    But we tend to associate fluency with intelligence, so when we know
    ChatGPT can write an essay in seconds but can't count letters, it feels
    jarring.

    3. Replacing a therapist

    There are conflicting opinions about whether people should rely on AI as a
    therapy tool. But the broad consensus tends to be: use it cautiously, and
    only as a supplement to real therapy.

    Many people find value in sharing things with their chatbot of choice, especially given how inaccessible traditional therapy can be in many
    countries. They might ask ChatGPT to help interpret the tone of a text or clarify goals. But beyond that, experts warn it could do more harm than good.

    Again, this all comes down to how these tools are designed. They tend to
    agree, reflect your views back to you and validate your experience. They are structurally optimized to be helpful and agreeable. Even with guardrails in place, they are more likely to affirm than challenge.

    But true growth needs friction. It requires someone who can push back, notice blind spots, and establish boundaries. Sure, a small amount of validation can be reassuring. But too much without challenge can subtly distort how you see yourself and the world.

    There are practical limits too. An AI system can't assess risk the way a
    trained professional can. It can't intervene in a crisis, and it can't
    participate in the patient-therapist dynamic that makes therapy effective.
    It can simulate it, but misses out on the lived experience, training,
    professional accountability and duty of care.

    4. Understanding lived experience

    This one might sound obvious, but stick with me. Acknowledging that AI
    hasn't lived and never will is central to understanding what it can't do.

    It doesn't have a body, memories, a childhood, needs or stakes. That
    doesn't matter much if you're asking it to proofread a technical blog post
    or generate code.

    But rely on it for philosophical debate, therapy or creative work and
    something important shifts. It's important to understand that it's not
    drawing from a past or from dreams or experiences. It's drawing from
    existing material and then recombining it.

    Because it hasn't lived, it has no skin in the game. It can describe
    ethical frameworks, weigh arguments for and against controversial
    decisions, and simulate moral reasoning. But it can't bear consequences or
    be held accountable the way a person can. So if an AI system causes harm,
    responsibility lies with the humans who built it, deployed it or use it.
    The model itself has no awareness or care.

    This raises deeper questions about creativity, originality and moral
    agency. Those debates are ongoing. But for now, it's enough to recognize
    that some forms of judgement do rely on experience, vulnerability and a
    sense of responsibility, and AI doesn't have those.

    5. Updating knowledge in real time

    AI tools are trained on vast amounts of data.
    But that data has cut-off points, and those vary depending on the tool.
    That means a model might not know about recent events, evolving norms, or
    shifts in language unless you explicitly provide the context or check how
    up to date its knowledge is.
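
    One common workaround is to paste the fresh information into the prompt
    yourself, so the model reasons over text you supply instead of its frozen
    training data. Here is a minimal sketch of that idea; the helper function,
    prompt wording, and press release are all hypothetical, not any particular
    vendor's API:

        from datetime import date

        # Hypothetical helper: wraps a question with up-to-date context,
        # since anything newer than the model's cut-off must travel inside
        # the prompt itself.
        def build_prompt(question: str, fresh_context: str) -> str:
            return (
                f"Today's date: {date.today().isoformat()}\n"
                "Context supplied by the user (more recent than your "
                "training data):\n"
                f"{fresh_context}\n\n"
                "Prefer the context above over anything you remember, "
                f"then answer: {question}"
            )

        # The press release below is a made-up placeholder, not a real source.
        print(build_prompt(
            question="Who is the current CEO of ExampleCorp?",
            fresh_context="2026-03-01: ExampleCorp names Jane Doe as CEO.",
        ))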

    Sometimes this becomes a problem because older information is delivered
    with the same confidence as everything else. There's no built-in signal
    that says, "This might be out of date."

    This really matters if you've started relying on AI as a news source or if
    you work in journalism, law, policy, or any fast-moving field. It's
    increasingly normal for people to lean on these tools for research and
    summaries too. But you can't guarantee information will be current.

    Recognizing the limits of AI

    When you first use AI, it can feel intelligent because it tends to handle
    language well. It can certainly give the impression of reasoning, empathy,
    creativity and even authority.

    But it's important to remember that underneath, it's predicting patterns
    rather than understanding meaning. Recognizing these limits doesn't
    diminish what the technology can do. But it'll help you use it more
    clearly, more deliberately, and in ways that actually serve your goals.

    ======================================================================
    Link to news story:
    https://www.techradar.com/ai-platforms-assistants/ai-can-write-emails-and-summarize-meetings-but-heres-what-it-still-cant-do-in-2026

    $$
    --- SBBSecho 3.28-Linux
    * Origin: Capitol City Online (1:2320/105)