Navigating the New Personalities of AI in Your Business Ecosystem

Over the past three years, I’ve watched large language models (LLMs) shift from novelty to necessity. What began as a sandbox tool for drafting content has become embedded across the enterprise – not as an add-on, but as a core layer in productivity suites, procurement systems, customer platforms, and third-party apps. Most organisations are already running multiple models, some licensed, others self-paid or free, each carrying different implications for data privacy and risk.

That’s why I no longer treat AI as a single technology. These models behave differently. They respond differently. They’re more like a cast of digital characters – each with its own strengths, blind spots, and operational footprint. In this article, I introduce six of the most influential LLM “personalities” I see in use today – and share five principles I use to help businesses stay in control.

Meet The Models

ChatGPT – 100 PhDs But Still Paying Its Student Loan

ChatGPT is still the most widely used LLM on the planet. It’s helpful, eloquent, and surprisingly competent at everything from contract drafting to supplier Q&As. But behind that confidence is some fragility: occasional hallucinations, and a business still burning billions of dollars a year.

If ChatGPT were a colleague, it’d be the brilliant, overqualified intern who quietly runs the office – but occasionally guesses when they don’t know the answer. In many organisations I work with, it’s become the unofficial third wheel. When Copilot or another embedded tool falls short, people jump to ChatGPT. Sensitive content often goes with them.

Microsoft Copilot – The Embedded Ally (That Doesn’t Always Deliver)

Copilot is Microsoft’s safe pair of hands. It’s integrated, secure, and familiar. But it’s also limited, and when it doesn’t deliver, people find workarounds. That’s where shadow usage creeps in. I often find that even in well-governed businesses, users are switching tools without realising the implications.

Anthropic Claude – The Ethical Coder and Quiet Performer

Claude is subtle but powerful. Anthropic has built its reputation around alignment and safety, and Claude reflects that. It handles long-context tasks, legal logic, and coding prompts better than most. Developers increasingly use it by default. I think it’s likely to be the first model that breaks into mainstream profitability, not because it shouts loudest, but because it just works.

Grok – The Wild Card with a Friendly Face

Grok is built by xAI and tied into the X (Twitter) platform. It’s chatty, fast, often kind, but can have a bit of attitude. Most of the time it’s fine, but occasionally it goes rogue. It’s a bit like the helpful colleague who might say something hilarious – or highly inappropriate, and definitely not granny-friendly – on a Zoom call.

Baidu, DeepSeek & Co – The Fast, Cheap, and (Sometimes) Risky Option

Chinese LLMs are catching up fast. They’re backed by serious investment, strong academic networks, government support and sharp pricing. But let’s not kid ourselves: they raise questions about data jurisdiction. I always advise caution when dealing with anything commercially sensitive. These models have a role to play, but only when used with clear boundaries.

Google Gemini – Playing a Different Game

Google’s not just trying to win the Gen AI race. With Gemini, they’re building towards world models, systems that can simulate and interact with complex environments. That makes them powerful in robotics, supply chain, and simulation-heavy scenarios. If others are improving tools, Google’s reshaping the toolbox. I think this has long-term implications for anyone in industrial, logistics, or systems-led businesses.

Why This Matters

Here’s the uncomfortable bit: these models are already in your ecosystem, whether you’ve sanctioned them or not. Your teams use them to check facts, draft content, or shortcut processes. Your suppliers use them to respond to RFPs, complete onboarding, or generate pricing. Your customers use them to interact with your channels.

If I’ve learned anything over the past 12 months, it’s that most businesses don’t have a full handle on what’s actually in use. Worse, they don’t realise how easily confidential data ends up pasted into public tools. All it takes is one shortcut.

And once that data is out, you don’t get it back.

Five Things I Recommend You Do

  1. Map Your AI Estate
    Get visibility on where LLMs are being used, officially and unofficially. This includes internal systems, licensed software, third-party tools, and supplier processes. You can’t govern what you haven’t mapped.
  2. Define Clear Policies
    Set boundaries. What’s okay to enter into a prompt? Which tools are approved? What do suppliers need to follow? I’ve seen too many “guidelines” fail because they lacked clarity or teeth.
  3. Safeguard Sensitive Data
    Don’t rely on users to remember what’s sensitive. Automate redaction where you can. Mask confidential information – IP, product details, contract terms, pricing – before it hits the model. If you wouldn’t email it to a competitor, don’t paste it into a prompt or build it into an API.
  4. Stay Cyber Smart
    LLMs open up new attack surfaces, through plugins, extensions, prompt logs and more. Work with your security teams to monitor this. I’ve seen prompt injection attacks sneak through in places no one thought to look.
  5. Review Regularly
    AI isn’t static. What’s safe today might not be tomorrow. I advise clients to review their AI model landscape, supplier usage, and risk register every quarter. It’s a discipline, not a one-off task.
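The automated redaction in point 3 can be sketched as a simple pre-prompt filter. This is a minimal illustration, not production data-loss prevention: the regex patterns, the `CTR-` contract-reference format, and the placeholder labels are all assumptions you would replace with patterns tuned to your own data (or a proper DLP tool).

```python
import re

# Illustrative patterns only -- real deployments should use a DLP library or
# classifier tuned to the organisation's data, not a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PRICE": re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d{2})?"),
    "CONTRACT_REF": re.compile(r"\bCTR-\d{4,}\b"),  # hypothetical internal format
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Quote £12,500.00 for contract CTR-20231 to jane@acme.com"))
# -> Quote [PRICE REDACTED] for contract [CONTRACT_REF REDACTED] to [EMAIL REDACTED]
```

The point of running this as a gateway in front of the model, rather than trusting users, is exactly the one made above: redaction happens automatically, every time, regardless of who is typing.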

Final Thoughts

2023 was the year of experimentation. 2024 was the year of expansion. 2025 is the year of control. This isn’t about banning tools or stifling innovation. It’s about understanding what’s in play and staying in control.

If you want real business value from GenAI, you’ve got to manage it like any other strategic capability, with visibility, governance, and intent.

And if you haven’t started yet, now’s the time.

Images courtesy of ChatGPT… of course.