Your Team Is Already Using AI. Here's How to Not Get Burned

Your staff are pasting client data into personal AI accounts right now. Here's the data governance crash course your board needs this quarter.

Your team is already using AI tools, whether you have a policy or not. About 78% of employees using AI at work are bringing their own tools (Salesforce, 2024). The gap between "no policy" and "basic policy" is where data leaks happen. Five quick wins can close that gap today.

Here's an uncomfortable truth: someone on your team has probably already pasted customer data into ChatGPT. Not maliciously. They were trying to write a better email, summarise a report, or draft a response. They didn't think about where that data goes. This is the AI security and GDPR gap that most small teams don't know they have.

This post isn't about scaring you. It's about giving you a practical plan for handling AI security without hiring a compliance consultant or banning AI entirely (which doesn't work anyway; people just use it on their phones).

Where does your data actually go?

Not all AI platforms handle your data the same way. This table shows the key differences as of early 2026:

| Platform | Free tier training | Paid tier training | Data retention | GDPR compliant | SOC 2 |
| --- | --- | --- | --- | --- | --- |
| ChatGPT Free | Yes, used for training | N/A | 30 days | Yes (DPA available) | No |
| ChatGPT Plus/Team | Opt-out available | No (Team/Enterprise) | 30 days (adjustable) | Yes (DPA available) | Yes (Enterprise) |
| Claude Free | No | N/A | As stated in policy | Yes (DPA available) | No |
| Claude Pro/Team | No | No | 90 days | Yes (DPA available) | Yes (Team+) |
| Gemini Free | Yes, used for training | N/A | Up to 18 months | Yes (limited) | No |
| Gemini Business | No | No | Configurable | Yes (full DPA) | Yes |
| Copilot (Microsoft 365) | No | No | Tenant boundary | Yes (via M365 DPA) | Yes |

The critical column is "Free tier training." If it says "Yes," anything your team types into the free version could be used to train the model, which means it could theoretically appear in responses to other users. The probability is extremely low, but the regulatory risk is real.

Key takeaway

Free AI tools are not safe for any data you wouldn't post publicly. If your team handles customer data, personal information, or business secrets, a paid plan with training opt-out is the minimum requirement.

[IMAGE: Data flow diagram showing where user input goes on free vs paid AI tiers]

  • Type: diagram
  • Filename: data-flow-free-vs-paid.png
  • Alt text: A data flow diagram comparing how user input is handled on free tier AI tools versus paid enterprise tiers, showing training data pathways
  • Caption: Free tiers may use your inputs for training. Paid tiers typically don't.

What is the gap between personal and enterprise plans?

The difference between a $20/month personal plan and a $30/month team plan isn't just features. It's a fundamentally different data handling agreement.

Personal plans (ChatGPT Plus, Claude Pro) give you better models and longer conversations. But the data handling is still consumer-grade. Your conversations are stored on the provider's servers. You can delete them, but you get only a limited audit trail and no admin controls.

Team and enterprise plans add:

  • Admin dashboard with usage monitoring
  • Data processing agreements (DPAs) that satisfy GDPR Article 28
  • No training on your data, contractually guaranteed
  • SSO and access controls so you manage who can use what
  • Audit logs showing what was asked and when
  • Data residency options (EU hosting for GDPR compliance)

For a 10-person team, the cost difference between personal and team plans is roughly $100-200/month. That's the price of not getting a nasty letter from a regulator. When I mentioned this figure in a workshop, one founder called it "the cheapest insurance policy I've ever heard of."

According to GDPR.eu, the average GDPR fine in 2024 was EUR 1.6 million. Even at the low end, fines for a small business typically run EUR 10,000-50,000. The maths on team plans writes itself.

What are five quick wins your board can approve today?

You don't need a six-month security project. These five actions take less than a day combined and close the biggest gaps:

  1. Upgrade to team plans. Move your most active AI users (usually 3-5 people) to team-tier plans. Cost: $25-30 per user per month. This immediately stops your data from being used for training and gives you admin controls.

  2. Publish a one-page AI usage policy. It doesn't need to be a legal document. One page covering: which tools are approved, what data can and cannot be entered, and who to ask if you're unsure. We've seen effective policies that fit on a single A4 sheet.

  3. Create a "never paste" list. Give every team member a clear list of data types that must never go into any AI tool: passwords, API keys, customer financial data, health records, full customer databases, legal documents under NDA. Print it. Stick it next to monitors. (A small automated checker is sketched after this list.)

  4. Enable conversation history controls. On ChatGPT: Settings > Data Controls > turn off "Improve the model for everyone." On Claude: this is off by default on paid plans. On Gemini: Activity controls > turn off Gemini Apps Activity. Takes 2 minutes per person.

  5. Quarterly 15-minute review. Set a calendar reminder. Every quarter, check: are we still on the right plans? Has anyone found a new AI tool we should evaluate? Any incidents? Fifteen minutes prevents drift.
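
Quick win 3 lends itself to light automation. Below is a minimal sketch of a pre-paste checker in Python; the patterns and function names are illustrative assumptions, not a complete scanner, and a real deployment would lean on a proper secrets-detection tool.

```python
import re

# Illustrative patterns for the "never paste" list -- a starting point,
# not an exhaustive scanner.
NEVER_PASTE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "bearer token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_before_paste(text: str) -> list[str]:
    """Return a warning for each never-paste pattern found in the text."""
    return [
        f"Possible {label} detected -- do not paste this into an AI tool."
        for label, pattern in NEVER_PASTE_PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    draft = "Summarise this thread: jane@example.com, key AKIAABCDEFGHIJKLMNOP"
    for warning in check_before_paste(draft):
        print(warning)
```

Even a crude check like this catches the most common accidents before they leave the clipboard.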

Key takeaway

These five wins cost under $500/month for most small teams and close roughly 80% of the risk gap between "no policy" and "properly managed."

What are the five red lines your team should never cross?

Some things are non-negotiable regardless of which plan you're on. Make these absolute rules, not guidelines:

  1. Never paste combined customer personal data into any AI tool that doesn't have a signed DPA. Individual first names in a generic prompt are low risk. A full customer list with emails and purchase history is a GDPR breach waiting to happen.

  2. Never paste credentials, API keys, passwords, or access tokens. This sounds obvious. It happens constantly. A 2024 GitGuardian report found that AI coding assistants were a top source of accidental credential exposure.

  3. Never use AI to make automated decisions about people (hiring, firing, loan approvals, insurance) without human review and explicit documentation. The EU AI Act classifies these as high-risk AI applications with specific compliance requirements.

  4. Never assume a free tool's privacy policy is permanent. Companies change terms regularly. OpenAI updated its data usage terms three times in 2024 alone. Review terms quarterly.

  5. Never use AI-generated legal, medical, or financial advice as final output. AI can draft these documents, but a qualified human must review and approve them. This isn't just risk management; in many jurisdictions it's a legal requirement.

[IMAGE: The five red lines displayed as a clear warning-style graphic]

  • Type: diagram
  • Filename: five-red-lines.png
  • Alt text: Five red lines for AI usage displayed as prohibited actions: no customer PII, no credentials, no automated people decisions, no assumption of permanent privacy terms, no unreviewed professional advice
  • Caption: Five rules. No exceptions. Print this out.

What should an enterprise AI plan include?

If your team is growing and AI usage is increasing, here's the checklist for moving from "quick wins" to a proper enterprise AI framework:

  • Approved tools list with version and plan tier for each
  • Data classification system (public, internal, confidential, restricted) with clear rules for which classification can go into which tool; a minimal sketch follows this checklist
  • Signed DPAs with every AI provider your team uses
  • Training program covering at minimum: what AI can and can't do, data handling rules, the "never paste" list, and how to report concerns
  • Incident response plan for AI-related data exposures: who to notify, what to document, and the 72-hour GDPR breach reporting deadline when someone pastes client data into the wrong tool
  • Regular audits of AI usage patterns (quarterly minimum)
  • Procurement review so new AI tools go through security assessment before adoption
  • Documentation of AI use in customer-facing processes (GDPR requires transparency about automated processing)
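
As an illustration of the data classification item above, here is a minimal sketch in Python. The tool names and tier assignments are hypothetical placeholders; substitute your own approved-tools list.

```python
# Hypothetical approved-tools mapping -- replace with your own tools and tiers.
APPROVED_TOOLS = {
    "public":       {"ChatGPT Team", "Claude Team", "Gemini Business", "Copilot M365"},
    "internal":     {"ChatGPT Team", "Claude Team", "Copilot M365"},
    "confidential": {"Copilot M365"},  # example: only tools inside the tenant boundary
    "restricted":   set(),             # restricted data never enters any AI tool
}

def is_allowed(classification: str, tool: str) -> bool:
    """Check whether data at this classification level may enter this tool."""
    return tool in APPROVED_TOOLS.get(classification, set())

print(is_allowed("internal", "ChatGPT Team"))    # True
print(is_allowed("restricted", "Copilot M365"))  # False
```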

This looks like a lot. It isn't, really. Most of these are one-time setup tasks. A 20-person company can work through this checklist in 2-3 days of focused effort. If you want to review the full course context before building your framework, start from the AI for Business course hub.

Key takeaway

The enterprise checklist is for when you're ready to do this properly. The five quick wins are for right now. Don't let the perfect framework prevent you from taking basic protective steps today.

FAQ

Do I need to tell customers we use AI?

Under GDPR, if AI is involved in processing personal data or making decisions that affect customers, yes. The safest approach: add a line to your privacy policy stating that you use AI tools for specific purposes and name the providers. Transparency builds trust.

What if a team member accidentally pastes sensitive data into ChatGPT?

Don't panic. Delete the conversation immediately (this removes it from your account, though the provider may retain it server-side for a limited period under its retention policy). Document the incident internally. Assess whether it constitutes a personal data breach under GDPR (it might not, depending on what was pasted). If it does, you have 72 hours to notify your data protection authority.

Is using AI for recruitment legal?

It depends on your jurisdiction. The EU AI Act, whose obligations phase in from 2025 through 2027, classifies recruitment AI as high-risk, requiring human oversight, bias testing, and documentation. In Australia, anti-discrimination laws apply to AI-assisted decisions just as they do to human ones. Use AI to draft job descriptions or summarise CVs, but keep humans in the decision loop.

Should we ban AI use entirely to be safe?

No. Banning AI doesn't stop usage; it pushes it underground. A Salesforce survey found that 55% of employees who use AI at work hide it from their employer. A ban with no enforcement is worse than a policy with clear guidelines, because the ban gives you a false sense of security.

How do open-source models change the picture?

Running open-source models (like Llama, Mistral, or Qwen) on your own infrastructure eliminates the data-sharing concern entirely since the data never leaves your servers. The trade-off is that you need technical capacity to host and maintain them. For most small teams, a paid cloud plan with a DPA is simpler and more cost-effective.
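
For teams that do have the technical capacity, the hosting step is smaller than it sounds. A minimal sketch, assuming Ollama (https://ollama.com) is running locally with a model such as llama3 already pulled; the prompt and model name are illustrative:

```python
import requests  # pip install requests

# Query a locally hosted open-source model via Ollama's HTTP API.
# The endpoint is localhost, so nothing leaves your machine.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarise our Q3 customer feedback themes in three bullets.",
        "stream": False,  # return one complete response rather than a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Because the data never reaches a third party, no DPA is needed for this path, though your internal classification rules still apply.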


Next up: Your AI Action Plan: 5 Challenges, 20 Minutes -- put everything together with five timed challenges that give you real, usable outputs.

This is Post 6 of 7 in the AI for Business free course. Previous: Competitor Analysis
