The Flow Report

AI Safety for Small Business: What You Actually Need to Worry About

Practical AI safety for small businesses. What to worry about, what to skip, and what you should never put into a public AI tool.

Rock Hudson · 4 min read
AI · Technology

A bookkeeper I know pasted an entire client spreadsheet into ChatGPT to ask it to find discrepancies. The spreadsheet had names, Social Security numbers, and bank account details. She didn't think twice about it.

That's the kind of AI safety problem that actually happens in small businesses. Not dystopian robot scenarios. Just people trying to save time without realizing what they're sharing.

The Three Things Worth Worrying About

I'm going to keep this simple because the AI safety conversation tends to spiral into either paranoia or paralysis. Neither is useful. There are three categories of risk that actually matter for a small business using AI tools.

1. Data You're Feeding Into AI

When you type something into ChatGPT, Claude, or any cloud-based AI tool, that text goes to a server somewhere. Depending on the tool and your plan, that data might be used to train future models. Even if it isn't, it's been transmitted to a third party.

For casual use, this is fine. Drafting a marketing email, brainstorming taglines, asking how to word a tricky paragraph. No risk there.

For anything involving client data, financial information, personal details, health records, proprietary business information, or anything covered by an NDA, you need to pause.

Here's a practical rule: if you wouldn't email it to a stranger, don't paste it into a public AI tool.

The workaround isn't complicated. Most paid AI plans (ChatGPT Team, Claude Pro, enterprise tiers) come with data privacy agreements that exclude your inputs from training data. Use those. Or strip identifying information before pasting. Or use a local AI model that never sends data anywhere, which is increasingly viable for smaller tasks.
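If "strip identifying information" sounds vague, here's a minimal sketch of what it can mean in practice. This is a hypothetical helper, not a complete PII scrubber; the patterns (SSNs, emails, long account-style digit runs) are illustrative, and anything you actually paste still deserves a human glance.

```python
import re

# Illustrative patterns only -- real PII scrubbing needs patterns
# tuned to your own data, plus a human review of the result.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT": re.compile(r"\b\d{9,17}\b"),  # account number lengths vary by bank
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Jane Doe, SSN 123-45-6789, acct 000123456789, jane@example.com"))
# -> Jane Doe, SSN [SSN], acct [ACCOUNT], [EMAIL]
```

Run the spreadsheet export through something like this before it goes anywhere near a public tool, and the AI can still find your discrepancies without ever seeing a real Social Security number.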

2. AI Output You're Trusting Without Checking

AI hallucinates. It generates confident-sounding text that may contain fabricated facts, wrong numbers, or invented citations. This is a known limitation, not a bug that's getting fixed next quarter.

For a small business, the risk is real but manageable. If you use AI to draft a proposal and it includes a made-up statistic about your industry, that's embarrassing. If you use it to generate a contract clause and it invents a legal standard, that's potentially expensive.

The protocol is simple. Treat AI output as a draft. Always. Review factual claims. Double-check numbers. Don't send AI-generated content externally without a human reading it first.

This sounds obvious, but I've seen it go sideways. Speed is the whole appeal of AI, and the temptation to skip the review step is real. Build the review into your process so it's automatic, not optional.

3. Automations Running Without Oversight

This is the less obvious one. A tool you use manually (like chatting with Claude) has a natural check built in: you're reading the output before you do anything with it. An automation that runs in the background doesn't have that check.

If you've set up an automation that drafts and sends emails based on incoming requests, what happens when it misreads a request? If you've got an AI categorizing support tickets and routing them, what happens when it categorizes something wrong and a client's urgent issue sits in the wrong queue for two days?

Every automation needs a failure mode. For high-stakes automations (anything client-facing, anything financial), build in a human approval step. For low-stakes automations (internal sorting, draft generation), at least build in logging so you can spot problems after the fact.
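To make "human approval step plus logging" concrete, here's a small sketch of the pattern. The function name and structure are my own invention, not any particular automation platform's API: high-stakes actions must pass an approval check (a real prompt or dashboard in practice), and every action, approved or held, leaves a log entry you can audit later.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Hypothetical guard for an AI automation. High-stakes actions wait on
# an approval callback; everything gets logged either way, so the
# monthly audit has something to read.
def run_action(description, action, high_stakes=False, approve=lambda d: False):
    logging.info("proposed: %s", description)
    if high_stakes and not approve(description):
        logging.info("held for review: %s", description)
        return None
    result = action()
    logging.info("executed: %s", description)
    return result

# A client-facing email is held until a human approves; internal ticket
# sorting just runs and gets logged.
held = run_action("send invoice reminder to client", lambda: "sent",
                  high_stakes=True)
done = run_action("tag ticket as billing", lambda: "tagged")
print(held, done)  # -> None tagged
```

The point isn't this exact code; it's the shape. The default for anything marked high-stakes is "do nothing until a person says yes," and the log is what turns "I think the automation is fine" into something you can actually check.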

What You Can Skip Worrying About

AI taking over your business. AI making autonomous decisions you didn't authorize. AI becoming sentient and demanding a raise. These are not things that apply to you using Claude to draft follow-up emails.

You can also skip most of the regulatory panic. If you're a small business not in healthcare, finance, or government contracting, AI-specific regulations probably don't apply to you yet. Basic data privacy rules (don't share client data carelessly) cover most of what you need.

A Minimal Safety Protocol

If you want something concrete to put in place today, here's what I'd recommend for a small business:

Make a list of what's off-limits. Decide as a team what categories of information never go into a public AI tool. Client financials, personal data, anything under NDA, internal compensation data. Write it down. Keep it short and specific.

Use paid tiers with data agreements. The free versions of most AI tools have weaker privacy protections. The paid versions typically include contractual commitments about data handling. The cost is modest, usually under $25 per person per month.

Review before sending. Any AI-generated content that goes to a client, gets published, or becomes part of a deliverable gets reviewed by a human. No exceptions.

Audit your automations monthly. Spend 30 minutes once a month looking at your automated workflows. Check the logs. See if anything misfired. Adjust prompts or rules as needed.

That's it. Four things. If you do these four things, you've covered the realistic risks without building a compliance department.

If you want help thinking through what's appropriate for your particular business, especially if you handle sensitive client data, that's a good reason to set up a call. Not to sell you anything, just to make sure you're not accidentally exposing something you shouldn't be.