How do I create an AI usage policy for my company?
A step-by-step guide to creating an AI usage policy that protects your business data while empowering employees to use AI productively.
Key Takeaways
- Your employees are already using AI at work - a policy protects you from data leaks, compliance violations, and liability
- Key sections: approved tools, data classification rules, prohibited uses, output review requirements, and training obligations
- Use a traffic-light system for data classification - green (safe for AI), yellow (enterprise tools only), red (never input into AI)
- Industry-specific concerns like HIPAA, PCI, and client NDAs should shape your policy's restrictions
- Start with a one-page policy and review quarterly - a simple policy today beats a perfect policy next quarter
Your employees are already using AI. Microsoft’s 2024 Work Trend Index found that 78% of AI users bring their own AI tools to work, most without telling their manager or IT team. That means right now, someone at your company may be pasting customer data, financial records, or proprietary information into a consumer AI tool with no data protection guarantees.
You need an AI usage policy. Not next quarter - now. Here’s how to build one that protects your business without killing the productivity gains AI delivers.
Why You Need a Policy (And Why It’s Urgent)
The Real Risks of No Policy
Without guidelines, employees are making individual decisions about what’s appropriate to put into AI tools. Here’s what that looks like in practice:
- An HR manager pastes employee performance reviews into ChatGPT for help writing summaries - personal data is now in a third-party system
- A sales rep uploads a prospect’s confidential RFP to generate a proposal response - that prospect’s proprietary information is potentially exposed
- A finance team asks AI to analyze spreadsheets with revenue figures and customer payment details - sensitive financial data leaves your control
- A healthcare office pastes patient notes into an AI tool to draft referral letters - a potential HIPAA violation carrying fines of up to $1.5 million
None of these employees are being malicious. They’re trying to be productive. But without a policy, “productive” and “risky” are the same thing.
Regulatory and Insurance Pressure
This isn’t theoretical anymore:
- Cyber insurance carriers are asking about AI governance during underwriting. Documented AI-usage policies are quickly becoming a standard requirement for coverage.
- Industry regulators are expanding their scope to include AI tool usage. HIPAA, FINRA rules, and state privacy laws are all being interpreted to cover how employees use AI.
- Client contracts that require data handling controls extend to AI tools. If your agreement says you’ll protect client data, pasting it into a consumer AI tool may breach that contract.
- Litigation risk is growing. If a data breach traces back to unregulated AI usage, your liability increases significantly.
The Business Upside
Beyond risk management, a good policy actually helps your business:
- Gives employees permission to use AI productively (many are unsure what’s allowed and play it safe by avoiding AI entirely)
- Standardizes tools so you get volume licensing, better security, and consistent data protection
- Improves output quality by requiring human review of AI-generated content
- Reduces shadow IT by approving specific tools rather than forcing employees to sneak them in
- Demonstrates maturity to clients, partners, and auditors
The Key Sections Every AI Policy Needs
Your policy doesn’t need to be 30 pages. A shorter document that people actually read is better than a comprehensive one nobody opens. Focus on these core areas.
Section 1: Approved Tools
Be explicit about which AI tools employees may use for work purposes.
| Category | Tools | Notes |
|---|---|---|
| General AI assistants | Microsoft Copilot, ChatGPT Enterprise/Team | Enterprise versions with data protection agreements |
| Meeting transcription | Microsoft Copilot in Teams, Otter.ai Business | Approved for internal meetings only |
| Writing assistance | Grammarly Business, Copilot | Company-managed accounts only |
| Code assistance | GitHub Copilot | Engineering team only |
| Not approved | Consumer ChatGPT (free), personal Claude accounts, unvetted browser extensions | No data protection agreements in place |
Why it matters
Enterprise and business versions of AI tools typically include data protection agreements that prevent your inputs from being used for model training. Consumer (free) versions usually do not offer this protection. The distinction is critical.
Section 2: Data Classification Rules
Employees need clear, simple rules about what data can and cannot go into AI tools.
The Traffic Light System
Green - Safe to use with approved AI tools:
- Publicly available information
- General industry knowledge and best practices
- Internal drafts containing no sensitive data
- Marketing content and social media drafts
- Generic templates and standard procedures
Yellow - Use only with enterprise AI tools on the approved list:
- Internal business documents (non-confidential)
- Anonymized or de-identified data
- General financial summaries (no account numbers or customer specifics)
- Vendor communications
- Internal project plans and timelines
Red - Never input into any AI tool:
- Customer personal information (names, addresses, SSNs, dates of birth)
- Protected Health Information (PHI)
- Payment card data (credit card numbers, CVVs, account numbers)
- Employee personal records (compensation, performance reviews, medical information)
- Trade secrets, proprietary formulas, or source code
- Legal privileged communications
- Passwords, API keys, or access credentials
- Client confidential information covered by NDA
Example
An employee wants to use AI to draft a client proposal. They can input general information about your services and the project scope (green). They can reference anonymized details about similar past projects using an enterprise tool (yellow). They cannot paste the client’s confidential RFP, financial data, or proprietary specifications (red).
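One sign of a well-defined classification scheme is that you can encode it directly in tooling. Here is a minimal Python sketch of the traffic-light precedence - the tool names and tier assignments are hypothetical placeholders, not a statement about any vendor:

```python
from enum import Enum

class DataTier(Enum):
    GREEN = "green"    # safe with any approved AI tool
    YELLOW = "yellow"  # enterprise tools on the approved list only
    RED = "red"        # never input into any AI tool

# Hypothetical tool lists for illustration - substitute your own.
APPROVED_TOOLS = {"Microsoft Copilot", "ChatGPT Enterprise", "Grammarly Business"}
ENTERPRISE_TOOLS = {"Microsoft Copilot", "ChatGPT Enterprise"}

def is_use_allowed(tool: str, tier: DataTier) -> bool:
    """Apply the traffic-light rules in order of severity:
    red data is blocked everywhere, yellow data requires an
    enterprise-tier tool, green data may go to any approved tool."""
    if tier is DataTier.RED:
        return False          # red: no AI tool, ever
    if tool not in APPROVED_TOOLS:
        return False          # unapproved tools get nothing
    if tier is DataTier.YELLOW:
        return tool in ENTERPRISE_TOOLS
    return True               # green: any approved tool

print(is_use_allowed("ChatGPT Enterprise", DataTier.YELLOW))  # True
print(is_use_allowed("Consumer ChatGPT", DataTier.GREEN))     # False - not approved
print(is_use_allowed("Microsoft Copilot", DataTier.RED))      # False - red is red
```

If you find your rules hard to express this plainly, that usually means the policy language itself is ambiguous and worth tightening.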
Section 3: Prohibited Uses
Be specific about what AI should never be used for:
- Final decision-making without human review (hiring decisions, legal conclusions, medical assessments)
- Generating content presented as original research without verification
- Automating communications where recipients would reasonably expect a human (sensitive client conversations, HR discussions)
- Processing data that violates regulations (HIPAA, PCI, FERPA, GDPR)
- Creating content that could be defamatory, discriminatory, or misleading
- Bypassing security controls or using AI to circumvent company policies
Section 4: Output Review Process
AI generates content that sounds confident but can be wrong. Your policy should require:
- Human review of all AI-generated content before external distribution
- Fact-checking of any statistics, claims, or technical information
- Legal review of AI-generated contracts, agreements, or compliance documents
- Brand review of customer-facing marketing content
- Source verification - if AI cites a study or regulation, verify it exists and says what AI claims
Key point
“The AI wrote it” is never a valid excuse for errors. The person who submits AI-generated work is responsible for its accuracy, appropriateness, and compliance.
Section 5: Training Requirements
A policy without training is a document, not a program. Require:
- Initial training for all employees within 30 days of policy adoption (or within the first week for new hires)
- Content coverage: what tools are approved, the traffic light data system, how to review AI outputs, where to report concerns
- Practical examples - walk through real scenarios relevant to your business
- Refresher training annually, or whenever the policy is significantly updated
- Department-specific sessions for teams with unique AI use cases or elevated data access
Example
Training should include live demonstrations. Show employees what it looks like to correctly use AI for a task (inputting only green-category data, reviewing outputs, editing for accuracy) versus what a violation looks like (pasting a client contract into a consumer AI tool).
Section 6: Review and Update Process
AI evolves rapidly. A policy written in January will be partially outdated by summer. Build in:
- Quarterly reviews of the approved tools list and data classification rules
- Incident-triggered updates - if a data exposure occurs through AI usage, update the policy immediately
- New tool evaluation process - a clear path for employees to request new AI tools, with security review before approval
- Annual comprehensive overhaul - revisit every section, benchmark against current regulations and industry practices
- Version control - date every update and maintain a changelog (see the example below)
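Example
A changelog can live at the bottom of the policy document itself. A hypothetical set of entries:
- 2025-06-30 (v1.2): Added a meeting-transcription tool to the approved list after security review
- 2025-03-28 (v1.1): Moved vendor communications from green to yellow
- 2025-01-02 (v1.0): Initial policy adopted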
Industry-Specific Considerations
Healthcare (HIPAA)
- PHI must never be entered into consumer AI tools - period
- Business Associate Agreements (BAAs) are required with any AI vendor processing PHI
- Document all AI tools that touch patient data in your HIPAA risk assessment
- Train staff on the specific intersection of AI usage and HIPAA obligations
- Consider HIPAA-specific language in your policy that mirrors your existing Notice of Privacy Practices
Financial Services (SEC, FINRA, SOX)
- Recordkeeping requirements may apply to AI-assisted communications
- Supervisory review obligations extend to AI-generated client communications
- Audit trails should capture when and how AI was used in financial analysis or reporting
- Model risk management frameworks may need to encompass AI tool usage
Legal
- Attorney-client privilege considerations when inputting case details into AI
- Ethical obligations around competence and diligence apply to AI-assisted work product
- Court rules on AI disclosure are evolving - some jurisdictions now require it
- Confidentiality obligations to clients extend to AI tool usage
Government Contractors
- CUI (Controlled Unclassified Information) handling rules apply to AI tools
- FedRAMP authorization may be required for AI tools processing government data
- CMMC requirements should be reflected in your AI policy
- Export control regulations (ITAR, EAR) may restrict AI usage for certain technical data
A Simple Policy Template
Here’s a framework you can adapt. Aim for one to two pages for the core policy.
[Company Name] AI Usage Policy
Purpose: Guidelines for responsible AI tool usage that protect company, client, and employee data while enabling productivity.
Scope: All employees, contractors, and vendors performing work for [Company Name].
Approved Tools: [Insert table of approved tools by category]
Data Rules:
- Green data: May be used with any approved AI tool
- Yellow data: Enterprise AI tools on the approved list only
- Red data: Never input into any AI tool under any circumstances
Prohibited Uses: [List 4-6 specific prohibitions relevant to your business]
Output Requirements:
- All AI-generated content must be reviewed by a human before external use
- Verify all facts, statistics, and technical claims independently
- The submitting employee is responsible for accuracy
Training: All employees must complete AI usage training within 30 days of hire or policy adoption.
Reporting: Report AI-related concerns or incidents to [contact name] at [email].
Enforcement: Violations will be addressed through [Company Name]’s standard disciplinary process.
Effective Date: [Date] | Next Review: [Date + 90 days]
Enforcement: Making the Policy Stick
Technical Controls
Where possible, back up your policy with technology:
- Block unapproved AI tools at the network or endpoint level
- Deploy approved tools centrally through IT so employees have easy, sanctioned access
- Monitor for shadow AI using your endpoint management or CASB (Cloud Access Security Broker) tools
- Add data loss prevention (DLP) rules that flag sensitive data being sent to AI services - a minimal sketch of this idea follows below
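To make the DLP idea concrete, here is a minimal Python sketch of a pattern-based pre-flight scan. The patterns are illustrative assumptions, not a complete detection set - a real deployment would rely on your DLP or CASB vendor’s policy engine:

```python
import re

# Illustrative red-category patterns only - real DLP products use far
# more robust detection (checksums, context, machine learning).
RED_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_red_data(text: str) -> list[str]:
    """Return the names of any red-category patterns found in the text."""
    return [name for name, pattern in RED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Summarize this: customer SSN 123-45-6789, card 4111 1111 1111 1111"
    findings = scan_for_red_data(draft)
    if findings:
        print("Blocked before sending to an AI service:", ", ".join(findings))
    else:
        print("No red-category patterns detected.")
```

Even a simple check like this catches the most common accidental pastes; it complements, rather than replaces, a commercial DLP platform.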
Cultural Controls
Technical controls aren’t enough on their own:
- Lead from the top - leadership should visibly use approved AI tools and follow the policy
- Make it easy to comply - if your approved tools are harder to use than the unapproved alternatives, people will find workarounds
- Encourage questions - create a safe channel for employees to ask “Can I use AI for this?”
- Celebrate good usage - share examples of employees using AI effectively within policy guidelines
Common Mistakes to Avoid
Banning AI Entirely
Banning AI doesn’t stop usage. It drives it underground onto personal devices and personal accounts where you have zero visibility. It’s always better to approve specific tools with guardrails.
Making the Policy Too Complex
If your AI policy requires a lawyer to interpret, nobody will follow it. Use plain language, provide examples, and keep the core policy to two pages maximum.
Setting It and Forgetting It
An AI policy that isn’t reviewed quarterly will fall behind the pace of change. New tools, new capabilities, and new regulations emerge constantly.
Ignoring What Employees Actually Need
If your approved tools don’t meet employee needs, they’ll find unapproved ones. Listen to what people want from AI and find approved ways to deliver it.
The Bottom Line
Your employees are using AI whether you have a policy or not. The question is whether they’re doing it safely. A clear, practical AI usage policy protects your business data, satisfies regulators and insurers, and empowers employees to use AI more effectively.
Don’t wait for an incident to force the conversation. Start with a simple one-page policy covering approved tools, data classification rules, and output review requirements. You can expand it over time. What you can’t do is un-paste confidential data from a consumer AI tool after the fact.
A basic policy this week is infinitely better than a perfect policy next quarter.
Need help building an AI usage policy for your business? Contact us for guidance on AI governance that balances security with productivity.