Here’s a scenario that plays out more often than most IT directors admit: a department head drops into a meeting, mentions that the team has been using ChatGPT to draft client proposals, and the room goes quiet. Nobody approved it. But nobody blocked it either. The data that’s already been entered – customer names, pricing, project scopes – is out there. And the IT director is the one who’ll be expected to explain what happened.
This is the reality for most mid-size organizations right now. The question is no longer whether your staff are using AI: according to a 2025 WalkMe survey, 78% of employees admit to using AI tools their employer never approved, and over half say they’ve received conflicting guidance on when and how to use them. AI is already in your organization. The question is whether you have a plan around it.
Policy Before Tools
The instinct when leadership asks about AI is to start evaluating platforms. Which tool? What vendor? What’s the cost? Those are reasonable questions, but they’re second-order. Before you select any tool, you need a written acceptable use policy that defines how AI can and cannot be used within your organization.
That policy doesn’t have to be long. It does have to be specific. It should state which categories of data are off-limits for AI tools: personally identifiable information, financial records, proprietary client data, and student records if you’re a school. It should clarify whether staff can use consumer-grade AI for work tasks at all. And it should spell out who approves exceptions.
Without this, you get shadow AI. And shadow AI is expensive. IBM’s 2025 Cost of a Data Breach Report found that shadow AI incidents now account for one in five breaches, and that organizations with high levels of shadow AI faced an extra $670,000 in breach costs compared to those without it. That’s the price of tools in your environment that nobody approved, nobody monitors, and nobody knows how to secure.
Know What Data You’re Sitting On
AI tools are only as risky as the data they’re given access to. This means your AI readiness work is inseparable from your data governance. Most organizations with 80 to 200 employees have data spread across shared drives, cloud platforms, email archives, and applications that haven’t been audited in years. Before any AI implementation, you need to understand what is where.
Start with the basics. What sensitive data do you hold? Where does it live? Who has access to it? If you can’t answer those questions confidently, you’re not ready to add AI into the mix. This is particularly relevant for manufacturers handling proprietary processes, private schools managing student and family data, and local firms bound by client confidentiality agreements.
A data access review means mapping your most sensitive data stores, confirming that permissions are appropriate, and documenting what should never be shared with external tools, AI or otherwise. It doesn’t have to start as a big project; even a rough automated sweep, like the sketch below, gives you a list to work from.
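To make that concrete, here’s a minimal Python sketch of what a first automated sweep might look like. Treat everything in it as an assumption to adapt: the share path, the filename keywords, and the premise that filenames are a useful first signal. It produces candidates for human review, not a verdict.

```python
# first_pass_data_scan.py -- a rough first pass at spotting potentially
# sensitive files on a shared drive. The root path and keyword list below
# are assumptions; adjust both to your environment. This feeds a manual
# review; it is not a substitute for a real data classification tool.
import os
import re
from pathlib import Path

ROOT = Path("/mnt/shared")  # hypothetical share; point at your own drive

# Filename keywords that often signal sensitive content.
NAME_KEYWORDS = re.compile(
    r"ssn|payroll|salary|invoice|contract|pricing|confidential",
    re.IGNORECASE,
)

def scan(root: Path):
    """Yield (path, size_in_bytes) for files whose names look sensitive."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if NAME_KEYWORDS.search(name):
                path = Path(dirpath) / name
                try:
                    yield path, path.stat().st_size
                except OSError:
                    continue  # unreadable file; skip it

if __name__ == "__main__":
    hits = sorted(scan(ROOT), key=lambda hit: hit[1], reverse=True)
    for path, size in hits[:50]:  # largest 50 candidates first
        print(f"{size:>12,}  {path}")
    print(f"\n{len(hits)} candidate files found under {ROOT}")
```

Even a crude list like this tells you where to start the permissions conversation: who owns these files, who can reach them, and whether any of them belong anywhere near an external AI tool.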
Staff Behavior Is the Biggest Variable
Technology controls matter, but human behavior is the bigger variable. BlackFog’s January 2026 research found that 86% of workers now use AI tools at least weekly for work, and more than a third are using free, consumer-grade versions of those tools – the ones with no enterprise security, no data governance, and no guarantees about where your information ends up.
What makes this harder to manage is the culture around it. Nearly half of employees admit to pretending they know how to use AI tools in meetings to avoid scrutiny. Almost as many have hidden their AI use from colleagues entirely. That means the people who most need guidance are the ones least likely to ask for it.
If staff know exactly what’s permitted, what tools are available, and how to use them responsibly, shadow usage drops. The organizations that handle this well publish a short, readable AI usage guide, pair it with a brief training session, and revisit it quarterly.
Your Checklist: Seven Things to Have in Place
If you’re evaluating where your organization stands, these are the items that separate just getting started from a defensible position.
- A written AI acceptable use policy that covers which tools are permitted, which data is off-limits, and what requires approval.
- A data classification exercise so you know which information is sensitive and where it lives. This doesn’t have to be exhaustive; start with your top ten data stores.
- An access control review confirming that permissions match current roles and that former employees or contractors don’t still have access to sensitive systems.
- A staff training baseline covering what AI is, what the organization’s policy says, and what safe usage looks like in practice. Even thirty minutes makes a measurable difference.
- An inventory of AI tools already in use, both sanctioned and unsanctioned. You can’t govern what you can’t see. Ask departments directly; you’ll be surprised. A pass over your proxy or firewall logs, like the sketch after this list, will fill in what they don’t report.
- A named owner for AI governance, because when no one is responsible, no one acts. This doesn’t have to be a new hire, but it does need to be someone with the authority to make and enforce decisions.
- A review cadence to revisit your policy, your tool list, and your training on a quarterly or biannual basis. AI is moving fast enough that a policy written six months ago may already have gaps.
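For the inventory item, a short pass over a log export is often enough for a first cut. The sketch below is illustrative, not authoritative: the CSV file name, the "host" column, and the domain list are all assumptions you’d replace with whatever your firewall or proxy actually exports.

```python
# ai_tool_inventory.py -- first-pass inventory of AI services showing up
# in web traffic. Assumes a CSV export from your proxy or firewall with a
# "host" column; the file name, column name, and domain list are all
# assumptions to adapt. Output is a lead list for follow-up conversations.
import csv
from collections import Counter

LOG_EXPORT = "proxy_log.csv"  # hypothetical export path

# A starter list of domains associated with widely used AI tools.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "openai.com": "OpenAI API",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "perplexity.ai": "Perplexity",
}

def tally(path: str) -> Counter:
    """Count log rows whose host matches a known AI domain."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("host") or "").lower()
            for domain, tool in AI_DOMAINS.items():
                if host == domain or host.endswith("." + domain):
                    counts[tool] += 1
    return counts

if __name__ == "__main__":
    for tool, hits in tally(LOG_EXPORT).most_common():
        print(f"{tool:<20} {hits:>8} requests")
```

The numbers won’t be precise, and they don’t prove misuse. What they give you is specifics – which tools, roughly how much traffic – which is usually enough to start the right conversations.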
None of these require a large budget. Most require time, attention, and someone willing to own the outcome.
Governance, Not Experimentation for Its Own Sake
There’s a temptation to treat AI readiness as an innovation project – spin up some pilots, test a few tools, and see what sticks. That approach feels productive, but it often creates more risk than value. A 2025 study by Wharton and GBK Collective found that enterprises without a formal AI strategy reported only a 37% success rate in AI adoption, compared to 80% for those with a defined plan.
For organizations in the 80- to 200-seat range, governance doesn’t mean hiring a Chief AI Officer or standing up a committee. It means having a clear set of rules, a named point of accountability, and a rhythm for reviewing how AI is being used. It means treating AI adoption the way you’d treat any other technology rollout – with a policy, a plan, and a responsible party.
Where to Go from Here
If you read through that checklist and felt confident about every item, you’re ahead of most organizations your size. If a few of them made you stop and think, you have your starting point.
We’re hosting an AI Lunch & Learn in Jacksonville this March where we’ll walk through this checklist in real-world terms and show how other IT leaders in the region are approaching these decisions. It’s designed for IT directors and technology leads at mid-size organizations, the people who are fielding questions from leadership and trying to build a defensible plan with limited bandwidth.