
AI Acceptable Use Policy Template for Business

Why AI Policies Matter Now

A year ago, AI policy was a “nice to have” for most businesses. Today, it’s a necessity. Employees are already using AI tools for work—whether you’ve approved them or not. The question isn’t whether to address AI governance but how to do it in a way that works.

The best AI policies share common characteristics: they’re clear enough to follow, flexible enough to be practical, and specific enough to actually guide decisions. The worst policies are either so restrictive that everyone ignores them or so vague that they provide no real guidance.

Take Our 2-Minute Security Assessment

centrexIT has helped businesses develop technology policies since 2002. If you’re not sure where to start with AI governance, let’s find out together.

Take the 2-Minute Cybersecurity Assessment

The Core Components of an AI Policy

Every effective AI acceptable use policy addresses these elements:

Scope: Who does this policy apply to? Employees, contractors, vendors? Does it cover only AI tools used for work, or also personal AI use on company devices?

Data classification: What types of information can interact with AI tools? Public information might have different rules than customer data, financial records, or proprietary processes.

Approved tools: Which specific AI tools are sanctioned for business use? Naming specific tools is clearer than describing categories.

Prohibited uses: What activities are explicitly not allowed? This might include processing regulated data, making final decisions without human review, or representing AI-generated content as human-created.

Accountability: Who is responsible for AI-generated outputs? If AI helps draft a customer communication that contains an error, who owns that mistake?

Review and updates: How often will the policy be reviewed? AI tools and capabilities change rapidly—policies need to keep pace.

AI Acceptable Use Policy Template

The following is a framework you can adapt for your organization. This is a starting point, not a complete policy—customize based on your industry, size, regulatory requirements, and risk tolerance.

Template

Artificial Intelligence Acceptable Use Policy

[Company Name] — Effective Date: [Date]


Section 1: Purpose and Scope

This policy establishes guidelines for the acceptable use of artificial intelligence tools by [Company Name] employees, contractors, and authorized third parties. It applies to all AI tools used in connection with company business, regardless of whether those tools are provided by the company or accessed through personal accounts.


Section 2: Data Classification for AI Use

Tier 1 — Public: Published marketing materials, public-facing website content, general industry research. Permitted AI use: any reputable AI tool.

Tier 2 — Internal: Internal communications, general process documentation, non-confidential project materials. Permitted AI use: approved enterprise AI tools with company accounts only.

Tier 3 — Sensitive: Customer data, employee personal information, financial records, proprietary processes, regulated information. Permitted AI use: requires explicit approval; designated tools with data handling agreements only.

Tier 4 — Prohibited: Authentication credentials, encryption keys, [industry-specific categories]. Permitted AI use: must never be processed by AI tools under any circumstances.

Section 3: Approved AI Tools

Tier 1–2: [e.g., Microsoft Copilot (enterprise), Claude (business tier), ChatGPT (enterprise)]

Tier 3: [Specifically vetted tools with appropriate data handling agreements]

This list will be maintained by [IT/Security team] and reviewed quarterly. Employees should check the current approved list before using any AI tool for work purposes.


Section 4: General Use Guidelines

4.1 — All AI use must comply with existing company policies regarding data protection, confidentiality, and acceptable use of technology resources.

4.2 — Employees are responsible for reviewing AI-generated content before use. AI outputs should be treated as drafts requiring human verification.

4.3 — AI-generated content used in customer-facing communications, proposals, or official documents must be reviewed for accuracy before distribution.

4.4 — Personal AI accounts should not be used for company business unless explicitly approved.


Section 5: Prohibited Uses

The following uses of AI tools are prohibited:

5.1 — Processing customer personal information, financial data, or health information without explicit approval and designated tools.

5.2 — Using AI to make automated decisions affecting customers or employees without human review.

5.3 — Representing AI-generated content as entirely human-created when such representation would be materially misleading.

5.4 — Using AI tools to bypass or circumvent security controls or access restrictions.

5.5 — Uploading company proprietary code, algorithms, or trade secrets to AI tools without specific authorization.


Section 6: Accountability and Compliance

6.1 — Employees are accountable for their use of AI tools and the outputs generated.

6.2 — Violations of this policy may result in disciplinary action consistent with existing company policies.

6.3 — Questions about whether specific AI use is appropriate should be directed to [IT/Security/Management contact].


Section 7: Policy Review

This policy will be reviewed quarterly and updated as needed to reflect changes in AI technology, business requirements, and regulatory guidance.

Last Revised: [Date] — Next Review: [Date]
Policy Owner: [Name/Title]

Implementation Tips

Creating a policy is just the first step. Making it work requires additional effort:

Communicate clearly: Don’t just email the policy and hope people read it. Walk teams through what it means for their specific work. Answer questions. Provide examples.

Make approved tools available: If you’re requiring enterprise AI tools, ensure employees actually have access to them. Nothing undermines a policy faster than approving tools nobody can use.

Provide training: Help employees understand not just the rules but the reasons. People who understand why AI governance matters make better decisions in ambiguous situations.

Create a feedback channel: Employees will encounter situations the policy doesn’t explicitly address. Give them a way to ask questions and flag issues. Use that feedback to improve the policy.

Review and adjust: AI capabilities change rapidly. A policy that made sense six months ago might need updates. Schedule regular reviews to ensure your guidance remains relevant.

Take Our 2-Minute Security Assessment

centrexIT has helped businesses develop and implement technology policies since 2002. If you’re looking to establish AI governance that works, let’s find out together where to start.

Take the 2-Minute Cybersecurity Assessment


Governance Without the Red Tape: AI Policies for SMBs

I talked to a business owner last month who told me they’d decided to “just ban AI” at their company. No ChatGPT. No Copilot. No AI tools of any kind.

I asked how that was going.

“Honestly? I have no idea if anyone’s following the policy.”

That’s the problem with prohibition: it doesn’t work. Employees who find tools useful will use them regardless of policy. The only difference is whether they do it openly or hide it.

Why Most AI Policies Fail

I’ve seen a lot of AI policies over the past year. The ones that fail share common characteristics:

They try to ban everything. This ignores the reality that AI tools provide genuine productivity benefits. When policies conflict with getting work done, work wins.

They’re written by legal teams in isolation. Policies filled with legalese that nobody reads don’t change behavior. They just provide CYA documentation.

They don’t distinguish between different types of AI use. Using AI to draft a marketing email is different from using AI to analyze customer health records. Policies that treat all AI use identically miss the nuance.

They require approval processes that create friction. If using an approved AI tool requires submitting a request and waiting three days for approval, employees will use unapproved tools with zero wait time.

The policies that work look completely different.

What Actually Works

The businesses I’ve seen successfully manage AI adoption share a different approach:

They start with data classification, not tool restrictions. Before deciding which AI tools are acceptable, they define what data is sensitive. Customer information. Financial records. Proprietary processes. Employee data. Once you know what needs protection, you can evaluate tools based on how they handle that specific information.

They approve specific tools rather than categories. Instead of saying “you can use AI for X but not Y,” they identify specific tools that meet their security requirements. “Use Microsoft Copilot for document work. Use Claude for analysis. Don’t paste customer data into any AI tool.”

They make compliance easier than non-compliance. If the approved AI tool is readily available and works well, employees have no reason to seek alternatives. If the approved tool requires jumping through hoops while ChatGPT is two clicks away, you’ve already lost.

They train on principles, not just rules. Employees who understand why AI governance matters make better decisions in situations the policy doesn’t explicitly cover. “This data is sensitive because…” is more effective than “Don’t do this because the policy says so.”

A Practical Framework

Here’s what I recommend for businesses that want AI governance without bureaucratic overhead:

Tier 1: Public information. Marketing content, general research, public-facing communications. These can use any reputable AI tool with minimal restrictions.

Tier 2: Internal information. Internal processes, non-sensitive business data, general productivity tasks. These should use approved enterprise tools with business accounts, not personal subscriptions.

Tier 3: Sensitive information. Customer data, financial records, employee information, proprietary processes. These either require specially configured AI tools with appropriate data handling agreements, or they don’t touch AI at all.

The framework is simple: know what category your data falls into, and you know what tools you can use with it.
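That lookup is simple enough to sketch in a few lines. This is a hypothetical illustration, not a prescribed implementation — the tier labels follow the article, but the tool names are placeholders for whatever your organization actually approves:

```python
# Hypothetical sketch of the three-tier framework described above.
# Tool names are placeholders for your organization's approved list.
APPROVED_TOOLS = {
    "public":    {"any reputable AI tool"},
    "internal":  {"Microsoft Copilot (enterprise)", "Claude (business tier)"},
    "sensitive": {"specially configured tool with data handling agreement"},
}

def tools_for(data_tier: str) -> set[str]:
    """Return the approved tools for a data tier; unknown tiers get nothing."""
    return APPROVED_TOOLS.get(data_tier.lower(), set())

# Know the tier, know the tools.
print(tools_for("Internal"))
```

The point of the sketch is the shape of the decision, not the code: one lookup from data category to permitted tools, with "deny by default" for anything unclassified.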

The Training Component

Policies without training are just documents. The businesses that make this work invest time in helping employees understand the why.

Why does it matter where data is processed? Because AI tools send information to external servers that you don’t control.

Why do enterprise tools cost more? Because they come with data handling agreements, admin controls, and compliance features that free tiers don’t include.

Why can’t we just use whatever’s convenient? Because convenience without security creates risk that compounds over time.

When employees understand the reasoning, they become partners in governance rather than obstacles to it.

Making It Real

I’ll be honest: creating an AI policy isn’t the hard part. Following through is. You need to actually provide the approved tools. You need to train people on using them. You need to check periodically whether the policy is being followed or ignored.

The businesses that succeed treat AI governance as an ongoing process, not a one-time project. They review their approved tools quarterly. They survey employees about what’s working and what’s not. They adjust based on what they learn.

The businesses that fail publish a policy, send an all-hands email, and assume the job is done.

Where to Start

If you don’t have AI governance yet, here’s the minimum viable approach:

This week: Find out what AI tools employees are actually using. Ask directly. You might be surprised.

This month: Classify your data. What’s public? What’s internal? What’s sensitive? Most organizations already have some sense of this from other compliance work.

This quarter: Identify approved tools for each data tier. Make them available. Train employees on using them.

That’s it. Not a 40-page policy document. Not a committee that meets for six months. Just clarity about what data matters and what tools are acceptable.

AI governance doesn’t have to mean red tape. It just means intentional decisions about how powerful tools interact with your information.

What’s working at your organization? What’s not? I’m always interested in hearing what other business leaders are learning.


Under Armour Ransomware Breach — 72 Million Customers Exposed

Another Retail Giant Falls to Ransomware

Under Armour has confirmed that a ransomware attack resulted in customer data appearing on dark web forums. The breach exposed approximately 72 million customer records—names, email addresses, purchase histories, and in some cases, partial payment information.

For context, 72 million records is more than the combined populations of California and Texas. This wasn’t a small data leak. This was a catastrophic exposure of customer trust.

Take Our 2-Minute Security Assessment

centrexIT has helped businesses protect customer data and respond to security incidents since 2002. If you’re not sure how your customer data is protected, let’s find out together.

Take the 2-Minute Cybersecurity Assessment

The Anatomy of a Retail Data Breach

Retail companies make attractive ransomware targets for several reasons. They collect massive amounts of customer data—names, addresses, payment information, purchase histories, loyalty program details. They often operate on thin margins that make security investments compete with other priorities. And they typically have complex technology environments spanning point-of-sale systems, e-commerce platforms, supply chain integrations, and corporate networks.

When ransomware operators breach a retail environment, they don’t just encrypt files anymore. Modern ransomware attacks follow a double-extortion model: attackers steal data first, then encrypt systems. If the victim refuses to pay for decryption, the attackers threaten to publish the stolen data. If the victim still refuses, the data hits the dark web—exactly what happened with Under Armour.

According to the Verizon 2025 Data Breach Investigations Report, ransomware was present in 44% of all breaches analyzed—a significant jump from 32% the previous year. The same report found that ransomware attacks rose 37% overall year-over-year, making it the dominant attack pattern across industries.

What Happens When Customer Data Hits the Dark Web

Once customer records appear on dark web forums, they become tools for additional attacks. Here’s what typically happens:

Credential stuffing: Attackers test the stolen email and password combinations against other services. Since many people reuse passwords, a breach at one company can compromise accounts elsewhere.

Phishing campaigns: With detailed purchase histories, attackers can craft convincing phishing emails. “We noticed a problem with your recent Under Armour order” becomes much more believable when they actually know what you ordered.

Identity theft: Names, addresses, and purchase patterns provide building blocks for identity fraud. Combined with information from other breaches, attackers can construct detailed profiles of potential victims.

Secondary sales: Initial buyers on dark web forums often repackage and resell data to other criminal groups, extending the exposure window indefinitely.

The SMB Connection

Under Armour is a multi-billion dollar company with dedicated security teams and significant technology budgets. If they can suffer a breach of this magnitude, what does that mean for smaller businesses?

The uncomfortable truth: small and midsize businesses face the same threats but often with fewer resources. According to the Verizon 2025 DBIR, 88% of breaches at small and midsize businesses involved ransomware. And the Sophos State of Ransomware 2025 report found that the average recovery cost from a ransomware attack—excluding any ransom payment—was $1.53 million.

But there’s a nuance here that matters. Large enterprises get breached because they’re big targets with complex environments and valuable data. SMBs get breached because they’re easier targets with fewer defenses. Different reasons, but the same devastating outcomes.

Lessons for Every Business

The Under Armour breach reinforces several security fundamentals that apply regardless of company size:

Customer data requires special protection. Not all data is equal. Information that identifies individuals—names, addresses, purchase histories, payment details—deserves the highest level of protection because the consequences of exposure are severe and long-lasting.

Backup alone isn’t ransomware protection. Double-extortion attacks mean that even if you can restore from backups, attackers still have your data. Recovery planning must account for data theft, not just system encryption.

Detection speed matters enormously. The longer attackers remain in your environment before detection, the more data they can steal. Reducing dwell time from months to days can mean the difference between a contained incident and a catastrophic breach.

Incident response planning is essential. When a breach occurs, you don’t have time to figure out who does what. Response plans, communication templates, and decision trees need to exist before you need them.

Questions to Ask Your IT Team

Whether you manage IT internally or work with a partner, these questions can help assess your readiness:

Where does our customer data live? Can you map every system, database, and application that stores or processes customer information?

How would we know if data was being exfiltrated? Do you have monitoring that would detect unusual data transfers?

What’s our ransomware response plan? Beyond restoring from backup, how would you handle a situation where attackers have already stolen data?

When did we last test our backups? Having backups is different from having working, restorable, verified backups.

Take Our 2-Minute Security Assessment

centrexIT has helped businesses protect customer data and prepare for security incidents since 2002. If you’re not sure how your organization would handle a ransomware attack, let’s find out together.

Take the 2-Minute Cybersecurity Assessment



Nike Data Breach Exposes 1.4TB of Internal Data

Another Major Brand Falls to Sustained Network Intrusion

Nike has confirmed it’s investigating unauthorized access that resulted in approximately 1.4 terabytes of internal data being extracted from its systems. The scale of the extraction—1.4TB is roughly equivalent to 280 million pages of documents—signals this wasn’t a quick smash-and-grab. Someone had sustained access to Nike’s internal network long enough to locate, collect, and transfer a massive volume of files.

Take Our 2-Minute Security Assessment

centrexIT has helped businesses detect unauthorized access and protect internal systems since 2002. If you’re not sure who has access to your network right now, let’s find out together.

Take the 2-Minute Cybersecurity Assessment

What 1.4TB of Data Actually Means

To put that volume in perspective: the average business email is about 75 kilobytes. At that size, 1.4TB represents roughly 18 million emails’ worth of data. Even if Nike’s stolen files were larger documents, spreadsheets, and databases, we’re still talking about millions of files.
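The back-of-the-envelope math checks out in a couple of lines, using decimal units (1 TB = 10^12 bytes, as breach figures are usually reported):

```python
TB = 10**12  # decimal terabyte
KB = 10**3

stolen_bytes = 1.4 * TB
avg_email_bytes = 75 * KB  # rough size of an average business email

emails_equivalent = stolen_bytes / avg_email_bytes
print(f"~{emails_equivalent / 1e6:.1f} million emails")  # ~18.7 million
```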

That doesn’t happen in an afternoon. Extracting that volume of data requires time—time to identify valuable systems, time to navigate internal networks, time to compress and transfer files without triggering alerts. This pattern matches what security researchers call “sustained access,” where attackers establish persistent presence within a network before making their move.

The Uncomfortable Reality of Dwell Time

According to IBM’s 2025 Cost of a Data Breach Report, the average time to identify and contain a breach is 241 days—the lowest in nine years, but still nearly eight months of unauthorized access before anyone notices something is wrong.

Nike hasn’t disclosed when the unauthorized access began or how long attackers maintained presence. But the volume of data involved suggests this wasn’t a recent intrusion discovered quickly. More likely, someone was inside Nike’s network for weeks or months, quietly mapping systems and collecting files.

This is the pattern we see repeatedly: attackers gain initial access through compromised credentials, a vulnerable application, or a third-party vendor. Then they wait. They observe. They learn how the network operates, where valuable data lives, and how to move laterally without detection.

Third-Party Access: The Blind Spot

While Nike hasn’t confirmed the attack vector, security analysts note that many recent breaches share a common entry point: third-party access. Vendors, contractors, and partners often have legitimate credentials to access portions of an organization’s network. When those credentials are compromised—or when those third parties have weak security themselves—attackers get a free pass through the front door.

A January 2026 analysis from Strobes Security found that even when core platforms remain secure, connected systems like customer support tools, vendor portals, and legacy integrations often become the weakest entry points. Security programs focused only on primary applications miss the exposure created by everyone else who can log in.

What This Means for Your Business

Nike has resources most businesses can only imagine. Dedicated security teams. Enterprise detection tools. Compliance frameworks. And still, someone extracted 1.4TB of internal data.

The lesson isn’t that security is hopeless. The lesson is that traditional perimeter-focused security isn’t enough. You need visibility into who is accessing your systems right now. You need to know what “normal” looks like so you can spot “abnormal.” And you need to assume that someone, somewhere, is already testing your defenses.

Questions every business leader should be asking right now:

Who has access to your internal systems? Not just employees—vendors, contractors, former partners. Can you account for every credential that can log into your network?

How would you know if someone was inside your network today? Do you have monitoring that would detect unusual data movement? Would you notice someone downloading files at 2 AM?

When was your last access audit? Credentials accumulate over time. That contractor from three years ago might still have valid login information.

Take Our 2-Minute Security Assessment

centrexIT has helped businesses protect their networks and detect unauthorized access since 2002. If you’re not sure whether your current security would catch a sustained intrusion, let’s find out together.

Take the 2-Minute Cybersecurity Assessment


From the CEO Desk — Dylan Natter, CEO, centrexIT

The 88% Problem

I came across a number recently that I haven’t been able to shake.

88%.

That’s the percentage of ransomware breaches last year that hit small and midsize businesses. Not the big guys. Not government. Businesses like the ones we work with every day.

For larger enterprises? That number is 39%.

When I first saw that, I thought there had to be something off with the data. But it checks out. And honestly, when you think about it, the math makes sense.

Why the Shift Happened

I’ve been in this industry for over two decades now, and I’ve watched the target move. The big companies got serious. They built out security teams, layered their defenses, made themselves expensive to go after. So the attackers adjusted.

It’s kind of like that story I heard early in the pandemic about toilet paper. Remember that? Someone explained to me that we didn’t actually have a shortage. The problem was proportion. It used to be 50/50 between commercial and home use. Then overnight it went to 95/5, and the manufacturers just weren’t set up for that shift.

Same thing happened here. Small and midsize businesses adopted cloud platforms, remote work tools, all these connected systems. That’s great for productivity. But most of them moved faster on the technology than they did on the security around it.

Attackers aren’t dumb. They follow the path of least resistance. Right now, that path runs straight through the mid-market.

The Assumption That Gets People in Trouble

I still hear it from business owners all the time: “We’re not big enough to be worth going after.”

I get why people think that. But here’s what that misses.

These attacks aren’t hand-crafted for each victim anymore. They’re automated. Attackers are running broad campaigns that scan thousands of organizations at once. They don’t care about your revenue. They care about whether your door is open.

And here’s the part that really concerns me: the window between “someone got in” and “everything is locked” has shrunk to about five days. Most businesses don’t even know they’ve been compromised in that timeframe.

Then there’s this: 69% of companies that paid a ransom got hit again. Once you’re marked as someone who pays, you become a repeat target. That’s really a thing now.

What We’re Seeing Work

Look, I’m not going to pretend we have all the answers. We’re far from perfect. But after watching this play out across dozens of organizations, you start to see patterns.

The ones that stay protected tend to share a few things:

They actually know what they have. You can’t protect systems you don’t know exist. Shadow IT, old cloud accounts nobody remembers, contractor access that never got turned off. These gaps are where attackers get in.

They’ve shifted their mindset. Instead of trying to prevent everything, they’re focused on detecting and responding quickly. Perfect prevention isn’t realistic. Fast containment is.

They treat security as ongoing operations, not a one-time project. The businesses that get hit are often the ones who did an audit two years ago and figured they were covered.

They have people watching. The tools are important, but the tools alone aren’t enough. You need people who can see context, who can catch the things that don’t fit a pattern. That’s really what we mean when we talk about People-First, AI-Amplified. The technology makes the people more effective. It doesn’t replace the judgment.

The Question Worth Asking

Here’s what I’d challenge you to think about: if someone started probing your systems tonight, how long would it take you to know?

Not how long until they got in. How long until you noticed someone was trying.

If the honest answer is “I have no idea,” that’s the gap worth closing.

The 88% number isn’t going down anytime soon. But whether your organization ends up part of that statistic is still something you can control.

Let’s Talk About Where You Stand

If you’re not sure what your security gaps look like, I’m happy to spend 30 minutes walking through it with you. No pitch, no pressure. Just an honest conversation about what’s working and what’s not.

Schedule a free 30-minute session



Malicious Chrome Extensions Stole Credentials from 2,300 Businesses

Five browser extensions were just removed from the Chrome Web Store after security researchers discovered they were stealing login credentials from enterprise HR and business platforms.

The extensions posed as productivity tools for Workday, NetSuite, and SAP SuccessFactors. Instead, they harvested authentication cookies every 60 seconds, blocked security administration pages to prevent incident response, and enabled complete account takeover—all while bypassing multi-factor authentication.

Over 2,300 users installed them before removal. Some are still available on third-party download sites.

Take the 2-Minute Cybersecurity Assessment: https://centrexit.com/cyber-security-readiness-assessment/

What Happened

On January 15, 2026, security researchers at Socket discovered five malicious Chrome extensions operating as a coordinated attack campaign. The extensions shared identical code structures, API patterns, and infrastructure despite appearing under different publisher names.

The five malicious extensions were:

  • DataByCloud 1 (1,000 installs)
  • DataByCloud 2 (1,000 installs)
  • DataByCloud Access (251 installs)
  • Tool Access 11 (101 installs)
  • Software Access (unknown installs)

All marketed themselves as tools to streamline access to enterprise platforms, promising faster workflows and bulk account management.

None delivered on those promises. Instead, they executed three distinct attack types simultaneously.

How the Attack Worked

Cookie theft every 60 seconds. The extensions continuously extracted authentication cookies from targeted platforms including Workday, NetSuite, and SAP SuccessFactors. These cookies contain active login tokens that allow access without re-entering credentials. The stolen data was encrypted and sent to remote servers controlled by the attackers.

Security page blocking. Two of the extensions actively blocked access to security administration pages within Workday. When administrators tried to access authentication policies, IP range settings, password reset functions, or audit logs, the pages either displayed blank content or redirected elsewhere. This prevented IT teams from responding to the breach or even detecting unusual activity.

Session hijacking. Using the stolen cookies, attackers could take over authenticated sessions without needing usernames, passwords, or MFA codes. The session tokens were already validated—the attackers simply injected them into their own browsers and gained full access.

Why This Bypassed MFA

Multi-factor authentication protects the login process. It verifies identity when you enter credentials. But once you’re logged in, your session is maintained by cookies and tokens—not continuous MFA checks.

These extensions stole the session tokens after authentication was complete. The attackers didn’t need to bypass MFA because they were hijacking sessions that had already passed all security checks.

This is why session management and browser security matter as much as strong authentication.

What to Do Now

If your organization uses Chrome and accesses HR or business platforms through the browser, take these steps:

Audit installed extensions. Open Chrome, go to the extensions page (chrome://extensions), and review everything installed. Specifically check for and remove these five extensions if present: DataByCloud 1, DataByCloud 2, DataByCloud Access, Tool Access 11, and Software Access. Also remove anything else unfamiliar, especially tools claiming to provide access to HR or ERP platforms. Legitimate enterprise platforms don’t require third-party browser extensions.
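Part of that audit can be automated. The sketch below scans a Chrome profile’s Extensions directory for manifests whose names match the five reported extensions. Treat it as a starting point, not a guarantee: the directory path varies by OS and profile, and localized extensions store a `__MSG_…__` placeholder in the manifest rather than a display name.

```python
import json
from pathlib import Path

# Names of the five extensions reported in this campaign.
SUSPECT_NAMES = {
    "DataByCloud 1", "DataByCloud 2", "DataByCloud Access",
    "Tool Access 11", "Software Access",
}

def find_suspects(extensions_dir: Path) -> list[tuple[str, str]]:
    """Return (extension_id, name) pairs whose manifest name is on the list.

    Chrome lays extensions out as <extension-id>/<version>/manifest.json.
    """
    hits = []
    for manifest in extensions_dir.glob("*/*/manifest.json"):
        try:
            name = json.loads(manifest.read_text(encoding="utf-8")).get("name", "")
        except (OSError, json.JSONDecodeError):
            continue
        if name in SUSPECT_NAMES:
            hits.append((manifest.parts[-3], name))  # the <extension-id> directory
    return hits

# Example (Linux default profile; adjust the path for Windows or macOS):
# find_suspects(Path.home() / ".config/google-chrome/Default/Extensions")
```

A manual pass through chrome://extensions is still worthwhile afterward, since a scan like this only catches name matches.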

Check across all devices. If Chrome sync is enabled, malicious extensions may have spread to multiple devices. Audit each one separately.

Review authentication logs. Check Workday, NetSuite, or SuccessFactors admin panels for unexpected sessions, unfamiliar IP addresses, or access from unusual locations during the period any suspicious extensions were installed.

Reset passwords from a clean system. If you suspect exposure, change passwords—but do it from a device you’ve verified is clean. Resetting from an infected browser means the new credentials get stolen immediately.

Implement extension allowlists. Chrome Enterprise allows organizations to restrict which extensions can be installed. Consider implementing allowlists that only permit approved, vetted extensions.
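Chrome's enterprise policies support this directly: `ExtensionInstallBlocklist` set to `*` blocks everything, and `ExtensionInstallAllowlist` re-permits vetted IDs. A minimal sketch of the policy JSON (deployed via Group Policy on Windows or managed preferences on macOS/Linux; the 32-character extension ID below is a placeholder, not a real extension):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

With a default-deny posture like this, a malicious extension never gets the chance to install, regardless of how convincing its store listing looks.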

The Bigger Picture

Browser extensions are one of the most overlooked attack vectors in enterprise security. They run inside the browser with access to everything you access—passwords, session tokens, sensitive data, internal systems.

Traditional perimeter security doesn’t see them. Endpoint protection often ignores them. They bypass network monitoring entirely because the data theft happens within encrypted browser sessions.

Most organizations don’t have policies governing what extensions employees can install. Most don’t audit installed extensions regularly. Most wouldn’t know if an extension was exfiltrating data right now.

This attack worked because browser security is still treated as an afterthought in most enterprise environments. That needs to change.


Take Our 2-Minute Security Assessment

centrexIT has protected businesses since 2002. Browser security is just one piece of a comprehensive security posture. Find out where your organization stands.

Take the 2-Minute Cybersecurity Assessment: https://centrexit.com/cyber-security-readiness-assessment/


Sources

  • Socket Security Research (January 15, 2026): “5 Malicious Chrome Extensions Enable Session Hijacking in Enterprise HR Platforms”
  • The Hacker News (January 16, 2026): “Five Malicious Chrome Extensions Impersonate Workday and NetSuite to Hijack Accounts”
  • BleepingComputer (January 16, 2026): “Credential-stealing Chrome extensions target enterprise HR platforms”
  • Infosecurity Magazine (January 19, 2026): “Malicious Google Chrome Extensions Hijack Workday and Netsuite”
IT operations team reviewing AI readiness assessment on laptop screen with whiteboard showing artificial intelligence readiness framework and four pillars during planning meeting in office conference room

How to Evaluate If Your Organization Is Ready for Autonomous Systems

Businesses are asking “should we adopt autonomous systems?” when the better question is “are we ready to adopt autonomous systems?”

Those are very different questions. And the honest answer for many organizations is: not yet.

Here’s why that’s okay—and how to know where you stand.

The Readiness Gap Nobody Talks About

Accenture research shows that only 12% of companies have reached what they call “AI maturity”—the organizational readiness to deploy autonomous systems effectively.

That means 88% of businesses are somewhere on the journey, but not at the destination.

The problem isn’t the technology. AI systems that can monitor networks, respond to incidents, optimize performance, and manage routine operations already exist. They work.

The problem is organizational readiness. Autonomous systems require foundations that many businesses haven’t built yet. Trying to deploy AI before you’re ready doesn’t accelerate progress—it creates expensive failures.

Take the Assessment: https://centrexit.com/cyber-security-readiness-assessment/

The Four Readiness Pillars

After working with dozens of organizations at different stages of this journey, we’ve identified four pillars that determine whether autonomous systems will help you or hurt you.

1. Process Clarity

Autonomous systems execute processes. If your processes aren’t documented, standardized, or consistently followed, giving them to AI just automates chaos.

Ask yourself: If someone asked “how do we handle a security alert?” would five different team members give the same answer? If not, you’re not ready for automation.

You need: Documented workflows, clear escalation paths, and consistent execution. AI can’t create process discipline—it can only amplify what already exists.

2. Data Foundation

AI systems learn from data. If your data is incomplete, inconsistent, or trapped in disconnected systems, autonomous operations will make decisions based on incomplete information.

Ask yourself: Can you easily access accurate data about system performance, user activity, security events, and operational metrics? If gathering that information requires manual effort across multiple tools, your data foundation isn’t ready.

You need: Centralized logging, consistent monitoring, integrated systems, and clean data pipelines. Not perfect data—but reliable, accessible, consistent data.

3. Trust Infrastructure

Autonomous systems require trust—but that trust has to be earned through proven reliability in controlled scenarios.

Ask yourself: Do you have environments where AI can prove itself with limited risk? Can you test autonomous decision-making in non-critical systems before expanding to critical operations?

You need: Sandbox environments, pilot programs, and a phased implementation approach that views AI deployment as gradual adoption, not binary commitment. Trust builds gradually, not overnight.

4. Governance Framework

This is the pillar most organizations skip—and it’s the most critical.

Autonomous systems need clear boundaries. What decisions can AI make independently? What requires human review? What’s completely off-limits to automation?

Ask yourself: If an AI system made a decision that caused a problem, would your team know who’s accountable and what the review process looks like? If the answer is unclear, your governance isn’t ready.

You need: Defined decision authority, accountability structures, override mechanisms, and audit capabilities. AI doesn’t eliminate responsibility—it changes how responsibility is structured.

The Readiness Assessment Framework

Here’s a practical framework to evaluate where you stand:

Level 1: Foundation Building

  • Processes are inconsistent or undocumented
  • Data exists but isn’t centralized or reliable
  • No formal AI governance discussions
  • Team skeptical or uncertain about AI

What this means: You’re not ready for autonomous systems yet—but you can start building readiness. Focus on process documentation and data centralization first.

Level 2: Readiness Emerging

  • Core processes documented and followed
  • Monitoring and logging mostly centralized
  • Exploring AI capabilities in limited pilots
  • Some governance conversations happening

What this means: You’re building the foundation. Start small-scale pilots in non-critical areas. Use these to build trust and refine governance.

Level 3: Deployment Ready

  • Standardized processes across operations
  • Reliable, accessible data infrastructure
  • Successful pilots with measurable outcomes
  • Clear governance framework in place

What this means: You’re ready to expand autonomous systems into production operations. Start with bounded decision-making and expand based on proven results.

Level 4: Operational Maturity

  • Autonomous systems handling routine operations
  • Continuous learning from operational data
  • Mature governance with clear accountability
  • Team confidently directing AI systems

What this means: You’re in the 12%. Now the focus shifts to optimization, expanding capabilities, and continuous improvement.

Why Readiness Matters More Than Speed

Companies rush toward autonomous systems because competitors are doing it, or because they feel pressure to “innovate,” or because AI becomes a board-level discussion topic.

The ones who succeed aren’t the fastest to adopt. They’re the ones who build readiness first.

Here’s what happens when organizations skip readiness:

Autonomous systems make bad decisions based on incomplete data. Teams lose trust in AI capabilities. Governance failures create accountability gaps. The organization concludes “AI doesn’t work” when the real problem was “we weren’t ready.”

Contrast that with phased, readiness-based adoption:

Small pilots prove value in controlled environments. Teams build confidence through successful experiences. Governance frameworks prevent surprises. The organization expands AI capabilities because they’ve earned trust through demonstrated results.

Same technology. Completely different outcomes. The difference is readiness.

Where to Start Based on Your Level

If you’re at Level 1, you’re in good company: most organizations are still building foundations. Your next step isn’t AI deployment—it’s process documentation and data centralization.

If you’re at Level 2, you’re in the pilot phase. Run small experiments in non-critical systems. Build governance frameworks before expanding. Let trust develop through proven performance.

If you’re at Level 3 or 4, you’re ahead of most. Your focus shifts to optimization, risk management, and scaling what’s working.

The autonomous IT era is coming—but it’s not a race where speed wins. It’s a transformation where readiness determines success.

Assess honestly where you stand. Build the foundations that matter. Deploy when you’re ready, not when you feel pressured.

That’s how organizations get autonomous systems that actually work.

Take Our 2-Minute Security Assessment

Readiness starts with understanding where you currently stand. Our cybersecurity assessment helps identify gaps in your foundation—the same foundation autonomous systems require.

centrexIT has helped San Diego businesses operate with confidence since 2002. Now we’re helping them prepare for what comes next.

Take the Assessment: https://centrexit.com/cyber-security-readiness-assessment/

Hand holding smartphone displaying Uber logo with data breach warning alert, illustrating the September 2022 MFA fatigue attack that compromised Uber's internal systems through exhausted contractor approval.

How Uber Was Breached Through MFA Fatigue: A Security Wake-Up Call

September 2022. A ride-share giant’s entire network compromised. Not through sophisticated malware or zero-day exploits—but because someone got tired of saying no.


It started with a stolen password.

An Uber contractor’s credentials—likely purchased from the dark web after an earlier breach—gave an attacker the first piece they needed. But Uber had multi-factor authentication protecting those credentials. Every login attempt triggered a push notification to the contractor’s phone: “Approve this sign-in?”

The contractor kept denying it.

So the attacker kept trying.

Again. And again. And again.

Push notification after push notification flooded the contractor’s phone. 10 notifications. 20 notifications. 40 notifications in 30 minutes. During work hours. During dinner. Late at night.

Eventually, exhausted and assuming it was some kind of system glitch, the contractor approved one.

The attacker was in.

What Happened Next

Once inside the network, the attacker moved laterally through Uber’s systems with alarming speed. They gained access to:

  • Internal Slack channels where employees had shared credentials
  • Cloud storage containing code repositories
  • Administrative tools for Uber’s production environment
  • Third-party systems including security platforms

The attacker even accessed Uber’s HackerOne account—the platform Uber uses to run its bug bounty program—and sent messages to security researchers announcing the breach.

“I announce I am a hacker and Uber has suffered a data breach,” one message read, accompanied by screenshots of internal systems.

The entire Uber engineering team had to be locked out of their own code repositories while the company investigated. Services went dark. The incident response scramble consumed hundreds of employee hours.

All because someone got tired of dismissing authentication requests.

How MFA Fatigue Attacks Work

Multi-factor authentication is supposed to stop exactly this scenario. You know the password? Great—but you still need to approve the login from your registered device.

Except attackers discovered the weakness: human psychology.

The attack pattern is simple:

Step 1: Obtain valid credentials (purchased, phished, or breached)
Step 2: Automate login attempts with those credentials
Step 3: Generate dozens or hundreds of MFA push notifications
Step 4: Wait for the victim to approve one out of frustration or confusion

The victim isn’t being tricked into revealing their password. They’re being exhausted into approving access they know is unauthorized.

Security professionals call it “MFA fatigue” or “push notification bombing.” Attackers call it effective.

The Social Engineering Layer

In Uber’s case, the attacker added one more element: they contacted the contractor directly.

Posing as Uber IT support, the attacker messaged the contractor claiming the notifications were part of a system fix. “Just approve the next one and they’ll stop,” the message suggested.

Tired of the constant alerts and believing they were talking to legitimate IT staff, the contractor approved the authentication request.

That combination—technical persistence plus social manipulation—is what makes MFA fatigue attacks so dangerous. The victim often knows something is wrong but approves anyway.

Why Traditional MFA Failed

Uber had implemented what many organizations consider best practice: multi-factor authentication requiring device approval.

But the implementation had a critical flaw—it allowed unlimited authentication attempts without throttling or alerts. An attacker could send 100 push notifications with no consequences beyond annoying the user.

The system treated each authentication request independently. No escalation after 5 denials. No automatic lockout after 10 attempts. No alert to security teams that something abnormal was happening.

The MFA provided security theater—visible protection that gave a false sense of safety—without addressing the human factor.

What Makes You Vulnerable

Your organization is vulnerable to MFA fatigue attacks if:

  • MFA push notifications have no attempt limits
  • Users can approve requests without context (what device, what location, what service)
  • No alerts trigger after multiple denied attempts
  • Contractors and vendors use the same MFA as internal employees
  • Users receive MFA requests during off-hours with no additional verification

The Uber breach demonstrated that MFA alone—without proper implementation and monitoring—creates false confidence. You believe you’re protected because you have two-factor authentication. You’re not wrong, but you’re not as safe as you think.

The Aftermath

Uber disclosed the breach publicly after the attacker announced it themselves. The company’s new chief security officer, hired just months earlier specifically to improve security culture, faced their first major crisis.

The Federal Trade Commission later included the Uber breach in enforcement actions, highlighting the company’s pattern of security failures. The incident cost Uber millions in investigation, remediation, and regulatory response.

But the broader damage was to trust. Uber riders, drivers, and employees all had to question whether their personal information was secure. The breach exposed sensitive data including:

  • Driver license information
  • Social Security numbers
  • Background check details
  • Internal communications
  • Security vulnerability reports

For the contractor whose account was compromised? They likely faced internal review despite being the victim of a sophisticated social engineering attack.

What Actually Prevents This

Preventing MFA fatigue attacks requires layering defenses:

Number matching: Instead of “Approve yes/no,” the system displays a number that must be entered on the authenticating device. This eliminates mindless approval.

Attempt throttling: After three denied MFA requests, lock the account and alert security. Require in-person verification before the lock is lifted.

Context in requests: Show users what they’re approving—device type, location, IP address, time. “Someone in Russia is trying to log into your account at 3 AM” gets different treatment than generic requests.

Time-based restrictions: If your employees never log in from Asia at 2 AM, automatically deny those requests regardless of MFA approval.

Separate authentication for high-value access: Administrative accounts shouldn’t use the same MFA as standard users. Add hardware tokens or biometric requirements.
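The first two defenses are straightforward to reason about in code. Here is a hedged sketch (illustrative names only, not any vendor's API) combining number matching with a lock-after-three-denials threshold: instead of a yes/no push, the user must type a displayed number, and repeated failures lock the account and raise an alert.

```python
# Sketch of two defenses: number matching plus attempt throttling.
# Names and thresholds are illustrative, not a real vendor API.
import random

MAX_DENIALS = 3
denial_counts = {}  # user -> consecutive denied/failed challenges
locked = set()      # users locked pending security review

def start_mfa_challenge(user):
    """Issue a number-matching challenge instead of a yes/no push."""
    if user in locked:
        return None  # no more push spam possible; security already alerted
    return random.randint(10, 99)  # number the user must type on their device

def record_response(user, challenge, typed):
    """A wrong or ignored challenge counts as a denial; three denials lock the account."""
    if typed == challenge:
        denial_counts[user] = 0
        return True
    denial_counts[user] = denial_counts.get(user, 0) + 1
    if denial_counts[user] >= MAX_DENIALS:
        locked.add(user)
        print(f"ALERT: {user} locked after {MAX_DENIALS} denied MFA attempts")
    return False

# Push-bombing simulation: repeated ignored challenges lock the account fast.
for _ in range(5):
    challenge = start_mfa_challenge("contractor")
    if challenge is None:
        break  # account locked; the bombing campaign is over
    record_response("contractor", challenge, typed=None)
print(start_mfa_challenge("contractor"))  # None: no further notifications
```

Number matching defeats mindless approval because the victim cannot complete a challenge they did not initiate, and throttling caps the attacker at three attempts instead of a hundred.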

The goal isn’t making authentication harder—it’s making social engineering attacks impossible to execute at scale.

The Real Lesson

The Uber breach proves something security teams already knew: people will eventually do what’s convenient, even when they know it’s wrong.

The contractor who approved that authentication request wasn’t careless. They were human. Exhausted. Overwhelmed. Trusting that someone claiming to be IT support was telling them the truth.

Your security architecture has to account for that. Not with blame or punishment—with design that makes the secure choice the easy choice.

MFA fatigue attacks work because the secure action (denying every single request) creates friction. The attack exploits that friction.

The question isn’t whether your employees will eventually approve something they shouldn’t. The question is: what happens when they do?


Take Our 2-Minute Security Assessment

centrexIT has protected San Diego businesses since 2002. If MFA fatigue attacks concern you—or if you’re not sure whether your authentication system can withstand this kind of social engineering—let’s find out together.

Take the 2-Minute Cybersecurity Assessment:
https://centrexit.com/cyber-security-readiness-assessment/


Sources

  • TechCrunch: “Uber Investigating Breach of Internal Computer Systems” (September 2022)
  • BleepingComputer: “Uber Hacked, Internal Systems Breached” (September 2022)
  • The New York Times: “Uber Investigating Breach of Its Computer Systems” (September 2022)
  • KrebsOnSecurity: “MFA Fatigue Attacks” (2022)
  • CISA: “Multi-Factor Authentication (MFA)” – Cybersecurity Best Practices
  • Microsoft Security: “Number Matching in Microsoft Authenticator” (2022)
centrexIT team members accepting the San Diego Business Journal Best Places to Work 2025 award at the ceremony

centrexIT recognized as one of San Diego’s Best Places to Work 2025

We won the San Diego Business Journal’s Best Places to Work award again. I’m proud of that. But I want to talk about what it actually represents—because it’s not about a plaque on the wall or a logo on our website.

It’s about the people who show up every day and choose to build something together.


Hospital IT security team monitoring healthcare systems for potential breach indicators

2025’s Biggest Healthcare Data Breaches: Lessons for 2026

Another Brutal Year for Patient Data

2025 did not break the record set by the Change Healthcare attack—that catastrophic breach affected 193 million people and remains the worst in healthcare history. But 605 healthcare breaches were still reported to HHS, affecting 44.3 million Americans.

The numbers tell a familiar story: healthcare remains one of the most targeted sectors, and the patterns of failure repeat year after year. Understanding what happened in 2025 is essential for organizations determined to avoid becoming 2026 statistics.

Take the 2-Minute Cybersecurity Assessment

The Biggest Breaches of 2025

Yale New Haven Health System: 5.56 Million Affected

Connecticut’s largest health system detected unusual activity on March 8, 2025. Hackers had breached the network and obtained sensitive data including names, contact information, demographic data, medical record numbers, and Social Security numbers. The electronic medical records system was not accessed, but the breach affected 5.56 million patients.

Episource: 5.42 Million Affected

This IT vendor providing risk adjustment and medical coding services to health plans suffered a ransomware attack in February 2025. When a vendor with access to multiple health systems gets breached, the impact cascades across their entire client base.

Blue Shield of California: 4.7 Million Affected

This breach was different—it was not a hack but a configuration error. Google Analytics had been improperly configured in a way that could have allowed Google Ads to deliver ad campaigns back to impacted members. Blue Shield severed the connection in January 2024 but notified members throughout 2025.

McLaren Health Care: 743,131 Affected

Michigan’s McLaren Health Care suffered its second ransomware attack in two years. The Inc Ransom group claimed responsibility. Attackers had access between July 17 and August 3, 2024, but the breach was not fully understood until May 2025. Being hit twice in two years illustrates that recovery without fundamental security improvements just sets up the next attack.

Covenant Health: 478,188 Affected

The Qilin ransomware group struck this Catholic healthcare organization in May 2025, claiming to have stolen 850 GB of data. Hospitals in Maine, New Hampshire, and Massachusetts experienced system shutdowns. Wait times increased and some services were only available with paper orders.

The Patterns That Keep Repeating

Third-Party Vendor Risk

The Episource and Conduent breaches demonstrate that healthcare security extends far beyond hospital walls. When billing companies, IT vendors, and business associates get breached, patient data goes with them. Many healthcare organizations still lack visibility into their vendor ecosystem’s security practices.

Delayed Detection

McLaren’s attackers had access for over two weeks before detection. Many breaches take months to fully investigate. The time between intrusion and detection—dwell time—remains dangerously long in healthcare.

Repeat Targets

McLaren was hit twice in two years. Organizations that recover from ransomware without addressing fundamental security gaps become known as easy targets that will pay or suffer again.

What Experts Predict for 2026

Dave Bailey, vice president of security services at Clearwater, notes a clear shift from opportunistic attacks to highly coordinated, multi-stage operations. He predicts more disruptive attacks masquerading as traditional ransomware events, with attackers corrupting backups and damaging infrastructure to maximize pressure.

AI-enabled attacks that dramatically compress the time from initial access to impact are becoming more common. Healthcare organizations relying on manual processes will struggle to keep pace.

Take Our 2-Minute Security Assessment

centrexIT has protected San Diego healthcare organizations since 2002. If you’re not sure how your organization would fare against the attacks targeting healthcare, let’s find out together.

Take the 2-Minute Cybersecurity Assessment

Sources

  • HHS Office for Civil Rights Breach Portal—605 breaches, 44.3 million affected (December 2025)
  • HIPAA Journal: “Largest Healthcare Data Breaches of 2025” (January 2, 2026)
  • Chief Healthcare Executive: “These are the biggest health data breaches in the first half of 2025” (December 2025)
  • Bank Info Security: “2025 in Health Data Breaches and Predictions for 2026” (December 2025)
  • The Record: “Nearly 480,000 impacted by Covenant Health data breach” (January 2, 2026)