How Are Calgary Companies Handling Shadow AI Risk in 2026?

Shaheer Tariq
Mar 12, 2026

28% of employees use AI without approval. Calgary companies in energy, medical research, and finance face real exposure. Here's how to get ahead of it.
Last updated: March 2026
Shadow AI — employees using AI tools without organizational awareness or approval — is the most underestimated risk facing Calgary's mid-market in 2026. A 2025 Salesforce survey found that 28% of workers use generative AI at work without their employer's knowledge. For Calgary companies in energy, medical research, financial services, and professional services, that means sensitive data is flowing into uncontrolled AI tools every day.
This guide examines how Calgary companies are identifying, measuring, and managing shadow AI risk — with practical steps any mid-size organization can implement immediately.
What Shadow AI Looks Like in Calgary
Shadow AI isn't malicious. It's well-intentioned employees using the best tools available to them without understanding the risks. Here's what it looks like in practice across Calgary's key sectors:
Energy and Oil & Gas: An engineer pastes proprietary geological data into free ChatGPT to help format a report. A procurement analyst feeds contract terms into an AI tool to compare vendor proposals. Neither realizes the free tier may use their inputs as training data.
Medical Research: A researcher at a Calgary pharmaceutical services company uploads clinical trial data into a consumer AI tool to help with analysis. The data includes patient identifiers. This isn't hypothetical — in a recent conversation with a Calgary medical research company, their operations lead flagged exactly this scenario as their primary concern. Employees were using AI tools on sensitive medical data without any governance framework.
Financial Services: An analyst at a Calgary investment firm copies client portfolio details into ChatGPT to generate a market summary. Client-confidential financial data now sits in an AI system with no data retention guarantees.
Professional Services: A lawyer drafts a client memo using AI, inadvertently feeding privileged information into a tool that doesn't offer enterprise data protection. An accountant uses AI to prepare tax analysis, entering client financial records into an unvetted system.
In every case, the employee is trying to be more productive. The problem isn't intent — it's infrastructure. Without approved tools, clear policies, and training, shadow AI is the inevitable default.
Why Free AI Tools Are the Problem
The core risk comes down to data handling. Here's how the major AI tools differ:
Free/Consumer tiers (HIGH RISK):
Free ChatGPT: OpenAI may use your inputs to train future models unless you specifically opt out in settings. Most employees don't know this setting exists.
Free Claude: Similar data usage policies on consumer tiers.
Free Copilot (Bing Chat): Microsoft's consumer AI has different data handling than enterprise Copilot.
Enterprise/Business tiers (MANAGED RISK):
Microsoft Copilot for Microsoft 365: Data stays within your Microsoft 365 tenant. Microsoft contractually commits to not training on your organizational data. Inherits your existing security and compliance settings.
ChatGPT Enterprise/Team: OpenAI contractually commits to not training on your business data. SOC 2 compliant.
Claude for Work: Anthropic offers zero-data-retention agreements on business plans.
The difference is contractual data protection — enterprise tiers give you legal guarantees that consumer tiers don't. But employees don't naturally know which tier they're on, and most default to whatever's free and easy.
How to Measure Shadow AI in Your Organization
Before you can manage shadow AI, you need to know its scope. Here are three approaches Calgary companies are using:
1. Anonymous employee survey (fastest): Ask employees directly: What AI tools are you using? What types of tasks? What data are you putting into them? Make it anonymous to get honest answers. Solway includes this as part of the Discovery and Baseline Scan in the AI Clarity Sprint.
2. IT audit of tool usage (most comprehensive): Your IT team or MSP can audit browser activity, installed applications, and API calls to identify AI tool usage. This is more thorough but requires IT resources and raises privacy considerations that should be handled carefully.
3. Network traffic analysis (technical): Monitor outbound traffic to known AI tool domains (api.openai.com, claude.ai, copilot.microsoft.com, etc.). This identifies volume and frequency without revealing content.
Most mid-size Calgary companies start with option 1 — it takes a week, costs nothing, and reveals enough to prioritize your response.
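For teams with IT resources, the network traffic approach can be sketched in a few lines. This is a minimal illustration, assuming your firewall or proxy can export outbound requests as a CSV with timestamp, user, and domain columns — the column names and the domain list are illustrative assumptions, not a complete inventory of AI endpoints.

```python
# Sketch: tally outbound requests to known AI tool domains from a proxy/DNS log.
# Assumes a CSV export with columns "timestamp,user,domain" -- adjust to match
# your firewall or proxy's actual export format.
import csv
from collections import Counter

# Illustrative list only; real deployments should maintain a curated inventory.
AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "chatgpt.com",
    "claude.ai", "api.anthropic.com",
    "copilot.microsoft.com", "gemini.google.com",
}

def shadow_ai_summary(log_path):
    """Count hits per AI domain. Measures volume and frequency only --
    no request content is inspected, consistent with the privacy note above."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS:
                hits[domain] += 1
    return hits
```

A weekly run of something like this gives you trend lines (is shadow AI growing or shrinking after your rollout?) without the surveillance concerns of content-level monitoring.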
The Five-Step Shadow AI Response Plan
Here's the framework Solway recommends for mid-size Calgary companies:
Step 1 — Don't panic, don't ban. Banning AI pushes usage further underground and creates adversarial dynamics. The goal is to channel existing AI enthusiasm into managed, secure tools.
Step 2 — Audit current state. Use one of the methods above to understand what tools are being used, by whom, and for what. You'll likely be surprised by both the volume and the creativity of current AI usage.
Step 3 — Deploy enterprise tools. Get your team on enterprise-tier AI tools with contractual data protection. For most Calgary companies on Microsoft 365, this means rolling out Copilot. Budget $30-40 CAD per user per month. This is the single most effective step — it gives employees a sanctioned, secure alternative to consumer tools.
Step 4 — Develop your AI policy. Formalize what tools are approved, what data can enter AI systems, what review is required for AI-assisted outputs, and who oversees compliance. The Solway System — a 14-component AI Policy Framework — provides sliding scales from Caution-Oriented to Innovation-Oriented so you can calibrate to your risk profile. Solway's AI Clarity Sprint delivers this along with a Staff Decision Guide and Opportunity Matrix over 6 weeks.
Step 5 — Train your team. Policy without training is shelf-ware. Employees need hands-on experience with approved tools, understanding of why the policy exists, and practical skills to be productive within the guardrails. CAPG funding can reimburse 50% of eligible training costs with no minimum hours required.
What Calgary Companies Are Getting Right
The companies managing shadow AI effectively share three characteristics:
They lead with enablement, not restriction. The message isn't "stop using AI" — it's "here are better, safer tools and here's how to use them." This framing consistently produces higher adoption of sanctioned tools and lower shadow AI rates.
They treat policy as a living document. AI capabilities change quarterly. Companies that review and update their AI policy every 6 months stay current. Companies that write a policy once and file it away find their employees working around outdated rules.
They invest in training. A half-day workshop on approved AI tools — covering prompt engineering, safe use, and role-specific applications — typically reduces shadow AI by making the sanctioned tools more useful than the unsanctioned ones. When the approved tool is also the best tool, shadow AI solves itself.
As Shaheer Tariq, Solway's Co-Founder, puts it: "Shadow AI is a symptom, not a disease. The disease is a gap between what employees need and what organizations provide. Close that gap with good tools, clear policy, and practical training, and shadow AI disappears."
The Cost of Inaction
Calculating the exact cost of a shadow AI incident is difficult before it happens. But the potential exposures are real:
Regulatory penalties: Alberta's PIPA (Personal Information Protection Act) imposes obligations on private organizations to protect personal information. Uncontrolled AI use that exposes personal data could trigger regulatory investigation.
Client trust and contracts: Many Calgary professional services firms, energy companies, and financial firms have client confidentiality clauses that shadow AI can violate. A single incident could damage a client relationship worth hundreds of thousands of dollars.
Competitive intelligence leakage: Proprietary pricing, geological data, product specifications, and strategic plans entered into consumer AI tools may not be protected from future use or disclosure.
Quality risks: AI-assisted work product that hasn't been reviewed for accuracy can introduce errors into client deliverables, proposals, and reports. Without a review framework, these errors compound.
The cost of prevention — enterprise AI licensing, policy development, and training — typically runs $15,000-$40,000 for a 50-person company in the first year, with CAPG covering up to half the training component. The cost of a single data breach, regulatory action, or lost client relationship dwarfs that investment.
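To make the budget math above concrete, here is a sketch of a first-year estimate. The per-user licensing rate, policy cost, and training cost are illustrative assumptions in CAD chosen to fall within the ranges quoted in this article — substitute your own vendor quotes.

```python
# Sketch: first-year shadow-AI prevention budget for a mid-size company.
# All default figures are illustrative assumptions (CAD), not quotes:
# ~$35/user/month enterprise licensing, one-time policy and training work,
# and CAPG reimbursing 50% of eligible training costs.
def first_year_cost(users, license_per_user_month=35.0,
                    policy_cost=8000.0, training_cost=10000.0,
                    capg_training_rebate=0.5):
    licensing = users * license_per_user_month * 12   # annual licensing
    rebate = training_cost * capg_training_rebate     # CAPG covers half of training
    return licensing + policy_cost + training_cost - rebate

# For a 50-person company: 50 * 35 * 12 + 8,000 + 10,000 - 5,000 = $34,000
```

With these assumptions, a 50-person rollout lands at $34,000 — inside the $15,000-$40,000 range above, and dominated by licensing rather than training once the CAPG rebate is applied.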
Frequently Asked Questions
How common is shadow AI in Calgary companies?
Industry data suggests 28% of workers use AI without employer approval. Based on Solway's discovery work across Calgary organizations, the actual rate in mid-size companies may be higher — particularly in professional services and knowledge-work-heavy sectors where employees have strong incentives to be more productive.
Is shadow AI illegal in Alberta?
Shadow AI itself isn't illegal, but the consequences can violate existing law. If an employee enters personal information into an unvetted AI tool, the organization may be in breach of Alberta's PIPA. If client-confidential data is exposed, contractual obligations may be violated.
What's the fastest way to reduce shadow AI?
Deploy enterprise-tier AI tools (Copilot, ChatGPT Enterprise, or Claude for Work) and run a focused training session. When employees have a sanctioned tool that's better than the free alternative, most shadow AI resolves itself. A half-day workshop plus enterprise licensing can be implemented within 2-3 weeks.
Does CAPG cover shadow AI risk mitigation?
CAPG covers AI training costs — and training is the most effective shadow AI mitigation. A workshop covering AI policy, safe use, and approved tools qualifies under CAPG's Digital and Technological skills category. The program reimburses 50% with no minimum hours required.
Should we monitor employee AI usage?
This is a risk-tolerance decision. Some companies audit AI tool usage through IT monitoring; others rely on policy compliance and training. The most effective approach combines clear policy, approved tools, and trust — with periodic audits to verify compliance rather than continuous surveillance.
How often should we update our shadow AI response?
Review quarterly, update the policy at least every 6 months. New AI tools launch constantly, and employee usage patterns evolve. The companies that stay ahead of shadow AI treat their AI policy as a living document with a regular review cadence.
Can Solway help us assess our shadow AI risk?
Yes. The AI Clarity Sprint begins with a Discovery and Baseline Scan that includes an audit of current AI usage across your organization. The sprint delivers an AI Policy Framework, Staff Decision Guide, and Opportunity Matrix over 6 weeks. CAPG reimburses 50% of eligible costs.