Does Your Company in Alberta Need an AI Policy?

Shaheer Tariq
Mar 12, 2026

28% of employees are already using AI without approval. An AI policy isn't optional anymore — here's what yours should include and how the Solway System structures it.
Last updated: March 2026
If your employees have access to the internet, they're already using AI — whether you know about it or not. A 2025 Salesforce survey found that 28% of workers use generative AI at work without their employer's knowledge or approval. In regulated industries like medical research, energy, and financial services — sectors that dominate Alberta's mid-market — that shadow AI usage creates real risk: sensitive data entering uncontrolled AI tools, inconsistent quality in AI-assisted work product, and zero governance over how these tools interact with proprietary information.
An AI policy isn't a theoretical exercise. It's a practical document that answers one question every employee has: "Can I use AI for this?" This guide covers what an Alberta AI policy should include, how to build one, and why the companies doing this well are treating it as a competitive advantage rather than a compliance burden.
The Shadow AI Problem in Alberta Companies
Shadow AI is the use of AI tools by employees without organizational awareness, approval, or governance. It's the AI equivalent of shadow IT — and it's happening in virtually every company with knowledge workers.
In a recent conversation with a Calgary medical research company, the operations lead flagged this as a top concern. The company works with pharmaceutical giants like AstraZeneca on sensitive medical data. Employees were using AI tools, but without any framework governing what data could enter those tools, which tools were approved, or what review process AI-assisted work should go through.
The risk isn't theoretical. Free versions of ChatGPT and other consumer AI tools come with no contractual data protection, and information entered into the free tier may be used to train future models. For an Alberta company handling medical data, proprietary business information, or client-confidential material, this represents a genuine exposure.
Enterprise versions of these tools — Copilot for Microsoft 365, ChatGPT Enterprise, Claude for Work — offer contractual data protection guarantees. But employees don't naturally know the difference between the free version and the enterprise version. That's what a policy is for.
What Should an AI Policy Include?
After developing AI policies for organizations ranging from Global Affairs Canada to mid-size Calgary businesses, Solway has identified the essential components every AI policy needs. We've formalized this into the Solway System — a 14-component AI Policy Framework with sliding scales that let organizations calibrate their approach from Caution-Oriented to Innovation-Oriented based on their risk profile.
Here are the core components:
1. Approved Tools and Platforms
Which AI tools are sanctioned for use? Which are prohibited? This should specify exact products and plan levels — for example, "Microsoft Copilot for Microsoft 365 (enterprise license) is approved. Free ChatGPT is prohibited for any work-related use."
2. Data Classification and Handling
What types of data can be entered into AI tools? A practical framework uses three tiers: Unrestricted (publicly available information), Internal (non-sensitive business information), and Restricted (client data, financial records, personal information, trade secrets). Restricted data should never enter external AI tools without explicit approval and verified data protection agreements.
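If you want to enforce this tiering in software rather than rely on the honour system, the check is small enough to sit in front of any internal AI integration. The sketch below is illustrative only: the tier names mirror the three-tier framework above, while the tool identifiers, the allowlist, and the may_send_to_ai helper are hypothetical placeholders, not part of any vendor product or the Solway System.

    from enum import Enum

    class DataTier(Enum):
        UNRESTRICTED = 1  # publicly available information
        INTERNAL = 2      # non-sensitive business information
        RESTRICTED = 3    # client data, financial records, personal information, trade secrets

    # Hypothetical allowlist of tools with verified data protection agreements.
    APPROVED_ENTERPRISE_TOOLS = {"copilot_m365_enterprise", "chatgpt_enterprise"}

    def may_send_to_ai(tier: DataTier, tool: str, explicit_approval: bool = False) -> bool:
        """Gate a prompt before it leaves the organization.

        Mirrors the rule above: Restricted data never enters an external AI
        tool without explicit approval and a verified data protection
        agreement (approximated here by the enterprise allowlist).
        """
        if tier is DataTier.RESTRICTED:
            return explicit_approval and tool in APPROVED_ENTERPRISE_TOOLS
        if tier is DataTier.INTERNAL:
            return tool in APPROVED_ENTERPRISE_TOOLS
        return True  # unrestricted data can go to any sanctioned tool

    # Example: internal sales notes pasted into free ChatGPT -> blocked
    print(may_send_to_ai(DataTier.INTERNAL, "free_chatgpt"))  # False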
3. Human Review Requirements
What level of human review is required before AI-assisted work product is used, shared, or published? This should vary by use case — a draft internal email might need minimal review, while a client-facing proposal or regulatory filing requires thorough verification.
4. Quality Standards and Accuracy Verification
AI hallucination is a structural characteristic of current language models, not a bug that will be fixed next quarter. Frontier models score roughly 44% on Humanity's Last Exam, a benchmark of expert-written questions; impressive, but far from infallible. Your policy should specify how employees verify AI-generated claims, statistics, and recommendations.
5. Privacy and Compliance
For Alberta companies, this means alignment with provincial privacy legislation (PIPA for private sector, FOIP for public sector) and any sector-specific regulations. The policy should address data residency requirements — where AI processing occurs geographically — and whether Canadian data residency options are required for your use case.
6. Intellectual Property
Who owns AI-assisted work product? How should AI contributions be disclosed? This is evolving legally, but your policy should take a clear position that employees can follow today.
7. Acceptable Use Cases
Rather than trying to enumerate every possible use, the most effective policies provide a decision framework. Solway's Staff Decision Guide — one of the three deliverables from the AI Clarity Sprint — gives employees a flowchart: Is the data restricted? Does the output need to be factually verified? Is the task creative or analytical? Each path leads to a clear recommendation.
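The same flowchart logic is simple enough to embed in an intranet form or internal chatbot so employees get an answer in seconds. The snippet below is a simplified sketch of the three questions listed above, not the actual Staff Decision Guide; the question order and the recommendation wording are illustrative assumptions.

    def staff_decision_guide(data_is_restricted: bool,
                             output_needs_verification: bool,
                             task_is_creative: bool) -> str:
        """Walk the three questions above; every path ends in a clear recommendation."""
        if data_is_restricted:
            return "Stop: do not use an external AI tool. Escalate to the policy owner."
        if output_needs_verification:
            return "Use an approved enterprise tool and verify every claim before sharing."
        if task_is_creative:
            return "Use an approved tool freely, with normal editorial review."
        return "Use an approved tool; apply standard human review before the output leaves your team."

    # Example: drafting a client-facing proposal from non-restricted internal data
    print(staff_decision_guide(False, True, False))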
The Solway System: A 14-Component Framework
The Solway System is an AI Policy Framework built from Solway's experience across 30+ organizations. It contains 14 components organized into three sections, each with sliding scales that let you calibrate between Caution-Oriented and Innovation-Oriented approaches:
Section 1 — Governance and Oversight: Covers who owns the AI policy, how decisions are escalated, and what the review cadence looks like. Includes a 7-category Capabilities Matrix that maps AI capabilities to organizational functions.
Section 2 — Use and Access: Covers approved tools, data classification, acceptable use cases, and the Staff Decision Guide.
Section 3 — Quality, Compliance, and Evolution: Covers accuracy standards, privacy requirements, IP considerations, and how the policy adapts as AI capabilities change.
The sliding scales are what make this framework practical. A fintech company handling regulated financial data will calibrate toward Caution-Oriented on data handling but might be Innovation-Oriented on internal productivity tools. A creative agency might be Innovation-Oriented across the board. The framework accommodates both without starting from scratch.
How to Build Your AI Policy: A Practical Roadmap
Weeks 1-2: Discovery and Baseline
Audit current AI usage across your organization. You'll be surprised — shadow AI is everywhere. Survey employees on what tools they're using, what they're using them for, and what concerns they have. Map your data types and sensitivity levels.
Week 3: Stakeholder Alignment
Bring together leadership, IT, legal (if applicable), and department leads. Align on risk tolerance — are you Caution-Oriented or Innovation-Oriented? Identify your non-negotiables (e.g., client data never enters external AI tools) and your enablement opportunities (e.g., all employees should have access to enterprise AI for internal communication).
Week 4: Draft Policy
Write the policy document covering all components above. Keep it practical — if employees need a law degree to understand it, adoption will fail. The best policies are under 10 pages with clear decision trees.
Week 5: Review and Training
Circulate the draft, gather feedback, revise. Then train your team — not just on the policy itself, but on the AI tools the policy covers. Policy without training is shelf-ware.
Week 6: Launch and Iterate
Publish the policy, make it accessible, and establish a review cadence. AI capabilities change quarterly — your policy should be reviewed at least every 6 months.
This is essentially the structure of Solway's AI Clarity Sprint, which delivers all three artifacts — AI Policy Framework, Staff Decision Guide, and Opportunity Matrix — over 6 weeks.
Why Alberta Companies Specifically Need This Now
Alberta's business landscape has characteristics that make AI policy particularly urgent:
Energy sector data sensitivity: Oil and gas companies handle proprietary geological data, operational metrics, and competitive intelligence. AI tools processing this data need enterprise-grade protection.
Medical and research data: Companies like those in Calgary's life sciences corridor work with clinical data, patient information, and pharmaceutical research — all requiring strict data governance.
Professional services confidentiality: Law firms, accounting practices, and consulting firms across Calgary and Edmonton handle client-privileged information daily.
Cross-border considerations: Many Alberta companies do business across provincial and international borders, adding complexity around data residency and privacy jurisdiction.
The CAPG grant makes this financially accessible. AI policy development — when structured as training through an engagement like Solway's AI Clarity Sprint — qualifies for 50% reimbursement. A $20,000 investment in policy and training costs $10,000 net.
Frequently Asked Questions
Does my Alberta company legally need an AI policy?
There is no current Alberta or Canadian law that mandates a formal AI policy for private companies. However, existing privacy legislation (PIPA in Alberta) requires you to protect personal information — and uncontrolled AI use can violate those obligations. Practically, an AI policy is a governance best practice that protects your organization, employees, and clients.
How long does it take to create an AI policy?
A comprehensive AI policy can be developed in 4-6 weeks with dedicated effort. Solway's AI Clarity Sprint delivers a complete policy framework in 6 weeks, including stakeholder alignment, employee input, and training. Simpler policies for smaller organizations can be drafted in 2-3 weeks.
What's the biggest risk of not having an AI policy?
Shadow AI — employees using unvetted AI tools on sensitive data. A 2025 survey found 28% of workers use AI without employer approval. For companies handling client-confidential, medical, financial, or proprietary data, this creates genuine legal and reputational exposure that a clear policy addresses.
Should our AI policy ban AI use or encourage it?
Neither extreme works. The most effective policies enable AI use within clear boundaries. Banning AI pushes usage underground (more shadow AI). Blanket encouragement without guardrails creates quality and security risks. The Solway System's sliding scales let you find the right balance for each category.
Does CAPG cover AI policy development?
Yes, when structured as training. Solway's AI Clarity Sprint — which includes policy development, team training, and opportunity mapping — qualifies under CAPG's Digital and Technological skills category. The program reimburses 50% of eligible costs with no minimum hour requirement.
What's the difference between an AI policy and an AI strategy?
An AI policy governs how AI is used (rules, boundaries, approvals). An AI strategy defines where AI should be used and what value it creates (opportunities, priorities, roadmap). You need both. Solway's AI Clarity Sprint delivers both — the Policy Framework addresses governance, while the Opportunity Matrix addresses strategy.
How often should we update our AI policy?
At minimum, every 6 months. AI capabilities evolve quarterly — Copilot in March 2026 is fundamentally different from Copilot in March 2025. Major updates to review: new tool capabilities, changes in privacy regulation, new AI tools entering the market, and feedback from employees on what's working and what isn't.
Can I use a template for my AI policy?
Templates are a starting point, not a finish line. Every organization has unique data types, risk profiles, and operational contexts. A medical research company and a construction firm need very different policies even if they're the same size. Solway's AI Policy Framework provides the structure, with the content customized to your specific organization.