What Should a Mid-Size Company's AI Policy Include in 2026? (The Solway System)

Shaheer Tariq

Mar 12, 2026

The Solway System is a 14-component AI Policy Framework with sliding scales from Caution to Innovation. Here's the complete breakdown.

Last updated: March 12, 2026

Most AI policy templates are either too vague to be useful or too rigid to fit diverse organizations. After developing AI governance frameworks for more than 30 organizations — from Global Affairs Canada to 25-person Calgary startups — Solway formalized a repeatable approach: the Solway System, a 14-component AI Policy Framework with sliding scales that let any organization calibrate between Caution-Oriented and Innovation-Oriented based on its specific risk profile, industry, and goals.

This guide walks through every component of the framework, explains how the sliding scales work, and provides a practical roadmap for building your own AI policy in 2026.

Why Traditional IT Policies Don't Work for AI

AI is fundamentally different from previous enterprise technology. Traditional IT policies govern tools that do exactly what you tell them. AI tools generate novel outputs, can produce errors that look authoritative, and handle data in ways that vary dramatically between pricing tiers.

A traditional IT policy might say: "Only use approved software." An AI policy needs to answer: Which AI tools are approved? For which data types? With what level of human review? How do we handle AI-generated content in client deliverables? What happens when an AI tool hallucinates a statistic in a board presentation?

These questions require a purpose-built framework — not a paragraph added to your existing acceptable use policy.

The Solway System: 14 Components in Three Sections

The framework is organized into three sections, each containing components with sliding scales that let organizations find their position between Caution-Oriented and Innovation-Oriented.

Section 1: Governance and Oversight

Component 1 — AI Governance Owner: Who owns the AI policy within your organization? On the Caution end, this is a dedicated AI governance committee with cross-functional representation. On the Innovation end, this is a single AI champion with authority to make rapid decisions. Most 50-person companies land somewhere in the middle: an executive sponsor (often the CEO or COO) with an informal advisory group.

Component 2 — Decision Escalation Framework: How are AI-related decisions escalated? When an employee encounters a new AI use case not covered by the policy, who decides? The Caution approach requires formal committee review. The Innovation approach empowers employees to experiment within broad guidelines and report outcomes. The right balance depends on your data sensitivity and regulatory environment.

Component 3 — Review Cadence: How often is the policy reviewed and updated? AI capabilities change quarterly — Copilot in March 2026 is fundamentally different from Copilot in March 2025. Caution-Oriented organizations review monthly. Innovation-Oriented organizations review semi-annually with ad-hoc updates for major capability changes. We recommend quarterly as the baseline.

Component 4 — Capabilities Matrix: A 7-category matrix that maps AI capabilities to organizational functions. Categories include: Content Creation, Data Analysis, Communication, Research, Process Automation, Decision Support, and Creative Work. For each category, the matrix specifies: approved tools, data handling rules, required review level, and responsible owner. This is the most operationally useful component of the entire framework.

Section 2: Use and Access

Component 5 — Approved Tools Registry: A definitive list of which AI tools are sanctioned, at which licensing tier, for which purposes. This must be specific: "Microsoft Copilot for Microsoft 365 (enterprise license) is approved for all internal work. Free ChatGPT is prohibited for any work-related use." Vague tool lists lead to vague compliance.
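A registry like this is easiest to enforce when it is machine-readable as well as written in prose. A minimal sketch of what that could look like, where the schema, identifiers, and the `is_approved` helper are illustrative assumptions (only the two example tools and their statuses come from the policy text above):

```python
# Sketch of an approved-tools registry as structured data. The schema
# and identifiers are illustrative assumptions; only the two example
# tools and their approval statuses come from the policy text.
APPROVED_TOOLS = {
    "microsoft-copilot-m365": {
        "tier": "enterprise",
        "approved_for": ["internal"],  # approved for all internal work
        "status": "approved",
    },
    "chatgpt-free": {
        "tier": "free",
        "approved_for": [],
        "status": "prohibited",  # prohibited for any work-related use
    },
}

def is_approved(tool_id: str, purpose: str) -> bool:
    """True only if the tool is sanctioned for this specific purpose."""
    entry = APPROVED_TOOLS.get(tool_id)
    return (
        entry is not None
        and entry["status"] == "approved"
        and purpose in entry["approved_for"]
    )
```

A lookup that defaults to "not approved" for unknown tools mirrors the point in the text: vague tool lists lead to vague compliance, so anything not explicitly listed is out.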

Component 6 — Data Classification and Handling: What types of data can enter AI tools? We use a three-tier model: Unrestricted (publicly available information — can go into any approved tool), Internal (non-sensitive business information — approved enterprise tools only), and Restricted (client data, personal information, financial records, trade secrets — requires specific authorization and verified data protection agreements). The Caution-Oriented position treats more data categories as Restricted. Innovation-Oriented positions allow Internal data into a broader range of tools.

Component 7 — Staff Decision Guide: The most employee-facing component. A visual flowchart that answers: "Can I use AI for this?" The guide walks employees through a series of questions: Is the data Restricted? Does the output need factual verification? Is this client-facing? Each path ends in a clear answer: yes, no, or yes with conditions. This is one of the three core deliverables from the AI Clarity Sprint.
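The guide's logic can be sketched as a small function that walks the same questions in order. The three questions mirror the ones above; the tier name and the wording of the answers are illustrative assumptions, not the published guide:

```python
def can_i_use_ai(data_tier: str, needs_fact_check: bool, client_facing: bool) -> str:
    """Walk the Staff Decision Guide's questions in order.

    Sketch only: the question sequence comes from the article, but the
    tier names and answer strings are illustrative assumptions.
    """
    if data_tier == "restricted":
        return "no: restricted data requires specific authorization"
    if client_facing or needs_fact_check:
        return "conditionally: human review required before use"
    return "yes: proceed with an approved tool"
```

Note that the Restricted-data check comes first: it short-circuits every other question, which is exactly the behaviour a flowchart with a hard gate at the top encodes.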

Component 8 — Acceptable Use Cases: Rather than enumerating every possible use, this component provides categories of acceptable use with examples: Always OK (drafting internal emails, summarizing meetings, brainstorming), OK With Review (client-facing drafts, data analysis, report generation), and Requires Approval (entering restricted data, automated decision-making, public-facing content). The sliding scale determines where each category boundary sits.

Component 9 — Prohibited Uses: Explicit prohibitions that apply regardless of where you sit on the sliding scale. Universals include: entering customer payment information into any AI tool, using AI to make hiring or firing decisions without human review, submitting AI-generated content to regulators without disclosure. These are non-negotiable guardrails.

Section 3: Quality, Compliance, and Evolution

Component 10 — Human Review Requirements: What level of human review is required for AI-assisted work product? This varies by output type. Caution-Oriented: all AI-assisted outputs require review before use. Innovation-Oriented: only client-facing and regulatory outputs require formal review; internal use is at the employee's discretion. Most companies find a middle ground where the review requirement scales with the stakes of the output.
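One way to express "review scales with the stakes of the output" is a lookup keyed by output type, with the Caution and Innovation ends of the scale as overrides. The level names and posture labels here are hypothetical, not part of the published framework:

```python
# Hypothetical sketch of stakes-scaled review requirements. The level
# names and posture labels are assumptions for illustration.
REVIEW_LEVELS = {
    "internal-note": "optional self-check",
    "internal-report": "peer review",
    "client-facing": "formal review",
    "regulatory": "formal review plus sign-off",
}

def required_review(output_type: str, posture: str = "balanced") -> str:
    if posture == "caution":              # everything reviewed before use
        return "formal review"
    if posture == "innovation" and output_type.startswith("internal"):
        return "employee discretion"      # internal use left to the author
    return REVIEW_LEVELS[output_type]
```

The middle ground the text describes is the default branch: review requirements rise with the output type, while the two poles simply flatten the ladder in opposite directions.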

Component 11 — Accuracy and Quality Standards: How do employees verify AI-generated claims, statistics, and recommendations? Frontier models score roughly 44% on Humanity's Last Exam, a benchmark built to be the hardest available — impressive but far from infallible. Hallucination is a structural characteristic of these models, not a bug that gets fixed. Your policy should specify verification standards: cross-reference claims with primary sources, verify statistics against original data, and never use AI-generated citations without confirming they exist.

Component 12 — Privacy and Compliance: Alignment with relevant privacy legislation. For Alberta private-sector companies, this means PIPA (Personal Information Protection Act). For organizations working with public-sector data, FOIP applies. The policy should address data residency (where AI processing occurs geographically), cross-border data transfer implications, and sector-specific regulations.

Component 13 — Intellectual Property: Who owns AI-assisted work product? How should AI contributions be disclosed? The legal landscape is evolving, but your policy should take a clear position. Most organizations default to: the organization owns all work product created by employees using approved AI tools in the course of their duties, and AI assistance should be disclosed internally per the review requirements.

Component 14 — Evolution and Sunset: How does the policy adapt as AI capabilities change? This component specifies the review triggers (new major AI release, regulatory change, security incident), the update process, and how retired policies are archived. Without this component, policies become outdated within 6 months and employees stop consulting them.

How the Sliding Scales Work in Practice

Every organization lands at a different point on the Caution→Innovation spectrum, and importantly, they can land at different points for different components.

A Calgary medical research company might be:

  • Caution-Oriented on Components 6 (Data Classification), 10 (Human Review), and 12 (Privacy) because they handle clinical data

  • Innovation-Oriented on Components 1 (Governance), 5 (Approved Tools), and 8 (Acceptable Use) for internal productivity

A Calgary creative agency might be:

  • Innovation-Oriented across most components because their data sensitivity is lower and their competitive advantage comes from speed

  • Caution-Oriented only on Component 13 (IP) because client ownership of creative work is contractually critical

This flexibility is what makes the framework practical across industries, company sizes, and risk profiles.

Building Your Policy: The AI Clarity Sprint

Solway delivers the Solway System through the AI Clarity Sprint — a 6-week structured engagement that produces three artifacts:

Artifact 1 — AI Policy Framework: The complete 14-component policy, customized to your organization's industry, data types, regulatory environment, and risk tolerance. Sliding scales calibrated through stakeholder alignment workshops.

Artifact 2 — Staff Decision Guide: A visual flowchart employees reference daily to answer "Can I use AI for this?" Designed to be printed, posted, and bookmarked — not buried in a policy document.

Artifact 3 — Opportunity and Risk Matrix: Every AI use case mapped to your organizational workflows, categorized into Quick Wins (high value, low risk), Quality Lifts (high value, moderate risk), Strategic Upgrades (transformative but complex), and Not Yet (too risky or too early). This bridges policy and strategy.
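The four quadrants can be sketched as a classification rule over value and risk ratings. The quadrant names come from the text; the exact boundary rules below are illustrative assumptions:

```python
def categorize(value: str, risk: str) -> str:
    """Place a use case into one of the four matrix quadrants.

    value: "high" or "transformative"; risk: "low", "moderate", "high".
    The boundary rules are illustrative assumptions, not Solway's.
    """
    if risk == "high":
        return "Not Yet"            # too risky or too early
    if value == "transformative":
        return "Strategic Upgrade"  # transformative but complex
    if value == "high" and risk == "low":
        return "Quick Win"
    if value == "high" and risk == "moderate":
        return "Quality Lift"
    return "Not Yet"
```

Checking risk first means a high-risk use case lands in "Not Yet" no matter how valuable it looks, which is the conservative default you want from a bridge between policy and strategy.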

The sprint runs in six steps: State of AI Briefing, Discovery and Baseline Scan, Team Input, Executive Vision Lab, Draft Policy and Opportunity Matrix, and Leadership Wrap-Up. The entire engagement qualifies for CAPG reimbursement at 50% with no minimum hours required.

Frequently Asked Questions

Can we build an AI policy without external help?

Yes, using this guide as a framework. The advantage of working with Solway is speed (6 weeks vs. 3-4 months internally), expertise (we've built 30+ policies and know the patterns), and the Staff Decision Guide and Opportunity Matrix — which require both AI expertise and deep understanding of your workflows to build effectively.

How long is a typical AI policy document?

Under 10 pages for the core policy. The Staff Decision Guide is typically a 1-page flowchart. The Opportunity Matrix can run 3-5 pages depending on organizational complexity. If your policy requires a law degree to understand, adoption will fail.

Is the Solway System only for large companies?

No. The framework scales from 10-person startups to 500-person enterprises. Smaller companies typically skip or simplify certain components (e.g., Component 2's escalation framework is simpler when everyone reports to the CEO). The sliding scales accommodate different complexity levels naturally.

How much does an AI Clarity Sprint cost?

A typical AI Clarity Sprint runs $15,000-$25,000 depending on organizational complexity and team size. CAPG reimburses 50% of eligible costs. For a $20,000 engagement, the net cost is $10,000 for a complete AI policy, staff decision guide, and opportunity matrix.
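The reimbursement math is simple enough to express as a one-line helper, mirroring the article's $20,000 example (the function name is ours, not CAPG's):

```python
def net_cost(engagement_cost: float, reimbursement_rate: float = 0.50) -> float:
    """Out-of-pocket cost after CAPG reimburses eligible costs at 50%."""
    return engagement_cost * (1 - reimbursement_rate)

# The article's example: a $20,000 engagement nets out to $10,000.
```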

What if we already have an IT security policy?

Great — the AI policy should complement it, not replace it. Your existing IT security policy covers access controls, network security, and data protection broadly. The AI policy adds AI-specific governance: approved tools, data handling in AI contexts, review requirements for AI outputs, and AI-specific risk management. They should reference each other.

How do we handle employees who resist the policy?

Lead with enablement. The most common resistance comes from employees who fear the policy will slow them down. When you pair policy with training on approved tools — showing employees that the sanctioned tools are more powerful than the free alternatives — resistance typically dissolves. Framing matters: "here are powerful tools and how to use them" works; "here are rules about what you can't do" doesn't.

Does CAPG cover AI policy development?

Yes, when structured as training. The AI Clarity Sprint includes team training alongside policy development, qualifying it under CAPG's Digital and Technological skills category. The program reimburses 50% with no minimum hours.
