ISED Consultation on Artificial Intelligence Strategy

(available in English only)

Research and talent

How does Canada retain and grow its AI research edge? What are the promising areas that Canada should lean in on, where it can lead the world?

(i.e. promising domains for breakthroughs and first-mover advantage; strategic decisions on where to compete, collaborate or defer; balance between fundamental and applied research)

The Center for European Policy Analysis, echoed by other mainstream outlets, contends that the United States and China are the principal competitors in the global artificial intelligence (AI) race. Even so, Canada punches above its weight, home to world‑renowned researchers such as Geoffrey Hinton. A particularly promising avenue is AI safety, where Canada can leverage Hinton’s expertise and credibility to advance research, standards, and responsible deployment.

To retain and grow its edge, Canada should focus on stronger intellectual property (IP) rights, including a broader understanding of patentable subject matter and authorship, better opportunities for enforcement and licensing, and perhaps a more robust structure for Standard Essential Patents and the obligation to license them on Fair, Reasonable, and Non-Discriminatory terms (SEP/FRAND), to make Canada more attractive for those innovating in the AI space.

Focusing on IP, Danish legislation has expanded copyright to encompass personal information such as an individual’s face and voice to address deepfakes. Canada could take a similar approach by either extending existing IP rights or creating targeted, additional rights that recognize human contributions. Such a framework would allow individuals to participate meaningfully in the value chain, ensure ongoing consent and oversight, and create new opportunities for fair compensation in the digital economy. More information about this idea can be found below in Proposal #4 under the heading “Commercialization of AI”.

The CBA previously proposed that the government consider codifying a federal tort of appropriation of personality, or in the case of commercial artists, a federal right of publicity, to protect artists whose likeness or voice is commercially exploited through AI “deepfakes”. This is especially important because copyright infringement may be difficult to prove where training data has not been copied in the traditional copyright sense, yet an author’s works have still been used to train the AI. Platforms should not benefit financially from prompts like “Write me a song in the style of Drake,” for example, where the AI has been trained on Drake’s catalogue, regardless of whether the catalogue itself has been copied within the meaning of the Copyright Act, and whether the resulting AI output infringes a specific work.

Canada could integrate both approaches, i.e. expanding IP protections to include personal attributes, and codifying a federal tort or right of publicity, to create a more comprehensive legal framework. This would ensure that individuals and artists are protected from unauthorized commercial exploitation in the age of generative AI, while also enabling fair participation in the digital economy.

Additionally, Canada’s current copyright regime already provides a relatively balanced framework for text and data mining (TDM). Unlike jurisdictions proposing broad TDM exceptions, Canada’s fair dealing provisions under section 29 of the Copyright Act, as interpreted by the Supreme Court of Canada, allow for TDM in research and private study contexts without undermining creator rights. This approach supports innovation while respecting the interests of rights holders and aligns with the goals of the proposed Artificial Intelligence and Data Act (AIDA), which may be reintroduced and which emphasizes transparency and responsible development.

Leveraging public-private partnerships (PPPs) to advance AI research and foster inclusive talent development could be beneficial. PPPs can facilitate cross-sector collaboration in AI research by involving a diverse range of participants, not just large institutions, and by sharing data and computing infrastructure. They should include competitive research programs, innovation sandboxes, and governance frameworks that build trust through transparent IP, ethical guidelines, and accountability.

How can Canada strengthen coordination across academia, industry, government and defence to accelerate impactful AI research?

(i.e. mechanisms for cross-sector collaboration; integration of public and private research efforts; industry-sponsored research while preserving academic independence) 

What conditions are needed to ensure Canadian AI research remains globally competitive and ethically grounded?

(i.e. infrastructure, talent and governance enablers; ethical standards and risk mitigation; alignment of applied research with business and societal needs)

Without protection, control is unattainable. Strengthening intellectual property rights establishes a framework for acquiring, enforcing, and licensing AI.

History has shown that markets alone do not always align with the public interest. The IP system can play an important role by recognizing human contributions and incentivizing open, international collaboration. At the same time, secrecy and trade secret protection will be appropriate for certain components of AI development. For many other aspects, however, a standards-setting framework, anchored in transparency, auditing, and interoperability, would be preferable. Such an approach can deter bad actors by reducing the feasibility of deploying AI systems in secret or in ways that evade detection, while still preserving room for responsible innovation.

As AI systems increasingly rely on large-scale ingestion of data, Canada must ensure that its legal and regulatory infrastructure keeps pace with technological advances. Previous recommendations around text and data mining (TDM) emphasized the need for careful study and restraint in introducing broad exceptions. These remain relevant, particularly in light of recent case law respecting unauthorized use of copyrighted content in AI training and the proposed AIDA, which contemplated transparency and record-keeping obligations for AI developers. The Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems also provides interim guidance on safety, fairness, and human oversight while formal regulation evolves.

Together, these conditions, i.e., stronger IP and trade secret protections, transparent standards, and well-thought-out regulation of TDM and of the use of personal data, copyrighted content and public legal documents, will help ensure Canadian AI research remains globally competitive and ethically grounded.

What efforts are needed to attract, develop and retain top AI talent across research, industry and the public sector?

(i.e. differentiated enablers for research vs. applied talent; domestic vs. global talent strategies; targeted attraction programs and priority domains; international collaboration opportunities)

Return on Investment (ROI) is directly tied to the protection and defence of market advantage through robust IP mechanisms, whether for a product, a service, or a foundational model. This is how Canada can attract, develop and retain top-tier talent, especially when competing with large, oligopolistic foreign AI firms that offer significantly more compensation and infrastructure.

AI has created an existential challenge for humanity, and Canada must be strategic in its response. The private sector has a critical role, but it is fair to question whether profit-maximizing firms should lead an undertaking with such profound societal stakes. In Canada, we can leverage Geoffrey Hinton’s leadership and target a narrowly defined, high-impact domain of AI where the country can achieve global excellence. Taiwan’s strategic cultivation of its semiconductor industry offers a useful analogue: by dominating a pivotal layer of the value chain, it has forged global alliances and strengthened national security. Canada can pursue a similarly focused strategy in AI to build sovereign capability, international partnerships, and long-term resilience.

To support this, Canada must ensure that its legal and regulatory infrastructure keeps pace with technological advances. The CBA has previously emphasized the importance of clear IP protections and a balanced copyright regime that supports innovation while respecting creator rights. This includes transparency and record-keeping obligations for AI developers, as contemplated under the previously proposed AIDA, and reinforced by the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

Recent legal developments, including copyright claims against AI platforms and calls for stronger licensing frameworks, highlight the need for a clear and enforceable IP environment. Without it, Canada risks losing talent to jurisdictions with more robust protections and commercial pathways. Sovereign compute infrastructure is also critical, not only to retain researchers and startups but to ensure data security, accelerate innovation, and foster collaboration across academia, industry, and the public sector. Attracting and retaining top AI talent will require more than competitive salaries. It will depend on Canada’s ability to offer a coherent strategy: one that protects innovation, enables responsible commercialization, and aligns with the public interest.

Accelerating AI adoption by industry and government

Where is the greatest potential for impactful AI adoption in Canada? How can we ensure those sectors with the greatest opportunity can take advantage?

(i.e. high-potential industries like health care, construction and agriculture; lessons from application-specific use cases like inventory management or financial forecasting)

What are the key barriers to AI adoption, and how can government and industry work together to accelerate responsible uptake?

(i.e. sectoral vs. cross-sectoral challenges, such as liability and small to medium-sized enterprise constraints; potential government policies, incentives and ecosystem supports)

How will we know if Canada is meaningfully engaging with and adopting AI? What are the best measures of success?

(i.e. metrics to distinguish experimentation, integration and transformation; sector-specific benchmarks and indicators of progress)

Commercialization of AI

What needs to be put in place so Canada can grow globally competitive AI companies while retaining ownership, IP and economic sovereignty?

(i.e. strategies for attracting investment and scaling internationally; balancing foreign capital with Canadian control of IP and corporate identity; economic security safeguards) 

Proposal #1: Introduce a Federal Trade Secrets Act

The Problem
  • Canada currently lacks a unified federal regime for trade secrets. Protection depends largely on common law (breach of confidence) and contractual agreements.
    • Provincial courts may lack the technical expertise and tools needed for complex, technology-driven disputes involving AI systems and data security breaches.
  • Key elements of AI innovation, such as algorithms, training data, model weights, and curation methods, often fall outside the scope of patents, copyright, and other traditional IP rights.
  • A federal trade secret regime would fill the current gap in Canada’s IP framework, complementing existing patent and copyright systems without requiring public disclosure. This aligns with Canada’s obligations under the Canada–United States–Mexico Agreement (CUSMA), which mandates civil and criminal protections for trade secrets and encourages voluntary licensing and transfer mechanisms.
The Proposed Solution
  • Introduce a federal trade secret regime modeled partly on the US Defend Trade Secrets Act (DTSA) to provide consistent, national protection for trade secrets, including AI-related information.
    • Grant exclusive jurisdiction to the Federal Court of Canada, leveraging its existing expertise in IP matters.
    • Provide a statutory definition of trade secrets that explicitly encompasses digital assets, datasets, machine learning models, and technical know-how relevant to AI.
    • The DTSA has a well-defined two-step definition, requiring only that a claimant demonstrate reasonable efforts to keep secret information that holds economic value. If Canada adopts this definition, it sets a low bar that provides trade secret protection from the moment of inception.
    • Consider the CBA’s previous recommendation for a cautious and evidence-based approach to legislative reform, ensuring that any new framework maintains the balance between owners’ rights and public interest.
  • Registration mechanism
    • A Registrar of Trade Secrets could be created, allowing trade secret owners to register a title or general description of the secret (without disclosing its substance).
    • This would help establish ownership and a fixed date of existence, aiding in proof of rights without compromising confidentiality (see the illustrative sketch at the end of this proposal).
  • Evidentiary Protection
    • The onus would rest on the trade secret owner to establish that the trade secret:
      • existed in some expression at a fixed point in time;
      • was protected through reasonable efforts;
      • holds some form of economic value; and
      • was stolen or misappropriated.
    • Proof of the trade secret’s existence and scope could be handled in chambers or through sealed proceedings, preventing public disclosure of sensitive details during discovery.
    • Confidential pleadings or annexes could be permitted to protect the sensitive nature of AI-related information throughout litigation.
    • These steps will ensure Canada’s Trade Secrets Act rises above the recently highlighted pitfalls of the DTSA, namely the reluctance to disclose in trade secret litigation (see Waymo v. Uber).
  • Enable robust enforcement mechanisms, including:
    • Civil seizure and injunction powers to prevent the dissemination of stolen AI assets.
    • Damages for economic loss due to misappropriation.
    • Potential alignment with US enforcement standards to facilitate cross-border cooperation.
  • By protecting non-public, high-value information, it would incentivize investment in AI research and development (R&D), especially for firms wary of patenting rapidly evolving technologies.
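
To make the registration mechanism concrete, the sketch below shows one way a registry could record proof of existence without ever holding the secret itself: the owner registers only a salted hash (a cryptographic commitment) and later, in sealed proceedings, reveals the document and salt to prove the match. This is a minimal Python illustration under stated assumptions; the registry fields and interface are hypothetical, not a proposed implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def commit_trade_secret(document: bytes, salt: bytes) -> dict:
    """Build a registry entry proving existence at a point in time
    without revealing the secret's substance."""
    digest = hashlib.sha256(salt + document).hexdigest()
    return {
        "title": "General description only (no substance)",
        "commitment": digest,  # hash of salt + document
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_commitment(entry: dict, document: bytes, salt: bytes) -> bool:
    """In chambers, the owner reveals the document and salt; the court
    recomputes the hash and confirms it matches the registry entry."""
    return entry["commitment"] == hashlib.sha256(salt + document).hexdigest()

# Usage: register at inception, verify during litigation.
secret = b"training-data curation pipeline, v3"
salt = b"random-per-registration-salt"  # thwarts guessing attacks on short secrets
entry = commit_trade_secret(secret, salt)
print(json.dumps(entry, indent=2))
assert verify_commitment(entry, secret, salt)
```

Because the register would store only a title and a digest, even full public disclosure of the register would reveal nothing about the secret’s substance, consistent with the evidentiary protections described above.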

Proposal #2: Patent prosecution fund to overcome international IP protection cost barriers

The Problem
  • The International Patent Cost Trap
    • Canadian AI startups face a devastating financial barrier: protecting innovation globally costs $100,000–$250,000 per patent family, far beyond the reach of seed-stage companies.
Table 1: Cost of International Patent Protection for AI Innovations
Jurisdiction | Cost range | Key challenges
Canada (CIPO) | $5,000–$15,000 | Limited market protection
United States (USPTO) | $15,000–$30,000 | Alice Corp. eligibility
Europe (EPO) | $20,000–$40,000 | + $3K–$5K per country
China | $10,000–$25,000 | Translation costs
Japan | $10,000–$25,000 | Complex prosecution
Korea | $10,000–$25,000 | Language barriers

Coverage level | Estimated total | Notes
Minimum protection | $70,000–$100,000 | CA + US + EU + China + JP
Realistic coverage | $100,000–$150,000 | Multiple applications
Comprehensive portfolio | $150,000–$250,000 | Defensive + offensive claims
  • Why AI patents cost more:
    • complex prior art across software, algorithms, and applications;
    • multiple office actions requiring costly responses;
    • subject matter eligibility challenges (especially in the US post-Alice Corp); and
    • fast-moving fields requiring multiple narrower applications, and continuation filings to capture emerging variations.
  • The startup catch-22
    • In years 1–2 (seed stage, $500K–$2M), startups develop a Minimum Viable Product (MVP) and must file provisional/PCT applications to establish priority, but cannot afford comprehensive protection ($250K in patent costs). This results in minimal protection (Canada + US provisional only).
    • By years 2–3 (Series A, raising $3M–$10M), investors demand robust IP portfolios. Weak IP depresses valuation.
    • The 30-month PCT deadline forces companies to make a choice: pay high costs or abandon foreign rights.
    • The outcomes are abandonment of foreign protection (enabling competitors to copy in the EU/Asia), early acquisition, or loss of competitive advantage. The result is IP leakage: IP value flows out of Canada.
The Proposed Solution
  • Canadian AI Patent Prosecution Fund
    • A government-backed fund providing 50–75% coverage of international patent prosecution costs, modeled on Israel and Singapore’s successful programs.
Table 2: Proposed Patent Prosecution Fund Structure
Element | Details
Eligibility | Canadian-incorporated AI companies; aligned with AI Sovereign Compute Strategy; must conduct R&D in Canada; demonstrated commercial readiness.
Coverage | Government funds 50–75% of PCT fees, national phase entries (US, EU, China, etc.), office action responses, professional fees, and translations; maximum $500K–$1M per company over 5 years.
What Canada receives | Non-exclusive, royalty-free license for government/research use (non-commercial); no ownership transfer; reporting on commercialization; right of first refusal on government procurement.
Economic anchoring | IP transfer repayment (Israel model): if IP is transferred abroad and R&D stays in Canada, repay 3x; if all operations leave Canada, repay 6x; incentivizes maintaining Canadian operations post-exit.
Strategic focus | Commercial barrier patents protecting specific applications and products, blocking competitor entry, generating licensing revenue, and ensuring freedom to operate for Canadian firms.
Coordination | CIPO advises on filing strategy; share prior art research across funded companies; build a coordinated Canadian patent landscape.
Estimated budget | $250M over 5 years; support 500+ companies; average $500K per company.
Expected outcomes | 50% increase in international AI patents; higher valuations; licensing revenue returns to Canada; retained Canadian ownership post-exit; job creation in high-value AI.

Proposal #3: Regulatory sandbox IP protections to enable safe testing without exposing trade secrets

The Problem
  • Regulatory Testing Exposes Trade Secrets
    • Canada has no federal AI regulatory framework following AIDA’s cancellation in January 2025. Companies testing AI systems in regulated sectors must share extensive technical details with regulators but receive no IP protection.
IP Exposure Risks in Regulatory Testing
Sector | IP disclosure requirements and risks
Healthcare AI (Health Canada) | Clinical trial data reveals methodology; algorithm details for safety approval; advisory committees may include competitors; published results become prior art; no formal sandbox or IP protection provisions.
Financial Services (OSFI/CSA) | Model architecture disclosure for bias testing; decision-making logic for discrimination review; CSA sandbox lacks explicit IP protections; trade secrets vulnerable during regulatory review.
Autonomous Systems (Transport Canada) | Safety protocols and sensor data required; decision-making algorithms for approval; testing reveals performance data to observers; no comprehensive sandbox framework with IP rules.
Cross-sector gaps | No protection from Access to Information requests; no prior art grace period for required disclosures; no regulatory data exclusivity provisions; no coordination between regulators and CIPO.
  • The Regulatory void
    • After AIDA’s failure, Canada has no federal AI legislation, no national regulatory sandbox framework, no explicit IP protection for AI testing, only fragmented sector-specific approaches, and an unpredictable future regulatory environment.
    • AIDA’s demise highlighted the need for meaningful consultation, harmonization with international norms, and a risk-based approach to regulation focused on deployment rather than research and development.
  • Competitive Disadvantage vs. International Peers
Canada vs. Peer Countries — AI Company Support
Country | IP prosecution support | Regulatory framework | Result
Israel | Tnufa Program: NIS 200K (~$70K CAD); 85% government funding; includes patent costs; R&D Fund: 50–70% | Sector-specific sandboxes; Innovation Authority oversight; 3x/6x repayment for IP transfer (economic anchoring) | 54% of exports high-tech
Singapore | MRA Grant: SGD 100K; EIS tax deductions; IP-as-collateral loan scheme; IDI: 5–10% tax on IP income | Cross-sector sandbox programs; ranked #2 globally for IP; fast-track patent grants; regional patent cooperation | Asian biotech/AI hub
European Union | Varies by member state; generally limited direct support | EU AI Act (effective Aug 2024); mandatory sandboxes by Aug 2026; Article 78 confidentiality for trade secrets; structured sandbox plans | Clear regulatory path
Canada (current) | No patent prosecution funding program; SR&ED covers R&D but not patent costs; companies pay 100% | AIDA cancelled Jan 2025; no federal AI framework; no national sandbox; no IP protection provisions; fragmented provincial approach | Regulatory uncertainty
  • The consequences are predictable
    • Canadian AI startups either abandon international IP protection or seek foreign acquisition; investors discount Canadian AI companies due to weak IP positions; companies hesitate to test in Canada due to IP exposure risks; brain drain accelerates; and research excellence fails to translate into commercial ownership.
The Proposed Solution
  • Regulatory Sandbox IP Protection Framework
  • Implement EU AI Act–style protections enabling companies to test AI systems without exposing trade secrets or competitive intelligence.
Proposed Sandbox IP Protection Provisions
Protection mechanism | Implementation
1. Regulatory confidentiality (EU Article 78 model) | Information disclosed during sandbox testing is protected from ATIA requests; NDAs are in place for regulators and advisors; explicit trade secret protection is provided; and carve-outs are made from public disclosure rules.
2. Structured sandbox plans (EU Article 58 model) | Negotiated agreements specifying what information is shared, how it is protected, and disclosure limitations; companies control sensitivity levels; the minimum necessary disclosure principle.
3. Prior art grace period | Clarify that Canada’s 12-month grace period applies to sandbox disclosures; sandbox publications do not count as prior art against participants’ own patents; CIPO guidance on filing timing.
4. Data rights and exclusivity | Sandbox-generated data remains the participant’s property; the government holds a non-exclusive license for regulatory purposes only; no sharing with subsequent users; define the regulatory data exclusivity period.
5. Defensive publication | Government assists with strategic disclosure of non-core innovations to block competitor patents and create public domain prior art, supporting freedom to operate.
6. Certification and exit | Written proof of successful participation for investors/partners; exit reports with sensitive details redacted; IP protection recommendations post-sandbox.
  • Priority sectors for sandbox implementation
    • Healthcare AI (Health Canada) for clinical trials and diagnostic systems; Financial Services (OSFI/CSA) for credit scoring and fraud detection; Autonomous Systems (Transport Canada) for self-driving vehicles and drones.
  • Implementation Roadmap
    • Phase 1: Immediate Actions (0–6 months). Establish a Patent Prosecution Fund working group across ISED, CIPO, and Finance. Issue interim IP protection guidance for sectoral regulators. Consult with industry on fund eligibility and sandbox needs. Study the Israel and Singapore models in depth.
    • Phase 2: Pilot Programs (6–18 months). Launch a pilot Patent Prosecution Fund supporting 50 companies with a $25M budget. Implement sandbox IP protections in healthcare AI. Develop CIPO protocols for patent timing and strategy. Build IP valuation expertise for patents-as-collateral.
    • Phase 3: Full Deployment (18–36 months). Scale the Patent Prosecution Fund to 500+ companies with $250M over five years. Expand sandbox IP protections to all three priority sectors. Integrate with future federal AI legislation and align with the CBA’s previous recommendations on balanced IP frameworks. Establish reporting and tracking for commercialization outcomes and ROI.
    • Phase 4: Optimization (3–5 years). Add tax incentives for IP income. Create an IP-as-collateral financing scheme for patent-backed loans. Negotiate bilateral Patent Prosecution Highway arrangements. Measure outcomes such as patents filed, valuations, exits, and job retention.
  • The Bottom Line
    • Without these measures, Canadian AI startups will continue to abandon international IP protection or seek foreign acquisition to solve the IP problem; regulatory testing will continue to expose trade secrets; research excellence will not translate to Canadian commercial ownership; and brain drain will accelerate as innovators join foreign companies with IP support.
    • With these measures, Canadian AI companies can afford global protection; testing can proceed safely without IP exposure; higher valuations will attract investment while retaining Canadian ownership; economic anchoring via 3x/6x repayment keeps operations in Canada even post-exit; and Canada becomes a preferred jurisdiction for AI innovation and testing.
    • Success metrics (5-year targets): support 500+ companies with patent prosecution funding; 50% increase in Canadian AI patents filed internationally; 30% increase in average exit valuations for funded companies; 75% of exited companies maintain Canadian R&D operations; $500M+ in royalty repayments and licensing revenue returned to Canada; and 10,000+ high-value AI jobs created and retained.
    • The international precedents prove this works. Israel and Singapore have created high-value exits while retaining economic activity. The EU has shown how regulatory frameworks can protect IP while ensuring safety. AIDA’s failure underscores the urgency of acting now with a more inclusive, harmonized, and risk-based approach to ensure Canadian ownership and operations continue beyond successful commercialization.

Proposal #4: Expansion of IP rights to promote human participation

To foster meaningful human participation in the algorithmic age, Canada could consider a framework that recognizes human-generated data, including biometric and neurodata, as a protected interest. Drawing inspiration from Denmark’s recent legislation expanding copyright to personal attributes like voice and likeness, this approach would complement existing privacy laws such as PIPEDA by emphasizing individual control, transparency and economic participation, rather than ownership in the traditional IP sense.

While Canadian law does not currently recognize personal data as IP, emerging policy discussions around digital sovereignty and AI commercialization suggest a growing appetite for frameworks that empower individuals in the data economy. The CBA has previously supported cautious modernization of Canadian IP law, emphasizing human authorship and inventorship and the need to protect creators’ rights in the face of AI-driven content generation. This principle is foundational to Canadian copyright law, which distinguishes itself from the laws of other jurisdictions by maintaining a strong emphasis on the role of human creativity, skill and judgment.

As outlined above under the heading “How does Canada retain and grow its AI research edge? What are the promising areas that Canada should lean in on, where it can lead the world?”, the CBA has previously recommended codifying a federal tort of appropriation of personality and a federal right of publicity to protect individuals and artists from unauthorized commercial exploitation through AI-generated content, including deepfakes. These measures are particularly important where training data is used without direct copying, making copyright infringement difficult to prove, yet still resulting in the commercial use of an individual’s creative output. Such protections would shield individuals and artists from unauthorized commercial exploitation in the age of generative AI while enabling fair participation in the digital economy.

Building on this foundation, Canada could explore the extension or expansion of IP protections to include health-related data, biometric identifiers, and other forms of personal data, ensuring individuals are not excluded from the value generated by AI systems trained on their likeness, voice, or other personal traits. Under such a model:

  • Individuals would retain inalienable, non-transferable rights in their raw neural signals and biometric data;
  • Companies could hold time-limited, transferable rights in derived models and analytics, contingent on transparency and licensing compliance; and
  • Licensing mechanisms such as blockchain-based smart contracts could facilitate automated, auditable licensing, ensuring ongoing consent and enabling individuals to receive fair compensation or financial returns when their data is used in commercial products or services (a minimal sketch follows below).

This approach could allow people to participate in the value chain, ensure ongoing consent and oversight, and create new opportunities for fair compensation in the digital economy.
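
As a purely illustrative sketch of the licensing mechanism described above: the essential property a blockchain-based smart contract would supply is an append-only, independently verifiable record of consent, use, and revocation. The minimal Python sketch below models that property with a hash-chained ledger; on an actual blockchain, the network rather than a single operator maintains and verifies the chain. All names, fields, and royalty terms are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class ConsentLedger:
    """Append-only, hash-chained ledger of licensing events. Each record
    links to the previous one, so any retroactive edit breaks the chain
    and is detectable by an auditor."""

    def __init__(self):
        self.records = []

    def append(self, event: dict) -> dict:
        body = {
            "event": event,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self.records[-1]["hash"] if self.records else "genesis",
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every link; True only if no record was altered."""
        prev = "genesis"
        for rec in self.records:
            unsigned = {k: rec[k] for k in ("event", "timestamp", "prev_hash")}
            digest = hashlib.sha256(
                json.dumps(unsigned, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev_hash"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True

# Usage: grant, meter, and revoke a data licence with an auditable trail.
ledger = ConsentLedger()
ledger.append({"type": "grant", "subject": "individual-123",
               "licensee": "example-ai-co", "scope": "voice-model training",
               "royalty": "2% of derived revenue"})
ledger.append({"type": "use", "licensee": "example-ai-co",
               "purpose": "fine-tuning run 42"})
ledger.append({"type": "revoke", "subject": "individual-123"})
assert ledger.verify()
```

What matters for policy is the auditability: because each record’s hash incorporates the previous record, a regulator or rights holder can detect after-the-fact tampering without having to trust the platform that holds the data.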

Canada’s copyright regime already provides a balanced framework for text and data mining (TDM). As noted above, fair dealing under section 29 of the Copyright Act, as interpreted by the Supreme Court of Canada, permits TDM for research and private study. This approach supports innovation while respecting creator rights and aligns with the principles of the proposed AIDA, which may be reintroduced and which emphasizes transparency and responsible development.

By treating human data as a personal asset and enabling automated, auditable licensing, society might foster innovation while respecting the dignity and economic interests of the individuals whose minds generate this data. Recognizing an inalienable right in raw neural signals could help address the unique personal dimension of neurodata, while transferable, time-limited rights in derived analytics would preserve companies’ incentives to develop new therapeutic devices, adaptive-learning engines, or neuro-ergonomic applications.

Automated licensing via blockchain could minimize transaction costs, avoid the collective-action problems of individual bargaining, and provide regulators with an auditable trail for compliance. This framework avoids consigning neurodata to a pure commons or locking it behind perpetual corporate silos. By separating personal entitlement in raw signals from time-limited, tradable rights in downstream analytics, and by operationalizing that distinction through blockchain-enabled licensing, such a framework could foster an innovation ecosystem that rewards ingenuity while honouring the dignity, autonomy, and economic interests of the individuals at the heart of the data beneath the skull.

Proposal #5: Patent-like protection for AI inventions

The foundational purpose of patent protection is to incentivize innovation through disclosure, enabling others to build on existing knowledge. While trade secret protection remains essential for safeguarding proprietary technologies, society benefits substantially from open science, knowledge sharing, and transparency.

In the context of AI, where rapid iteration and reduced R&D costs are reshaping innovation cycles, Canada must consider how its IP framework can evolve to support both transparency and competitiveness.

Today, many inventions are developed with the assistance of AI, where human inventors leverage AI tools and systems to enhance creativity, problem-solving and technical design. However, under current Canadian law, inventorship is limited to natural persons. This principle was reaffirmed in Thaler v. CIPO (DABUS), where the Canadian Patent Appeal Board held that AI cannot be named as an inventor under the Patent Act. The CBA has supported this position, emphasizing the importance of maintaining human authorship and inventorship as a cornerstone of Canadian IP law.

Despite this legal clarity, the increasing role of AI in innovation presents new challenges. Developers may be incentivized to rely on trade secrets rather than patents, particularly when the traditional 20-year exclusivity period does not align with the accelerated pace of AI-driven development. This trend risks undermining the public benefit of disclosure and impeding cumulative innovation.

To address these concerns, Canada could consider allowing patents for AI-assisted inventions with demonstrable human inventorship, where the AI is used as a tool in the inventive process and the human contribution is central and verifiable, provided no infringement is involved. This would preserve the integrity of inventorship while recognizing the evolving nature of innovation.

Additionally, Canada could explore the creation of a new category of patent-like protection specifically designed for AI-assisted inventions with demonstrable human inventorship. This framework would offer shorter exclusivity periods, for example, five to ten years, reflecting the rapid development cycles and reduced R&D costs associated with AI technologies. Such a model could strike a balance between encouraging disclosure and maintaining incentives for commercialization. Key components of this framework could include:

  • Mandatory disclosure of AI involvement in the inventive process, including the datasets, models, and algorithms used;
  • Time-limited exclusivity calibrated to reflect the rapid development cycles of AI technologies;
  • Compliance with Canada’s international obligations, including under CUSMA, which sets minimum standards for IP protection while allowing domestic flexibility;
  • Integration with a Patent Prosecution Fund (as proposed in Proposal #2) to support Canadian innovators in overcoming cost barriers to securing international IP rights.

This proposal aligns with Canada’s broader strategic objectives to scale globally competitive AI companies, retain domestic ownership of IP, and strengthen economic sovereignty. It also reflects the CBA’s call for modernizing IP frameworks to accommodate emerging technologies while preserving the integrity of foundational legal principles.

By creating a more agile and innovation-responsive IP regime, Canada can ensure that AI-assisted inventions with demonstrable human inventorship are disclosed, shared, and developed in ways that benefit both innovators and society, fostering a robust and inclusive innovation ecosystem.

What changes to the Canadian business enabling environment are needed to unlock AI commercialization?

(i.e. barriers such as Canadian-controlled private corporation rules and foreign direct investment constraints; incentives, capital access and liability mitigation; sector-specific and cross-sectoral policy levers)

How can Canada better connect AI research with commercialization to meet strategic business needs?

(i.e. determining government’s role in linking academia, start-ups and industry; retaining Canadian-developed intellectual property; prioritizing sectors like life sciences, energy and defence for commercialization support)

Scaling Canadian champions and attracting investments

How does Canada get to more and stronger AI industrial champions? What supports would make our champions own the podium?

(i.e. barriers to scaling, including mentorship needs; effective mechanisms for transitioning between federal programs; tailored support across early-, mid- and late-stage growth)

What changes to Canada’s landscape of business incentives would accelerate sustainable scaling of AI ventures?

(i.e. alignment of business incentives and programmatic improvements to support scaling firms; mechanisms to retain and champion high-potential Canadian companies)

How can we best support AI companies to remain rooted in Canada while growing strength in global markets?

(i.e. strategies for long-term retention of scaled firms; balancing global competitiveness with domestic economic impact; government’s role in championing Canadian AI success stories)

What lessons can we learn from countries that are successful at investment attraction in AI and tech, both from domestic sources and from foreign capital?

N/A

Building safe AI systems and strengthening public trust in AI

How can Canada build public trust in AI technologies while addressing the risks they present? What are the most important things to do to build confidence?

(i.e. risks posed by AI tools and services; drivers of public and business mistrust; educational and literacy strategies to foster informed confidence) 

Canadians will place their trust in AI when the rules are transparent, the stakes are clear, and the systems impacting people's lives can be explained in straightforward language. Voluntary guidance falls short. A modern, risk-based statute applicable to both public and private sectors should establish the foundational rules: inform people when AI is involved, offer meaningful explanations, provide an easy path to human review, and ensure real redress when decisions have significant consequences. These obligations should be practical to implement and easy to comprehend, allowing organizations to integrate them into design and procurement from the outset.

Trust flourishes when assurance is visible. For systems with significant impact, algorithmic and privacy impact assessments should be conducted early and summarized for public access. People should be able to understand the purpose, data used, main logic, key risks, and the safeguards supporting the tool. In high-stakes public services where AI drives triage, departments should go further by detailing the decision framework and how they address and rectify disparities.

Maintaining a well-resourced human alternative signals that the system serves people, not the reverse. The government can set the tone. A public AI Systems Register listing each federal tool, its purpose, risk rating, supplier, contacts, and links to assurance materials would make transparency the norm rather than the exception. Coupled with accessible explanations and a clear complaints process, it allows people to see what is in use and how to raise concerns without having to guess where to start.

Literacy is the other half of the trust equation. Canada should fund straightforward, modular learning (through schools, colleges, professional programs, newcomer supports, and small-business toolkits) so people understand what AI can and cannot do. Civil servants also need practical training to safely purchase, test, operate, and monitor AI. Given the rise of synthetic media, literacy should cover how provenance and watermarking work, where they are effective, and where they fall short.

Engagement must be inclusive and ongoing, involving Indigenous peoples, migrants, persons with disabilities, and other communities often affected by automated decisions from the beginning. There are also areas where boundaries should be clearly defined. If a deployment poses unacceptable risks (live remote biometric surveillance is a common example), Canada should be ready to prohibit or pause it unless robust safeguards are demonstrably in place. Such clear limits are not anti-innovation; they are prerequisites for public confidence.

What frameworks, standards, regulations and norms are needed to ensure AI products in Canada are trustworthy and responsibly deployed?

(i.e. governance mechanisms for AI oversight; assurance of product integrity and ethical compliance; priority areas where trust issues are most acute) 

A credible framework consists of four interconnected components: legislation that establishes obligations; regulations that make these obligations concrete; standards and conformity assessments that transform principles into verifiable artifacts; and common-sense norms that guide daily practice.

Legislation should adopt a tiered, risk-based model. Unacceptable uses are either prohibited or tightly controlled and monitored. High-risk systems come with more stringent obligations both before and after deployment, including documented design choices, dataset governance, testing for fairness and robustness, security threat modeling, human involvement where rights are engaged, ongoing monitoring, incident reporting, and periodic review with the option to retire systems that do not meet the required standards. Lower-risk uses are managed proportionately, avoiding unnecessary paperwork.

Standards ensure the regime functions effectively in practice. Model and system cards should outline the intended use, limitations, testing coverage, and known failure modes. Independent audits should be available and required in certain sectors. Procurement can integrate these practices into the mainstream by requiring vendors to provide assurance artifacts upfront. Canada should collaborate with its standards bodies to tailor profiles that align with domestic law while recognizing compatible foreign regimes to avoid duplication and help Canadian firms expand internationally.

Norms should emphasize the entire lifecycle. Good governance begins at design and continues through deployment and eventual decommissioning. This includes change control for retraining or fine-tuning, data minimization and quality checks, continuous evaluation to detect drift, and clear procedures to suspend or roll back when harms occur. Public-sector uses warrant higher duties because individuals cannot simply opt out.

Trust is most fragile where AI intersects with fundamental interests—public benefits, immigration and border decisions, law enforcement and biometrics, employment (including hiring and monitoring), credit and insurance, health, and children's safety. In these contexts, transparency, explanation, and access to human review are essential and should not be waived.

How can Canada proactively engage citizens and businesses to promote responsible AI use and trust in its governance? Who is best placed to lead which efforts that fuel trust?

(i.e. public-facing strategies to explain AI systems; inclusive approaches to trust building; balancing transparency with innovation) 

Engagement should be an ongoing relationship, not a one-time consultation. Every consequential AI system should come with clear notices that explain in simple terms why it is being used, what data feeds it, what parts of the logic matter, where it is likely to fail, and how to speak to a human if needed. When a person is on the receiving end of an AI-assisted decision, they should get a tailored explanation: what drove the outcome, which factors mattered, and how to challenge errors.

A national AI Help Centre would give people and small businesses a place to turn. It could host practical FAQs, templates for impact assessments, examples of good explanations, and straightforward routes to complaints and appeals. It could also serve as a resource centre where learners access AI training and skills development. A federal AI Systems Register would complement this by showing which tools are in use across government and where to direct questions. Engagement should be inclusive by design: multiple languages, accessible formats, and partnerships with community organisations to co-create notices, testing protocols, and trust-building activities.
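
For illustration only, a register entry could be a small machine-readable record whose fields mirror the elements described above (tool, purpose, risk rating, supplier, contact, assurance links). The Python sketch below is hypothetical; the field names and sample values are ours, not an existing government schema.

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One record in a public AI Systems Register (illustrative fields only)."""
    tool_name: str
    department: str
    purpose: str      # plain-language description of what the tool does
    risk_rating: str  # e.g., "low" / "medium" / "high"
    supplier: str
    contact: str      # where to direct questions or complaints
    assurance_links: list[str] = field(default_factory=list)  # assessments, audits

entry = RegisterEntry(
    tool_name="Benefits Triage Assistant",
    department="Example Department",
    purpose="Prioritizes incoming benefit applications for human review",
    risk_rating="high",
    supplier="Example Vendor Inc.",
    contact="ai-register@example.gc.ca",
    assurance_links=["https://example.gc.ca/aia/benefits-triage"],
)
print(entry)
```

Publishing entries in a machine-readable form like this would let researchers and civil society build their own views over the register rather than depending on a single portal.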

Responsibilities should be shared, with clear lines. ISED and the Treasury Board can set the legislative and policy baselines and use procurement to lift practice across the board. Program departments should publish their impact assessments, monitor outcomes for disparities, and keep human alternatives available. Privacy commissioners should oversee compliance, handle complaints, and issue practical guidance. Standards bodies should maintain the conformity and audit standards. Civil society and academia can contribute testing, red-teaming, and literacy programming. Industry should publish assurance artefacts, support independent evaluation, and resolve user concerns quickly.

Education and skills

What skills are required for a modern, digital economy, and how can Canada best support their development and deployment in the workforce?

(i.e. enable rapid adaptation to technological change; programs for both AI-focused careers and broader workforce readiness)

To excel in the modern digital economy, Canada must prioritize the cultivation of a workforce that is not only proficient in digital and data fluency but also equipped with technical and professional competencies, entrepreneurial acumen, and a commitment to responsible technology and innovation practices.

The strategy should promote fair and ethical AI development and deployment, prevent discriminatory practices and ensure accountability. Privacy law and data protection principles are relevant in this context: understanding data collection and usage, informed consent, data minimization and purpose limitation, the right to explanation and transparency (algorithmic transparency), and bias mitigation. People with higher AI literacy will better understand how their personal data is collected, used and processed by AI systems. AI literacy can also help people recognize the types of data AI models are trained on, how that data is sourced, and the potential for secondary uses.

How can we enhance AI literacy in Canada, including awareness of AI’s limitations and biases?

(i.e. workplace training programs or credentials; targeted engagements and public awareness campaigns; international best practices)

A uniform and consistent AI literacy program, along with comprehensive reskilling initiatives, is essential to prevent the emergence or widening of technological divides in Canadian society. AI literacy should include a thorough understanding of AI fundamentals, limitations, biases, and ethical considerations. This can be achieved through workplace training programs, public awareness campaigns, and the integration of AI literacy into professional training and licensure (e.g., for legal professionals: during law school, throughout the licensing process, and as mandatory professional development). Professions must stay current, knowledgeable, and skilled in these areas, but there must be a balance regarding the amount of specific skills training required annually.

A standardized AI literacy curriculum will help ensure that all members of society have equal access to foundational knowledge about AI technologies, their applications, and implications. Consistent reskilling programs enable workers to adapt to the evolving job landscape and remain relevant in an AI-driven economy.

Finally, consider procurement preferences and tax credits to support employer upskilling aligned with labour market demands. Additionally, consider the role of public-private partnerships in fostering collaborations among government, the private sector, academia, and non-profits.

What can Canada do to ensure equitable access to AI literacy across regions, demographics and socioeconomic groups?

(i.e. collaboration with other levels of government; role of industry and private sector; educational and literacy strategies to foster informed confidence)

To ensure equitable access across all demographics and regions, Canada needs a strategy that explicitly prioritizes equitable access and addresses the needs of diverse populations. Invest in bilingual “digital skills hubs”, mobile learning centres, and public access points such as libraries. Targeted programs for rural communities and for marginalized and equity-seeking groups should be supported by robust funding and incentive structures to bridge the digital divide.

Promote the creation and dissemination of free, high-quality AI educational materials, courses and tools that are easily accessible.

Impact on language rights and linguistic minorities

AI continues to rapidly reshape industries, communication and services across Canada. Its integration into public and private sectors significantly impacts language rights concerning Canada’s official languages, English and French, presenting both challenges and opportunities. AI technologies like machine translation and voice recognition can enhance linguistic accessibility. For instance, real-time translation can bridge communication gaps in bilingual settings, while AI transcription services enhance access to government and legal processes in both languages. However, challenges arise when AI systems are poorly designed or regulated. Many models are primarily trained on English data, potentially leading to biased outputs and reduced accuracy in French, as well as inadequate support for regional dialects and Indigenous languages. This situation threatens the legal protections for Francophones, especially in minority communities (outside Québec).

To protect language rights, Canada must develop AI systems that promote bilingualism by investing in French-language data, enforcing language requirements for public-facing applications, and encouraging research into models that reflect linguistic diversity.

Building enabling infrastructure

Which infrastructure gaps (compute, data, connectivity) are holding back AI innovation in Canada, and what is stopping Canadian firms from building sovereign infrastructure to address them?

(i.e. strategies for derisking and promoting investment in different parts of the AI stack; government’s role in derisking; partnering with foreign capital)

While funding, talent, and technology are frequently cited as the primary obstacles to AI adoption, and these elements do influence further investment in Canada, Canadian firms and the public also require confidence, which can be fostered through robust governance, regulation, and education. The data ecosystem in Canada also plays a crucial role. Without a clear understanding of AI, or evident returns on investment and benefits, investment in the technology will falter. Additionally, there must be investment in national data standards and support for data integration. A review of Canada’s privacy laws is necessary to assess whether changes are needed to facilitate access while ensuring adequate protection. Furthermore, Canada requires a strong regulatory framework to define data localization and determine when it is appropriate, as well as to clarify digital sovereignty and its implications. This will help build confidence, stimulate growth, and encourage investment, ensuring that what is developed and utilized in Canada remains within its borders.

How can we ensure equitable access to AI infrastructure across regions, sectors and users (researchers, start-ups, SMEs)?

(i.e. role of hyperscalers; open-source models; edge computing)

To achieve success, a human-centred approach is essential. Strengthening Canada’s position in the global digital economy is impossible without focusing on the people at its core. Public consultations across all sectors must be a fundamental part of this process to ensure inclusivity, equity, and digital sovereignty. Additionally, robust governance, regulation, and policy are crucial. Training programs should be designed to promote and ensure that AI infrastructure and services remain ethical, inclusive, and equitable.

How much sovereign AI compute capacity will we need for our security and growth, and in what formats?

(i.e. economic models for AI forecasting; comparison of public and private sector demand)

Security of the Canadian infrastructure and capacity

What are the emerging security risks associated with AI, and how can Canada proactively mitigate future threats?

(i.e. current and downstream risks posed by AI technologies; anticipated needs in national security and defence; strategic foresight for evolving threat landscapes)

AI raises the stakes in cybersecurity. Attackers can use models to scale social engineering and deepfakes, to accelerate vulnerability discovery, and to craft targeted evasion. Models themselves introduce new targets: poisoning the training data, tricking systems at inference time, stealing model weights, or compromising third-party components in the AI supply chain. As capabilities grow, so do the risks of misuse, from fraud to intrusive surveillance and information operations.

Mitigation should be built in, not bolted on. Secure-by-design engineering means routine threat modelling, red-teaming, and structured evaluations before launch and throughout a system’s life. Organizations need good logs, tamper-evident audit trails, and tested rollback plans. Data and model integrity require provenance checks, curated training data, and strong controls against poisoning and evasion, supported by tight access management and hardware-rooted protections. Incident reporting should explicitly cover AI-specific events—integrity failures, unsafe emergent behaviours, and model compromises—so that lessons spread quickly and fixes can be coordinated.

Canada should also strengthen the infrastructure for authenticating digital content. Provenance and watermarking will not end deception, but combined with education and platform safeguards, they make it harder for bad actors to pass off synthetic media as real. Where national security is concerned, any necessary confidentiality must remain proportionate, reviewable, and anchored in law to preserve both effectiveness and trust.
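
To illustrate how provenance tagging works in principle: production systems such as C2PA rely on public-key signatures and standardized manifests, while the standard-library Python sketch below substitutes a shared-key HMAC for simplicity; every field name and value is hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-held secret key"  # real systems use asymmetric key pairs

def make_provenance_record(media: bytes, creator: str, tool: str) -> dict:
    """Bind a content hash to its claimed origin; altering the media
    or the metadata afterwards invalidates the tag."""
    manifest = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,
        "generation_tool": tool,  # e.g., declares AI involvement
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["tag"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(media: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "tag"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["tag"])
            and claimed["content_sha256"] == hashlib.sha256(media).hexdigest())

image = b"...synthetic image bytes..."
record = make_provenance_record(image, creator="Example Studio", tool="gen-model-x")
assert verify_provenance(image, record)
assert not verify_provenance(image + b"tampered", record)
```

The instructive part is the failure mode: changing either the media bytes or the claimed metadata invalidates the tag, which is why provenance is useful as one safeguard among several rather than a complete answer to deception.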

How can Canada strengthen cybersecurity and safeguard critical infrastructure, data and models in the age of AI?

(i.e. establishing policies and programs to protect sensitive assets, including data; building resilience into AI systems; leveraging international collaboration and partnership to meet global risks)

Critical infrastructure needs a clear, practical, testable, and enforceable security baseline for AI. That baseline should cover secure development lifecycles, continuous monitoring, and post-market evaluations that match the level of risk. Training data, prompts, fine-tuning datasets, and model weights should be treated as crown-jewel assets. Protect them with network segmentation, encryption in transit and at rest, hardware-backed secrets, least-privilege access, and anomaly detection tuned to model behaviour. Supply-chain security matters just as much: vet third-party components, require assurance artifacts from vendors, and keep a software bill of materials that includes model dependencies and evaluation histories.

Government can accelerate uplift through procurement. Contracts should request model and system cards, evaluation summaries, and red-team results, and they should prefer vendors that enable independent testing and responsible disclosure. Canada should pursue interoperability with like-minded partners so that security controls recognized abroad also work here and vice versa. Threat intelligence, evaluation methodologies, and incident response are all areas where international cooperation reduces duplication and raises the floor.

Where can AI better position Canada’s protection and defence? What will be required to have a strong AI defensive posture?

(i.e. coordination across public and private sectors; security-focused standards and frameworks; long-term preparedness for AI-driven security challenges) 

With the right guardrails, AI can strengthen Canada’s defence. In cybersecurity, models can spot weak signals across noisy environments, link related events, and help teams respond faster. In fraud and abuse detection, AI can surface coordinated campaigns and synthetic-media operations before they spread. In resilience engineering, it can test contingencies and guide recovery after incidents. To make this real, Canada should establish a public–private AI Security Centre that publishes evaluation suites and safety advisories, runs structured red-team exchanges, and offers safe-harbour reporting for newly discovered risks.

Preparedness depends on people and practice as much as on tools. Canada should invest in practitioner training across government—procurement, policy, legal, and operations—with micro-credentials tied to defined competencies. Regular exercises, both table-top and live, should test decision-making under AI-related incident scenarios. Security-focused standards, including continuous evaluation and auditable assurance artefacts, should be embedded in regulation and procurement for high-risk contexts. Over time, maturity models can guide organisations from baseline compliance to proactive, measurable risk management.

We hope these observations will be helpful.

Yours truly,

(original letter signed by Julie Terrien for Christiane Saad and Charlotte McDonald)

Christiane Saad
Chair, Privacy and Access to Information Law Section

Charlotte McDonald
Chair, Intellectual Property Law Section