Canada's AI Register: A Tool of Transparency or Control?
Canada's Federal AI Register, launched in November 2025, is more than a transparency tool: a new study argues it actively shapes what counts as accountability, and that the details it omits raise questions about its effectiveness.

Canada's Federal AI Register, launched in November 2025, was intended to be a landmark in government transparency. The register lists 409 AI systems used by federal agencies, but a new preprint on arXiv argues that it does more than reflect government activity: it actively shapes what is considered accountable. The researchers analyzed the register using the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework, revealing significant gaps and biases.
The study contends that the register obscures as much as it reveals. While it provides a quantitative map of AI systems, it lacks qualitative detail about how those systems make decisions or how they affect citizens. Because of these omissions, the register may fall short of its stated purpose of holding the government accountable. The researchers argue that such registers are not neutral instruments but tools of "ontological design," shaping what is treated as important in AI governance.
The findings raise critical questions about the future of AI transparency in government. If registers like Canada's are to be effective, the study argues, they must document how listed systems actually make decisions. Without such improvements, these registers risk becoming formalities rather than instruments of genuine accountability, and the likely next step is pressure for reform to ensure they deliver meaningful transparency.