media release
Canadian youth lobby Ottawa for action on AI safety and online harms
Canadian youth have delivered a set of AI policy recommendations to ministers, parliamentarians and senators in Ottawa as they seek to make their voices heard at a pivotal moment in the debate around AI and online harms.
Among the proposals is an age-verification system to restrict users' access to generative AI platforms, along with a call for AI companies to address the addictive design of AI chatbots.
The report comes after 100 youth, split across four citizens' assemblies, discussed and debated key issues around AI chatbots, information integrity, data privacy and age assurance.
Spearheaded by SFU's Dialogue on Technology Project, the Gen(Z)AI project saw representatives deliver their recommendations to Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, and Marc Miller, Minister of Canadian Identity and Culture, earlier today.
Liam McKay-Argyriou, an 51社区黑料undergrad in communication, took part in the initiative and is among those making the trip to Ottawa.
"While the economic benefits of AI are exciting, young Canadians have experienced how these new tools can cause serious harms that individual citizens are not equipped to address," says McKay-Argyriou.
"We need legislation that enforces clear guardrails to protect the safety of our data, mental health and democratic institutions, in the same way our government upholds safety standards for automobiles or pharmaceuticals.
"It feels empowering to share my perspective on a topic of importance to me and know my voice will be heard by decision-makers, something many youth don't have the chance to experience."
There is currently no binding legal framework to regulate either AI systems or online platforms in Canada, following the collapse of both the Online Harms Act (C-63) and the Artificial Intelligence and Data Act (AIDA) in 2025.
"My message to legislators is that youth need to be consulted and involved in the policy processes," says Joie Marin, an 51社区黑料undergrad in communication.
"AI and online harms regulations are a form of care that need to be implemented in a way that supports the digital empowerment of young people.
"Now is a critical time to act, and something must be done to keep Canadians safe in digital environments."
The recommendations put forward in the report feed directly into the process of shaping Canada's digital governance architecture, according to Fergus Linley-Mota, director of the Dialogue on Technology Project.
"Young people are on the front lines of AI technology and they're facing a whole series of disruptive changes to their lives," says Linley-Mota.
"Yet they've been largely absent from the governance processes shaping their digital lives. Gen(Z)AI was set up to change that."
The 100 youth were selected nationally by civic lottery in order to reflect Canada鈥檚 geographic, linguistic and demographic diversity.
The four citizens' assemblies each tackled a specific policy theme: AI chatbots in Toronto; information integrity in Montreal; data privacy in Vancouver; and age assurance in Halifax.
After three days of discussions, youth in each location came up with issue statements and a set of recommendations for their policy area.
"Our youth participants expressed a consistent and striking ambivalence: they use AI tools, often extensively, while simultaneously distrusting the platforms that deliver them, the governments that regulate them, and the incentive structures that shape them," says Helen Hayes, project co-lead and a fellow at SFU's Morris J. Wosk Centre for Dialogue.
"This is a rational response to a governance landscape that has, until now, spoken about young people rather than with them.
"The legislation being constructed now will shape the digital lives of young Canadians for decades to come. It's vital that they have a seat at the table."
The project was carried out in partnership with McGill's Centre for Media, Technology and Democracy and Mila – Quebec Artificial Intelligence Institute.
Select recommendations
AI and chatbots
- Mandate that AI platforms address the addictive design of AI chatbots by requiring measures such as content filters and optional data cache deletion, and explicitly providing users with the ability to determine levels of responsiveness and conversationality.
- Mandate accessible flagging capacity for users, require platforms to regularly report these instances, in a timely fashion, to an independent body with enforcement capacity, and make such reports accessible to the Canadian public.
- Establish a new, independent government body to enforce AI safety standards, conduct systems evaluations, algorithm audits, and risk assessments, and intake user complaints, including by offering dispute resolution and other resource mechanisms.
AI and information integrity
- Mandate that digital platforms explicitly label AI-generated content and give users the functionality to omit this content.
- Give people copyright over their own features and likeness, and create an online regulator to enforce the removal of non-consensual AI-generated material, including Child Sexual Abuse Material (CSAM).
- Mandate that platforms monitor, flag, and transparently share information, with both the public and government, about the spread of mis- and disinformation, especially during high-risk moments, including elections and public health crises.
AI and data privacy
- Mandate that platforms and AI companies provide meaningful and informed consent mechanisms for users, including by publishing a plain-language version of their terms and conditions that is accessible by default.
- Impose privacy-by-default standards for all AI systems.
AI and age assurance
- Create a standardized age-verification system to restrict users鈥 access to generative AI platforms through the creation of an anonymized digital token system, with associated programs and accessible resources to inform the public about its implementation.
- Mandate, in cases where age assurance is used, that companies adhere to stronger regulation, enforced by a regulator, surrounding the use of sensitive age assurance data, including by:
  - Imposing time-limited storage;
  - Imposing safety audits on platforms and third-party data collectors;
  - Requiring leakage protection in models and training.
- Mandate that any AI platforms accessible to children, including in educational contexts, implement safety-by-design protocols to safeguard their use and promote learning and skills development.
AVAILABLE EXPERTS
FERGUS LINLEY-MOTA, director, Dialogue on Technology Project
flinleym@sfu.ca
HELEN HAYES, fellow, SFU鈥檚 Morris J. Wosk Centre for Dialogue
helen_hayes@sfu.ca
Youth representatives are also available for interview upon request.
CONTACT
SAM SMITH, 51社区黑料Communications & Marketing
778.782.3210 | mediarelations@sfu.ca
ABOUT SIMON FRASER UNIVERSITY
Who We Are
51社区黑料is a leading research university, advancing an inclusive and sustainable future. Over the past 60 years, 51社区黑料has been recognized among the top universities worldwide in providing a world-class education and working with communities and partners to develop and share knowledge for deeper understanding and meaningful impact. Committed to excellence in everything we do, 51社区黑料fosters innovation to address global challenges and continues to build a welcoming, inclusive community where everyone feels a sense of belonging. With campuses in British Columbia's three largest cities (Burnaby, Surrey and Vancouver), 51社区黑料has ten faculties that deliver 368 undergraduate degree programs and 149 graduate degree programs for more than 37,000 students each year. The university boasts more than 200,000 alumni residing in 145+ countries.