Navigating AI in PR Guide, Part 1

Valerie Lam-Bentley is a Senior Lead Editorial for the Ontario Medical Association and a candidate in the McMaster University Master of Communications Management program.
In the modern landscape of public relations, artificial intelligence is no longer a distant technology — it’s a transformative tool reshaping how communicators work today, and used properly, it even has the ability to predict trends of tomorrow.
But navigating this new terrain requires more than just adopting the latest tools. It demands strategic understanding, ethical consideration, and a nuanced approach. Sometimes you want a guide in this new territory.
In part one, I draw on research and industry insight as a Master of Communications Management graduate student to curate a guide that helps practitioners like us navigate AI effectively and responsibly.
In part two, I’ll share a fillable reflection worksheet. The worksheet combines your questions and input with industry expert best practices, so you can take it back to your workplace and use it for your needs. Be sure to share your thoughts after reading part one!
Disclosure: AI tools were used to help edit the content and create images in the post and promotion, to make it more relevant to readers.
5 things we know about AI’s impact on public relations
1. Productivity, not replacement.
According to a 2023 study by the Chartered Institute of Public Relations (CIPR), 40 per cent of PR tasks are assisted by an AI tool, but no task has been completely replaced by AI. Or as HubSpot contributor Laura Browning says, “AI can’t replace human brains, but it can make a lot of mundane tasks a whole lot faster.”
The CIPR study shows that communicators mostly use AI to boost productivity, with estimated workflow efficiency gains of 15–25 per cent.
2. “Ethically safe” AI applications
PR professionals are strategically adopting AI in low-risk areas that do not pose major ethical concerns:
- Measurement and evaluation
- Monitoring and analysis of sentiment
- Insights into behaviours, workflow
- Internal communication
But that’s not to say that the use of AI outputs is automatically low-risk. It’s very important to consider how we’ll use AI outputs, because our decision making will have ethical consequences.
3. Half of us use AI for content creation
According to the 2023 CIPR report, content creation sits at a 50/50 AI assistance threshold. Hootsuite’s 2025 trend report suggests that share is growing, noting that 83 per cent of marketers use AI to generate content more efficiently. “Generative AI is off probation and officially on the team,” the report authors proclaim.
But don’t let a robot write your first draft; AI-generated drafts tend to be unoriginal, with no personality or unique voice.
Instead, digital marketing expert Martin Waxman recommends using AI to critique the content, with a human determining the final keystrokes.
4. Human-centric strategic domains
Even with the availability of AI assistance, many PR competencies remain firmly human-driven:
- Research and planning
- Team management
- Crisis communication
- Ethical decision-making
- Strategic partnerships
These areas are where PR professionals can really shine in demonstrating our strategic, human-led counsel. However, practitioners need not only to understand AI but also to advise stakeholders on its reputational impacts.
Institute for Public Relations President and CEO Tina McCorkindale takes a stance of cautious optimism, envisioning a collaborative future with AI “where innovation thrives alongside ethical and thoughtful use of AI. In doing so, the communications industry can harness the full potential of AI, not just as a tool, but as a partner in crafting meaningful, impactful narratives.”
5. AI risk assessment is a priority
The European Union’s AI Act, the world’s first comprehensive AI regulation, categorizes AI systems by risk level.
The greater the risk, the more regulation is required; high-risk areas include law enforcement, vocational training, and health or aviation products and systems. Even the lowest-risk categories of AI still require assessment.
According to the EU AI Act, AI systems should be “safe, transparent, traceable, non-discriminatory and environmentally friendly” and supervised by real people who can help prevent harmful outcomes.
5 unknowns: Challenges in AI adoption
1. AI policy development
Although research from Telus shows what matters to Canadians for responsible AI, it’s unclear how organizations should form ethical policies.
In 2023, the Government of Canada announced a voluntary code of conduct for responsible AI, under which signatories commit to measures that mitigate AI-related risks. However, how signatories should implement the voluntary code is not clear, although its open-ended guidance could give organizations the flexibility to apply their specific business context to their AI frameworks.
2. Privacy and transparency
Policies like the CIPR’s AI in PR Ethics Guide and the EU AI Act encourage the transparent disclosure of AI use, alongside a balanced consideration of risk assessment and innovation.
Tip: The CIPR Ethics Guide to Artificial Intelligence in PR, developed by a panel of AI in PR experts, is an authoritative guide for communicators and worth a close read.
Privacy control and transparency are related in terms of how data is collected, controlled, distributed and used. Failing to safeguard data can lead to breaches and non-compliance with privacy laws.
3. Upskilling challenges
We don’t know what we don’t know. Learning and development are a must if communicators are to use AI as a supplementary tool that assists and enhances tasks, and to stay ahead of AI technology and its implications for audiences and business.
At the same time, education and professional development remain major gaps as we try to understand what AI entails, guide strategy or policy, and maximize its tools for creativity and productivity.
Maybe this professional development gap is what led you to this guide! You’ll have a chance to share your input and questions to inform part two of this guide.
4. The black box of AI
Generative AI tools like ChatGPT remain a “black box,” lacking explainability about how they work. This lack of understanding of how certain AI systems operate reduces trust in their ability to function responsibly.
We don’t know how AI is trained to produce the outputs we’ve come to depend upon for a host of uses and decisions. Without risk mitigation for black box systems, trust and transparency in the technology remain in question.
5. Ethics and bias
We need an ethical framework to help us think about and manage data ethically. Anne Gregory, Jean Valin, and Swati Virmani, authors of the Ethics Guide to Artificial Intelligence in PR, say that communicators must prioritize:
- Preventing potential harm
- Protecting individual choice
- Avoiding unintended biases
- Ensuring transparent decision-making
AI has the potential for misuse: it can generate misinformation and disinformation, and it carries inherent bias that neglects underrepresented, diverse perspectives.
An ethical framework will be an important safeguard against systematic bias. According to Telus’ 2024 AI Report, trust is strongly associated with responsible AI that is guided by ethical principles and policy.
What next?
The path of AI integration is not predetermined. It’s a collaborative, evolving journey where strategic thinking, ethical considerations, and human creativity intersect.
Take action: Reflect on your organization’s AI approach. What opportunities can you unlock? What risks must you mitigate?
Please share your thoughts. Your anonymous response will help inform a fillable reflection worksheet for practitioners like us.
This resource will become part two of this guide, as a practical way to navigate AI in PR. With your input, it will combine industry best practice from part one together with the themes you want to learn more about.
Most of all, it will hopefully be a tool you can take back to your team and use to discover insights and opportunities about AI in PR in your context.
References
Browning, L. (2024, November 4). 5 Ways that AI Analytics Tools Can Make You a Better Marketer. HubSpot. https://blog.hubspot.com/marketing/ai-marketing-analytics?hubs_content=blog.hubspot.com%252Fai&hubs_content-cta=null&hubs_post-cta=blognavcard-marketing
Chakravorti, B. (2024, May 3). AI’s Trust Problem. Harvard Business Review. https://hbr.org/2024/05/ais-trust-problem
European Parliament (2023, August 6). EU AI Act: first regulation on artificial intelligence. Artificial Intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Government of Canada. (2024). Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. Innovation, Science and Economic Development Canada. https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems
Gregory, A., Valin, J., & Virmani, S. (2023, September). Humans needed, more than ever. Chartered Institute of Public Relations. https://newsroom.cipr.co.uk/humans-needed-more-than-ever-new-cipr-ai-in-pr-report-finds-ai-tools-assisting-with-40-of-pr-tasks/
Hootsuite. (2024). Social media trends 2025. https://hootsuite.widen.net/s/psc5swlkbh/hootsuitesocialtrends2025_report_en
Institute for Public Relations. (2024). Generative AI in Organizations: Insights and Strategies from Communication Leaders. https://instituteforpr.org/ipr-generative-ai-organizations-2024/
Telus. (2024). 2024 AI Report: The power of perspectives in Canada. https://www.telus.com/en/about/privacy/responsible-ai-join-us
Valin, J. & Gregory, A. (2020). Ethics Guide to Artificial Intelligence in PR. Chartered Institute of Public Relations, UK, & Canadian Public Relations Society. https://instituteforpr.org/it-is-always-about-ethics-even-more-with-ai/
Waxman, M. (2024, November 17). Content and Gen AI: How Much is Too Much? Digital Marketing Trends. https://www.linkedin.com/pulse/content-gen-ai-how-much-too-martin-waxman-mcm-apr-l6eyc/