Applied Comms AI Ethics Policy
Our Commitment
Applied Comms AI believes artificial intelligence should enhance human creativity and capability in communications, not replace it. We're committed to using AI responsibly whilst honestly documenting our journey, including the sustainability challenges we're still working to solve.
We are signatories to the Global Alliance's Responsible AI Guiding Principles and have committed to the Venice Pledge, which provides our profession with a shared framework for the responsible use of AI. Our policy builds upon these seven foundational principles whilst addressing the specific challenges of our newsletter and community.
Core Principles
These principles align with and build upon the Global Alliance's seven Responsible AI Guiding Principles, adapted for the specific context of our newsletter and community.
1. Transparency First
- Always disclose when content is AI-assisted or AI-generated
- Share our process: How we used AI, what worked, what didn't
- Admit limitations: We won't claim AI can do things it can't
- Show the human element: Highlight where human judgement remains essential
2. Quality & Authenticity
- AI augments, humans decide: All final editorial decisions remain human-made
- No AI ghostwriting: If content is attributed to a person, that person wrote it
- Fact-check everything: AI outputs are always verified before publication
- Preserve voice: AI should enhance our tone, not replace it
3. Respect for People
- Protect privacy: No personal data fed to AI systems without consent
- Credit human work: Acknowledge when AI builds on existing human creativity
- Employment impact: Openly discuss AI's effect on communications jobs
- Accessibility: Ensure AI doesn't create barriers for any audience
Professional Standards & Industry Alignment
As signatories to the Global Alliance's Responsible AI Guiding Principles, we commit to:
- Ethics First: Adhering to professional ethical standards across all AI usage
- Human-Led Governance: Maintaining human oversight aligned with public interest
- Personal and Organisational Responsibility: Taking full accountability for all AI-assisted outputs
- Awareness, Openness, and Transparency: Clear disclosure of AI involvement in our content
- Education and Professional Development: Continuous learning and sharing knowledge with our community
- Active Global Voice: Contributing to industry standards and responsible AI advocacy
- Human-Centred AI for the Common Good: Ensuring our AI usage serves societal wellbeing
Our Additional Standards
- Client confidentiality: Never use proprietary information in AI experiments
- Intellectual property: Respect copyright in all AI-generated content
- Accuracy standards: Maintain the same fact-checking rigour regardless of content source
- Bias awareness: Actively monitor and address AI bias in outputs
Sustainability Challenges
We acknowledge that AI has significant environmental impacts through energy consumption and carbon emissions.
What We're Doing
- Minimising unnecessary usage: Only using AI when it adds clear value
- Choosing efficient models: Selecting less resource-intensive options where possible
- Documenting impact: Tracking our AI usage to understand our footprint
- Supporting research: Highlighting sustainability innovations in AI
What We're Working Towards
- Carbon measurement: Developing methods to track AI-related emissions
- Offset consideration: Exploring how to balance AI benefits against environmental costs
- Efficiency advocacy: Pushing for more sustainable AI development
- Alternative approaches: Testing lower-impact AI alternatives as they emerge
We recognise this is an evolving challenge without perfect solutions yet. We'll update our approach as better options become available.
Content Guidelines
When We Use AI
- Research assistance: Gathering background information and identifying trends
- Draft enhancement: Improving clarity, structure, or style of human-written content
- Tool testing: Experimenting with AI capabilities for newsletter content
- Creative prompts: Generating ideas that humans then develop
When We Don't Use AI
- Personal interviews: All conversations remain human-to-human
- Final decision-making: Editorial choices, strategic recommendations, and ethical judgements
- Sensitive topics: Crisis communications, legal matters, or personal stories
- Original insights: Our analysis and opinions remain human-generated
Transparency in Practice
In Newsletter Content
- Clear labelling: e.g. "This section was written with AI assistance"
- Process notes: e.g. "We used ChatGPT to help structure this analysis, then rewrote in our voice"
- Honest assessments: e.g. "The AI got this wrong; here's what actually happened"
In Tool Reviews
- Full disclosure of any commercial relationships
- Clear explanation of testing methodology
- Limitations and failures prominently featured
- User privacy implications discussed
Community Standards
Reader Engagement
- Honest dialogue: We'll discuss AI ethics openly with our community
- Feedback welcome: Readers can challenge our AI usage and we'll respond
- Shared learning: We'll feature reader experiences with AI ethics
- No lecturing: We're figuring this out together, not preaching from on high
Industry Leadership
- Set examples: Demonstrating responsible AI usage in communications
- Share standards: Making this ethics policy available for others to adapt (we only ask that you let us know if you do, for tracking impact)
- Challenge poor practice: Calling out irresponsible AI usage in our field
- Collaborate: Working with others to improve industry standards
Regular Review
This policy will be reviewed every six months and updated based on:
- Technology developments: New AI capabilities and limitations
- Industry standards: Evolving best practices in communications
- Community feedback: What our readers tell us is working and what isn't
- Environmental progress: Improvements in AI sustainability
- Our own learning: Insights from our experiments and mistakes
Questions or Concerns?
If you have questions about our AI ethics or would like to challenge any of our approaches, please get in touch. We're committed to learning and improving.
This policy reflects our commitment to responsible AI usage and to continually monitoring and updating our practices. It's a living document that will evolve as we learn more about both the opportunities and challenges of AI in communications.
Last updated: 24 June 2025
Next review: December 2025