The complete workflow for building campaign packs using Claude Projects and Skills

Everything you need to replicate the system: methodology, prompts, setup instructions, and principles.

What This Is

This is the complete workflow from my Data Lab session on 18 February 2026, where I demonstrated building a full campaign pack in under 30 minutes using Claude Projects and Custom Skills.

Everything you need to replicate this workflow is here: the methodology, all five prompts, the setup instructions, and the principles that make it work.

Why This Matters

Most AI workflows in communications produce one-off outputs. You ask for something, you get something, then you start from scratch next time.

This workflow is different. It builds a persistent workspace that gets more valuable with each use. Projects act as containers, Skills encode your quality standards, and each artefact you create becomes reference material for the next piece of work.

The promise: if you set this up properly once, you'll have a reusable system that produces consistently high-quality campaign materials in 30-35 minutes.

The 30/70 Principle

Before we get into mechanics, understand this: AI handles roughly 30% of the work. The other 70% is your strategic judgement, editorial control, and quality assurance.

AI is excellent at structure, first drafts, and executing defined patterns. It cannot replace your understanding of stakeholders, your feel for timing, or your judgement about what's credible versus what's overreach.

Treat AI as a very capable junior strategist who follows instructions well but needs supervision. The governance pass (Step 5) exists because AI will confidently generate plausible nonsense if you don't check its work.

The Three-Layer Framework

This workflow operates on three layers:

  • Layer 1: The Project Folder
    The persistent container for everything. Lives in your Claude account. Maintains context across multiple conversations. Holds your brief, your templates, your reference materials.
  • Layer 2: The Custom Skill
    Your quality encoder. Captures tone of voice, brand rules, writing style. Applies automatically to every generation in the Project. More reliable than repeating instructions in prompts.
  • Layer 3: The Prompts
    Execution instructions. Tells Claude what to create, in what format, using what structure. Combines with the Project's context and the Skill's rules to produce output.

All three layers work together. Remove any one and quality drops significantly.
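
If you ever need to reproduce this layering outside the Claude app (say, to batch-generate assets), the three layers map onto a single API call. A minimal sketch in Python, assuming the anthropic SDK; the model name and file names are illustrative, not part of the workflow:

import anthropic
from pathlib import Path

# Layer 2: the Skill's rules become the system prompt
skill_rules = Path("skill_tone_rules.md").read_text()

# Layer 1: the Project's knowledge becomes pasted context
project_context = "\n\n".join(
    Path(name).read_text()
    for name in ["brief.md", "foundational_structure.md"]
)

# Layer 3: the execution prompt for one specific task
task_prompt = "Draft a LinkedIn post using Content Angle 3."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=1500,
    system=skill_rules,
    messages=[{"role": "user", "content": project_context + "\n\n" + task_prompt}],
)
print(response.content[0].text)

Drop any one of the three inputs and you get exactly the quality drop described above: no system rules means tone drift, no project context means strategic drift.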


Prerequisites

You'll need:

  • Claude Pro or Team account (Projects aren't available on the free tier)
  • A campaign brief (150-300 words is ideal)
  • 35-40 minutes of uninterrupted time for your first run-through
  • Basic understanding of what makes good comms strategy

You don't need:

  • Deep AI expertise
  • Prompt engineering experience
  • Technical skills beyond copy-paste

The Five-Step Workflow

Step 1: Clean Brief (3-4 minutes)

Goal: Surface hidden assumptions and create a Definition of Done.

Why this matters: Most briefs are incomplete. Clients assume you know their sector, their constraints, their success metrics. This step forces everything into the open before you start building.

The prompt:

I'm going to paste a campaign brief. Your job is to:

1. Identify all unstated assumptions in this brief (audience demographics, budget constraints, timeline expectations, approval processes, competitive context)
2. Flag any contradictions or ambiguities
3. Propose a "Definition of Done" — the 3-5 specific deliverables and success criteria that would fulfil this brief
4. Present this as: Assumptions [in brackets], Contradictions (if any), and Suggested Definition of Done

Brief:
[PASTE BRIEF HERE]

What you're looking for:

  • Assumptions flagged with [BRACKETS] that you can confirm or correct
  • Contradictions that need resolving before you proceed
  • A clear Definition of Done you can use to evaluate the final output

Real example output:

"Assumptions: [Mid-market finance teams = 50-500 employees], [Demo bookings = primary KPI, not MQLs], [2-week timeline = launch campaign, not nurture sequence]. Definition of Done: 3 LinkedIn posts, 1 email sequence, 1 landing page copy, 1 competitor comparison guide."

Common mistakes:

  • Skipping this step because the brief "seems clear enough"
  • Not correcting wrong assumptions (AI will build the campaign on false foundations)
  • Accepting vague Definitions of Done ("raise awareness" isn't measurable)

Time check: If this step takes more than 5 minutes, your brief is genuinely incomplete and needs more work before you proceed.


Step 2: Foundational Comms Structure (5-6 minutes)

Goal: Create the strategic anchor for all downstream decisions.

Why this matters: Without a foundational structure, every subsequent piece of work will drift. This step creates a single reference document that defines key messages, audience insights, proof points, and tone rules. Everything else builds from this.

The prompt:

Create a Foundational Comms Structure for this campaign. This is the strategic anchor for all content decisions. Include:

1. Strategic Context (1 paragraph: what we're doing and why)
2. Primary Audience (demographics, psychographics, current beliefs about the category/problem)
3. Secondary Audiences (if relevant)
4. Key Message (the single most important thing this audience should understand or believe)
5. Supporting Messages (3-4 secondary messages that reinforce the key message)
6. Proof Points (specific evidence, data, or examples that substantiate each message)
7. Tone & Voice Rules (how we sound: 4-5 specific guidelines)
8. What We're NOT Saying (important boundaries or messages to avoid)
9. Success Metrics (how we'll know this worked)

Format this as a reference document. We'll return to this throughout the campaign build.

What you're looking for:

  • A key message that's specific enough to guide decisions (not "we're experts" but "we're the only firm that's completed 12+ tech M&A deals under £50m in Scotland")
  • Proof points that are concrete (numbers, names, verifiable facts)
  • Tone rules that could differentiate two drafts ("conversational but precise" not "professional and engaging")
  • Boundaries that prevent overreach ("don't claim we're disruptive — we're specialists")

Real example output structure:

Key Message:
"For compliance teams drowning in manual reporting, [Product] automates what currently takes 2-3 days per month into 20 minutes."

Proof Point 1:
Current users report 87% time reduction in compliance reporting (Q4 2025 user survey, n=47)

Tone Rules:

  • Lead with the pain point, not the product
  • Use "you" language to make it personal
  • Avoid compliance jargon (no "regulatory frameworks" or "governance matrices")
  • Sound relieved, not evangelical

Critical note: Save this document. You'll reference it constantly. If using Claude Projects, this automatically stays in the conversation history. If you're not in a Project, export it as a separate file and upload it to future conversations.

Common mistakes:

  • Key messages that sound like mission statements ("We help businesses succeed")
  • Proof points that are vague ("industry-leading results")
  • Tone rules that every brand could claim ("authentic and trustworthy")

Time check: This should take 4-6 minutes to generate and review. If it's taking longer, the brief isn't clear enough (return to Step 1).


Optional: Create a Custom Skill (4-5 minutes)

Goal: Encode tone of voice rules so you don't have to repeat them in every prompt.

Why this matters: Custom Skills apply automatically to everything generated in the Project. Instead of remembering to paste "use UK English, write conversationally, avoid jargon" into every prompt, the Skill enforces it automatically.

Note: Claude's Skill builder can be unreliable. If it glitches (known issue), your tone rules in the Foundational Structure still work — you just lose the automatic application benefit.

The setup prompt:

Based on the Tone & Voice Rules in the Foundational Comms Structure, suggest 5-6 voice model options for this campaign. Each suggestion should include:
- A 2-3 word label (e.g., "Plain-speaking advisor", "Sharp-but-warm expert")
- A brief description of the voice
- A sample sentence in that voice addressing the primary audience

I'll pick one and we'll create a Custom Skill from it.

What you're looking for:

  • Options that genuinely sound different from each other
  • Sample sentences that demonstrate the voice (not just describe it)
  • A voice that matches your client's positioning (not your personal preference)

Example output:

Option 1: "Straight-talking peer"
Sounds like: Someone who's done your job and won't waste your time.
Sample: "Compliance reporting eats 3 days a month. We've built something that gets it down to 20 minutes."

Option 2: "Data-first advisor"
Sounds like: Consultative but grounded in evidence.
Sample: "Our Q4 survey of 47 compliance teams showed a consistent pattern: manual reporting takes 2-3 days monthly."

Creating the Skill (if the tool works):

  1. Click "Create Custom Skill" in Claude Projects
  2. Paste the voice model description and sample
  3. Add specific rules from your Foundational Structure
  4. Test it by generating a short piece of content
  5. Refine if it's not matching the voice you want
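
If you'd rather author the Skill by hand, Anthropic packages Skills as a folder whose SKILL.md file holds a short frontmatter plus the rules themselves. Here's a sketch of what the "Straight-talking peer" option might look like; the exact fields may differ from what the Skill builder produces, so treat this as illustrative:

---
name: straight-talking-peer
description: Tone of voice rules for this campaign. Apply to every piece of content drafted in this Project.
---

Voice: Straight-talking peer. Sounds like someone who's done the reader's job and won't waste their time.

Rules:
- Lead with the pain point, not the product
- Use "you" language to make it personal
- Avoid compliance jargon (no "regulatory frameworks" or "governance matrices")
- Sound relieved, not evangelical

Sample sentence: "Compliance reporting eats 3 days a month. We've built something that gets it down to 20 minutes."

Either way, step 4 of the creation checklist still applies: test the Skill on a short piece of content before trusting it across the whole pack.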

If the Skill builder glitches:

If you're demonstrating this live, say to your audience: "Claude's Skill builder is having a moment. The tone rules are still in our Foundational Structure, so every prompt will reference those. The Skill just saves us from having to manually include them each time — but the content quality won't suffer without it."

Then proceed with the workflow. Your tone rules are already captured in the Foundational Structure and you can reference them manually in subsequent prompts.

Time check: 4-5 minutes if creation works. 60 seconds to acknowledge and move on if it doesn't.

Get production-ready templates for this workflow at CommsWith.AI — tested prompts, Custom Skills, and checklists you can use immediately.

Step 3: Channel Plan + Content Angles (5-6 minutes)

Goal: Decide where content lives and what hooks make it compelling.

Why this matters: Without a channel plan, you'll create content in a vacuum. This step forces strategic thinking: which channels actually reach the audience, what formats suit each channel, what angles make the content worth engaging with.

The prompt:

Based on the Foundational Comms Structure, create:

1. Channel Plan: Recommend 3-4 channels for this campaign with rationale (why these channels for this audience + goal). Consider owned, earned, and paid. Be specific (not "social media" but "LinkedIn organic posts targeting Heads of Risk").

2. Content Angles: Generate 8 distinct content hooks that would work across these channels. Each angle should:
   - Connect to a supporting message or proof point
   - Be specific enough to write from
   - Vary in approach (some stats-led, some story-led, some problem-focused)

3. 3-Month Roadmap: Suggest a realistic content cadence and sequencing for a 3-month campaign.

Format as: Channel Plan (with rationale), Content Angles (numbered list), 3-Month Roadmap (month-by-month).

What you're looking for:

Channel Plan:

  • Rationale that connects to audience behaviour (not "LinkedIn because B2B" but "LinkedIn because 73% of Heads of Risk follow industry thought leaders there")
  • Specificity about formats (LinkedIn carousel posts, not just "LinkedIn")
  • Owned/earned/paid mix (if budget allows)

Content Angles:

  • Variety in approach (not 8 variations of "we save you time")
  • Connection to specific proof points from the Foundational Structure
  • Angles that could work across multiple formats

Example output:

Content Angle 3:
"The hidden cost of manual compliance: 3 days of senior time × 12 months = 36 days yearly. What could your team do with an extra month?"
Connects to: Proof Point 1 (87% time reduction). Works as: LinkedIn post, email subject line, landing page hero.

3-Month Roadmap:

  • Month 1 (Awareness): Problem-focused content, competitor comparison, "day in the life" of manual compliance
  • Month 2 (Consideration): Product education, demo clips, testimonial case study
  • Month 3 (Decision): ROI calculator, implementation timeline, Q&A with users

Common mistakes:

  • Channel plans based on where you want to be, not where your audience is
  • Content angles that are all the same shape (8 variations of "here's why we're great")
  • Roadmaps that ignore natural decision-making timelines

Time check: 5-6 minutes. If it's taking longer, your Foundational Structure isn't specific enough.


Step 4: Draft Assets (8-10 minutes)

Goal: Generate the actual content deliverables.

Why this matters: This is where the strategy becomes tangible. You're creating the posts, emails, landing page copy, and supporting materials that will actually go live.

The prompt:

Using the Foundational Comms Structure, Channel Plan, and Content Angles, draft 6 content assets:

1. [Specify asset type, e.g., "LinkedIn post using Content Angle 3"]
2. [Specify asset type, e.g., "Email sequence: 3 emails for demo booking nurture"]
3. [Specify asset type, e.g., "Landing page: hero, 3 benefits, CTA"]
4. [Specify asset type, e.g., "Competitor comparison: 1-page guide"]
5. [Specify asset type, e.g., "Testimonial structure for user interview"]
6. [Leave this open — let Claude suggest an appropriate 6th asset based on the campaign needs]

For each asset:
- Reference the Foundational Structure for key messages and tone
- Include [BRACKETS] for any placeholder content that needs client input
- Flag assumptions you're making about format or length
- Ensure consistency across all assets

For asset 6: Propose what would be most useful for this campaign, then draft it.

What you're looking for:

  • Consistency in voice across all assets (they should sound like they're from the same campaign)
  • Strategic coherence (the LinkedIn post should drive to the landing page, the email should reference the guide)
  • Appropriate length for channel (LinkedIn posts = 150-200 words, not 500)
  • Clear [PLACEHOLDER] markers for content you can't generate (client quotes, specific product features)

Real example output (Asset 1 — LinkedIn post):

"Compliance reporting shouldn't take 3 days a month.

For most mid-market finance teams, end-of-month compliance is a manual slog: pulling data from 4 systems, cross-checking against requirements, formatting reports that regulators will actually read.

The teams we work with were spending 2-3 days monthly on this. Not because they're inefficient — because manual processes don't scale.

[PRODUCT] automates what takes your team 3 days into 20 minutes. Same output, same rigour, 87% less time.

What would your team do with an extra 2.5 days per month?

[CTA: Book a 15-minute demo]"

Asset 6 (Claude's suggestion):

"I'd recommend a one-page 'Compliance Time Audit' worksheet that Heads of Risk can use to calculate their current time investment. This gives them a concrete number before they see your solution — making the ROI case more tangible."

Common mistakes:

  • Drafting assets that ignore the Foundational Structure (key messages drift)
  • Creating content in isolation (LinkedIn post doesn't reference the landing page)
  • Accepting generic first drafts without refining (AI's first pass is rarely the best version)

Time check: 8-10 minutes to generate. Allow another 3-4 minutes to review and refine.


Step 5: Governance Pass (6-8 minutes)

Goal: AI critiques its own work against professional standards.

Why this matters: This is the 30/70 split in action. AI generated everything in Step 4, but now you need to evaluate whether it's actually good enough to use. The governance pass identifies problems before they become live content mistakes.

The prompt:

Review the campaign pack we've just created against professional communications standards. Act as a senior strategist conducting a quality audit.

For each asset, check:
1. Message consistency with the Foundational Structure
2. Claims that need substantiation (flag any that are unsupported)
3. Tone drift (does it still sound like our voice model?)
4. Logical gaps or weak reasoning
5. Potential reputational risks
6. Over-promising or credibility stretch

Then create a Pre-Publish Checklist: a one-page reference doc listing all the quality checks that should happen before any content from this campaign goes live.

Save this checklist as a file in the Project Knowledge — it becomes a reusable quality standard.

What you're looking for:

  • Specific flags with asset references ("Asset 2, Email 1: claim about 87% time reduction needs source citation")
  • Recognition of where AI has made logical leaps ("Asset 3: we're claiming 'industry-leading' but our proof point is a 47-user survey — that's not conclusive")
  • Practical suggestions for fixing problems (not "improve this" but "replace X with Y")

Real example output:

FLAGGED ISSUES:

Asset 1 (LinkedIn post):
Claim: "Same output, same rigour" — needs client confirmation that automated reports match manual quality standards. Currently unsupported.

Asset 3 (Landing page):
Tone drift in benefits section — "revolutionary" and "game-changing" don't match "straight-talking peer" voice model. Rewrite with problem-solution framing instead of hype.

Asset 4 (Competitor comparison):
Logical gap — compares [PRODUCT] to "manual spreadsheets" but brief mentioned other SaaS tools exist. Need client input on actual competitive set.

PRE-PUBLISH CHECKLIST:

  1. All claims linked to proof points in Foundational Structure
  2. No unsupported statistics or "industry-leading" language
  3. [PLACEHOLDERS] replaced with client-specific content
  4. Tone check: sounds like voice model, not generic AI
  5. Legal review: no competitor disparagement, no unsubstantiated claims
  6. Accessibility check: clear CTAs, readable formatting
  7. Link verification: all URLs functional
  8. Social proof: testimonials have attribution
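
Items 1, 4, and 5 of that checklist need human judgement, but the mechanical checks (unreplaced placeholders, hype language, channel-appropriate length) can be scripted. A minimal sketch in Python; the banned-phrase list and word limits are illustrative, so extend them from your own Foundational Structure:

import re

# Illustrative lists; extend from your own Foundational Structure
HYPE_PHRASES = ["industry-leading", "revolutionary", "game-changing", "best-in-class"]
MAX_WORDS = {"linkedin": 250, "email": 400}  # rough per-channel ceilings

def pre_publish_lint(asset_name: str, channel: str, text: str) -> list[str]:
    """Mechanical first pass before the human governance review."""
    issues = []
    # Checklist item 3: unreplaced [PLACEHOLDER] markers
    for marker in sorted(set(re.findall(r"\[[A-Z][^\]]*\]", text))):
        issues.append(f"{asset_name}: unreplaced placeholder {marker}")
    # Checklist item 2: hype language that needs substantiation
    for phrase in HYPE_PHRASES:
        if phrase in text.lower():
            issues.append(f"{asset_name}: claim language '{phrase}' needs a proof point")
    # Channel fit: e.g. LinkedIn posts should sit around 150-200 words
    limit = MAX_WORDS.get(channel)
    if limit and len(text.split()) > limit:
        issues.append(f"{asset_name}: {len(text.split())} words is long for {channel}")
    return issues

Anything the script flags is a guaranteed fix; everything it can't check still goes through the human 70%.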

CRITICAL: Multi-tool quality checking (optional but recommended)

Before publishing, run your final assets through a second AI tool (ChatGPT, Gemini, or any alternative to Claude) and ask: "Review this content for claims that need evidence, tone inconsistencies, or logical gaps."

Different AI models catch different issues. Claude might miss an unsubstantiated claim that ChatGPT flags. This isn't about which tool is "better" — it's about getting a second opinion from a different evaluation perspective.
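
If you'd rather not paste assets between tools by hand, the second opinion can be scripted too. A sketch assuming the anthropic and openai Python SDKs; both model names are illustrative:

import anthropic
import openai

REVIEW_PROMPT = ("Review this content for claims that need evidence, "
                 "tone inconsistencies, or logical gaps:\n\n")

def second_opinions(asset_text: str) -> dict[str, str]:
    """Ask two different models to critique the same asset."""
    claude_review = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model name
        max_tokens=1000,
        messages=[{"role": "user", "content": REVIEW_PROMPT + asset_text}],
    )
    gpt_review = openai.OpenAI().chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": REVIEW_PROMPT + asset_text}],
    )
    return {
        "claude": claude_review.content[0].text,
        "chatgpt": gpt_review.choices[0].message.content,
    }

Anything both reviews flag is worth fixing first.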

Time check: 6-8 minutes for the governance pass. Budget another 10-15 minutes to actually fix the flagged issues before content goes live.

Common mistakes:

  • Treating this as a rubber-stamp exercise (if AI finds zero issues, you're not looking hard enough)
  • Skipping the governance pass because "it all looks fine"
  • Not actually fixing the problems AI identifies (generating the critique is pointless if you don't act on it)

The 30/70 principle in action:

AI just told you what's wrong with the campaign pack. Now you need to:

  • Decide if its flags are valid (sometimes AI is overly cautious)
  • Make editorial judgements about fixes (AI might suggest safer language when you want to be bold)
  • Get client input on [PLACEHOLDERS] (AI can't know your client's actual proof points)
  • Apply your professional judgement about what's ready to publish

This is the 70% of the work. AI did the structure and first drafts (30%). You're doing the strategic editing, quality assurance, and final decisions (70%).


Post-Workflow: What Happens Next

Immediate actions (Day 1):

  1. Fix all flagged issues from the governance pass
  2. Replace [PLACEHOLDERS] with real client content
  3. Get internal sign-off on key messages and tone
  4. Verify all proof points are accurate and sourced

Before launch (Week 1):

  1. Run final content through legal/compliance review
  2. Test all links and CTAs (the short script after this list can automate the URL check)
  3. Schedule content according to the 3-month roadmap
  4. Brief internal teams on key messages and campaign goals
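
Here's a hedged sketch of that URL check using the requests library. It only confirms links resolve; it can't tell you they point at the right page:

import re
import requests

def broken_links(text: str) -> list[str]:
    """Return any URL in the text that fails to resolve."""
    failures = []
    for url in sorted(set(re.findall(r"https?://[^\s)\]>]+", text))):
        try:
            # Some servers reject HEAD; swap to requests.get if needed
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                failures.append(f"{url} -> HTTP {resp.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{url} -> {type(exc).__name__}")
    return failures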

Post-launch (Ongoing):

  1. Track performance against success metrics from Foundational Structure
  2. Use the Pre-Publish Checklist for all subsequent campaign content
  3. Update the Project with learnings and refinements
  4. Build campaign performance data into the Foundational Structure for future reference

Why This Workflow Works (And Why Others Don't)

Most AI workflows fail because they're transactional:

  • You prompt for a LinkedIn post, you get a LinkedIn post
  • You prompt for an email, you get an email
  • Each output is isolated, no strategic continuity

This workflow succeeds because it's systematic:

  • The Project folder maintains context across all assets
  • The Foundational Structure ensures strategic coherence
  • The Custom Skill (if created) enforces quality automatically
  • Each asset builds on previous decisions
  • The governance pass prevents drift

The result: A campaign pack where everything sounds like it's from the same strategy (because it is), where claims are substantiated (because you checked), where tone is consistent (because it's encoded in the Skill), and where quality is reliable (because you ran the governance pass).

Ready to scale this approach? Faur combines 20 years' strategic comms experience with practical AI implementation — book a consultation.

Common Problems and Fixes

Problem: "The Foundational Structure is too generic. It could work for any campaign."

Fix: Return to Step 1 and force more specificity in the brief. If the brief says "tech companies", ask "which segment of tech?" If it says "decision-makers", ask "which function?" Vague briefs produce generic strategies.


Problem: "AI keeps generating content that ignores the Foundational Structure."

Fix: Your Foundational Structure isn't directive enough. Instead of "sound professional" write "use 'you' language, lead with pain points, avoid jargon". Instead of "be credible" write "every claim must link to a proof point in this document". AI follows instructions literally — vague instructions produce vague outputs.


Problem: "The governance pass found 15 problems and now I don't trust anything."

Fix: That's normal for a first run-through. AI generates structurally sound content with strategically weak details. The governance pass is working as designed — it's surfacing issues before they become published mistakes. Fix the flags, run the pass again, repeat until you're down to 2-3 minor issues.


Problem: "This took 50 minutes, not 30."

Fix: First-time setup always takes longer. Your second campaign in the same Project will be faster because:

  • You'll reuse the Project Instructions
  • You'll have a Skill already created
  • You'll know which prompts need refinement
  • You won't be learning the workflow as you go

Target 30-35 minutes by your third campaign.


Problem: "My client won't accept AI-generated content."

Fix: Don't tell them it's AI-generated. Tell them you've used a structured strategic framework to develop the campaign, which you have. The Foundational Structure is strategy work. The Channel Plan is strategic thinking. The governance pass is quality control. AI accelerated the drafting, but the thinking is yours.


What You Should Have Now

If you've followed this workflow completely, you have:

1. A Project Folder containing:

  • The original campaign brief
  • The clean brief with assumptions and Definition of Done
  • The Foundational Comms Structure
  • The Channel Plan with 3-month roadmap
  • 8 content angles
  • 6 drafted content assets
  • A governance pass critique
  • A Pre-Publish Checklist (saved as reference)
  • A campaign recap slide

2. A Custom Skill (if creation succeeded) encoding:

  • Tone of voice rules
  • Writing style preferences
  • Brand-specific dos and don'ts

3. A reusable system that you can:

  • Apply to your next campaign by creating a new Project
  • Refine based on what worked and what didn't
  • Share with colleagues as a template
  • Build upon with each subsequent campaign

How to Use This for Real

Tomorrow:

  1. Take your next campaign brief
  2. Create a new Claude Project
  3. Run through Steps 1-5 using the prompts in this article
  4. Allow 40 minutes for your first attempt
  5. Review the output with the Pre-Publish Checklist

Next week:

  1. Refine the prompts based on what worked
  2. Update your Custom Skill if it's not matching your voice
  3. Try the workflow with a different campaign type
  4. Track how long each step takes (target: 30-35 minutes total)

Next month:

  1. You should have 3-4 Projects set up for different clients/campaign types
  2. Each Project contains the history of previous campaigns
  3. Your Skills are refined to match each client's voice
  4. You're consistently hitting 30-minute campaign builds

Advanced Variations

Once you're comfortable with the core workflow, try these variations:

Variation 1: Multi-channel campaign packs
Run Steps 1-3 once, then run Step 4 three times with different channel focuses (social-only pack, email-only pack, earned-media-only pack). This creates modular content libraries.

Variation 2: Campaign iteration
After launch, feed performance data back into the Project and ask AI to suggest content refinements based on what's working. This creates a learning loop.

Variation 3: Template multiplication
Save your best Foundational Structures as separate documents. When you start a new campaign, upload a similar past example and prompt: "Use this as structural inspiration but adapt for [new brief]."

Variation 4: Multi-tool workflow
Run the full workflow in Claude, then paste the final assets into ChatGPT or Gemini and ask: "What would you change about this campaign?" Different models surface different strategic gaps.


Final Notes

This workflow took six run-throughs to refine. Version 1 took 50 minutes and produced mediocre output. Version 6 takes 30 minutes and produces strategically sound campaign packs that need editorial polish, not wholesale rewrites.

The difference: understanding that AI is a junior strategist who needs clear instructions, quality floors, and supervision. The Project creates the container, the Skill enforces the standards, the prompts execute the strategy, and the governance pass prevents mistakes.

You won't get it perfect on your first attempt. Run through it three times before judging whether it works for you. Refine the prompts to match your clients. Adjust the Foundational Structure template to your strategic frameworks. Make it yours.

The core principle remains: 30% execution support from AI, 70% strategic judgement from you. If you're accepting AI's first draft without critique, you're doing it wrong. If you're rewriting everything from scratch, you're not using AI effectively.

The workflow works when you treat AI as a capable but junior team member who needs clear briefs, quality standards, and editorial oversight.


About Applied Comms AI

Applied Comms AI is where we openly share our AI implementation experiments, successes, and failures in communications workflows. We prove our capability by showing our work — transparently testing what works, what doesn't, and why.

This article is part of our practical implementation series. For more workflows, templates, and honest evaluations of AI tools for communications professionals, visit appliedcomms.ai.

About Faur

Faur is a communications consultancy pioneering practical AI expertise for organisations ready to implement at scale. We work with multinationals, NGOs, membership bodies, and ambitious startups to develop strategies, conduct digital audits, and deliver content workshops that transform communications capabilities.

Founded by Michael MacLennan, former Digital & Social Director at Grayling and veteran of Brunswick, Red Bull, BBC, Barclaycard, and ITN. Imperial College ML & AI certified. 20 years transforming communications for global brands.


Version History
v1.0 (18 Feb 2026):
Initial publication following Data Lab session
Based on workflow refined through six live run-throughs
Prompts tested against three campaign types: B2B SaaS, tourism, professional services

Licence & Usage
This workflow and all prompts are free to use, adapt, and share. Attribution appreciated but not required. If you improve the workflow, consider sharing your refinements with the Applied Comms AI community.

No warranty provided. Results depend on brief quality, editorial judgement, and appropriate AI model selection. Your mileage may vary.