Applied AI

How I Replaced a 5-Person Agency Team with Claude & n8n

Mohamed Reda | March 14, 2026 | 8 min read

In early 2024, I was running content operations for three simultaneous Mashhor Hub campaigns. At peak load, my team consisted of a copywriter, a creative strategist, a designer, an editor, and a social media coordinator — five people, five salaries, five different communication threads, and a workflow that was about as scalable as a sand castle at high tide.

Today, the same output volume is handled by me, one junior coordinator, and a stack of AI tools orchestrated through n8n. The output quality went up. The turnaround time went down by 60%. The cost savings were significant enough to fund the entire Mashhor Cloud development. This is not a hype piece. This is exactly what I built and how.

The Problem I Was Actually Solving

Before I dive into the tools, let me be precise about what "replacing a team" actually means — because it's often misunderstood. I didn't automate creativity. I automated the mechanical work surrounding creativity: research aggregation, brief formatting, first-draft generation, variation testing, performance reporting, and distribution scheduling.

The creative judgment — which angle to test, which insight to act on, what the brand voice actually sounds like — that still lives with me. The AI handles the execution at scale.

  • 300+ ad creatives/week
  • 60% faster turnaround
  • 5→2 team headcount
  • 4.5x average ROAS

The Exact Tool Stack

Here's what I'm running, and why each tool was chosen over its alternatives:

  • Claude 3.5 Sonnet (Anthropic) — Primary language model for all copywriting, brief analysis, and strategic thinking. I chose Claude over GPT-4o for this workflow specifically because of its superior instruction-following on structured outputs and its ability to maintain brand voice consistency across long documents. As an Anthropic-certified practitioner, I've also done the deep work on its system prompt architecture.
  • n8n (Self-hosted) — The orchestration layer. Every workflow runs through n8n. It connects Claude to Google Sheets (briefs), Airtable (content calendar), Notion (performance logs), and our image generation tools. I self-host for data sovereignty — client ad strategies don't leave our infrastructure.
  • Midjourney + Ideogram — Image generation for ad creatives. Midjourney for photorealistic lifestyle shots; Ideogram for text-heavy designs and Arabic typography work.
  • ElevenLabs — AI voice for video ad scripts. We generate the voiceover track first, then the video team edits to match timing — not the other way around.
  • Google Sheets as the "Source of Truth" — All briefs, outputs, and approval states live in Sheets. The non-technical team never needs to touch n8n or Claude directly.

The Core Workflow: From Brief to 30 Creatives in 4 Hours

Here's a simplified version of my primary content generation workflow:

  1. Brief Entry (Human) — The coordinator fills out a Google Sheet row with: product name, target audience, campaign objective, key differentiators, brand voice notes, and any forbidden language. This takes about 8 minutes.
  2. n8n Trigger — A webhook fires when the row status is set to "APPROVED." n8n pulls the brief data.
  3. Research Aggregation — n8n queries a Perplexity API integration for relevant current market context, competitor positioning, and trending angles in the target market (Kuwait/GCC specific).
  4. Claude Strategy Pass — The research + brief is sent to Claude with a system prompt that forces it to output 5 distinct strategic angles, each with a single-sentence hook, the emotional driver being targeted (status, fear of missing out, transformation), and a call-to-action variant.
  5. Human Review Gate — The 5 angles are formatted and sent to me in a Telegram message (via n8n's Telegram node). I approve 2-3 with a simple reply.
  6. Full Creative Generation — Claude generates 10 copy variations per approved angle: 3 headlines, 3 body options, 2 CTA variants, and 2 Arabic adaptations. That's 20-30 complete ad copy sets from one brief.
  7. Image Prompt Generation — Claude writes corresponding Midjourney prompts for each approved angle. n8n routes these to Midjourney via their API and stores the resulting images in a Google Drive folder organized by campaign.
  8. Airtable Assembly — Copy + images are assembled into Airtable records, ready for the design team to finalize and the media buyer to implement.
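The trigger and brief-entry stages (steps 1-2) can be sketched outside n8n as plain Python. This is an illustration of the gating logic, not the actual n8n workflow; the column names are assumptions that mirror the brief fields described above.

```python
# Sketch of steps 1-2: fire only on rows marked APPROVED, then shape
# the sheet row into the brief dict the later prompt stages consume.
# Field names are assumptions mirroring the brief columns above.

REQUIRED_FIELDS = [
    "product_name", "target_audience", "campaign_objective",
    "key_differentiators", "brand_voice", "forbidden_phrases",
]

def should_trigger(row: dict) -> bool:
    """n8n-style gate: only rows explicitly set to APPROVED proceed."""
    return row.get("status", "").strip().upper() == "APPROVED"

def build_brief(row: dict) -> dict:
    """Extract the brief fields, failing loudly if any are blank."""
    missing = [f for f in REQUIRED_FIELDS if not row.get(f)]
    if missing:
        raise ValueError(f"Brief incomplete, missing: {missing}")
    return {f: row[f].strip() for f in REQUIRED_FIELDS}

# Hypothetical sheet row for illustration
row = {
    "status": "approved",
    "product_name": "Example Serum",
    "target_audience": "Women 25-40, Kuwait",
    "campaign_objective": "Conversions",
    "key_differentiators": "Dermatologist-tested",
    "brand_voice": "Warm, direct",
    "forbidden_phrases": "miracle, cure",
}
if should_trigger(row):
    brief = build_brief(row)
```

The fail-loudly check matters more than it looks: a half-filled brief that silently reaches the model is how off-brand copy gets generated at scale.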

The Prompt System That Makes It Work

The quality of this system lives and dies on the prompts. Here's a simplified version of my primary content brief system prompt for Claude:

You are a Senior Direct Response Copy Strategist specializing in the GCC market (Kuwait, Saudi Arabia, UAE, Egypt).

BRAND CONTEXT: {{brand_name}} | {{brand_voice}} | {{forbidden_phrases}}
PRODUCT: {{product_name}}
OBJECTIVE: {{campaign_objective}}
TARGET: {{target_audience}}
DIFFERENTIATORS: {{key_differentiators}}

OUTPUT EXACTLY 5 STRATEGIC ANGLES in this JSON format:
{
  "angle_id": "A1",
  "angle_name": "[memorable 2-3 word label]",
  "hook": "[single sentence, max 12 words, stops the scroll]",
  "emotional_driver": "[status|fear|aspiration|transformation|belonging]",
  "target_insight": "[the specific belief or pain this angle exploits]",
  "cta_direction": "[what action you're driving toward]"
}

Rules:
- No angle can share the same emotional_driver
- Hooks must be testable — no vague claims
- All angles must be culturally appropriate for GCC audiences
- One angle must work for Snapchat Stories format (vertical, 3-second hook)
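Because everything downstream depends on the model returning exactly this JSON shape, the output should be validated before any automation consumes it. A minimal sketch of that check, with the field names taken from the prompt above and the checks mirroring its rules:

```python
import json

# Allowed values from the prompt's emotional_driver field
ALLOWED_DRIVERS = {"status", "fear", "aspiration", "transformation", "belonging"}

def validate_angles(raw: str) -> list[dict]:
    """Parse the model's output and enforce the prompt's rules:
    exactly 5 angles, no repeated emotional_driver, hooks <= 12 words."""
    angles = json.loads(raw)
    if len(angles) != 5:
        raise ValueError(f"expected 5 angles, got {len(angles)}")
    drivers = [a["emotional_driver"] for a in angles]
    if len(set(drivers)) != len(drivers):
        raise ValueError("two angles share an emotional_driver")
    if not set(drivers) <= ALLOWED_DRIVERS:
        raise ValueError(f"unknown emotional_driver in {drivers}")
    for a in angles:
        if len(a["hook"].split()) > 12:
            raise ValueError(f"hook exceeds 12 words: {a['hook']!r}")
    return angles
```

If validation fails, the workflow re-prompts rather than passing bad structure forward.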
"The biggest mistake in AI content generation is treating the model like an employee and giving vague instructions. Treat it like a brilliant contractor who has never met your client — every constraint, every preference, every 'we never say this' needs to be in the system prompt."

The Arabic Localization Layer

Running bilingual campaigns in Kuwait means everything needs an Arabic-first version, not just a translation. I built a separate localization workflow that takes approved English copy and runs it through a dedicated Arabic adaptation prompt:

You are a Senior Arabic Copywriter specializing in Gulf Arabic (Kuwaiti dialect awareness required).

TASK: Adapt the following English ad copy to Gulf Arabic.

RULES:
- This is NOT a translation — it's a cultural adaptation
- Match the emotional intensity of the original
- Use Gulf Arabic colloquialisms where appropriate
- NEVER use Egyptian Arabic expressions
- Keep brand names in English (do not transliterate unless specified)
- Maintain the same CTA urgency
- Output must work right-to-left (RTL display)

ENGLISH COPY: {{english_copy}}
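Inside n8n the `{{english_copy}}` placeholder is filled by the workflow; outside n8n the same templating is a few lines. A sketch (the template string is abbreviated here, and refusing to send an unfilled placeholder is my addition, not an n8n feature):

```python
# Abbreviated version of the Arabic adaptation prompt above
ARABIC_ADAPTATION_PROMPT = (
    "You are a Senior Arabic Copywriter specializing in Gulf Arabic.\n"
    "TASK: Adapt the following English ad copy to Gulf Arabic.\n"
    "(rules abbreviated; see the full prompt above)\n"
    "ENGLISH COPY: {{english_copy}}"
)

def fill_prompt(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders, refusing to send unfilled ones."""
    out = template
    for name, value in variables.items():
        out = out.replace("{{" + name + "}}", value)
    if "{{" in out:
        raise ValueError("unfilled placeholder left in prompt")
    return out

prompt = fill_prompt(ARABIC_ADAPTATION_PROMPT,
                     {"english_copy": "Glow in 7 days. Order today."})
```

The unfilled-placeholder guard catches the common failure where a sheet column is renamed and the model receives a literal `{{english_copy}}` string.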

What Didn't Work (Be Honest About This)

This workflow took 4 months to fully operationalize. Here's what failed along the way:

  • Using GPT-4o for structured output — It hallucinated field values in the JSON output at a rate that was unacceptable for automation. Claude's instruction-following is measurably better for this use case.
  • Trying to automate the approval gate — I initially auto-approved angles whenever a self-reported confidence score from Claude exceeded a threshold. It produced off-brand content at scale before I killed the feature and reinstated the human review gate.
  • Image generation quality control — Midjourney's API doesn't give you the same queue priority as the web interface. We experienced 40-minute delays on image generation during peak hours. Solution: batch overnight jobs for next-day content.
  • Arabic typography in AI-generated images — Midjourney cannot generate accurate Arabic text. We use Canva Pro with preset Arabic brand templates for any text-heavy Arabic creative. Ideogram is better at Arabic text but still makes grammatical errors — human review is mandatory.
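The structured-output failures above are why every model call in the workflow now sits inside a validate-and-retry loop rather than being trusted on the first pass. A generic sketch of that pattern, where `generate` stands in for whatever model API you use (it is not a real SDK call):

```python
import json

def generate_with_retry(generate, prompt: str, max_attempts: int = 3) -> dict:
    """Call the model, validate the JSON, and retry with the parse error
    appended to the prompt. Raises after max_attempts failed attempts."""
    last_error = None
    for _ in range(max_attempts):
        raw = generate(
            prompt if last_error is None
            else f"{prompt}\n\nYour last output was not valid JSON "
                 f"({last_error}). Output ONLY valid JSON."
        )
        try:
            return json.loads(raw)
        except json.JSONDecodeError as e:
            last_error = e
    raise RuntimeError(f"no valid JSON after {max_attempts} attempts: {last_error}")
```

Feeding the parse error back into the retry prompt noticeably improves recovery compared with blind re-asking, though the exact gain depends on the model.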

The Actual Results

After 6 months running this system for three active client accounts at Mashhor Hub, here's what the data shows:

  • Average campaign ROAS improved from 2.1x to 4.5x — driven primarily by faster creative iteration, not better media buying strategy
  • Time from brief to published creative: reduced from 5 business days to an average of 18 hours
  • Number of ad variants tested per campaign: increased from 4-6 to 18-22
  • Team overhead cost: reduced by ~55% year-over-year

The improvement in ROAS deserves explanation — it's not magic. When you can test 18 creative variants instead of 4, you find winners faster. The AI doesn't write better than humans. It writes faster, which means more testing, which means better optimization data, which means better outcomes. The math does the work.
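That "the math does the work" claim can be made concrete. If each variant independently has some chance p of being a winner, the probability of finding at least one winner among n variants is 1 - (1 - p)^n. The 10% win rate below is an illustrative assumption, not our measured figure:

```python
def p_at_least_one_winner(p: float, n: int) -> float:
    """Probability that at least one of n independent variants wins,
    given each has win probability p (independence is an assumption)."""
    return 1 - (1 - p) ** n

# 4 variants vs 18 variants at an assumed p = 0.10 per variant:
few = p_at_least_one_winner(0.10, 4)    # ~0.34
many = p_at_least_one_winner(0.10, 18)  # ~0.85
```

Under these illustrative numbers, moving from 4 to 18 variants takes the odds of surfacing a winner from roughly one in three to better than four in five, which is the mechanism behind the ROAS improvement.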

Should You Build This?

Before you start, be honest about three things:

  1. Do you have a documented brand voice? If not, the AI will invent one for you and it won't be yours. Build the brand guide first.
  2. Are you personally willing to be the quality gate? This system requires a senior human — you or someone who deeply understands your brand — to be in the review loop. Don't hand this to a VA on day one.
  3. Do you have at least one real campaign to test on before scaling? Build the workflow for one campaign, prove the output quality, then expand. I've seen agencies build the infrastructure and never validate it against actual performance data.

If the answer to all three is yes — I can help you build a version of this workflow tailored to your agency or client roster. It's one of the things I do through Mashhor Hub's consulting arm.

Mohamed Reda

Marketing Consultant, AI Specialist & Creative Director based in Kuwait. Anthropic-Certified, Meta Blueprint-Certified, Google-Certified. Founder of Mashhor Hub (influencer marketing platform) and Munjiz Egypt. 10+ years across Kuwait, GCC & Egypt.
