ICP Validation: Agentic Coding Developer Interview Guide

Date: October 20, 2025
Version: 3.0 (expanded from Claude Code to all agentic coding tools)
Purpose: Validate that developers using agentic coding tools share the same workflow pain as Oleg
Timeline: 2 weeks (10-15 interviews)
Previous versions:

  • v1.0: Generic framework (archived)
  • v2.0: Claude Code-specific (superseded by this version)

Interview Goals

Primary goal: Confirm that developers using agentic coding tools (Claude Code, Cursor, Aider, Windsurf, Gemini CLI, Continue.dev, etc.) struggle with image generation workflow.

What we're validating:

  1. ✅ Pain exists (context switching, manual workflow)
  2. ✅ Pain is severe (costs significant time/frustration)
  3. ✅ Integration channel preference (MCP vs. CLI vs. REST API)
  4. ✅ Willingness to pay $20-50 (budget exists)
  5. ✅ Regular usage (build projects monthly+)
  6. ✅ Competitive awareness (fal.ai, Replicate, Together.ai)

What we're NOT doing:

  • ❌ Selling or pitching product
  • ❌ Leading the witness ("Wouldn't you love X?")
  • ❌ Confirmation bias (ignoring red flags)
  • ❌ Asking hypotheticals ("Would you use...?")

Golden rule: Ask about PAST BEHAVIOR, not FUTURE INTENTIONS


Where to Find Interviewees

Priority 1: Reddit (Highest response rate)

r/ClaudeAI (14K members) - PRIMARY

  • Post: "Agentic coding users: How do you handle images?"
  • Look for: Developers who comment about workflows
  • DM: Top 5-10 engaged commenters

r/ChatGPTCoding (50K members) - STRONG SECONDARY

  • Post: "AI-assisted devs: Image generation workflow?"
  • Look for: People using AI tools for web development
  • DM: Active users who mention projects

r/cursor (smaller, focused) - NICHE

  • Post: Same as above
  • Look for: Cursor IDE users specifically

r/nextjs (200K members) - BROAD

  • Post: "Next.js devs using AI tools: image workflow?"
  • Look for: Mentions of Claude Code, Cursor, etc.

Priority 2: Tool-Specific Communities

Aider GitHub Discussions

  • Look for: Active contributors, workflow discussions
  • Engage: Comment on relevant threads
  • DM: Via GitHub (offer to chat about workflows)

Cursor Discord

  • Channel: #general or #workflows
  • Message: "Quick question for Cursor users..."
  • Look for: Active community members

Continue.dev Community

  • GitHub + Discord
  • Similar approach to Aider

Windsurf Community (if one exists)

  • New tool, smaller community
  • Early adopters = high intent

Priority 3: Indie Hackers

"Ask IH" section

  • Post: "Validating: Production image pipeline for agentic coding devs"
  • Look for: Solo builders, side project makers
  • DM: Anyone who shows interest

Priority 4: Discord (More active, real-time)

Claude AI Discord

  • Channel: #general or #tools
  • Message: "Quick question for Claude Code users..."
  • Look for: Active community members

AI Tinkerers Discord

  • Broader AI developer community
  • Channel: #dev or #tools

Outreach Strategy (Stealth Mode)

Reddit Post Templates

Variant A: Problem-focused (RECOMMENDED for r/ClaudeAI, r/ChatGPTCoding)

Title: "Agentic coding users: How do you handle images in your projects?"

Using Claude Code/Cursor/Aider to build Next.js sites has been great, but images are still a pain. I end up:
1. Leaving my development environment
2. Opening Gemini Studio/Midjourney/fal.ai
3. Generating manually
4. Downloading, organizing, importing

Takes longer than building the actual site sometimes.

How do you handle this? Any workflows that work well?

(Exploring this problem space, not selling anything yet)

Variant B: Tool comparison (for r/nextjs, broader communities)

Title: "AI-assisted devs: What's your image generation workflow?"

Quick survey for developers using AI coding tools (Claude Code, Cursor, Aider, etc.):

1. Which agentic coding tool do you use?
2. How do you generate images for your projects?
3. Where does it fit in your workflow (terminal, browser, separate app)?
4. What's the most annoying part?
5. Have you tried automating this?

Researching this space, want to understand actual pain points and tool preferences.

Variant C: Solution validation (if Variant A gets low response)

Title: "Would you use a production image pipeline integrated into your agentic workflow?"

I've been using Claude Code/Aider for projects and kept hitting the same bottleneck: generating images breaks my workflow.

Thinking about a tool that:
- Integrates via MCP (for Claude Code/Cursor) OR CLI (for terminal workflows)
- Returns production CDN URLs automatically
- Maintains consistency (logos, characters, etc.)
- Works with any agentic coding tool

Would this solve a real problem for you? What's missing?
Which integration would you prefer: MCP, CLI, REST API, or something else?

DM Template (After Someone Comments)

Hey! Saw your comment on [post title about agentic coding workflows].

I'm doing research on how AI-assisted developers handle images in their projects and would love to hear more about your experience.

Would you be open to answering 9 short questions via DM? Should take ~10-15 minutes.

(Not selling anything, genuinely trying to understand the problem space and which integration channels developers prefer)

Expected response rate: 50-60% will agree (text-based = low friction)


Interview Script (Text-Based)

Use this exact script via DM/email:

Thanks for agreeing to chat! Here are the questions:

1. What agentic coding tool(s) do you use?
   (Claude Code, Cursor, Aider, Windsurf, Gemini CLI, Continue.dev, other?)

2. How often do you build web projects? (Daily, weekly, monthly, rarely?)

3. When you build sites/apps, do you generate images for them?
   - If yes: How? (Midjourney, DALL-E, Gemini Studio, fal.ai, Replicate, other?)
   - If no: Why not? (Use stock photos, clients provide, other?)

4. Walk me through your current image workflow.
   - Where do you generate images?
   - How do you get them into your project?
   - How long does this typically take?

5. What's the most annoying part of your current workflow?

6. Have you tried to automate this? What did you try? What worked/didn't work?

7. Have you tried AI image APIs (fal.ai, Replicate, Together.ai, Gemini direct)?
   - If yes: What was your experience? What did you like/dislike?
   - If no: Why not?

8. If there was a tool that integrated image generation into your agentic workflow, which integration would you prefer:
   - MCP (for Claude Code/Cursor)
   - CLI tool (terminal-based)
   - REST API (programmatic)
   - Prompt URLs (GET-based generation)
   - Other?

9. What would you pay for a production-ready solution like that?
   - Options: $0 (only free tier), $10-20/month, $20-50/month, $50+/month
   - OR: One-time credit packs (e.g., $20 for 200 images, valid 90 days)?

No rush, answer when convenient. Really appreciate your input!
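If an interviewee asks what the integration options in question 8 would look like in practice, a minimal illustrative sketch can help (everything below is hypothetical: the endpoints, command names, and parameters are invented for illustration, since nothing has been built yet):

```python
# Illustrative only: all endpoints, commands, and parameter names below are
# invented to make question 8 concrete. None of them exist yet.
from urllib.parse import urlencode

# "Prompt URL" channel: generation via a plain GET request, so an agent can
# embed the URL directly in the markup it writes.
params = {"prompt": "hero image, minimalist SaaS landing page", "size": "1200x630"}
prompt_url = f"https://img.example.com/generate?{urlencode(params)}"
print(prompt_url)  # could be dropped straight into an <img src="...">

# REST API channel (hypothetical): POST a prompt, get a CDN URL back.
#   POST https://api.example.com/v1/images  {"prompt": "...", "size": "1200x630"}
#   -> {"url": "https://cdn.example.com/abc123.png"}

# CLI channel (hypothetical): stays inside the terminal workflow.
#   $ imgtool generate "hero image ..." --out public/hero.png

# MCP channel (hypothetical): the same service exposed as an MCP server, so
# Claude Code/Cursor can call it without the developer leaving the editor.
```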

Interview Analysis Framework

After EACH interview, immediately fill out:


Interview #: ___
Date: ___
Channel: (Reddit / IH / Discord / GitHub)
Username/Contact: ___


Q1: What agentic coding tool(s) do you use?

Answer:

Analysis:

  • Uses Claude Code (PRIMARY ICP)
  • Uses Cursor (PRIMARY ICP)
  • Uses Aider (PRIMARY ICP)
  • Uses Windsurf (PRIMARY ICP)
  • Uses Gemini CLI (PRIMARY ICP)
  • Uses Continue.dev (PRIMARY ICP)
  • Uses other agentic tool (SECONDARY ICP)
  • Doesn't use agentic coding tools (WRONG ICP - disqualify)

Multiple tools? (Many devs use more than one)

  • Primary tool: ___
  • Secondary tool: ___

Q2: How often do you build web projects?

Answer:

Analysis:

  • Daily or several times per week (IDEAL)
  • Weekly (GOOD)
  • Monthly (BORDERLINE)
  • Quarterly or less (WRONG ICP - disqualify)

Q3: Do you generate images for projects?

Answer:

Analysis:

  • Yes, regularly (IDEAL)
  • Yes, occasionally (GOOD)
  • No, but wish I could (POTENTIAL)
  • No, don't need images (WRONG ICP - disqualify)

If yes, which tools:

  • Midjourney
  • DALL-E / ChatGPT
  • Gemini Studio
  • fal.ai
  • Replicate
  • Together.ai
  • Other: ___

Q4: Current workflow description

Answer:

Analysis:

  • Tools used:
  • Steps involved:
  • Time spent per project:
  • Manual or automated:
  • Pain points observed:

Q5: Most annoying part?

Answer:

Key quote (exact words):

Pain severity (1-10): ___

Pain category:

  • Context switching (leaving dev environment)
  • Manual file management (download, organize, import)
  • Consistency issues (logos, characters, style)
  • Prompt engineering complexity
  • Time cost (too slow)
  • Other: ___

Q6: Tried to automate?

Answer:

Analysis:

  • Yes, built custom solution (STRONG signal - tried to solve)
  • Yes, used Zapier/Make/n8n (GOOD signal)
  • Yes, but gave up (STRONG signal - couldn't solve)
  • No, didn't try (NEUTRAL)
  • No, not worth it (RED FLAG)

If tried, what worked/didn't work:


Q7: Experience with AI image APIs?

Answer:

Analysis:

Tried fal.ai:

  • Yes - liked: ___
  • Yes - disliked: ___
  • No - why not: ___

Tried Replicate:

  • Yes - liked: ___
  • Yes - disliked: ___
  • No - why not: ___

Tried Together.ai:

  • Yes - liked: ___
  • Yes - disliked: ___
  • No - why not: ___

Tried Gemini API directly:

  • Yes - liked: ___
  • Yes - disliked: ___
  • No - why not: ___

Competitive insights:

  • What did they like about competitors?
  • What's missing from competitors?
  • Why did they stop using it (if they did)?

Q8: Integration channel preference?

Answer:

Analysis:

Preference (rank each option, 1 = most preferred):

  • MCP (for Claude Code/Cursor) - Rank: ___
  • CLI tool (terminal-based) - Rank: ___
  • REST API (programmatic) - Rank: ___
  • Prompt URLs (GET-based) - Rank: ___
  • Other: ___ - Rank: ___

Why this preference:

Would they use multiple channels:

  • Yes (e.g., MCP for dev, API for production)
  • No (stick to one)

Q9: Willingness to pay?

Answer:

Analysis:

  • $50+/month (STRONG signal)
  • $20-50/month (IDEAL ICP)
  • $10-20/month (LOWER signal, but acceptable)
  • $0 only (RED FLAG)

Preference:

  • Subscription (monthly commitment)
  • Credits (buy as needed)
  • Don't know / depends

Budget context:

  • Currently spending on tools: $___/month
  • Willing to spend on Banatie: $___/month

Summary Analysis (Per Interview)

OVERALL ASSESSMENT:

Green Lights (Evidence of fit):

Red Flags (Evidence of poor fit):

Channel Preference Insights:

  • Primary: ___
  • Secondary: ___
  • Why: ___

Competitive Intelligence:

  • Tried competitors: ___
  • Liked about competitors: ___
  • Missing from competitors: ___

Key Insights:

Early Access Interest:

  • Yes, wants early access (which tool? ___)
  • Maybe, interested but cautious
  • No, not interested

Follow-up Actions:

  • Add to beta waitlist (tool preference: ___)
  • Ask for referrals (similar developers)
  • Thank and close (if not a fit)

Validation Scorecard (After 10 Interviews)

Quantitative Metrics

| Metric | Target | Actual | Status |
| --- | --- | --- | --- |
| Use agentic coding tools | 90%+ | ___% | [ ] ✅ [ ] ⚠️ [ ] ❌ |
| Build projects monthly+ | 80%+ | ___% | [ ] ✅ [ ] ⚠️ [ ] ❌ |
| Generate images regularly | 70%+ | ___% | [ ] ✅ [ ] ⚠️ [ ] ❌ |
| Would use integrated tool | 60%+ | ___% | [ ] ✅ [ ] ⚠️ [ ] ❌ |
| Willing to pay $20+ | 40%+ | ___% | [ ] ✅ [ ] ⚠️ [ ] ❌ |
| Want early access | 30%+ | ___% | [ ] ✅ [ ] ⚠️ [ ] ❌ |

Legend:

  • ✅ = Target met or exceeded
  • ⚠️ = Close but below target
  • ❌ = Significantly below target
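
To keep the tally consistent across interviews, the scorecard math can be scripted. A minimal sketch follows; the record fields and the 10-point ⚠️ band are assumptions, so adapt them to however you actually capture notes:

```python
# Scorecard tally sketch. Field names are assumptions; the 10-point band for
# "close but below target" (⚠️) is also an assumption, not a rule from this doc.
interviews = [
    {"agentic_tool": True, "builds_monthly": True, "generates_images": True,
     "would_use": True, "pays_20_plus": False, "wants_early_access": True},
    # ... one dict per completed interview
]

targets = {
    "agentic_tool": 0.90, "builds_monthly": 0.80, "generates_images": 0.70,
    "would_use": 0.60, "pays_20_plus": 0.40, "wants_early_access": 0.30,
}

n = len(interviews)
for metric, target in targets.items():
    actual = sum(rec[metric] for rec in interviews) / n
    status = "✅" if actual >= target else ("⚠️" if actual >= target - 0.10 else "❌")
    print(f"{metric:22s} target {target:>4.0%}  actual {actual:>4.0%}  {status}")
```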

Qualitative Patterns

Pain Point Consistency:

  • 70%+ describe similar pain (context switching, manual workflow)
  • 50-70% describe similar pain (some variation)
  • <50% describe similar pain (no consensus - RED FLAG)

Tool Distribution:

  • Claude Code: ___%
  • Cursor: ___%
  • Aider: ___%
  • Windsurf: ___%
  • Gemini CLI: ___%
  • Continue.dev: ___%
  • Other: ___%
  • Multiple tools: ___%

Channel Preference:

  • MCP: ___% prefer (1st choice)
  • CLI: ___% prefer (1st choice)
  • API: ___% prefer (1st choice)
  • Prompt URLs: ___% prefer (1st choice)
  • Multiple channels: ___% would use

Current Solutions: Most common tools they use now:

  1. ___
  2. ___
  3. ___

Competitive Awareness:

  • Tried fal.ai: ___%
  • Tried Replicate: ___%
  • Tried Together.ai: ___%
  • Tried Gemini direct: ___%
  • None: ___%

Feature Priorities: Most requested features:

  1. ___
  2. ___
  3. ___

Objections/Concerns: Recurring hesitations:

  1. ___
  2. ___
  3. ___


Decision Matrix

GO (Build MVP for agentic coding developers)

Criteria (ALL must be met):

  • 60%+ willing to use integrated tool
  • 40%+ willing to pay $20+
  • 30%+ want early access
  • Consistent pain point (not scattered)
  • Clear channel preference (MCP, CLI, or both)
  • Budget confirmed (they pay for other tools)

Next steps:

  1. Strip MVP to validated integration channels (MCP + CLI + API)
  2. Set 4-6 week development timeline
  3. Prepare beta access list (from interviews)
  4. Prioritize features based on feedback
  5. Start building

PIVOT (Test different ICP or adjust positioning)

Criteria (2+ of these):

  • <60% willing to use (lukewarm interest)
  • Interest but weak willingness to pay (<40%)
  • Pain exists but not urgent ("nice to have")
  • Conflicting channel preferences (no consensus)
  • Budget concerns ("I only use free tools")
  • Strong preference for one competitor (fal.ai/Replicate)

Next steps:

  1. Review interview patterns (what went wrong?)
  2. Consider:
    • Different tools (focus on one tool community?)
    • Different ICP (agencies instead of solo devs?)
    • Different positioning (total cost of ownership vs. per-image price?)
  3. Prepare new interview script
  4. Run 5-10 more interviews (1-2 weeks)
  5. Set final deadline (if 2nd attempt fails, stop)

STOP (Fundamental rethink)

Criteria (2+ of these):

  • <40% willing to use (no interest)
  • No one willing to pay (pricing issue or no value)
  • Pain is theoretical, not actual ("never bothered me")
  • Current solutions adequate ("fal.ai works fine")
  • Market doesn't exist (can't find 10 relevant people)
  • Strong competitor lock-in (already using fal.ai, happy)

Next steps:

  1. Document learnings (what didn't work)
  2. Preserve relationships (thank interviewees)
  3. Decide: Different problem OR shut down?
  4. Focus back on day job (reduce stress)
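
As a quick self-check, the headline thresholds from the three branches above can be encoded in a tiny helper. This is a sketch only; the qualitative criteria (consistent pain, channel consensus, budget context) still require judgment:

```python
# Encodes only the quantitative thresholds from the decision matrix above.
def decide(would_use: float, pays_20_plus: float, early_access: float) -> str:
    if would_use >= 0.60 and pays_20_plus >= 0.40 and early_access >= 0.30:
        return "GO"    # all quantitative GO criteria met
    if would_use < 0.40 or pays_20_plus == 0.0:
        return "STOP"  # no real interest, or no one willing to pay
    return "PIVOT"     # lukewarm: adjust ICP or positioning, re-interview

print(decide(0.65, 0.45, 0.35))  # -> GO
```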

Red Flags to Watch For

During interviews, if you hear these phrases:

❌ "I'm happy with fal.ai" → Competitor already solved problem

❌ "I just use stock photos" → No generation need = wrong ICP

❌ "I would use it if it was free" → Price sensitivity = won't pay

❌ "Interesting idea, but..." → Polite rejection = not solving real problem

❌ "I only build sites a few times a year" → Wrong ICP = not regular enough

❌ "I prefer having full control over design" → Wrong mindset = not automation-oriented

❌ "My clients want custom photography" → Wrong use case = AI gen not suitable

❌ "I don't trust AI-generated images" → Stigma barrier = market not ready

❌ "I already automated this with Zapier" → Existing solution = no switching motivation

If 3+ interviewees raise similar red flags → PIVOT


Green Lights to Look For

During interviews, if you hear these phrases:

✅ "This would save me so much time" → Clear value proposition

✅ "I've been looking for something like this" → Unmet need

✅ "When can I try this?" → Strong interest

✅ "I currently use [fal.ai/Replicate] but it doesn't do X" → Clear gap in market

✅ "I'd pay $X if it works as described" → Willingness to pay

✅ "You should talk to my friend/colleague" → Network effect potential

✅ "Can you add [specific feature]?" → Engaged thinking (not just polite)

✅ "I tried building this myself but gave up" → Validated pain + difficulty

✅ "Which tool do I use? Let me show you my workflow..." → Deep engagement

If 5+ interviewees give similar green-light responses → STRONG GO


Common Interview Mistakes (Avoid These)

Mistake 1: Leading Questions

Bad: "Wouldn't it be great if your agentic tool could generate images automatically?"

Good: "How do you currently handle images when building with [tool]?"


Mistake 2: Pitching Instead of Learning

Bad: "Let me tell you about Banatie - it solves your image problem with MCP integration and..."

Good: "Walk me through your last project that needed images. What did you do?"


Mistake 3: Ignoring Red Flags

Bad: The interviewee says "I'm happy with fal.ai" but you keep pushing

Good: "Interesting! What makes fal.ai work well for you? What would make you switch?"


Mistake 4: Asking Hypotheticals

Bad: "If there was a tool that did X, would you use it?"

Good: "Tell me about the last time you struggled with image generation. What happened?"


Mistake 5: Not Documenting Competitive Intel

Bad: They mention using fal.ai, you don't ask follow-up questions

Good: "You mentioned fal.ai - what do you like about it? What's missing? Why are you still looking for alternatives?"


Mistake 6: Skipping Channel Preference

Bad: Assume they want MCP just because they use Claude Code

Good: "Would you prefer MCP integration, CLI tool, REST API, or something else? Why?"


Post-Interview Actions

Immediately After Each Interview:

  1. Fill out analysis template (while fresh)
  2. Capture exact quotes (especially pain points and competitive insights)
  3. Note channel preference (MCP, CLI, API, URLs)
  4. Note competitive intel (fal.ai, Replicate, etc.)
  5. Add to beta list (with tool preference noted)
  6. Thank them (brief message)

After Every 5 Interviews:

  1. Pattern analysis (what's consistent?)
  2. Update hypothesis (if needed)
  3. Refine questions (if something's unclear)
  4. Check progress (on track to 60% validation?)
  5. Identify channel preferences (MCP vs CLI vs API)

After 10 Interviews:

  1. Compile validation report (see template below)
  2. Make GO/PIVOT/STOP decision
  3. Discuss with @men (strategy call)
  4. Plan next steps (build MVP or pivot)

Validation Report Template

Use this after completing 10-15 interviews:


ICP Validation Report: Agentic Coding Developers

Date: ___
Interviews Completed: ___ / 15
Decision: [ ] GO [ ] PIVOT [ ] STOP


Executive Summary

ICP Validated: [ ] Yes [ ] No [ ] Partially

Key Findings (3 bullets):

  1. ___
  2. ___
  3. ___

Recommendation:

  • Build MVP for agentic coding developers (strong signals)
  • Pivot to different segment or positioning (weak signals)
  • Stop / rethink (no market fit)

Quantitative Results

| Metric | Target | Actual | Status |
| --- | --- | --- | --- |
| Willing to use | 60% | ___% | ✅ / ⚠️ / ❌ |
| Willing to pay $20+ | 40% | ___% | ✅ / ⚠️ / ❌ |
| Want early access | 30% | ___% | ✅ / ⚠️ / ❌ |

Total validated: ___ out of ___ interviews


Tool Distribution

| Tool | % Using | Notes |
| --- | --- | --- |
| Claude Code | ___% | |
| Cursor | ___% | |
| Aider | ___% | |
| Windsurf | ___% | |
| Gemini CLI | ___% | |
| Continue.dev | ___% | |
| Other | ___% | |
| Multiple tools | ___% | |

Insight: Which tool(s) should we prioritize for MVP?


Channel Preference Analysis

| Channel | 1st Choice | Would Use | Notes |
| --- | --- | --- | --- |
| MCP | ___% | ___% | |
| CLI | ___% | ___% | |
| REST API | ___% | ___% | |
| Prompt URLs | ___% | ___% | |
| Multiple | ___% | ___% | |

MVP Decision: Which channel(s) to build first?


Pain Point Analysis

Validated pain (in their words):

"[Quote from interview about main frustration]"

Severity: ___/10 average across interviews

Pain categories (frequency):

  • Context switching: ___%
  • Manual file management: ___%
  • Consistency issues: ___%
  • Prompt engineering: ___%
  • Time cost: ___%
  • Other: ___%

Current workarounds:

  1. ___
  2. ___
  3. ___

Why current solutions fail:


Competitive Intelligence

fal.ai

Tried: ___%

What they liked:

What they disliked:

Why they'd switch to Banatie:


Replicate

Tried: ___%

What they liked:

What they disliked:

Why they'd switch to Banatie:


Together.ai

Tried: ___%

What they liked:

What they disliked:

Why they'd switch to Banatie:


DIY Stacks (Zapier, custom code)

Tried: ___%

What they liked:

What they disliked:

Why they'd switch to Banatie:


Key Competitive Insights:

Gaps in competitor offerings:

  1. ___
  2. ___
  3. ___

Our unique value (validated by interviews):

  1. ___
  2. ___
  3. ___


Feature Priorities (From Interviews)

Must-have (for MVP):

  1. ___
  2. ___
  3. ___

Nice-to-have:

  1. ___
  2. ___

Don't care about:

  1. ___
  2. ___


Pricing Insights

Preferred model:

  • Credits (buy when needed) - ___% prefer
  • Subscription (monthly) - ___% prefer
  • Mixed - ___% open to either

Price sensitivity:

  • $10-20: ___% willing
  • $20-50: ___% willing
  • $50+: ___% willing

Budget availability:

  • Current tool spending: $___/month average
  • Available for Banatie: $___/month average

TCO awareness:

  • ___% understand TCO argument (time + cost)
  • ___% focus only on per-image price
  • ___% don't care about price (just want solution)

Messaging Insights

What resonates:

What doesn't resonate:

Language they use (exact quotes):

  • "___"
  • "___"
  • "___"

Positioning preference:

  • "Production image pipeline" (___%)
  • "Workflow automation" (___%)
  • "Developer-first tool" (___%)
  • Other: ___ (___%)

Red Flags Identified

Recurring objections:

  1. ___
  2. ___
  3. ___

Concerns to address:

  1. ___
  2. ___

Competitor lock-in:

  • How many are happy with current solution? ___%
  • How many are actively looking for alternative? ___%

Beta Waitlist

Interested users: ___
Contacts collected: ___
Tool preferences recorded: ___
Channel preferences recorded: ___
Referrals offered: ___


Next Steps

If GO:

  1. Finalize MVP scope based on:
    • Validated channel preferences (MCP + CLI + API priorities)
    • Must-have features (top 3)
    • Tool focus (which communities to target first)
  2. Set 4-6 week development timeline
  3. Prepare beta onboarding (tool-specific guides)
  4. Plan soft launch strategy (which Reddit/Discord first)

If PIVOT:

  1. Select alternative ICP or positioning
  2. Prepare new interview script
  3. Run 5-10 more interviews (deadline: 2 weeks)
  4. Final GO/STOP decision

If STOP:

  1. Document learnings (for future reference)
  2. Thank all interviewees
  3. Archive project (or pivot completely)
  4. Refocus on day job

Report owner: @men + Oleg
Date completed: ___
Next milestone: ___


Key Reminders for Oleg

✅ DO:

  • Be genuinely curious (not selling)
  • Listen 80%, talk 20%
  • Ask "why" follow-ups (especially about competitors)
  • Document channel preferences carefully
  • Note exact tool usage (Claude Code vs Cursor vs Aider)
  • Celebrate patterns (not individual opinions)

❌ DON'T:

  • Pitch the product
  • Lead the witness
  • Ignore competitive intel (very important now)
  • Skip channel preference questions
  • Assume MCP is what everyone wants
  • Give up too early (need 10+ interviews for patterns)

Remember: "The goal is to learn what integration they ACTUALLY want, not what we THINK they want. And to understand what competitors are doing right/wrong."

If the market says "fal.ai is good enough," that's a GIFT (saves you 6 months). But also ask WHY it's not perfect - that's where our opportunity is.


Document owner: @men Related docs:

  • 01-market-positioning-v3.md (updated positioning)
  • 07-validated-icp-ai-developers.md (needs update to "agentic coding")
  • 08-validation-plan.md (needs update with new channels)
  • 09-mvp-scope.md (needs update based on channel preferences)

Next action: Complete 10-15 interviews within 2 weeks