Resume Helper V3 Retrospective: Product Methodology for AI Resume Tools
Resume Helper is now on its third version.
V1: 3 months live, 0 paying users. V2: 6 months live, 200 registered users, 20 paying users. V3: currently live, with stable monthly revenue of ~3,000 RMB.
This is a retrospective on why V1 failed, why V2 stayed lukewarm, and why V3 survived.
Why V1 Failed: Building Features Users Didn’t Need
V1’s mistake was classic: I thought I understood users.
What I built:
- AI resume scoring (0-100)
- Resume template library
- One-click formatting
- Competitor analysis
What users actually needed:
- Help me write a resume (they don’t know how)
- Help me fix my resume (they don’t realize how bad it is)
V1’s features didn’t solve users’ real pain points. The resume scoring feature assumed users wanted a reference point because they couldn’t judge how good their own writing was, but nobody cares whether their resume scores 60 or 70. They care about whether they can get a job.
Lesson: Product features must align with users’ mental model, not the product manager’s logical model.
Why V2 Didn’t Take Off: PMF Was Right, But Retention Was Poor
V2 made a critical pivot: from “scoring” to “optimization.”
User uploads resume → AI analyzes issues → Provides modification suggestions → User makes changes themselves.
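In implementation terms, V2 was a single one-way call to the model. Here is a minimal sketch of that flow, assuming an OpenAI-style chat API; the model id, prompt wording, and `analyze_resume` name are illustrative, not the actual code:

```python
# Minimal sketch of the V2 flow, assuming an OpenAI-style chat API.
# The model id, prompt wording, and function name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def analyze_resume(resume_text: str) -> str:
    """One-way call: resume in, suggestions out, no follow-up."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You review resumes. List concrete issues and a "
                        "modification suggestion for each."},
            {"role": "user", "content": resume_text},
        ],
    )
    return response.choices[0].message.content
```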
This flow solved a real pain point, but had two problems:
Problem 1: Optimization suggestions were too generic. Feedback like “Your resume lacks quantifiable results” tells users something they already know; what they don’t know is how to fix it.
Problem 2: No feedback after users made changes. Once a user revised their resume, nobody told them whether the changes were good or what to do next.
Result: users came, used it once, and never returned.
Lesson: Users don’t need a “one-time service”—they need a “path of continuous improvement.”
How V3 Did It: Vertical Focus + Closed Loop
V3 made two critical changes:
Change 1: Focus on specific job roles. V3 stopped being a generic resume tool and specialized in optimizing resumes for technical roles. The reason: technical resumes have clear evaluation dimensions (tech stack, project experience, quantifiable results), which lets the AI give specific, actionable suggestions.
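To make “clear evaluation dimensions” concrete, here is a hedged sketch of a role-specific rubric folded into a prompt. The three dimensions come from the paragraph above; the class, field names, and example checks are assumptions, not the shipped schema:

```python
# Illustrative sketch of a role-specific rubric. The three dimensions
# come from the text above; field names and checks are assumptions.
from dataclasses import dataclass, field

@dataclass
class RubricDimension:
    name: str
    checks: list[str] = field(default_factory=list)

TECH_RESUME_RUBRIC = [
    RubricDimension("tech stack",
        ["depth and versions stated, not just a keyword list"]),
    RubricDimension("project experience",
        ["personal contribution separated from team output"]),
    RubricDimension("quantifiable results",
        ["each project ends with a measurable outcome"]),
]

def rubric_prompt() -> str:
    """Fold the rubric into a prompt so suggestions stay specific."""
    lines = ["Evaluate this technical resume on these dimensions:"]
    for dim in TECH_RESUME_RUBRIC:
        lines.append(f"- {dim.name}: " + "; ".join(dim.checks))
    return "\n".join(lines)
```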
Change 2: Added a “comparison” feature. After making revisions, users upload the new resume, and the AI compares the old and new versions and reports what improved. Seeing their own improvement is satisfying, and it creates a usage loop.
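A sketch of what the comparison step might look like, in the same style of API call as the earlier snippet; the prompt wording and the `compare_versions` helper are hypothetical:

```python
# Sketch of the old-vs-new comparison step. The prompt wording and
# compare_versions name are assumptions, not the shipped code.
from openai import OpenAI

client = OpenAI()

def compare_versions(old_resume: str, new_resume: str) -> str:
    """Return improvement feedback on a revised resume."""
    prompt = (
        "Compare the OLD and NEW versions of this resume. "
        "Say what improved, what regressed, and name the single "
        "next change with the highest payoff.\n\n"
        f"OLD:\n{old_resume}\n\nNEW:\n{new_resume}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```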
Results:
- Monthly registered user growth: 15%
- Paid conversion rate increased from V2’s 8% to 23%
- Average usage sessions per user increased from V2’s 1.2 to 4.3
Lesson: Verticalization is the survival strategy for indie tools. Generic tools can’t beat big companies—vertical niches build moats.
Core Methodology for AI-Powered Tool Products
After three versions and many pitfalls, I’ve distilled a simple framework:
Step 1: Find a “painful enough” scenario. “Has demand” isn’t enough; the problem has to hurt. Users pay for problems that hurt badly enough, not for problems they can live with.
Step 2: Validate PMF (Product-Market Fit). The PMF test: do users come back on their own? If your users use it once and leave, you haven’t found PMF (see the sketch after Step 4).
Step 3: Build “closed loops,” not “features.” Every feature should answer one question: does it bring users closer to “the problem being solved”?
Step 4: Go vertical and build moats. Generic tools rely on traffic; vertical tools rely on repeat purchases. Independent developers should choose the latter.
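The Step 2 test (“do users come back on their own?”) is measurable. One illustrative way to check it, assuming a simple (user_id, session_date) event log; the log shape and the more-than-one-session threshold are my assumptions:

```python
# One way to operationalize "do users come back on their own":
# the share of users with more than one session. The event-log
# shape is an assumption; plug in your own analytics export.
from collections import Counter

def returning_user_rate(session_log: list[tuple[str, str]]) -> float:
    """session_log: (user_id, session_date) pairs, one per session."""
    sessions_per_user = Counter(user for user, _ in session_log)
    if not sessions_per_user:
        return 0.0
    returning = sum(1 for n in sessions_per_user.values() if n > 1)
    return returning / len(sessions_per_user)

# Example: two of three users came back at least once.
log = [("u1", "2024-01-01"), ("u1", "2024-01-08"),
       ("u2", "2024-01-02"),
       ("u3", "2024-01-03"), ("u3", "2024-01-05")]
print(round(returning_user_rate(log), 2))  # 0.67
```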
What’s Next
V3 isn’t the end.
The next version will add an “AI mock interview” feature: based on the resume and the target position, the AI generates interview questions so users can practice.
This is what users ultimately need: not a pretty resume, but a resume that lands interviews.