Automating Portfolio Writing with a 4-Agent Pipeline
LLM · Agent · Automation · Portfolio
Problem
Writing portfolio case studies is time-consuming:
- Technical facts must be accurately documented
- Engineering tone required, no marketing fluff
- Every claim needs evidence (code, test results)
- Consistent structure and quality across projects
Writing manually invites exaggeration; delegating everything to a single LLM call invites hallucination.
Solution: 4-Agent Pipeline
Four agents with separated roles process sequentially:
User Notes → Extractor → Writer → Reviewer → Editor → Final Draft
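A minimal sketch of that hand-off, assuming each agent is one LLM call behind a hypothetical `call(agent, payload)` function (not the project's actual API):

```python
def run_pipeline(call, user_notes: str) -> str:
    """Chain the four agents; each output is the next agent's only input."""
    facts = call("extractor", user_notes)   # notes -> Facts JSON
    draft = call("writer", facts)           # Facts JSON -> markdown draft
    review = call("reviewer", draft)        # draft -> rubric review + fix list
    return call("editor", draft + "\n\n" + review)  # apply Must-Fix items
```

Passing the Editor only the existing draft plus the review keeps it from inventing content: it never sees the raw notes.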
1. Extractor
Extracts only verifiable facts from input notes.
- Output: Structured JSON (claims, numbers, design_decisions, validation)
- Confidence level assigned to each item (high/medium/low)
- Uncertain content is isolated in an `unknowns` array
- Claims without evidence are not extracted
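As an illustration, the Facts JSON and the no-evidence rule might look like this; the field names follow the categories above, but the exact schema is an assumption:

```python
# Hypothetical Facts JSON shape (as a Python dict for illustration).
facts = {
    "claims": [
        {"text": "Control loop runs at 1 kHz", "confidence": "high",
         "evidence": "scheduler config"},
        {"text": "Latency improved ~30%", "confidence": "low",
         "evidence": None},  # no evidence -> must not survive extraction
    ],
    "numbers": [],
    "design_decisions": [],
    "validation": [],
    "unknowns": [],
}

def quarantine_unsupported(facts: dict) -> dict:
    """Move claims without evidence into the unknowns array."""
    kept, unknowns = [], list(facts["unknowns"])
    for claim in facts["claims"]:
        (kept if claim["evidence"] else unknowns).append(claim)
    return {**facts, "claims": kept, "unknowns": unknowns}
```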
2. Writer
Writes markdown case study using only the Facts JSON.
- Adding content not in Facts JSON is prohibited
- Follows fixed structure: Context → Problem → Approach → Results → Takeaway
- Engineering tone, bullet-point focused
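One way to enforce the only-Facts-JSON rule mechanically is a grounding check on the Writer's output; this helper and its regex are a sketch, not part of the pipeline's prompts:

```python
import re

def numbers_grounded(draft_md: str, facts_numbers: list[str]) -> bool:
    """Every numeric token in the draft must appear in the Facts JSON."""
    found = re.findall(r"\d+(?:\.\d+)?", draft_md)
    return all(n in facts_numbers for n in found)
```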
3. Reviewer
Scores against a 20-point rubric and issues fix instructions.
Criteria (2 points each):
- One-liner clarity
- Context specificity
- Problem statement
- Key insight sharpness
- Modeling/feature design
- Learning/stabilization logic
- Tradeoffs/risks
- Validation method
- Result credibility
- Writing quality
Output:
- Score with item-by-item breakdown
- Must-Fix list (max 5)
- Unsupported Claims list
- Ambiguous Metrics list
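The 20-point rubric reduces to a simple sum over the ten criteria; the shorthand keys below are assumptions standing in for the criteria listed above:

```python
# Ten criteria x 2 points each = 20-point rubric.
CRITERIA = [
    "one_liner", "context", "problem", "insight", "modeling",
    "learning", "tradeoffs", "validation", "credibility", "writing",
]
MAX_PER_ITEM = 2

def total_score(breakdown: dict) -> int:
    """Sum a per-criterion breakdown, rejecting missing or out-of-range items."""
    assert set(breakdown) == set(CRITERIA)
    assert all(0 <= v <= MAX_PER_ITEM for v in breakdown.values())
    return sum(breakdown.values())
```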
4. Editor
Revises document according to Reviewer’s instructions.
- Addresses Must-Fix items only
- Removes or rewrites unsupported claims
- Adding new content is prohibited
Directory Structure
portfolio/
├── agents/ # Agent prompts
│ ├── extractor.md
│ ├── writer.md
│ ├── reviewer.md
│ └── editor.md
├── system/ # Rubric, style guide
├── inputs/ # Raw user notes
└── runs/ # Per-project outputs
└── {project}/
├── facts_v1.json
├── draft_v1.md
├── review_v1.md
└── ...
Quality Criteria
- 18/20 minimum score to pass
- 0 unsupported claims
- All metrics must specify measurement method
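The three criteria can be expressed as a single gate function; the metric dict shape is an assumption:

```python
def passes(score: int, unsupported_claims: int, metrics: list[dict]) -> bool:
    """Quality gate: >=18/20, zero unsupported claims, every metric has a method."""
    return (score >= 18
            and unsupported_claims == 0
            and all(m.get("method") for m in metrics))
```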
Results
Projects currently using this pipeline:
- Torque Control
- Lidar-Odom Fusion
- Lidar Noise Filtering
- Hardware Architecture
- S-Curve Profiling
Each project goes through iterative review-edit cycles until quality criteria are met.
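A sketch of that cycle, assuming the Reviewer returns a score and an unsupported-claims list (the payload shapes and the `call` function are assumptions):

```python
def review_edit_loop(call, draft: str, max_rounds: int = 3) -> str:
    """Alternate Reviewer and Editor until the quality gate passes."""
    for _ in range(max_rounds):
        review = call("reviewer", draft)
        if review["score"] >= 18 and not review["unsupported_claims"]:
            break  # quality criteria met
        draft = call("editor", {"draft": draft, "review": review})
    return draft
```

Capping the rounds keeps a stubborn draft from looping forever; a draft that never passes is a signal the input notes lack evidence, not that the Editor needs more tries.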
Key Takeaway
- Role separation is key: one agent doing everything makes quality control impossible
- Rubric-based evaluation enables objective quality measurement
- Facts JSON serves as single source of truth
- Repeatable process ensures consistent quality