Playbooks & Notes
Structured learning systems, tool deep dives, and frameworks I've developed. These are living documents that evolve as I learn.
AI Week Learning Playbook
Overview
The AI Week Learning Playbook is a structured curriculum I created to guide my transition into AI Product Management. It's organized by topic areas and includes resources, exercises, and project ideas.
Structure:
- Foundations (AI concepts, terminology, landscape)
- Tools (deep dives on specific tools)
- Workflows (how to structure AI-powered processes)
- Evaluation (how to assess AI outputs and systems)
- Application (real projects and case studies)
Status: Living document, updated regularly
Access: Available in my notes system, key sections shared here
AI PM Bootcamp v1
A self-directed learning program I designed to build AI Product Management skills systematically.
Structure
Modules:
- AI Fundamentals
  - Understanding LLMs, embeddings, agents
  - Key concepts: prompting, RAG, fine-tuning
  - The AI product landscape
- Tool Mastery
  - Deep dives on core tools (Cursor, n8n, Lindy, Napkin)
  - When to use what tool
  - Tool evaluation frameworks
- Workflow Design
  - Structuring agentic workflows
  - Human-in-the-loop patterns
  - Error handling and reliability
- Product Thinking for AI
  - AI product design principles
  - User experience for AI products
  - Evaluation and metrics
- Building Real Things
  - Project case studies
  - Shipping and iterating
  - Learning from failures
Format: Self-paced, project-based, documented
Timeline: Ongoing, iterating based on learnings
Tool Deep Dives
n8n
What it is: Workflow automation platform with strong AI integration capabilities.
Why I use it: Excellent for orchestrating multi-step AI workflows, connecting APIs, and building reliable automation systems.
Key Learnings:
- Workflow design matters: clear structure, error handling, logging (see the sketch after this list)
- AI nodes work well for transformation and decision-making
- Testing workflows with real data is critical
- Documentation of workflows is as important as the workflows themselves
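To make that first learning concrete, here is a minimal Python sketch of the structure-plus-error-handling-plus-logging pattern. It is not n8n configuration; the step names (`fetch_source`, `transform_with_ai`, `publish`) and retry settings are hypothetical placeholders for whatever nodes a real workflow would contain.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("workflow")

def run_step(name, func, payload, retries=2, backoff_seconds=5):
    """Run one workflow step with retries and logging, returning its output."""
    for attempt in range(1, retries + 2):
        try:
            log.info("step=%s attempt=%d starting", name, attempt)
            result = func(payload)
            log.info("step=%s attempt=%d succeeded", name, attempt)
            return result
        except Exception as exc:
            log.warning("step=%s attempt=%d failed: %s", name, attempt, exc)
            if attempt > retries:
                raise
            time.sleep(backoff_seconds)

# Hypothetical steps; each takes the previous step's output and returns new data.
def fetch_source(payload): ...
def transform_with_ai(payload): ...
def publish(payload): ...

def run_pipeline(initial_payload):
    data = run_step("fetch", fetch_source, initial_payload)
    data = run_step("transform", transform_with_ai, data)
    return run_step("publish", publish, data)
```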
Use Cases:
- Agentic research workflows
- Content generation pipelines
- Data processing and transformation
- Multi-step automation tasks
Resources: n8n documentation, community workflows, my own experiments
Lindy
What it is: AI agent platform for building autonomous workflows.
Why I use it: Good for agentic browsing, web automation, and tasks that require autonomous decision-making.
Key Learnings:
- Agents need very clear instructions and success criteria
- Testing with real scenarios reveals edge cases quickly
- Human validation is still needed for critical outputs (a minimal review gate is sketched below)
- Agent behavior can be unpredictable; monitoring is essential
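As an illustration of the "clear success criteria plus human validation" point, here is a small Python sketch of a review gate. The `AgentResult` shape, the confidence threshold, and the `input()`-based review step are all hypothetical; Lindy's own API is not shown.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    task: str
    output: str
    confidence: float  # 0.0-1.0, however the run's confidence is estimated

def meets_success_criteria(result: AgentResult) -> bool:
    """Cheap automatic checks that run before a human ever sees the output."""
    return bool(result.output.strip()) and result.confidence >= 0.8

def human_review(result: AgentResult) -> bool:
    """Placeholder for a real review step (Slack message, ticket, shared inbox)."""
    answer = input(f"Approve output for '{result.task}'? [y/N] ")
    return answer.strip().lower() == "y"

def accept(result: AgentResult, critical: bool = False) -> bool:
    """Auto-accept only non-critical outputs that pass the automatic checks."""
    if critical or not meets_success_criteria(result):
        return human_review(result)
    return True
```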
Use Cases:
- Web research and information gathering
- Automated form filling and data entry
- Multi-step web-based tasks
Resources: Lindy documentation, example agents, my experiments
Cursor
What it is: AI-powered code editor built on VS Code.
Why I use it: Transforms coding from writing to editing and reviewing. Excellent for rapid iteration and learning.
Key Learnings:
- AI suggestions are best when you have clear intent
- Review all AI-generated code carefully
- Use AI for boilerplate, refactoring, and debugging
- Maintain understanding of what the code does
Use Cases:
- Rapid prototyping
- Code refactoring and improvement
- Learning new languages and frameworks
- Debugging and problem-solving
Resources: Cursor documentation, best practices guides, my coding workflows
Napkin
What it is: AI-powered note-taking and knowledge management tool.
Why I use it: Excellent for capturing ideas, synthesizing information, and building knowledge bases.
Key Learnings:
- Structure matters: clear organization improves AI understanding
- Regular review and refinement keeps notes useful
- Export capabilities are important for portability
- AI synthesis is powerful but needs human validation
Use Cases:
- Learning note-taking
- Research synthesis
- Knowledge base building
- Idea development
Resources: Napkin documentation, note-taking best practices
Notes on Self-Prompting
Principles
1. Be Specific. Vague prompts produce vague outputs. Define the task, context, constraints, and success criteria clearly.
2. Provide Structure. Give examples, templates, or frameworks. Show the AI what good looks like.
3. Iterate and Refine. First prompts are rarely perfect. Test, evaluate, refine. Keep what works.
4. Set Constraints. Define boundaries, formats, tone, length. Constraints help focus outputs.
5. Request Reasoning. Ask the AI to explain its thinking. This helps you understand and improve the process.
Common Patterns
Research Prompt Pattern:
Context: [situation]
Question: [what I need to know]
Constraints: [limitations]
Format: [how I want the output]
Success criteria: [what good looks like]
Analysis Prompt Pattern:
Input: [what to analyze]
Focus: [what aspects to consider]
Framework: [how to structure the analysis]
Output: [desired format]
Iteration Prompt Pattern:
Previous attempt: [what was tried]
What worked: [successes]
What didn't: [failures]
Refinement: [what to change]
New attempt: [revised approach]
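These patterns also work well as reusable templates. A minimal Python sketch of the research pattern, with purely illustrative example values:

```python
from string import Template

RESEARCH_PROMPT = Template("""\
Context: $context
Question: $question
Constraints: $constraints
Format: $format
Success criteria: $success_criteria
""")

prompt = RESEARCH_PROMPT.substitute(
    context="Choosing a workflow automation tool for a small product team",
    question="How do the candidate tools differ for agentic research workflows?",
    constraints="Only consider features on current entry-level plans",
    format="A short comparison table followed by a recommendation",
    success_criteria="I can decide which tool to prototype with this week",
)
print(prompt)
```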
AI Evaluation Frameworks
Output Quality Assessment
Criteria:
1. Accuracy — Is the information correct?
2. Relevance — Does it address the question?
3. Completeness — Are all aspects covered?
4. Clarity — Is it well-structured and understandable?
5. Usefulness — Can I actually use this?
Process:
- Review against criteria (a simple weighted-rubric sketch follows this list)
- Test in real context
- Identify gaps and errors
- Refine prompts based on findings
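One way to make this review repeatable is a simple weighted rubric. The weights and ratings below are illustrative, not a fixed standard:

```python
# Weighted 1-5 rubric over the five criteria above; weights are illustrative.
CRITERIA_WEIGHTS = {
    "accuracy": 0.30,
    "relevance": 0.25,
    "completeness": 0.20,
    "clarity": 0.15,
    "usefulness": 0.10,
}

def quality_score(ratings: dict) -> float:
    """Combine per-criterion ratings (1-5) into one weighted score out of 5."""
    return sum(CRITERIA_WEIGHTS[name] * ratings[name] for name in CRITERIA_WEIGHTS)

ratings = {"accuracy": 4, "relevance": 5, "completeness": 3, "clarity": 4, "usefulness": 4}
print(f"Quality score: {quality_score(ratings):.2f} / 5")
```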
Workflow Reliability Assessment
Criteria:
1. Success Rate — How often does it work?
2. Error Handling — How does it handle failures?
3. Consistency — Are outputs reliable?
4. Speed — Is it fast enough?
5. Maintainability — Can I update and improve it?
Process:
- Run multiple test cases (a small test harness is sketched below)
- Monitor error rates
- Document failure modes
- Iterate on design
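A small test harness captures the same process in code. `workflow` stands in for any callable workflow trigger; the cases and failure handling are illustrative:

```python
def run_test_cases(workflow, cases):
    """Run a workflow over recorded test cases, report success rate and failure modes."""
    failures = []
    for case in cases:
        try:
            workflow(case)
        except Exception as exc:
            failures.append((case, repr(exc)))
    total = len(cases)
    passed = total - len(failures)
    print(f"Success rate: {passed / total:.0%} ({passed}/{total})" if total else "No cases")
    for case, error in failures:
        print(f"  FAILED {case!r}: {error}")
    return failures
```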
Tool Evaluation Framework
Criteria:
1. Capability — What can it do?
2. Ease of Use — How hard is it to learn?
3. Reliability — Does it work consistently?
4. Integration — How well does it connect with other tools?
5. Cost — Is it worth the price?
6. Community — Is there support and documentation?
Process:
- Test with real use cases
- Compare with alternatives (a weighted decision matrix is sketched below)
- Document pros and cons
- Make informed decisions
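The comparison step can be kept honest with a weighted decision matrix. The scores below are placeholders, not actual evaluations of any tool:

```python
# Illustrative weights and 1-5 scores; swap in real notes from hands-on testing.
WEIGHTS = {"capability": 0.25, "ease_of_use": 0.20, "reliability": 0.20,
           "integration": 0.15, "cost": 0.10, "community": 0.10}

scores = {
    "tool_a": {"capability": 4, "ease_of_use": 3, "reliability": 4,
               "integration": 5, "cost": 3, "community": 4},
    "tool_b": {"capability": 5, "ease_of_use": 4, "reliability": 3,
               "integration": 3, "cost": 4, "community": 3},
}

for tool, rating in sorted(scores.items()):
    total = sum(WEIGHTS[criterion] * rating[criterion] for criterion in WEIGHTS)
    print(f"{tool}: {total:.2f} / 5")
```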
Learning Systems Principles
1. Structure Enables Speed. Clear organization helps me find information and apply it quickly.
2. Documentation is Learning. Writing things down helps me understand and remember.
3. Iteration Improves Everything. Learning systems themselves should evolve based on what works.
4. Real Use Cases Drive Learning. I learn best when solving real problems, not just studying theory.
5. Share to Solidify. Explaining what I've learned helps me understand it better.
Resources and References
Books and Articles:
- Various AI product management resources
- Tool-specific documentation
- Community discussions and case studies
Communities:
- AI product management forums
- Tool-specific communities
- Learning groups and cohorts
Tools:
- Note-taking systems (Napkin, Markdown)
- Documentation platforms (MkDocs, GitHub)
- Experimentation environments (n8n, local setups)
These playbooks are living documents. They evolve as I learn, build, and iterate. If you're developing similar systems or want to discuss approaches, let's connect.