When it comes to AI and startups, most people treat AI in one of two ways: as an all-knowing oracle or as a tool for automating tasks. LEANSpark is built on a different premise entirely — AI as a co-founder. A thinking partner that can run experiments, learn from feedback, and iterate alongside you.

The Story

The LEANSpark story started three years before we wrote a single line of code. When ChatGPT launched in November 2022, the world went crazy. Services popped up overnight that could generate startup ideas, Lean Canvases, landing pages, and even starter code. People were already declaring the end of customer research and validation.

I was skeptical. An LLM cannot come up with a successful startup idea on its own with a one-shot prompt. Startups are inherently uncertain — going from 0 to 1 requires non-linear thinking, and that is not how current LLMs work. Challenge an LLM’s answer with a different point of view, and instead of holding its ground, it changes its answer to match yours. That does not generate breakthrough thinking — it creates an echo chamber.

So while everyone chased generative AI — asking machines for answers — I was more interested in predictive AI. Not oracles that tell you what to do, but tools and workflows that speed up work that is hard for humans but simple for machines. One of those things was transcribing customer interviews and finding patterns across them. We built Customer Forces AI in early 2023 to do exactly that, and it became a foundational tool in our demand validation playbook.

Then in early 2025, vibe coding agents changed the game again. Services like Lovable, Replit, and Claude Code could build real working apps autonomously using an agentic loop — a Plan-Code-Test cycle where agents figured out how to complete tasks on their own. I wondered: could we build a similar agentic loop for business model development? The Continuous Innovation Framework already follows a model-prioritize-test cycle. Could an agent help founders run through these steps?

We ran three parallel proof-of-concept approaches. Deterministic workflows were too rigid. But the agentic loop — giving an agent simple tools, a goal, and a way to measure success — was the breakthrough. I prototyped business model stress tests and watched the agent take a Lean Canvas, determine which tests to run, score them, and move to the next one. That derisked the solution in my mind. Dozens of validation recipes I had cataloged over the years could be turned into tools and plugged into this harness.
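The agentic loop described above — give an agent simple tools, a goal, and a way to measure success, then let it iterate — can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not LEANSpark's actual implementation: the names `StressTest` and `agentic_loop` are hypothetical, and the scoring is a placeholder where a real agent would invoke an LLM or a validation tool.

```python
from dataclasses import dataclass

@dataclass
class StressTest:
    """One validation recipe applied to a Lean Canvas section."""
    name: str
    section: str  # which canvas box this test probes

    def run(self, canvas: dict) -> float:
        # Placeholder scoring: a real agent would call an LLM or a
        # validation tool here. We score 1.0 if the section has
        # content, 0.0 if it is empty.
        return 1.0 if canvas.get(self.section) else 0.0

def agentic_loop(canvas: dict, tests: list[StressTest],
                 threshold: float = 0.5) -> dict:
    """Plan -> do -> check -> act: pick the next test, run it,
    record the score, and flag weak spots for another iteration."""
    results = {}
    for test in tests:                 # plan: next test in priority order
        score = test.run(canvas)       # do: run the stress test
        results[test.name] = score     # check: record the score
        if score < threshold:          # act: surface sections needing work
            print(f"Weak spot: {test.section} ({test.name}: {score:.1f})")
    return results

canvas = {"problem": "Founders guess instead of test", "solution": ""}
tests = [StressTest("problem-clarity", "problem"),
         StressTest("solution-fit", "solution")]
scores = agentic_loop(canvas, tests)
```

In practice, each cataloged validation recipe would become its own tool plugged into the harness, and the agent would decide the ordering and stopping criteria itself rather than iterating over a fixed list.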

With 80% confidence we could build it, I shifted to pre-selling it. I launched a Demo-Sell-Build campaign in October and started dogfooding LEANSpark to validate itself — a meta-recursive approach I have used with every product, from my first book to Lean Canvas.

Key Frameworks

  • Predictive AI vs. Generative AI: Rather than asking AI for answers (generative), use AI to accelerate analysis humans find slow — like pattern-matching across interview transcripts (predictive).
  • The Agentic Experiment Loop: Inspired by coding agents, this PDCA-based (Plan-Do-Check-Act) loop gives an AI agent tools, a goal, and success criteria, then lets it iterate autonomously through business model experiments.
  • Meta-Recursive Dogfooding: Using your own product to validate itself — becoming both the entrepreneur and the power user to speed up learning cycles.

Key Takeaways

  • The real opportunity in AI for startups is not generating ideas but accelerating the validation process through systematic experimentation.
  • Three years of false starts and proofs of concept were necessary to find the right approach — agentic loops, not deterministic workflows.
  • Dogfooding your own product creates a powerful feedback loop where you experience every friction point your customers will face.
  • Selling before building (Demo-Sell-Build) tests whether founders will pay for something before you invest months building it.
  • The best AI products are not oracles — they are thinking partners that help you run experiments and learn from the results.