Autopsy is a forensic AI tool that investigates why startups, products, and companies failed. Instead of a single chatbot guessing at a postmortem, Autopsy deploys six specialist agents — each with a distinct investigative lens — who research in parallel, debate their findings, and produce a synthesized verdict backed by live web sources.
The result is not an opinion. It is a structured forensic report with a primary cause of death, an evidence trail, counterfactual analysis (what would have saved it), and actionable lessons for builders. Every claim is sourced. Every agent has a known bias. The disagreements are surfaced, not hidden.
Most postmortems are written by the people who failed — retrospective rationalization dressed up as insight. Autopsy is different. Six agents with six biases, forced to argue, forced to confront each other's blind spots. The truth lives in the disagreement.
Searches for market timing, competitive dynamics, demand signals, and TAM/SAM misalignments.
Known bias: Tends to over-weight timing and under-weight execution quality.
Investigates team decisions, hiring patterns, pivots that didn't happen, and execution velocity.
Known bias: Often blames leadership before acknowledging market headwinds.
Traces burn rate, funding history, unit economics, runway math, and business model viability.
Known bias: Will flag broken unit economics even when traction is strong.
Scrapes reviews, Reddit threads, social sentiment, and churn signals.
Known bias: Can overweight early-adopter complaints and miss mainstream adoption.
Analyzes technical architecture, scalability decisions, product trade-offs, and tech debt accumulation.
Known bias: Frequently concludes "the tech was fine, the market was wrong."
Pattern-matches against historical failures with similar DNA. Draws from case studies and longitudinal data.
Known bias: May over-fit to historical analogies and miss novel category risks.
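The fan-out described above — six specialist agents researching the same case in parallel before their findings feed a debate round — can be sketched with plain asyncio. The role names and the `investigate` stub are illustrative assumptions, not Autopsy's actual API:

```python
import asyncio

# Hypothetical role labels matching the six investigative lenses above;
# the real system's agent names and interfaces are not shown in this doc.
ROLES = [
    "market",      # timing, demand signals, TAM/SAM
    "execution",   # team decisions, hiring, pivots
    "finance",     # burn rate, unit economics, runway
    "customer",    # reviews, sentiment, churn
    "product",     # architecture, trade-offs, tech debt
    "historian",   # pattern-matching against past failures
]

async def investigate(role: str, company: str) -> dict:
    """Stand-in for one agent's research pass (LLM call + web search)."""
    await asyncio.sleep(0)  # placeholder for real network I/O
    return {"role": role, "finding": f"{role} analysis of {company}"}

async def run_case(company: str) -> list[dict]:
    # All six agents run concurrently; their findings would then be
    # cross-examined in a debate round and synthesized into the verdict.
    return list(await asyncio.gather(
        *(investigate(role, company) for role in ROLES)
    ))

findings = asyncio.run(run_case("ExampleCo"))
print(len(findings))  # 6
```

The key structural point is that no agent sees another's conclusion before forming its own — the disagreement is generated first, then surfaced.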
Autopsy was built for the AMD Developer Hackathon 2026, a global competition focused on leveraging AMD hardware for AI-driven applications. The project uses Kimi K2.6 via Fireworks AI for agent reasoning and plans a migration to self-hosted inference on AMD MI300X GPUs for cost-efficient, high-throughput postmortem generation.
The MI300X advantage is real: 192GB of HBM3 memory enables all six agents to run in parallel on a single GPU. On an H100 (80GB), you'd need three sequential rounds. On the MI300X, the agents debate in real time. The hardware doesn't just make it faster — it makes the debate architecture possible.
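The round count follows from simple capacity math. Assuming roughly 32 GB per agent (an assumption chosen to be consistent with the figures above; real footprints depend on model size and KV cache), the arithmetic works out as:

```python
import math

# ~32 GB per agent is an illustrative assumption, not a measured figure.
PER_AGENT_GB = 32

def debate_rounds(gpu_mem_gb: int, agents: int = 6) -> int:
    """Sequential rounds needed to run all agents on one GPU."""
    concurrent = gpu_mem_gb // PER_AGENT_GB  # agents that fit at once
    return math.ceil(agents / concurrent)

print(debate_rounds(192))  # MI300X: 1 round — all six in parallel
print(debate_rounds(80))   # H100: 3 rounds
```

Under this assumption the MI300X fits all six agents concurrently, while an 80 GB card fits two at a time and therefore needs three passes.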