How Autopsy uses AMD MI300X's 192GB HBM3 memory to run 24 specialized agents across 4 investigation modes — enabling real-time cross-agent debate that's impossible on smaller GPUs.
ON A SINGLE H100: 80GB HBM3
ON AMD MI300X: 192GB HBM3
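As a rough illustration of why the capacity gap matters (the 70B parameter count and fp16 precision are our assumptions for the sketch, not a statement about Autopsy's deployment), a model's weight footprint is simply parameters × bytes per parameter:

```typescript
// Weight-memory estimate: params × bytes per parameter, in GiB.
// The 70B size and fp16 (2 bytes/param) are illustrative assumptions.
function weightGB(params: number, bytesPerParam: number): number {
  return (params * bytesPerParam) / 1024 ** 3;
}

const fp16Footprint = weightGB(70e9, 2);

// ~130 GiB of weights alone: over a single H100's 80GB HBM3,
// but inside the MI300X's 192GB with headroom left for KV cache.
console.log(fp16Footprint.toFixed(1), fp16Footprint < 80, fp16Footprint < 192);
```

The same arithmetic explains why multi-agent setups on an 80GB card force sharding or aggressive quantization, while 192GB leaves room for context and KV cache on one device.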
Investigation Modes: 4
Specialized Agents: 24
HBM3 Memory: 192GB
Token Context Window
Memory Bandwidth
Time to First Token
Params / Agent
Historical Cases in DB
Standard agents analyze what happened. Counterfactual agents must reason about what DIDN'T happen. This requires agents to:
1. Map what actually led to the failure before reasoning about alternatives.
2. Pinpoint the specific decision that created the divergence between the actual and alternate timelines.
3. Build a plausible alternate timeline from the decision point forward, grounded in evidence.
4. Find real companies that made the alternate decision; their outcomes become evidence.
5. Trace butterfly effects: the alternate decision creates cascading changes beyond the obvious.
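The steps above can be sketched as a typed pipeline. The interface and field names here are hypothetical, invented for illustration, not Autopsy's real code:

```typescript
// Hypothetical shape of a counterfactual case; the names are ours,
// not Autopsy's actual interfaces.
interface CounterfactualCase {
  failurePath: string[];       // 1. what actually led to the failure
  divergencePoint: string;     // 2. the decision that split the timelines
  alternateTimeline: string[]; // 3. plausible events after the alternate choice
  comparables: string[];       // 4. real companies that made the alternate decision
  cascades: string[];          // 5. butterfly effects beyond the obvious change
}

// Seed a case from the mapped failure path and chosen divergence point.
// Later stages (retrieval, debate) would populate the empty fields.
function buildCase(failurePath: string[], divergencePoint: string): CounterfactualCase {
  return {
    failurePath,
    divergencePoint,
    alternateTimeline: [],
    comparables: [],
    cascades: [],
  };
}

const c = buildCase(
  ["raised Series B", "tripled headcount", "missed revenue targets"],
  "tripled headcount"
);
console.log(c.divergencePoint); // "tripled headcount"
```

Modeling the case as explicit fields keeps each reasoning step auditable: an agent that skips grounding the alternate timeline leaves a visibly empty field rather than a vague narrative.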
Currently running: accounts/fireworks/models/llama-v3p3-70b-instruct
Configure via LLM_MODEL in .env.local; see the model options listed there.
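Resolving the model ID could look like the sketch below. The helper name is ours; the fallback string mirrors the default shown above, and the env map stands in for process.env, which Next.js populates from .env.local:

```typescript
// Default mirrors the model ID shown above.
const DEFAULT_MODEL = "accounts/fireworks/models/llama-v3p3-70b-instruct";

// Pick LLM_MODEL from an env map (e.g. process.env, which Next.js
// fills from .env.local), falling back to the default when unset or blank.
function resolveModel(env: Record<string, string | undefined>): string {
  const configured = env["LLM_MODEL"]?.trim();
  return configured && configured.length > 0 ? configured : DEFAULT_MODEL;
}

console.log(resolveModel({}));                            // default model
console.log(resolveModel({ LLM_MODEL: "my-model-id" })); // "my-model-id"
```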
Autopsy is open source. The agent prompts, orchestration logic, and debate methodology are all public. We believe better agent systems come from shared methodology, not walled gardens.
[ VIEW ON GITHUB ]