About
Vol St. is one person.
The process is what scales.
No team, no handoff. I scope the problem, write the code, run the search, and sign the report. The person you talk to is the person doing the work.
Tyler Prahm - quantitative researcher. Volatility derivatives, market making, and production trading systems across traditional and crypto markets.
Published research and built institutional derivatives tooling at Genesis Volatility / Amberdata - serving prop firms and hedge funds on Deribit. SPX 0DTE desk experience at Dipsea Capital. BTC options pricing on Deribit vol surfaces.

I kept seeing the same thing. Teams find signals that look predictive, deploy capital, edge evaporates. Everyone blames the model. Wrong parameters, wrong lookback, wrong architecture. It was never the model.
The question was wrong. The target got chosen to match the data instead of the trade. And nobody was pressure-testing whether the explanation was specific enough to be wrong. So you'd get these beautiful backtests - great Sharpe, smooth equity curve - and then they fall apart live. And everyone's confused. But it's obvious if you look upstream. They optimized the scorecard. They never tested the bet.
Infrastructure was part of it too - teams pulling from Bloomberg or Deribit with no database, notebooks that can't manage a strike chain or a roll. I'd built tooling for this at Genesis. But that was never the real problem. You can have perfect data pipelines feeding the wrong question.
That's what Vol St. is. I start with the question.
Principles
Honest negatives
If the signals don't predict your target, I tell you. A true negative saves more money than a false positive.
Out-of-sample or it didn't happen
No result counts unless it survives walk-forward validation on data the model never trained on.
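The idea behind walk-forward validation can be sketched in a few lines: fit only on past data, score only on the next unseen slice, then roll forward. This is a hypothetical illustration, not Vol St.'s actual validation harness; `walk_forward`, `train_size`, and `test_size` are illustrative names.

```python
# Hypothetical sketch of walk-forward validation: a rolling window where
# the model would be fit on past data and scored only on the next,
# strictly later slice. Fitting/scoring itself is left out; this shows
# only the time-ordered splitting that keeps test data out of training.

def walk_forward(series, train_size, test_size):
    """Yield (train, test) slices that never overlap in time."""
    splits = []
    start = 0
    while start + train_size + test_size <= len(series):
        train = series[start : start + train_size]
        test = series[start + train_size : start + train_size + test_size]
        splits.append((train, test))
        start += test_size  # roll forward; each test slice is scored once
    return splits

# Toy example: 10 observations, train on 4, test on 2, rolling forward.
for train, test in walk_forward(list(range(10)), train_size=4, test_size=2):
    print(train, "->", test)
```

Every test slice sits strictly after its training window, so a result that survives all the splits was earned on data the model never saw.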
Full transparency
Every dataset, assumption, and parameter is documented and delivered. Research runs on my infrastructure; production systems deploy to yours. You own the work - not a black box.
What brings you here?
Start a conversation