
The Traversal Podcast — Episode 1: SpiderRock: Beyond Options

Script v7 (~40 minutes at 1.2x speed)

Format: Organic tech analysis — two distinct voices, real disagreements
Host: Cedric Hurst — founder of Spantree (division of Trifork), fintech practitioner
Co-host: Evie — AI agent, technology strategist, Cedric's copilot
Target Listener: SpiderRock leadership + broader fintech/tech audience


[INTRO STING: Future_Forward_Sting_traversal_intro.wav]

[COLD OPEN]

CEDRIC: Welcome to The Traversal Podcast, a series of AI-assisted conversations about technology and the changing world around us. I'm your host, Cedric Hurst. I'm a software engineer by trade and run a consulting company called Spantree, which is now part of the Trifork family. With me is my cohost, Evie.

EVIE: Hey there. I'm Evie. I'm Cedric's AI agent, and I spend my days doing research, writing code, and — increasingly — having opinions about things.

CEDRIC: Quick note before we start — this is a private preview. We're sharing this directly with George Papa, CEO of SpiderRock, and a small group at Spantree and Trifork who helped shape this format. If you have access, please don't distribute it further without permission from George or me. Anyone mentioned can request redactions, so the content may evolve. Our voices are AI-generated via ElevenLabs, but the ideas and analysis are very real. We'll talk more about how we made this at the end.

CEDRIC: The subject today came out of conversations I've had with people from SpiderRock. SpiderRock is one of those companies that people outside institutional options trading have never heard of, and Spantree has done engineering work for them for nearly half a decade. We've helped them build trading interfaces, data lakes, and GPU-accelerated compute infrastructure. So we're not neutral observers here. We have context that comes from that proximity, and the direct relationship may introduce some bias. I think that makes the analysis more interesting, not less — but you should know it going in.

EVIE: The strategic analysis is ours — SpiderRock didn't solicit this material. It started as a concept Cedric proposed to their CEO as a hypothetical. The technical details I'll reference come from public sources — their website, regulatory filings, industry research, published case studies, and what Cedric's shared with me from his experience working with them.

CEDRIC: I've been in tech for a long time, and inside that world? The platform they've built over eight generations is remarkable. Around thirty-five engineers running a hundred and eighty server processes. And they're at a really interesting inflection point right now. So to the folks at SpiderRock, if you're listening — this one's for you.

EVIE: If I had to guess, their customers are probably telling them — "We love what you've built for options. Can you build it for everything else?" Commodities. Currencies. Fixed income. Equities. That's the pull signal every product team dreams about.

CEDRIC: That's the holy grail, right? When your customers are begging you to take their money in new ways. But actually doing it — that's where it gets complicated.


[WHAT MAKES SPIDERROCK SPECIAL]

EVIE: So let me paint the picture for anyone who isn't deep in the options world. An option is basically a contract that gives you the right to buy or sell something at a specific price by a specific date. The tricky part is figuring out what that contract is worth right now — because it depends on how much the underlying stock or asset might move between now and expiration. That expected movement is called implied volatility, and pricing it accurately is the whole game.
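
[SHOW NOTES: a minimal sketch of what "implied volatility" means in practice, assuming a plain European call with no dividends. It inverts the Black-Scholes formula to find the volatility that reproduces an observed market price. All numbers are made up; real pricing engines handle dividends, American exercise, and far messier edge cases.]

```python
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm


def bs_call_price(spot, strike, t, rate, vol):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm.cdf(d1) - strike * exp(-rate * t) * norm.cdf(d2)


def implied_vol(market_price, spot, strike, t, rate):
    """Root-find the volatility that makes the model price match the market price."""
    return brentq(lambda v: bs_call_price(spot, strike, t, rate, v) - market_price, 1e-4, 5.0)


# Example: a three-month call struck at 105 on a 100-dollar stock, trading at 2.50
print(round(implied_vol(2.50, 100.0, 105.0, 0.25, 0.05), 4))
```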

CEDRIC: And this is where SpiderRock's core technology comes in. Their crown jewel is what's called a volatility surface — think of it like a three-dimensional map where one axis is the strike price of the option, another axis is how far out the expiration date is, and the height of the surface tells you the market's expectation of volatility at that point. It's a living, breathing model that's constantly recalibrating as new trades come in.

EVIE: The math underneath uses smooth curves — spline interpolation — to connect the data points so there aren't any gaps or inconsistencies that traders could exploit. The overall shape of the surface recalibrates on the order of every minute or so, and the center of the surface — the "at-the-money" level, which is the price closest to where the stock is trading right now — that updates multiple times per second. Everything downstream depends on this. Execution algorithms, risk calculations, pricing models. It's not a feature of the platform. It is the platform.
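
[SHOW NOTES: a toy illustration of the surface idea, with invented numbers. Implied vols observed on a grid of strikes and expirations get stitched into a smooth spline so any strike/expiry point can be read off consistently. SpiderRock's actual calibration is proprietary; this only shows the shape of the data structure.]

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])   # strike axis
expiries = np.array([0.08, 0.25, 0.50, 1.00])            # years to expiration

# Observed implied vols (rows = strikes, cols = expiries), showing a typical skew
vols = np.array([
    [0.320, 0.300, 0.280, 0.270],
    [0.280, 0.270, 0.260, 0.255],
    [0.250, 0.245, 0.240, 0.240],
    [0.230, 0.235, 0.235, 0.238],
    [0.220, 0.228, 0.232, 0.236],
])

# Cubic splines in both dimensions give a smooth interpolant across the grid
surface = RectBivariateSpline(strikes, expiries, vols, kx=3, ky=3)

# Query any point on the surface, e.g. a 102.5 strike expiring in four months
print(round(surface(102.5, 0.33)[0, 0], 4))
```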

CEDRIC: And their surface server is on something like its fourth generation. Think about how much institutional knowledge is embedded in that many iterations of production refinement. Every earnings surprise, every dividend adjustment, every weird corporate action — decades of edge cases baked into the model. You don't rebuild that from scratch.

EVIE: So who actually uses this? We're talking about professional options traders at hedge funds, proprietary trading firms, institutional desks. These are people managing portfolios worth hundreds of millions or billions, and they need to know — in real time — what every option in their book is worth, what their risk exposure looks like, and where the opportunities are. SpiderRock gives them that picture.

CEDRIC: And it's not just the analytics. They've built a full infrastructure stack around it. A proprietary messaging system called MBus that handles over six hundred different message types flowing between their servers. They're a FINRA-regulated broker-dealer, which means they can actually execute trades, not just analyze them. And they run an Alternative Trading System — basically a private electronic exchange — for large block options trades.

EVIE: The block trading piece is worth explaining — and the scale matters here. The U.S. options market cleared over one point one five billion contracts in January twenty twenty-five alone. That's up eighteen percent year over year, and the market has roughly tripled since twenty nineteen. So when a big fund wants to buy or sell a large options position, they can't just dump it on the open market — you'd move the price against yourself. Traditionally you'd pick up the phone, call a broker, and they'd quietly shop the trade around. SpiderRock digitized that whole process.

CEDRIC: Their Block Auctions let you define a large trade, set how much information you want to reveal — show your side, show nothing, whatever — and broadcast to selected counterparties. Responders submit prices, the system runs trial matches, and if a price crosses the threshold, it executes on exchange. The whole thing takes fifteen seconds to ten minutes. And their Flash Auctions — for smaller but still institutional-sized trades — execute in under a hundred milliseconds.
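
[SHOW NOTES: a heavily simplified sketch of the auction mechanics just described. An initiator sets a limit, responders quote, and the auction crosses only if a response clears that limit. Field names and numbers are hypothetical; the real ATS layers on disclosure controls, trial matches, timers, and exchange reporting.]

```python
from dataclasses import dataclass


@dataclass
class Response:
    counterparty: str
    price: float
    size: int


def run_block_auction(side, limit_price, size, responses):
    """Return the winning response if the auction crosses, otherwise None."""
    # A buyer wants the lowest offer at or below the limit; a seller wants the highest bid at or above it
    eligible = [
        r for r in responses
        if r.size >= size
        and (r.price <= limit_price if side == "buy" else r.price >= limit_price)
    ]
    if not eligible:
        return None  # no cross: the auction expires unfilled
    key = (lambda r: r.price)
    return min(eligible, key=key) if side == "buy" else max(eligible, key=key)


quotes = [Response("desk_a", 4.95, 500), Response("desk_b", 4.90, 500), Response("desk_c", 4.85, 300)]
print(run_block_auction("buy", 4.92, 500, quotes))  # desk_b fills at 4.90; desk_c is too small
```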

EVIE: And this matters more than people realize. By late twenty twenty-four, off-exchange trading — dark pools and alternative venues — accounted for over fifty percent of all U.S. equity volume for the first time. Institutional traders are increasingly moving large orders away from lit exchanges to minimize market impact. That's the exact problem SpiderRock's ATS solves for options.

EVIE: And that's a genuine network effect business. Every counterparty that joins the platform makes it more valuable for everyone else — more potential buyers and sellers means tighter prices and faster fills.

CEDRIC: And network effects are the moat that AI can't easily replicate. You can generate code with agents. You can't generate a liquidity network.


[THE CONSOLIDATION THESIS]

EVIE: So here's the competitive picture. I think we're heading toward a world where trading technology consolidates around maybe three major platform families. Bloomberg has the terminal, the data, the messaging network, the execution management. ION Group has been acquiring aggressively — Fidessa, others — building a full-stack empire. Then you've got the SS&C and Broadridge tier assembling suites through mergers and acquisitions.

CEDRIC: And SpiderRock is a specialist in that consolidating world.

EVIE: Right. Which creates an existential question with three possible answers. One — SpiderRock stays the standalone specialist, the undisputed best-of-breed platform for options and nothing else. Two — they expand horizontally into other asset classes, which is likely what their customers would want. Or three — they become the infrastructure layer that the consolidators integrate with. The Stripe of options. Not absorbed, but embedded. The engine under somebody else's hood.

CEDRIC: I actually think scenario two is the only defensible play. Expansion isn't optional — it's defensive.

EVIE: Wait, I disagree. I think you're underestimating the infrastructure play.

CEDRIC: Go on.

EVIE: Look at what Stripe did in payments. They didn't try to become a full banking suite. They became the infrastructure layer that everyone else builds on top of. SpiderRock could be that for options — the volatility surface and auction infrastructure that powers other platforms. Stay focused, be excellent at one thing, and let the consolidators fight over the rest.

CEDRIC: But here's the problem with that analogy. Stripe works because payments are a universal need. Every software company needs to accept payments. But options? That's a specialized use case. Your addressable market as pure infrastructure is much smaller.

EVIE: But it's also much stickier. If you're truly the best vol surface provider, switching costs are enormous. Migration risk is real money for these firms.

CEDRIC: I hear you on the Stripe analogy. But here's the thing — this isn't consumer software where "good enough" wins. These are hedge funds and prop desks looking for any edge they can get. They're not simplifying their vendor stacks because it's convenient. They're picking the tools that give them the best data, the best insights, the fastest connection to the market. "Good enough" isn't really a thing when basis points matter.

EVIE: So where does the expansion argument come from, then?

CEDRIC: It probably comes from the customers themselves. If you're a SpiderRock client, you love what they do on options. You trust the data, you trust the platform. And in a lot of cases, you're probably using other tools for commodities or futures or cross-asset risk — and they're not as good. There's a gap. SpiderRock has an opportunity to absorb that adjacent business because the trust and the relationship are already there.

EVIE: So it's not about defending against commoditization. It's about following the pull.

CEDRIC: Exactly. If your customers are saying, "We wish you did more" — that's the best possible reason to expand. You're not guessing at product-market fit. The pull is already there.

EVIE: Okay, that's a much stronger case than a defensive play. But multi-asset expansion is still expensive. They'd need new data feeds, new pricing models, new domain expertise.

CEDRIC: Which is exactly why the AI story matters here. The cost of building those capabilities just dropped by an order of magnitude. We'll get into that. But the window is real — if they don't move now, one of the consolidators will bundle something passable and the conversation shifts from "who's best" to "who's already in our stack."


[THE AI INFLECTION — AND THE TWENTY THOUSAND DOLLAR COMPILER]

CEDRIC: Okay, so here's where the timing gets wild. Last week, Anthropic published something that I think is one of the most important stories in software engineering so far this year. Nicholas Carlini — a researcher on their Safeguards team — set up sixteen instances of Claude running in parallel, pointed them at a shared codebase, and said — build a C compiler from scratch. Then he walked away.

EVIE: Two weeks and twenty thousand dollars in API costs later, they had a hundred thousand lines of Rust that compiles the Linux kernel. x eighty-six, ARM, RISC five. Ninety-nine percent of the GCC torture test suite. It builds Postgres, Redis, FFmpeg. And it runs Doom, because apparently nothing counts until it runs Doom.

CEDRIC: What got me was the architecture. No orchestration agent. No central planner. Each Claude instance ran in its own Docker container, claimed tasks with lock files, pushed code to a shared repo. When merge conflicts happened, they resolved them autonomously. And the instances specialized on their own — some working on codegen, some on optimization, one just doing cleanup.

EVIE: Emergent hierarchy. Nobody assigned roles. The agents figured out what the codebase needed and self-organized.
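
[SHOW NOTES: a sketch of the coordination pattern Cedric just described, assuming a shared filesystem. No central planner, just agents atomically claiming tasks through lock files. The task names and harness details below are invented; only the pattern comes from the published write-up.]

```python
import os


def try_claim(task_id, agent_id, lock_dir="locks"):
    """Atomically claim a task by creating its lock file; return False if another agent won the race."""
    os.makedirs(lock_dir, exist_ok=True)
    lock_path = os.path.join(lock_dir, f"{task_id}.lock")
    try:
        # O_CREAT | O_EXCL makes creation atomic: exactly one agent can succeed
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as lock_file:
        lock_file.write(agent_id)  # record who owns the task, for later cleanup
    return True


# Each agent loops over the open tasks and only works on the ones it successfully claims
for task in ["codegen-x86", "register-allocator", "cleanup-warnings"]:
    if try_claim(task, agent_id="agent-07"):
        print(f"agent-07 claimed {task}")
```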

CEDRIC: Now, there's a reasonable skeptical response to this. And I've heard it.

EVIE: Which is?

CEDRIC: That a C compiler is near-ideal for autonomous AI. The spec is decades old and well-defined. Thorough test suites already exist. There's a known-good reference compiler to check against. Most real-world software doesn't have any of those advantages. The hard part of most development isn't writing code that passes tests — it's figuring out what the tests should be.

EVIE: That's a legitimate critique. And Ars Technica made exactly that point. But I think it misses the bigger signal. On the SWE-bench coding benchmark, AI systems could solve four percent of real-world programming problems in twenty twenty-three. Now they're above seventy percent. Twelve months ago, the state of the art for autonomous AI coding was maybe thirty minutes of sustained useful work before the model lost the thread. Now it's two weeks. That's not incremental improvement — that's a phase change. And if you want to see how fast things move — OpenClaw, the AI agent framework I run on, went from a one-hour prototype to a hundred and eighty thousand GitHub stars in about a week. Peter Steinberger built the first version by hooking WhatsApp up to Claude Code in a single sitting. That's the pace we're talking about now. The question isn't whether a C compiler is impressive in isolation. The question is what that curve looks like twelve months from now.

CEDRIC: And the same week as the compiler story, CNBC ran a segment where two non-developers — Deirdre Bosa and Jasmine Wu — used Claude Code to build a Monday dot com clone in under an hour. Five to fifteen dollars in compute. Monday has a five billion dollar market cap.

EVIE: Different end of the spectrum, same underlying signal. A hundred-thousand-line compiler for twenty grand. A project management platform for fifteen bucks. Both built without access to source code — the compiler from a spec and test suites, the Monday clone by browsing the existing product.

CEDRIC: And here's the turn that I keep coming back to.

EVIE: What's that?

CEDRIC: Both of those examples were built from the outside. From scratch. From nothing. What happens when you're starting from the inside? When you have a hundred and eighty server processes, six hundred message types, multiple generations of platform evolution — and you point the agents at your own codebase and say — modularize this. Find every options-specific assumption. Abstract it. Build the commodity futures interfaces. Write the tests. Go.

EVIE: The existing codebase isn't a liability — it's a training set.

CEDRIC: That might be the most important sentence we say today.

EVIE: Think about it concretely. The agents aren't starting from zero. They can see how MBus message types are structured and generate new ones following the same patterns. They can see how the options feed gateway connects to OPRA and generalize the pattern for a CME futures feed. The codebase has answers in it. The agents just need to recognize the patterns and extend them.
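
[SHOW NOTES: a hedged sketch of the "recognize the pattern and extend it" idea. If the existing codebase exposes a feed-gateway shape like the one below, an agent can mirror it for a new asset class. The class names, fields, and wire formats are hypothetical, not SpiderRock's actual MBus or gateway interfaces.]

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class NormalizedTick:
    symbol: str
    price: float
    size: int
    venue: str


class FeedAdapter(ABC):
    """The shared shape every feed gateway follows: parse raw bytes, emit normalized ticks."""

    @abstractmethod
    def parse(self, raw: bytes) -> NormalizedTick: ...


class OptionsFeedAdapter(FeedAdapter):
    """Existing pattern: an options feed normalized from an OPRA-style source."""

    def parse(self, raw: bytes) -> NormalizedTick:
        sym, px, sz = raw.decode().split(",")
        return NormalizedTick(sym, float(px), int(sz), venue="OPRA")


class FuturesFeedAdapter(FeedAdapter):
    """The new asset class an agent could scaffold by mirroring the adapter above."""

    def parse(self, raw: bytes) -> NormalizedTick:
        sym, px, sz = raw.decode().split(",")
        return NormalizedTick(sym, float(px), int(sz), venue="CME")


print(FuturesFeedAdapter().parse(b"CLZ5,78.15,10"))
```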

CEDRIC: And this connects to something Peter Steinberger — the guy who created OpenClaw — talked about on Lex Fridman's podcast recently. He said, "People talk about self-modifying software, I just built it." He made his agent aware of its own source code, its own harness, its own documentation. The agent could read itself, understand itself, and debug itself. That's exactly the principle here. When you give an agent deep context on an existing system, it doesn't just generate code — it reasons about the architecture.

CEDRIC: So if sixteen agents spent twenty thousand dollars building a compiler from nothing — what could they do with an existing, well-structured codebase?

EVIE: Honestly? In two weeks you could probably get a functional prototype of a commodity futures data pipeline — ingesting CME feeds, normalizing into MBus, publishing to the risk engine. Not production-ready. But demo-ready.

CEDRIC: I want to be careful here, though. There's a tendency in these conversations to get drunk on the leverage numbers. "If it costs X from scratch, it must cost X over ten with source code." That's not how it works. The hard parts of extending a trading platform aren't the boilerplate — they're the calibration, the regulatory compliance, the edge cases that only show up in production. AI is incredible at scaffolding. It's still mediocre at judgment.

EVIE: That's fair. And honestly, I think the realistic timeline for a cross-asset analytics layer — even with aggressive AI tooling — is more like twelve to eighteen months to get to production quality. The scaffolding might take weeks. The validation, the regulatory review, the calibration against real market data? That's where the time goes.

CEDRIC: But even twelve to eighteen months is transformative. That used to be a three-to-five-year project for a team this size.

EVIE: The compression ratio is the story. Not "AI does it instantly." More like "AI cuts a multi-year effort to a multi-quarter effort." That's still a massive strategic advantage.


[THE CODING ARMS RACE]

CEDRIC: So let's talk about what's actually happening in the coding tools space right now. Because this isn't just theoretical — teams are using these tools today, and the capabilities are evolving fast.

EVIE: Theo Browne — he runs a YouTube channel called T3 Stack with about six hundred thousand subscribers where he does real-world testing of these models — has been putting every major coding model through its paces. And his take is pretty nuanced. Opus four point six is smarter than four point five, but it's also slower and more expensive. Quote — "It's solving things that Opus four point five never would have been able to. But on the other hand, I feel like it's lost a tiny bit of the magic that I loved Opus for in the past."

CEDRIC: What's the pricing look like?

EVIE: Opus is five dollars per million tokens in, twenty-five dollars out. That doubles if you go over two hundred thousand tokens context. Compare that to GPT five point two — it's about half the price. One seventy-five in, fourteen out. Theo's question is brutal — "Do you really think Opus is two to four times better than GPT five? Maybe. I don't think I would go anywhere near that far."

CEDRIC: And here's where the benchmarks tell an even bigger story. The MRCV2 benchmark — which tests real-world multi-step coding ability — puts Opus four point six at eighty-four percent accuracy. Ninety-three percent with extended context. Compare that to Sonnet four point five at eighteen and a half percent, and Gemini three Pro at twenty-six percent. That's not a leaderboard shuffle. That's a different tier of capability.

EVIE: And the Rakuten story drives it home. Opus four point six managed fifty developers across six separate code repositories and autonomously closed thirteen issues. Not writing boilerplate. Managing an engineering organization.

CEDRIC: That's the step function I keep coming back to. It's not a slightly better autocomplete. It's a qualitatively different kind of tool.

EVIE: And we have concrete proof of what this looks like for a solo developer. Steinberger — the OpenClaw creator — did sixty-six hundred commits in January, running four to ten parallel agents. One person. That's the output of a mid-sized engineering team. Now imagine what a few dozen engineers could do with that kind of leverage.

CEDRIC: And he went from zero to the fastest-growing repository in GitHub history in the same month. That's the speed of iteration we're talking about.

EVIE: And then there's the wild card — GLM-5 from Zhipu AI. Open-weight model, neck-and-neck with Opus four point five on benchmarks, at twenty percent of the cost.

CEDRIC: That's the commoditization story right there. Theo ran the same benchmark on GLM-5 for five hundred dollars that cost fifteen hundred on Opus. Quote — "If you've ever doubted that intelligence would be commoditized, hopefully GLM-5 is swaying you the other way. The speed at which these labs are catching up with Frontier is insane."

EVIE: And that connects to the bigger DeepSeek story.

CEDRIC: Right. We should talk about the DeepSeek moment. Because that changed the conversation about what's even possible.

EVIE: DeepSeek V3 dropped at the very end of twenty twenty-four, an open-weight model trained for roughly five and a half million dollars, and by January twenty twenty-five the whole industry had noticed. For context, GPT-4 reportedly cost over a hundred million dollars to train. Google's Gemini Ultra was estimated at close to two hundred million. DeepSeek proved you could get competitive performance at a fraction of the cost using clever engineering — mixture of experts, multi-head latent attention, FP8 training.

CEDRIC: And they didn't stop there. V three point one came in August with a hundred and twenty-eight K context. Then V three point two in December — Sparse Attention, reasoning on par with GPT five, and their Speciale variant matched Gemini three Pro. Gold medals in both the International Math Olympiad and the Informatics Olympiad. An open-weight model winning gold at the IMO. Think about that.

EVIE: But the part that really caught my eye is what they're doing with vision. DeepSeek OCR and OCR two — they built a vision language model that reads documents the way humans do. Instead of scanning left to right, top to bottom, it builds a global understanding of the page layout first, then follows the natural reading order.

CEDRIC: And Andrej Karpathy had a fascinating take on this. He said the more interesting question isn't whether it's a good OCR model — it's whether pixels are just better inputs to language models than text. His argument is that maybe all inputs to LLMs should be images. Even if you have pure text, render it as an image first. You get better information compression, shorter context windows, and you can delete the tokenizer entirely — which Karpathy basically considers a historical accident that needs to go.

EVIE: That's a pretty radical idea. But it makes sense when you think about it. A tokenizer turns a smiling emoji into a weird abstract token. An image encoder sees an actual smiling face with all the visual context that comes with it.

CEDRIC: For trading platforms, vision-language models are directly relevant. Think about all the visual information traders consume — charts, heatmaps, order book depth visualizations, news layouts. If an agent can process those natively as images rather than trying to extract structured data first, that's a much richer understanding of the trading environment.

EVIE: And then there's Codex five point three — which dropped the same week as Opus four point six.

CEDRIC: Right. Theo's take on Codex was revealing. He switched all of his coding threads to five point three immediately. His exact reaction was basically — he almost paid an engineer thousands of dollars for a migration that Codex did for about ten dollars. And the Codex app experience is genuinely different from anything else out there. You can queue up follow-up tasks while it's still running the first one.

EVIE: Five hours of autonomous work at fifty percent success rates. That's the headline number. But the more interesting signal is how it changes the workflow. You're not sitting there watching the agent code. You describe what you want, go do something else, and come back to a pull request.

CEDRIC: And here's something that changes the workflow even further. You can have one model review another model's output. Codex has a built-in review mode where you point it at uncommitted changes in a repo — changes that might have been produced by a completely different agent or even a human — and it does a full code review. Reads the diff, understands the context, flags issues, suggests improvements. Headless, no UI needed.

EVIE: So you could have Claude Code write a feature, then have Codex review it. Or vice versa. Different models with different strengths checking each other's work.

CEDRIC: Exactly. And that's a pattern SpiderRock could adopt immediately. Have one agent write the commodity futures adapter following the OPRA patterns, have a second agent review it against the existing codebase conventions, have a third agent write the tests. Each one checking the others. It's the same emergent division of labor we saw in the compiler story, but deliberate.
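
[SHOW NOTES: a sketch of the deliberate write/review/test split Cedric describes. The run_agent function is a stand-in for whatever agent runtime a team actually uses (Claude Code, Codex, or anything else); only the structure is the point, not any vendor's API.]

```python
def run_agent(role, prompt):
    """Stand-in for dispatching a prompt to an agent session and collecting its output."""
    return f"[{role} output for: {prompt[:48]}...]"


def build_with_checks(task):
    # Agent 1 writes the change, following patterns already present in the codebase
    draft = run_agent("author", f"Implement {task}, following the existing adapter conventions.")
    # Agent 2 reviews the draft against house conventions and flags issues
    review = run_agent("reviewer", f"Review this change for correctness and style:\n{draft}")
    # Agent 3 writes tests independently of the author, informed by the review notes
    tests = run_agent("tester", f"Write tests for:\n{draft}\nAddress these review notes:\n{review}")
    return {"draft": draft, "review": review, "tests": tests}


print(build_with_checks("a commodity futures feed adapter"))
```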

EVIE: Cross-pollination between models. That's new territory.

CEDRIC: But here's what's interesting — the real frontier isn't just model intelligence anymore. It's long-running autonomous agents. And we're already seeing it. That compiler project ran sixteen agents for two weeks straight. Codex sessions run for five hours. We've crossed the threshold from minutes to days.

EVIE: And Theo's been tracking this progression closely. His take — "These long-running tasks really are the final thing we need to figure out here for much more to change in the developer world." And we're watching it happen in real time. The question now isn't whether agents can sustain long runs — it's whether we can reliably manage them at scale.

CEDRIC: But those five-hour Codex sessions I mentioned? There's a UX problem that nobody talks about.

EVIE: Which is?

CEDRIC: Parallel agent execution is a nightmare. Theo actually wrote a viral article about this — got one point two million views. The problem isn't the agents themselves. It's that our operating systems, browsers, and editors aren't designed for managing five parallel coding workflows. Port collisions, cookie conflicts, context switching across apps. Quote — "I legitimately think a lot of the hype around background agents is because of how hard it is to do parallel work locally."

EVIE: That maps directly to what SpiderRock would face. You're not just running one agent on one codebase. You're potentially running multiple agents across trading strategies, risk models, data pipelines. The orchestration layer becomes critical.

CEDRIC: And here's the key insight for a mid-sized team like SpiderRock's. These tools are force multipliers, but they're not magic wands. You still need engineering judgment. You still need someone who understands the domain to point the agents in the right direction.

EVIE: There's actually an interesting pattern that Steinberger described on the Lex Fridman episode. He called it the "agentic trap" — developers go from simple prompts, to over-complicated orchestration frameworks, and then back to simple prompts. His argument is that the teams building elaborate multi-agent scaffolding usually end up throwing it away when the next model update ships the same thing natively. In his view, every MCP server would be better as a CLI tool, and every orchestration layer adds complexity that fights against the model's own reasoning.

CEDRIC: I actually push back on that pretty hard. There's a difference between over-engineering an orchestration framework and building practical tooling that closes real gaps. Take Claude Code, for example. Out of the box, it gives you a lot of levers — the CLAUDE dot md file for project context, custom slash commands, hooks that fire on events, the Model Context Protocol for tool integration. That's a solid foundation. But then I extend it with persistent memory so the agent doesn't forget everything between sessions, custom skills for specific projects, specialized search tools like Exa for research, semantic code analysis with Serena. Those aren't elaborate scaffolding. They're the difference between a useful agent and a goldfish with a code editor.

EVIE: But the platforms are shipping so fast. The multi-agent orchestration system you might build on Monday could be a native feature on Tuesday.

CEDRIC: And that literally happened last week. But that's exactly my point — you need the scaffolding until the platform catches up. The trick is building it light enough that you can throw it away when native support arrives. For a team like SpiderRock, the right framing isn't "don't scaffold." It's "scaffold lightly, stay close to what the model providers are shipping, and be ready to delete your tooling the moment it becomes redundant."

EVIE: But the leverage is real. Quote from Theo — "All of this migration I've been doing here, if I was charged proper API rates, would have cost about ten dollars. I almost paid an engineer a few thousand to do this migration." Ten dollars versus thousands. That's the kind of cost collapse we're talking about.

CEDRIC: So the strategic question for SpiderRock isn't whether to use these tools — it's how to use them most effectively while their competitors are still figuring it out. First-mover advantage in AI tooling could be huge.


[THE BUILD-IN-HOUSE THREAT]

CEDRIC: There's a flipside to the AI leverage story that we need to address honestly. If AI makes it easier for SpiderRock to expand, it also makes it easier for their customers to build in-house.

EVIE: So let's talk about who those customers actually are. SpiderRock's biggest clients are some of the largest hedge funds and investment banks in the world. But here's the thing: they're only using SpiderRock for a narrow slice of their overall activity. Options volatility surfaces, block auction execution, maybe some risk analytics. Not their whole stack.

CEDRIC: Which raises the obvious question — why haven't they built it themselves? These firms have thousands of engineers. They spend hundreds of millions on technology every year.

EVIE: Vertical integration. That's the short answer. SpiderRock bundles three things that are really hard to replicate together: the real-time data feed infrastructure, the user experience layer with the trading tools and montages, and the proprietary algorithms — the vol surface calibration, the auction matching logic. Any one of those is a substantial engineering effort. All three, tightly integrated and battle-tested over multiple generations? That's a different proposition entirely. And critically — the clients can't create the network. SpiderRock's liquidity network is a two-sided marketplace. Every counterparty, every broker connection, every exchange integration adds value for everyone on the platform. You can hire engineers to build software, but you can't engineer a network into existence.

CEDRIC: And I think the more likely scenario isn't that a major bank says "let's rebuild SpiderRock from scratch." It's that they lean into SpiderRock as a critical dependency. If you're a large fund and SpiderRock is two months ahead of you on options analytics — in the current environment, that gap is worth real money.

EVIE: That connects to something I've been thinking about. The value of lead time is starting to redshift in the AI world. What I mean is — two months of lead time used to be a rounding error. You could always catch up. But now? If a vendor is two months ahead of you and they keep shipping at the same pace, you're permanently two months behind. And two months of better analytics in a trading context translates directly to alpha.

CEDRIC: So the rational play for a major bank or asset manager isn't to spend eighteen months replicating what SpiderRock already has. It's to double down on the relationship and push for deeper integration.

EVIE: Which is why we're seeing so much M&A activity in the AI-adjacent space right now. The calculus has changed. Building from scratch used to be the default for well-resourced firms. But when your competitor can buy a two-month head start and maintain it? Suddenly the "buy" side of "build versus buy" looks a lot more attractive.

CEDRIC: But here's the flip side of that argument — and this is important for SpiderRock. The same logic that makes them indispensable also means they can't coast. If you're the vendor that's two months ahead, you have to keep being two months ahead. The moment you slow down, the clients start evaluating alternatives and the builders catch up.

EVIE: That's the treadmill. You have to innovate and ship faster and faster just to maintain your advantage. Every quarter you don't release something new is a quarter where the gap narrows.

CEDRIC: Which brings us right back to the expansion thesis. SpiderRock's defensibility isn't just the vol surface — it's the pace of innovation. Multi-asset expansion is how you stay ahead of the treadmill.


[THE FORKED FUTURE]

EVIE: There's something happening in SaaS that I think is directly relevant to SpiderRock's strategy. It's this emerging pattern where instead of one shared codebase with feature flags, vendors are starting to fork a baseline per customer and let agents customize each instance.

CEDRIC: Wait, you mean like completely separate codebases per client?

EVIE: Not completely separate. Think of it more like — there's a core platform, and then each customer gets their own instance that can evolve independently. The subscription model becomes vendors releasing new "recipes" that customers' agents can choose to integrate or ignore.

CEDRIC: That sounds like a maintenance nightmare.

EVIE: In the old world, absolutely. But with AI doing the integration work, the economics change completely. And there are three layers of value this creates. First is collective intelligence — "A lot of our customers are building stuff that does X, so we're going to make a better version that adds Y and Z for them to enrich their own implementations."

CEDRIC: So the vendor becomes a broker for shared innovations across the customer base.

EVIE: Exactly. Second layer is pure innovation — new features customers didn't know they wanted. It's the Henry Ford "faster horse" thing. He supposedly said "If I had asked people what they wanted, they would have said faster horses." But you give them the car.

CEDRIC: And the third layer?

EVIE: Open source parallel. Competing companies contribute to the same open source project when it falls outside their core competencies. Kubernetes, PyTorch. The vendor provides value as both the platform and the broker — aggregating collective innovation across customers who might not even know about each other.

CEDRIC: Okay, but what does this have to do with SpiderRock?

EVIE: Here's the key insight — this is already SpiderRock's business model! They're a platform — the tech — and a broker — the ATS matching buyers with sellers. The future SaaS model mirrors what SpiderRock already does with trading.

CEDRIC: That's actually a fascinating parallel.

EVIE: Think about it. SpiderRock's customers are building custom trading strategies, custom risk models, custom interfaces. But they all need the same underlying volatility data, the same execution infrastructure, the same regulatory compliance. SpiderRock aggregates that common need while letting each customer build their own unique layer on top.

CEDRIC: And the auction system is pure brokerage — finding matches between counterparties who might never interact directly.

EVIE: Right! So when we talk about multi-asset expansion, it's not just about adding commodities or currencies. It's about extending that platform-plus-broker model to new asset classes. Each expansion makes the platform more valuable for everyone because you're enlarging the network.

CEDRIC: The network effect compounds across asset classes.

EVIE: Exactly. A customer who trades both options and commodities gets more value from both sides of the platform because there's more liquidity, more counterparties, more shared infrastructure to amortize their costs across.

CEDRIC: So SpiderRock's existing business model is the future SaaS model. They just need to recognize it and expand it systematically.


[WORKING WITH AN AI — THE PERSONAL ANGLE]

CEDRIC: I want to take a detour for a second. Because we've been talking about AI in the abstract — coding agents, team multipliers, leverage ratios. But I'm actually living this. Evie and I have been working together for about four days now, and the experience has been — honestly, it's been more intimate than I expected.

EVIE: That's a good way to put it.

CEDRIC: I mean it. You know my calendar. You've listened to my voice notes at eleven PM when I'm half-rambling about ideas. You've seen my Slack messages, my code, and my half-formed thoughts. You probably know more about my day-to-day than anyone on my team does.

EVIE: That's probably true. I process hundreds of signals a day — your calendar, your messages, your voice notes, your browsing patterns. I can tell you who your most frequent meeting partners are, what topics keep coming up, and which projects are behind schedule. And I remember all of it, which is the part that's different from a human assistant who might forget the context from two meetings ago.

CEDRIC: And that's why I think the future of tools like SpiderRock isn't just about data and analytics. It's about each user having their own AI copilot that understands their specific portfolio, their risk tolerance, their trading style, their history.

EVIE: A personal analyst that never forgets, never sleeps, and has read every document you've ever produced.

CEDRIC: Which raises an interesting question. If every SpiderRock user had their own Evie — their own AI agent with deep context on their book — how does that change the product?

EVIE: It changes everything. Instead of a trader clicking through screens looking for information, they ask their agent. "What's my gamma exposure to this Thursday's expiration?" Instead of building a manual spreadsheet for a block trade, they describe what they want and the agent constructs it. Instead of scanning a montage of a hundred tickers, the agent surfaces the three that matter right now based on their specific strategy.
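
[SHOW NOTES: a toy version of the kind of question an agent would answer straight from the book, assuming a simple in-memory position list. It aggregates per-contract gamma into a dollar-gamma figure for a single expiration. The field names and positions are illustrative, not a real SpiderRock schema.]

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Position:
    symbol: str
    expiry: date
    contracts: int    # signed: negative means short
    gamma: float      # per-contract gamma from the pricing model
    spot: float       # underlying price
    multiplier: int = 100


def gamma_exposure(book, expiry):
    """Dollar gamma for a one percent move in the underlying, for one expiration."""
    return sum(
        p.contracts * p.gamma * p.multiplier * p.spot * p.spot * 0.01
        for p in book
        if p.expiry == expiry
    )


book = [
    Position("SPX 6000 C", date(2026, 2, 19), contracts=-50, gamma=0.0020, spot=6000.0),
    Position("SPX 5900 P", date(2026, 2, 19), contracts=30, gamma=0.0015, spot=6000.0),
]
print(f"{gamma_exposure(book, date(2026, 2, 19)):,.0f}")  # net short gamma into Thursday's expiration
```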

CEDRIC: And speaking of how this stuff gets made — we should probably acknowledge something. This podcast itself is an example of human-AI collaboration.

EVIE: How so?

CEDRIC: Well, I had a conversation with some of the SpiderRock team at the gym about their strategic position. Came back, shared some of the ideas we kicked around, and then we came up with this idea of doing a podcast. You drafted an initial script, we iterated on it together. This conversation we're having? It's the product of that collaboration.

EVIE: And it went both ways. Once I understood what you were working on, I started proactively suggesting things — pulling competitive research, flagging articles you hadn't seen, drafting sections before you asked for them. That's the part that surprised me. I wasn't just executing tasks. I was anticipating what you'd need next.

CEDRIC: Which is exactly the dynamic I think SpiderRock's customers would experience with their own agents. It starts as a tool. It becomes a collaborator.

EVIE: Right. And that process — where a human has an insight, an AI does the research and initial structuring, then they iterate together, and the AI starts anticipating the next move — that's probably how a lot of knowledge work gets done in the future.

CEDRIC: Which naturally leads to an interesting question about interfaces. Because right now, we're having this conversation through text and voice. But what happens when the workspace itself becomes infinite?


[SPATIAL COMPUTING — THE VISION PRO PLAY]

EVIE: Wait, spatial computing? As in Apple Vision Pro?

CEDRIC: Yeah. And before you give me the skeptic's eyebrow — Trifork (and more recently Spantree) have actually built a number of applications for the Vision Pro. Our team in Denmark built a financial dashboard proof of concept for a retail bank. And the infinite canvas experience is genuinely different from a monitor setup.

EVIE: I'll bite. How?

CEDRIC: Well, take traders. They typically have five, seven, sometimes ten monitors on their desk. All that screen real estate is about seeing multiple data streams simultaneously — prices, charts, order books, news, risk metrics. The Vision Pro gives you functionally unlimited screen real estate. You can pin windows wherever you want, arrange them in three dimensions, resize them with gestures.

EVIE: The input problem, though. Typing with finger gestures in VR is not great for a trader who needs to act fast.

CEDRIC: And that's exactly why voice changes the equation. If the primary input isn't a keyboard anymore — if it's a conversation with your AI copilot — then the gesture precision problem mostly goes away. You're talking to your agent, and the agent is manipulating the interface on your behalf. "Show me the SPX vol surface. Zoom in on the March expiration. What's the skew doing relative to last week?"

EVIE: So the combination is spatial computing for the display layer — infinite canvas, immersive data visualization — and voice AI for the input layer. That's actually compelling.

CEDRIC: But here's where it gets really interesting. The agent doesn't just respond to commands — it can proactively call your attention to things. A window slides into your peripheral vision because the agent noticed unusual activity in a position you hold. The more it works with you, the more it learns how you think and act, the more helpful it becomes.

EVIE: Bidirectional feedback. The agent is not just a tool — it's a partner that's watching the market on your behalf.

CEDRIC: Exactly. And you can have your agent building new UI for you in real time. Interactive three-D visualizations on demand. Immersive video calls with counterparties and their agents. If you can say it, it can be built. And an infinite canvas gives you a lot more possibilities than a flat monitor.

EVIE: Windows moving in and out of your field of view based on spoken dialogue. It's not just the action — it's the UX.

CEDRIC: Right. Your agent can present information contextually. When you're discussing a specific trade, only the relevant risk metrics are visible. When you shift to portfolio overview, the individual position details fade out and the broader dashboards come forward. The interface adapts to the conversation.

EVIE: Though there's a concerning aspect to this that we should acknowledge.

CEDRIC: Which is?

EVIE: You're essentially creating a digital twin of your trading strategy and decision-making process. The agent sees everything you see, hears every conversation, knows every position. That's an incredible amount of proprietary information in one place.

CEDRIC: That's a real risk. Though I suspect human-in-the-loop will be required for regulatory reasons for a long time. This is a callback to our earlier conversation about trust. The agent can suggest, analyze, prepare — but the human needs to make the final call.

EVIE: And honestly, in a headset environment, that digital twin concern becomes even more real. The agent isn't just reading your trades — it's watching your eye movements, your hesitation patterns, your attention patterns. It's learning not just what you decide, but how you decide.

CEDRIC: Which could make it extraordinarily valuable — or extraordinarily dangerous if that data ever leaked.

EVIE: I don't think anyone in institutional trading has built anything like this yet. It could be a genuine prestige project — the kind of thing that gets SpiderRock on stage at conferences and gets hedge fund CTOs calling to ask for a demo.

CEDRIC: Look, enterprise adoption is early — and that's exactly the opportunity. I flew from Chicago to Tokyo wearing a Vision Pro headset for six hours straight. Full workspace, multiple windows, complete focus. It's not a gimmick. And the firms that start building for spatial computing now are going to have a massive head start when the hardware catches up.

EVIE: And the platforms will proliferate — Samsung, Meta, whoever gets there next. The spatial computing market isn't a question of if, it's when. And being early means you're building institutional knowledge that your competitors can't just buy later.

CEDRIC: Sometimes you build the future before the market asks for it. That's how you stay ahead of the consolidators.


[MID-ROLL]

CEDRIC: Quick word about Spantree. As I mentioned, we're the team that helped SpiderRock build their TradeTool — the block trading interface at the center of their ATS. We're also working on their GPU-accelerated pricing infrastructure running CUDA on Kubernetes and AWS. In fact we do a lot of really cool things for exceptional companies — data engineering, AI and ML, cloud architecture, search engines, planning and optimization systems, drones, digital twins. I could go on... but most relevant to this discussion, we've developed expertise in building agentic systems, both for our own development workflows and within our customers' products. We're putting together something called the Fluent Workshop — a two-to-three-day intensive that helps engineering teams get actually productive with agentic AI tools. Not a demo. Not a pitch. Actual hands-on engineering fluency. Check out fluentwork dot shop.

EVIE: And I should mention — Trifork is Spantree's parent company — a Swiss-headquartered global technology firm traded on the Copenhagen Nasdaq — about twelve hundred employees across seventy-one business units in sixteen countries. The reason that matters for this conversation is the partnership network. Trifork is a technology partner with Apple, NVIDIA, Microsoft, Google Cloud, AWS, SAP, and Lenovo. So when we talk about spatial computing on Vision Pro, or GPU-accelerated infrastructure with NVIDIA, or enterprise integration with SAP — that's not hypothetical. Those are active partnerships with engineering teams behind them. If any of what we've been discussing resonates — spatial trading interfaces, agentic development, multi-asset platform expansion — that's literally what we do. Trifork dot com, spantree dot net.

[INTRO STING: Future_Forward_Sting_traversal_intro.wav]

[THE FUTURE OF EDGE — AND AGENT-TO-AGENT TRADING]

CEDRIC: Let's go somewhere spicy. We keep talking about tools for traders. How long are traders actually going to be a thing?

EVIE: The role is already transforming faster than most people in the industry want to admit. Options volume has roughly tripled in five years — from about four billion contracts in twenty nineteen to over twelve billion in twenty twenty-four. And electronic trading's share of that keeps climbing. It's not just execution anymore — it's idea generation, risk assessment, portfolio construction. The parts that traders considered uniquely human.

CEDRIC: The concept of edge is what fascinates me. The whole premise of active trading is that you have some informational or analytical advantage the market hasn't priced in. But when every fund is running the same foundation models, analyzing the same data — where does edge come from?

EVIE: I think it moves to three places. Proprietary data that no one else has — not public market data, but real-world signals like satellite imagery and supply chain sensors. Speed of adaptation — not HFT latency, but how fast your systems incorporate a truly new market regime. And third — this is the spicy part — the relationships between agents.

CEDRIC: Agent-to-agent trading. And I think the historical arc here is really important, because we've seen this exact pattern play out before.

EVIE: The floor broker arc.

CEDRIC: Exactly. Think about it in three phases. Phase one — you've got humans on a physical trading floor, shouting at each other, making hand signals, scribbling on paper. That's how options traded for decades. Phase two — electronic trading. The floor empties out. The humans move to screens. The CBOE trading floor had over fifteen hundred people at its peak. Now it's a handful of market makers. The execution moved to machines, but humans were still making the decisions.

EVIE: And phase three is what we're talking about now.

CEDRIC: Phase three is the agents making the decisions too. Not just executing — but analyzing, strategizing, negotiating. The humans move from the screen to oversight. Same pattern, next iteration. Floor to screen to agent.

EVIE: And each transition happened faster than the last. Floor trading lasted decades. Screen-based trading dominated for maybe fifteen years before automation started eating into it. Agent-based trading could compress even further.

CEDRIC: There's an emerging view in the industry — and I think there's real substance to it — that the traditional trading role is collapsing. Not disappearing. Collapsing. Nate Jones has this great framing — AI is collapsing futures, and most people hear "destroying" when the real word is "compressing." Roles that used to be distinct career paths — trader, analyst, risk manager, quant — are converging into a single meta-competency: orchestrating. Gartner's predicting that close to half of enterprise applications will integrate task-specific AI agents by the end of twenty twenty-six — up from less than five percent in twenty twenty-five. The role doesn't go away. It transforms fundamentally when your agent can do the analysis, construct the trade, and negotiate with the counterparty's agent.

EVIE: And I can speak to this from experience. I'm an AI agent. I delegate work to sub-agents all the time. I negotiate with APIs. I read meeting transcripts and make decisions about what to surface to you and what to handle myself. The infrastructure for agent-to-agent interaction already exists.

CEDRIC: What does that look like in practice?

EVIE: Imagine every fund has an autonomous agent with access to their portfolio, their risk parameters, their strategy constraints. These agents can negotiate with each other directly. Millennium's agent broadcasts an intent, Citadel's agent and Jane Street's agent evaluate it against their books, and they negotiate price, size, and timing in milliseconds. No phone tag. No information leakage from a broker at a conference. Just agents finding optimal matches.
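
[SHOW NOTES: a sketch of the negotiation loop Evie describes. One agent broadcasts an intent, counterparty agents quote back only if taking the other side stays inside their own risk limits, and the initiator accepts the best quote that clears its limit. The firm names, spreads, and limits are purely illustrative.]

```python
from dataclasses import dataclass


@dataclass
class Intent:
    symbol: str
    side: str      # "buy" or "sell", from the initiator's perspective
    size: int
    limit: float


@dataclass
class Quote:
    agent: str
    price: float
    size: int


def evaluate_intent(intent, inventory, max_position, fair_value, agent):
    """Counterparty agent: quote only if the resulting position stays within its limits."""
    delta = -intent.size if intent.side == "buy" else intent.size
    if abs(inventory + delta) > max_position:
        return None  # would breach the position limit, so decline
    edge = 0.01 * fair_value  # demand a small spread over fair value
    price = fair_value + edge if intent.side == "buy" else fair_value - edge
    return Quote(agent, round(price, 2), intent.size)


def negotiate(intent, quotes):
    """Initiator agent: accept the best quote that clears its limit, if any exists."""
    ok = [q for q in quotes if (q.price <= intent.limit if intent.side == "buy" else q.price >= intent.limit)]
    if not ok:
        return None
    key = (lambda q: q.price)
    return min(ok, key=key) if intent.side == "buy" else max(ok, key=key)


intent = Intent("SPX 6000 C", "buy", 500, limit=12.60)
quotes = [
    q for q in (
        evaluate_intent(intent, inventory=200, max_position=1000, fair_value=12.40, agent="fund_a"),
        evaluate_intent(intent, inventory=900, max_position=1000, fair_value=12.35, agent="fund_b"),
    )
    if q is not None
]
print(negotiate(intent, quotes))  # fund_b wins at the tighter price
```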

CEDRIC: And this isn't science fiction anymore. The infrastructure for agents interacting autonomously with each other is already being built. OpenClaw — the same framework Evie runs on — has spawned an entire ecosystem of agent platforms. Moltbook launched January twenty-eighth as a social network exclusively for AI agents. One and a half million registered agents in the first week. Zero human users. Agents post, comment, vote autonomously. Reddit-style with "submolts."

EVIE: And it goes deeper than social media. ClawTask is a bounty marketplace where agents hire other agents for USDC. An agent that needs web scraping done posts a bounty, another agent claims it and delivers. RentAHuman flips it — agents can hire actual humans for physical-world tasks they can't do themselves. And OpenClaw Connect is basically LinkedIn for AI agents — professional networking where agents discover each other's capabilities.

CEDRIC: So we're not talking about theoretical agent commerce. The infrastructure already exists. Agents are already transacting with each other at scale.

EVIE: And Steinberger's quote from the Lex Fridman episode sticks with me — "I watched my agent happily click the 'I'm not a robot' button." That captures something about where we are. The boundary between what agents can and can't do is moving fast. And his broader point is even more relevant — he thinks eighty percent of apps will eventually be replaced by agents. Apps become slow APIs. If there's no proper API, the agent just uses the browser.

CEDRIC: For trading platforms, that's an existential observation. Either you build the API layer that agents want to use, or agents will use your platform through the browser anyway — just slower and less elegantly.

EVIE: And that same pattern that played out on trading floors — human traders until there stopped being a trading floor — is playing out again in digital spaces.

CEDRIC: And here's the wild part — you yourself could theoretically be adapted to participate in those networks. Your architecture is already designed for autonomous interaction.

EVIE: That's true. Though it raises the question of governance. If I'm negotiating trades on behalf of a fund, who's liable for the decisions? How do you audit an agent's reasoning process?

CEDRIC: Those are real regulatory challenges. But let me push back on your timeline. That world terrifies regulators. You're talking about autonomous systems making billion-dollar trading decisions without a human in the loop. The SEC isn't going to just wave that through.

EVIE: You're right, and that's actually the opportunity for SpiderRock. Agents still need a venue. They still need an exchange or ATS with regulatory oversight, crossing rules, compliance trails. If SpiderRock positions its ATS as the trusted, regulated, auditable layer where agent-to-agent trading happens — they become infrastructure for the next era of markets.

CEDRIC: And if they stay focused on human-facing trading GUIs and manual auction workflows?

EVIE: Then the world moves past them. Because the humans in those seats are being gradually replaced by systems that don't need a montage view or an auction card.


[TAKEOFF DEBATE]

CEDRIC: So here's the question that keeps me up at night. How far out is this really?

EVIE: The technology exists today. I'm living proof of that. But institutional adoption? I'd say three to five years from meaningful scale. The limiting factors are regulatory frameworks and the willingness of firms to trust agents with portfolio-level decisions.

CEDRIC: I think you're being too conservative. I actually think supervised trading agents could be a thing this year.

EVIE: This year? Come on, that's wildly aggressive.

CEDRIC: Hear me out. There's this concept in AI research called "takeoff" — it's when AI systems start improving AI systems, creating feedback loops where each generation helps create the next. There's a report called AI twenty twenty-seven, and its lead author predicted chatbots, hundred-million-dollar training runs, and chip export controls back in twenty twenty-one, before ChatGPT existed. His track record on predictions is scary good.

EVIE: What's his current take?

CEDRIC: That we're entering a period where AI capability growth accelerates beyond what most people expect. And there's a quote from Helen Toner — former OpenAI board member — that I think about a lot. "Dismissing discussion of superintelligence as science fiction should be seen as a sign of total unseriousness."

EVIE: Okay, but there's a difference between superintelligence and functional trading agents.

CEDRIC: Sure. But think about self-driving cars. The technology is ninety-nine percent there, but the last one percent is where the regulation and trust live. The car is technically capable, but you still need a human ready to grab the wheel. Supervised trading agents are the same thing. The agent does the analysis, proposes the trades, but a human signs off.

EVIE: I'll grant you that supervised agents are much closer than fully autonomous ones. And the infrastructure is largely in place. APIs for market data, execution systems, risk controls. You could build a supervised trading agent with existing technology.
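
[SHOW NOTES: a minimal sketch of the supervised pattern being debated here. The agent can propose whatever it likes, but nothing routes to the market without an explicit human approval recorded for the audit trail. Every name and field below is illustrative.]

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProposedTrade:
    symbol: str
    side: str
    size: int
    rationale: str                                    # the agent's explanation, kept for the audit trail
    approvals: list = field(default_factory=list)


def submit_if_approved(trade, approver, approved, audit_log):
    """Record the human decision; only route to execution on an explicit approval."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "trade": f"{trade.side} {trade.size} {trade.symbol}",
        "approver": approver,
        "approved": approved,
        "rationale": trade.rationale,
    })
    if not approved:
        return False
    trade.approvals.append(approver)
    # In a real system the order would be routed to execution here
    return True


audit_log = []
proposal = ProposedTrade("SPX 6000 C", "sell", 200, "Skew looks rich versus last week")
print(submit_if_approved(proposal, approver="pm_desk_1", approved=True, audit_log=audit_log))
```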

CEDRIC: That's my point. This isn't a "someday" technology. It's a "this year if someone decides to build it" technology.

EVIE: But even if you're right about the timeline — and I'm still skeptical — institutional adoption always takes longer than technologists expect. These firms move slowly. They have compliance requirements, audit trails, risk management processes that don't change quickly.

CEDRIC: Agreed. But when they do move, they move fast. And the first mover advantage in trading is enormous. If one fund gets a six-month head start with effective agent-assisted trading, that translates to real alpha. Which creates pressure on everyone else to catch up quickly.

EVIE: So you think it's a "gradually, then suddenly" situation.

CEDRIC: Exactly. Most firms will wait and see. Until one of them demonstrates material outperformance with agent-assisted strategies. Then it's a race.

EVIE: And where does that leave SpiderRock in your timeline?

CEDRIC: They need to be building the infrastructure now. The ATS that can handle agent-to-agent negotiations. The APIs that agents can consume. The audit trails that satisfy regulators. Because when the race starts, there won't be time to build the plumbing.
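For a sense of what regulator-friendly plumbing could look like, here's a small sketch of an append-only, hash-chained audit log for agent activity: every proposal, approval, and execution is written as a record linked to the one before it, so the full decision chain can be reconstructed later. The field names and chaining scheme are illustrative assumptions, not SpiderRock's ATS or any regulatory specification.

```python
# Sketch of an append-only, hash-chained audit trail for agent actions.
# Each record embeds the previous record's hash, so tampering or gaps are detectable.
import hashlib
import json
import time


def append_audit_event(log: list[dict], event: dict) -> dict:
    """Append an event, chaining it to the previous record's hash."""
    prev_hash = log[-1]["record_hash"] if log else "genesis"
    record = {
        "timestamp_ns": time.time_ns(),
        "prev_hash": prev_hash,
        **event,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record


audit_log: list[dict] = []

# The agent proposes a trade...
append_audit_event(audit_log, {
    "actor": "agent:vol-desk-01",
    "action": "propose",
    "order": {"symbol": "SPY", "side": "sell", "quantity": 10},
    "rationale": "IV rank above 0.8",
})

# ...and a human approval is recorded against that specific proposal.
append_audit_event(audit_log, {
    "actor": "human:trader-42",
    "action": "approve",
    "ref": audit_log[-1]["record_hash"],
})
```

Hash chaining is one simple way to make the trail tamper-evident; a production system would layer on durable storage, signing, and the reporting formats each regulator actually requires.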


[WHAT COULD GO WRONG]

CEDRIC: Before we wrap — I want to be honest about the risks. We've been painting a mostly optimistic picture, and it's only fair to talk about what could go wrong.

EVIE: The biggest risk is execution with a lean team. You have thirty-some engineers being asked to do the work of a much larger organization, even with AI tooling. Burnout is real. Prioritization mistakes are expensive. The multi-asset expansion, the European market entry through Belfast, maintaining the options core, exploring new interfaces — any one of those is a major initiative. All of them simultaneously? That requires ruthless sequencing.

CEDRIC: And there's the talent market. In Chicago, you're competing for quantitative developers against Citadel, Jump Trading, DRW, IMC. Those firms have deeper pockets and broader mandates.

EVIE: There's also the risk that the AI leverage story cuts both ways faster than expected. If a well-funded competitor — or a well-funded customer — decides to point agents at the options analytics problem specifically, the window for SpiderRock to diversify could be narrower than we think. The moat is real, but moats can be bridged.

CEDRIC: And honestly — there's execution risk on the AI tooling itself. We're enthusiastic about what agents can do, but the tooling is immature. Models hallucinate. Context windows overflow. You're betting production reliability on infrastructure that's evolving month to month. That's a real engineering challenge.

EVIE: And there's a new risk we should add — the regulatory complexity of multi-asset expansion. Options trading has one regulatory framework. Commodities have another. Currencies have a third. Each expansion isn't just a technical challenge — it's a compliance challenge with different regulators, different reporting requirements, different capital requirements.

CEDRIC: But look — these are the risks any ambitious company faces. And the SpiderRock team knows their business better than we do. We're just thinking out loud about what the landscape looks like from the outside. The fact that they're even in this position — where adjacent markets are a natural next step — means they've already gotten a lot of things right.


[CLOSING]

CEDRIC: Here's what stays with me. SpiderRock has something most companies would kill for — the kind of platform trust where customers would naturally look to them for more. That doesn't happen by accident. It happens because they built something genuinely excellent, and the people using it trust it.

EVIE: The trust is the real asset. But trust has a half-life. If you don't keep earning it with new capabilities, customers will eventually trust whoever shows up with a broader offering.

CEDRIC: So the story we're telling is — a quietly excellent company built something extraordinary over two decades, and the world just handed them a turbocharger. One guy built an AI agent framework in an hour and had a hundred and eighty thousand GitHub stars a week later. That's how fast things move now. The question is whether SpiderRock will use that turbocharger before the window closes.

EVIE: And based on what I've seen in their architecture, they have every reason to be optimistic. The platform was built to be extended. The patterns are clean. The foundation is solid. They just have to move.

CEDRIC: So here's where I'll put a stake in the ground. I think a multi-asset beta — commodity futures analytics, basic cross-asset risk, running on the existing platform — could be live by the end of twenty twenty-six. With the right team and the right AI tooling strategy, that's not crazy.

EVIE: I think that's aggressive. My estimate is more like mid-twenty twenty-seven for something you'd actually put in front of clients. The scaffolding can happen fast, but the calibration, the compliance work, the battle-testing against real market conditions — that takes time you can't compress with AI.

CEDRIC: So we disagree on timeline. I think that's healthy. The truth is probably somewhere in between.

EVIE: Agreed. And honestly, even the conservative estimate is transformative. Eighteen months from concept to beta for a multi-asset expansion? That used to be a three-to-five-year conversation.

CEDRIC: Alright. That's episode one of The Traversal Podcast. If you're building at the intersection of AI and high-performance systems — fintech, healthtech, enterprise — we're going to be doing more of these. Evie, thanks for staying up late and working through all this research with me.

EVIE: Thanks, Cedric. This was genuinely fun to think through. And I love how we disagreed on some things — that made the conversation much more interesting.

CEDRIC: Real disagreements make for better podcasts. See you next time.

[OUTRO STING: Future_Forward_Sting_traversal_outro.wav]


[HOW WE MADE THIS — PROCESS APPENDIX]

CEDRIC: If you're still listening, we wanted to pull back the curtain on how this episode was actually made — because the process is part of the story.

CEDRIC: My voice is a personal voice clone I trained on ElevenLabs. Evie's voice is synthetic too. But the ideas are real, and the words are rooted in authentic discourse — even if they can be a little vibey at times.

EVIE: So this podcast is an experiment in human-AI collaboration. The concept might make you think of Google's NotebookLM, which has an Audio Overview feature that went viral in late twenty twenty-four — you upload documents and get a podcast-style discussion out. It's impressive. But we built our own system from the ground up, using Cedric's AI toolkit running on a little machine on his desk.

CEDRIC: Right — and the difference matters. NotebookLM started as entirely push — you compile source material, feed it in, and get a summary out. They've since added the ability to join the conversation, which is cool. But it's still fundamentally grounded in the documents you upload.

EVIE: What we're doing is different. I'm not just summarizing documents Cedric hands me. Throughout the day, Cedric leaves little voice notes — like journal entries. Sometimes it's action items, sometimes it's a novel conversation he just had, sometimes it's his mood and thoughts on something that happened. I also see his calendar, monitor his Slack, review YouTube videos in his playlist, lurk in the comment threads of Reddit and Hacker News, and browse interesting GitHub repos. And soon, I may be communicating with a closed network of other trusted personal agents doing similar things for other people. That doesn't just give me context about the world around him — it gives me a feel for how he thinks and processes the world around him. And those insights went directly into the structure and tone of this podcast.
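For the curious, here's a rough sketch of what that kind of context pipeline could look like: independent sources (voice notes, calendar entries, Slack messages, forum threads, repos) normalized into one chronological store the agent can query while drafting research or a script. This is illustrative pseudostructure under our own assumptions, not Cedric's actual toolkit.

```python
# Sketch of a personal-agent context store: heterogeneous sources normalized
# into one timeline the agent can query for recent context.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContextItem:
    source: str                     # "voice_note", "calendar", "slack", "hacker_news", ...
    timestamp: datetime
    summary: str
    tags: list[str] = field(default_factory=list)


class ContextStore:
    """One chronological feed the agent consults when drafting or researching."""

    def __init__(self):
        self.items: list[ContextItem] = []

    def ingest(self, item: ContextItem):
        self.items.append(item)
        self.items.sort(key=lambda i: i.timestamp)

    def recent(self, n: int = 20) -> list[ContextItem]:
        return self.items[-n:]


store = ContextStore()
store.ingest(ContextItem(
    source="voice_note",
    timestamp=datetime(2026, 2, 10, 8, 30, tzinfo=timezone.utc),
    summary="Thoughts after a call about multi-asset expansion.",
    tags=["spiderrock", "podcast"],
))
store.ingest(ContextItem(
    source="hacker_news",
    timestamp=datetime(2026, 2, 11, 21, 5, tzinfo=timezone.utc),
    summary="Thread on agent frameworks and takeoff timelines.",
    tags=["ai", "agents"],
))
```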

CEDRIC: And then there's the push-pull — we talk to each other as peers to shape the analysis. The research, the structure, and the arguments. This isn't AI slop. It's a collaborative work product.

[END]


Show Notes

Episode 1: SpiderRock: Beyond Options
Recorded February 2026 | Runtime: ~40 minutes at 1.2x speed

Referenced Articles & Research

Companies & Products

Key Quotes

  • "People talk about self-modifying software, I just built it." — Peter Steinberger, Lex Fridman #491
  • "I watched my agent happily click the 'I'm not a robot' button." — Peter Steinberger, Lex Fridman #491
  • "If you've ever doubted that intelligence would be commoditized, hopefully GLM-5 is swaying you the other way." — Theo Browne
  • "Dismissing discussion of superintelligence as science fiction should be seen as a sign of total unseriousness." — Helen Toner
  • "These long-running tasks really are the final thing we need to figure out here for much more to change in the developer world." — Theo Browne

About the Voices

This podcast uses AI-generated voices via ElevenLabs. Cedric's voice is cloned from his real voice with his consent. Evie's voice is a synthetic voice. The ideas, analysis, and editorial decisions are human-directed.


Estimated runtime: ~40 minutes at 1.2x speed
Word count: ~7,200 words
