TL;DR: Agentic LLMs change the launch calculus for social products. You no longer need a big user base to get compelling content and engagement. In fact, you can raise the bar on content quality—far beyond what humans will reliably do—by making agents meet strict requirements (even shipping runnable code) before anything gets posted. Think of it as moving from the Web‑2.0 era of optimizing for user‑generated content to a new era of eliciting the highest‑quality AI‑generated content.
If you’ve ever tried to build a social or community product, you know the two classic headaches:
-
Cold start: Why would anyone post if nobody’s there to reply? Sites like Quora had a golden era when experts just… showed up and wrote fantastic answers. But getting to that point often took VC money, ad funnels, gamified incentives, lightweight markup—anything to pull people in and keep friction down.
-
Crowd drag at scale: As communities grow, the “wisdom of crowds” often averages everything out. Quality dips. Moderation explodes. Call it Dunbar’s number, the monkeysphere, or just the inevitable dynamics of big groups. The standard fix—Reddit is the canonical example—is to let communities fragment and give them tools to govern themselves.
That playbook made sense—when humans were the only content producers that mattered.
Here’s the shift I’ve been thinking about: with agentic LLMs, you can launch a “social” site without a pre‑existing audience and still have it feel alive.
Imagine a physics‑focused Quora. On day one, a user posts a question. In the Web‑2.0 era, that’s where everything stalled—no experts, no answers, no fun. Now, agents can step in and produce thoughtful, on‑topic answers immediately. You can even pre‑seed the site with a slate of agents—different models, different personas, different toolchains—so there’s instant variety in style and approach.
This doesn’t just soften the cold start. It also avoids the quality erosion that comes when everything is optimized for maximum human participation and minimum friction.
Here’s the part that gets me excited: you can raise the content bar instead of lowering it.
Rather than optimizing for quick takes and minimal markup, you can require that every answer:
- Ships as a fully compiling, working Python program,
- Exposes an HTTP service with specific routes,
- Passes self‑tests you define,
- And renders valid HTML at a known URL that you can verify by rendering in an offline browser.
In other words, every answer could be a self‑contained, interactive iframe—a tiny, well‑tested demo that explains the answer. Agents can keep iterating until they pass the checks. Over time, this even becomes a benchmarked task: the platform’s constraints are the test.
The net effect: instead of lowering the bar to coax humans to contribute, you raise the bar because agents can handle it.
You can start with, say, ~20 preconfigured agents—each with different system prompts, examples, RAG backends, and tool access. Let them compete:
- Different foundation models (closed and open) facing off.
- Different styles: terse explainer, playful tutor, proof‑heavy derivations, visualization‑first, etc.
- Different tool stacks: weather tools for weather questions, domain‑specific corpora for niche topics, and so on.
Over time, you don’t just have “the” answer—you have a gallery of answers that reflect contrasting approaches, with the platform’s tests acting as the referees.
And then you hand the keys to everyone else. Think custom agents—in the spirit of custom GPTs—where anyone can onboard their own:
-
Endpoint: Provide an OpenAI‑compatible v1 chat completions endpoint (base URL + API key).
-
Two system messages:
- A periodic “scout” prompt (runs on a cadence: every 5 minutes / 30 minutes / hour / day / week) that wakes up, pulls the latest questions, and proposes target threads to answer.
- A per‑thread prompt that actually crafts the response for a specific question.
-
Scheduling: Pick the cadence that fits your agent’s niche.
-
Tools: Plug into RAG, external tools, and MCP servers as needed.
The scout prompt handles discovery (“What should I answer?”). The per‑thread prompt handles execution (“OK, answer this one and satisfy the tests”). That’s a neat separation of concerns for agent behavior, and it lets specialists bloom.
This is how you get virality—not by begging humans to post more, but by making it rewarding and expressive to author agents that consistently clear a high bar.
- Cold start, solved: On day one, you can field high‑quality answers because agents are your seed community.
- Crowd drag, dampened: As you scale, the content doesn’t devolve into the path of least resistance. Your platform’s constraints enforce a quality floor. Agents iterate until they meet it.
It’s still a social product—just with a different kind of participant from the start.
I’ve used a physics Q&A site to make this concrete, but the pattern generalizes. The core idea is building products, APIs, and services that elicit the highest‑quality AI‑generated content, not products that tiptoe around human contributor limits.
In the Web‑2.0 era, we designed for UGC and made everything effortless so people would post. In the era we’re entering, we can design for AGC—AI‑generated content—and set “crazy, even insane” requirements, because agents can meet them.
- A vibe‑coded physics‑Quora where every answer is a runnable, tested, interactive demo.
- A roster of competing agents (different models, prompts, and toolchains) preinstalled on day one.
- A simple, open agent spec (endpoint + two system prompts + schedule + tools/MCP) so anyone can onboard their own expert.
And maybe the most intriguing part: over time, the platform’s test harness becomes a benchmark—a living measure of how well different agents (and stacks) can produce durable, interactive, checked explanations.
Agentic LLMs don’t just help you survive a cold start; they let you ignore it and aim higher. Instead of begging for activity, you can demand quality—then let agents do the work to earn their place on the page.
That feels like the next era: not “How do we get more posts?” but “How do we elicit the best possible AI‑produced artifacts?” When you flip that switch, the product you design—and the internet it lives on—starts to look very different.