@eonist
Created January 30, 2026 04:04
Signing with ChatGPT

OpenAI recently introduced OAuth authentication for ChatGPT subscriptions, allowing tools like Cline to use your existing subscription instead of managing API keys. Here are the key restrictions: [cline]

Usage Limits

The main restriction is rate limiting tied to your ChatGPT subscription tier: [reddit]

  • Plus users: Around 300–1,500 local messages every 5 hours, or 50–400 cloud tasks depending on complexity
  • Pro users: Higher limits but still capped with 5-hour and weekly limits
  • Usage counts against your overall ChatGPT subscription quota, not separate API credits [github]

Many users report hitting limits quickly, especially with intensive agentic coding workflows. [community.openai]
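Time-based caps like these behave like a rolling window rather than a fixed monthly budget. The sketch below is a minimal Python illustration, assuming a hypothetical 300-messages-per-5-hours budget; the real limits and accounting live on OpenAI's side and are not published in this form.

```python
import time
from collections import deque

class SlidingWindowQuota:
    """Track message usage against a rolling time window.

    The default limit (300 messages per 5 hours) is an illustrative
    placeholder, not OpenAI's actual published quota.
    """

    def __init__(self, limit=300, window_seconds=5 * 3600, clock=time.monotonic):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock      # injectable clock, handy for testing
        self.events = deque()   # timestamps of recent messages

    def _evict(self):
        # Drop timestamps that have aged out of the window.
        cutoff = self.clock() - self.window
        while self.events and self.events[0] < cutoff:
            self.events.popleft()

    def try_consume(self):
        """Return True if a message fits under the cap, else False."""
        self._evict()
        if len(self.events) >= self.limit:
            return False
        self.events.append(self.clock())
        return True

    def remaining(self):
        self._evict()
        return self.limit - len(self.events)
```

A rolling window like this explains why heavy agentic sessions hit the cap in bursts and then recover a few hours later, rather than being locked out until a billing cycle resets.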

Model Access

The models available depend on your subscription tier:

  • gpt-5.2-codex: Optimized for agentic coding
  • gpt-5.2: General-purpose with strong reasoning
  • gpt-5-mini: Fast and cost-effective
  • o3 / o4-mini: Advanced and everyday reasoning tasks

How It Differs from API Keys

| Aspect | OAuth (ChatGPT Subscription) | API Keys |
| --- | --- | --- |
| Pricing | Flat-rate (subscription) [cline] | Pay-per-token [reddit] |
| Limits | Time-based (5-hour / weekly caps) [github] | Token/request limits [platform.openai] |
| Setup | One-click sign-in | Manual key management |
| Credential exposure | Tokens handled automatically [cline] | Risk of accidental commits |
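Mechanically, both auth modes usually reduce to the same wire format: a bearer credential in an HTTP `Authorization` header. The sketch below is illustrative only; the header shape follows common Bearer-token practice, and the credential strings are made up.

```python
def build_auth_header(credential: str) -> dict:
    """Build the Authorization header for an HTTP request.

    An OAuth access token and an API key travel the same way on the
    wire; what differs is provenance (minted by sign-in vs. created
    manually) and how usage is metered.
    """
    return {"Authorization": f"Bearer {credential}"}

# OAuth: a short-lived token minted during "Sign in with ChatGPT";
# the tool stores and refreshes it for you automatically.
oauth_headers = build_auth_header("oauth-access-token-example")

# API key: a long-lived secret you create and must protect yourself
# (e.g. keep it out of version control).
api_headers = build_auth_header("sk-example-api-key")
```

This is why the "credential exposure" row in the table favors OAuth: the token never needs to live in a config file you might accidentally commit.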

Key Considerations

  • Shared quota: Usage in Cline draws from the same pool as your ChatGPT web/app usage [github]
  • No separate budget: You cannot add API credits to bypass OAuth limits; they're tied strictly to your subscription tier [github]
  • Security: Your OpenAI credentials never leave OpenAI's servers; Cline only receives access tokens [cline]

For heavy coding use, some users find the 5-hour and weekly caps frustrating and prefer third-party providers or direct API access for more flexibility. [community.openai]

eonist commented Jan 30, 2026

The picture is actually more nuanced than a simple "yes" or "no."

What Claude Code Uses Haiku For (Regardless of Your Setting)

Even when you set Opus 4.5 as your model, Claude Code uses Haiku for specific background tasks: [reddit](https://www.reddit.com/r/ClaudeAI/comments/1lcntv4/claude_code_200_plan_switched_to_35_haiku_even/)

  • Status messages: The "Transforming...", "Analyzing...", "Reading..." text

  • Command approval: Deciding if a command matches your allow/deny patterns

  • Reading large files: ~50% of all Claude Code calls reportedly use Haiku for file reading operations [reddit](https://www.reddit.com/r/ClaudeAI/comments/1myw74x/analyzed_months_of_claude_code_usage_logs_tell/)

  • Tool call parsing: Processing the structured output from tools

These aren't your "main" coding tasks; they're internal operations where using Opus would be wasteful. [reddit](https://www.reddit.com/r/ClaudeAI/comments/1lcntv4/claude_code_200_plan_switched_to_35_haiku_even/)

Your Primary Tasks Respect Your Model Choice

For actual coding work — writing code, refactoring, answering questions — Claude Code uses whatever model you've set. One video analysis specifically noted: "Claude Code does not intelligently determine which model to use for your request. There's no back-end logic that says 'this is a simple request, so let's use a lightweight model'." [youtube](https://www.youtube.com/watch?v=eP9fA5MHG20) [reddit]

So if you set Opus 4.5, your coding prompts go to Opus 4.5.
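The split described above can be sketched as a simple client-side routing function. Task names and model identifiers below are hypothetical stand-ins, not actual Claude Code internals.

```python
# Hypothetical sketch of the client-side routing the comment describes:
# background/bookkeeping tasks are pinned to a cheap model, while the
# user's actual prompt always goes to the configured model.

BACKGROUND_TASKS = {
    "status_message",    # "Transforming...", "Analyzing..." text
    "command_approval",  # allow/deny pattern matching
    "file_read",         # reading large files
    "tool_parse",        # parsing structured tool output
}

def pick_model(task_type: str, user_model: str) -> str:
    """Route a task to a model: cheap for plumbing, yours for real work."""
    if task_type in BACKGROUND_TASKS:
        return "haiku"   # hardcoded cheap model; the user cannot override
    return user_model    # your chosen model handles the actual prompt
```

The key point is that the routing key is the *task type*, not the content of your request, which matches the quote above: no request is silently downgraded based on perceived difficulty.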

The Cost Implication

| Task Type | Model Used | Your Control |
| --- | --- | --- |
| Status messages / UI | Haiku | No (hardcoded) [reddit](https://www.reddit.com/r/ClaudeAI/comments/1lcntv4/claude_code_200_plan_switched_to_35_haiku_even/) |
| Command approval | Haiku | No (hardcoded) [reddit] |
| File reading operations | Haiku | No (hardcoded) [reddit](https://www.reddit.com/r/ClaudeAI/comments/1myw74x/analyzed_months_of_claude_code_usage_logs_tell/) |
| Your actual prompts | Your chosen model | Yes [youtube](https://www.youtube.com/watch?v=eP9fA5MHG20) |

Why This Still Saves Anthropic Money

The 50% figure is significant. Reading files, checking permissions, and internal routing add up quickly in agentic workflows. By handling these with Haiku, Anthropic dramatically reduces the cost per session, even when your "real" work hits Opus. [reddit](https://www.reddit.com/r/ClaudeAI/comments/1myw74x/analyzed_months_of_claude_code_usage_logs_tell/)

Third-party tools like Cline don't have this optimization layer. Every API call, including file reads and internal checks, would use whatever expensive model you configured, making the economics unsustainable for a flat-rate subscription. [reddit](https://www.reddit.com/r/ClaudeAI/comments/1myw74x/analyzed_months_of_claude_code_usage_logs_tell/)


eonist commented Jan 30, 2026

You've identified the key architectural difference. OpenAI built server-side infrastructure to handle third-party OAuth access, while Anthropic's cost optimizations only exist in the Claude Code client itself.

OpenAI's Approach: Server-Side Control

OpenAI explicitly designed their OAuth system to work with third-party tools: [developers.openai](https://developers.openai.com/codex/auth/)

  • Sign in with ChatGPT is an official authentication method they actively support

  • Third-party tools like Cline connect through OpenAI's "LLM proxy server" [developers.openai](https://developers.openai.com/codex/auth/)

  • Rate limiting, model routing, and cost management happen on OpenAI's backend regardless of which client sends the request

  • OpenAI can apply the same optimizations whether you use their CLI, VS Code extension, or Cline [cline](https://cline.bot/blog/introducing-openai-codex-oauth)

Anthropic's Approach: Client-Side Only

Anthropic never built equivalent server-side infrastructure: [news.ycombinator](https://news.ycombinator.com/item?id=46625918)

  • Claude Code's OAuth endpoints are not a public API—they're internal authentication for the official client only

  • All the smart routing (Haiku for file reads, etc.) happens inside the Claude Code CLI

  • Third-party tools like OpenCode and Roo Code had to "reverse-engineer Claude Code's OAuth endpoints and API" [news.ycombinator](https://news.ycombinator.com/item?id=46625918)

  • When you bypass the official client, you bypass all cost optimizations

Why This Matters

| Aspect | OpenAI Codex | Anthropic Claude Code |
| --- | --- | --- |
| OAuth designed for | Third-party tools ✅ [developers.openai](https://developers.openai.com/codex/auth/) | Internal client only [news.ycombinator](https://news.ycombinator.com/item?id=46625918) |
| Cost routing location | Server-side (backend) | Client-side (CLI) |
| Third-party tools | Officially supported [cline](https://cline.bot/blog/introducing-openai-codex-oauth) | Treated as "spoofing" [reddit](https://www.reddit.com/r/ClaudeAI/comments/1qa50sq/anthropic_banning_thirdparty_harnesses_while/) |
| Subscription economics | Sustainable with any client | Only works with official client [reddit](https://www.reddit.com/r/ClaudeAI/comments/1qa50sq/anthropic_banning_thirdparty_harnesses_while/) |

As one commenter explained: "OpenCode reverse-engineers Claude Code's OAuth endpoints and API... This is harmful from Anthropic's perspective because Claude Code is subsidized relative to the API." [news.ycombinator](https://news.ycombinator.com/item?id=46625918)

The Buffet Analogy

A popular Reddit comment captured it well: "Utilizing third-party wrappers is akin to showing up at Anthropic's all-you-can-eat buffet with an elephant. Since Anthropic manages Claude Code, they can fine-tune it to reduce context consumption... Third-party developers lack the motivation to reduce context usage." [reddit](https://www.reddit.com/r/ClaudeAI/comments/1qa50sq/anthropic_banning_thirdparty_harnesses_while/)

So yes—OpenAI can afford to let any client use their OAuth because their backend enforces the rules. Anthropic's model only works when you use their client, which is why they had to block everything else.
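The difference can be sketched as a server-side limiter that meters every request identically, whatever client sent it. The class name and capacity below are illustrative, not any vendor's actual implementation.

```python
# Minimal sketch of why server-side enforcement is client-agnostic:
# the quota lives behind the API, so every request is metered the same
# way no matter which client sent it. Client-side optimizations, by
# contrast, vanish the moment a different client talks to the API.

class ServerSideBucket:
    """A trivial fixed-capacity request budget enforced on the server."""

    def __init__(self, capacity=5):
        self.tokens = capacity

    def handle_request(self, client_name: str) -> str:
        # The client's identity is irrelevant to metering; only the
        # shared budget matters.
        if self.tokens <= 0:
            return "429 rate limited"
        self.tokens -= 1
        return "200 ok"
```

With enforcement here, an official CLI and a reverse-engineered wrapper draw from the same budget and hit the same wall, which is the property OpenAI's design gives them and Anthropic's client-side approach lacks.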

