@eonist
Created January 30, 2026 04:04
signing with chatgpt

OpenAI recently introduced OAuth authentication for ChatGPT subscriptions, allowing tools like Cline to use your existing subscription instead of managing API keys. Here are the key restrictions: [cline]

Usage Limits

The main restriction is rate limiting tied to your ChatGPT subscription tier: [reddit]

  • Plus users: Around 300–1,500 local messages every 5 hours, or 50–400 cloud tasks depending on complexity
  • Pro users: Higher limits but still capped with 5-hour and weekly limits
  • Usage counts against your overall ChatGPT subscription quota, not separate API credits [github]

Many users report hitting limits quickly, especially with intensive agentic coding workflows. [community.openai]

Model Access

The models available depend on your subscription tier:

  • gpt-5.2-codex: Optimized for agentic coding
  • gpt-5.2: General-purpose with strong reasoning
  • gpt-5-mini: Fast and cost-effective
  • o3 / o4-mini: Advanced and everyday reasoning tasks

How It Differs from API Keys

| Aspect | OAuth (ChatGPT Subscription) | API Keys |
| --- | --- | --- |
| Pricing | Flat-rate (subscription) [cline] | Pay-per-token [reddit] |
| Limits | Time-based (5h/weekly caps) [github] | Token/request limits [platform.openai] |
| Setup | One-click sign-in | Manual key management |
| Credential exposure | Tokens handled automatically [cline] | Risk of accidental commits |
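To make the credential-exposure row concrete, here is a minimal sketch of the pay-per-token path (assuming Node 18+ with ESM and the standard chat-completions endpoint; the prompt and model choice are only illustrative). The key has to live in an environment variable you keep out of version control, whereas with the OAuth sign-in Cline obtains and stores the access token for you.

```typescript
// Pay-per-token path: the secret is your responsibility.
const apiKey = process.env.OPENAI_API_KEY; // never hard-code or commit this
if (!apiKey) throw new Error("OPENAI_API_KEY is not set");

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-5-mini", // one of the models listed above
    messages: [{ role: "user", content: "Explain what a 429 response means." }],
  }),
});
if (res.status === 429) console.warn("Token/request limit reached (see the Limits row)");
console.log((await res.json()).choices?.[0]?.message?.content);
```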

Key Considerations

  • Shared quota: Usage in Cline draws from the same pool as your ChatGPT web/app usage [github]
  • No separate budget: You cannot add API credits to bypass OAuth limits—they're tied strictly to your subscription tier [github]
  • Security: Your OpenAI credentials never leave OpenAI's servers; Cline only receives access tokens [cline]

For heavy coding use, some users find the 5-hour and weekly caps frustrating and prefer third-party providers or direct API access for more flexibility. [community.openai]

eonist commented Jan 30, 2026

Code Review in OpenAI Codex is a dedicated feature that analyzes your code changes and provides feedback before you commit or open a pull request. It has a separate usage quota from regular local/cloud tasks.

What Code Review Does

The /review command launches a specialized reviewer that examines diffs and reports prioritized, actionable findings without modifying your code. It offers several review modes (the diffs they examine are sketched after the list):[inventivehq]

  • Review against a base branch: Compares your branch against main or develop before opening a PR

  • Review uncommitted changes: Analyzes staged or modified files

  • Review a commit: Examines a specific commit

  • Custom review instructions: Tailored analysis (e.g., "Focus on SQL injection and XSS vulnerabilities")[inventivehq]
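Codex's internal invocation isn't public, but the diffs those modes examine map onto plain git output. A rough sketch (Node/TypeScript, assuming git is on PATH; handing the output to a reviewer model is the illustrative part):

```typescript
import { execSync } from "node:child_process";

// Reproduce the diffs the three review modes would look at.
const base = "main";
const diffs = {
  branchDiff: execSync(`git diff ${base}...HEAD`, { encoding: "utf8" }), // review against a base branch
  workingDiff: execSync("git diff HEAD", { encoding: "utf8" }),          // review uncommitted (staged + modified) changes
  commitDiff: execSync("git show --patch HEAD", { encoding: "utf8" }),   // review a specific commit
};

for (const [mode, diff] of Object.entries(diffs)) {
  console.log(`${mode}: ${diff.split("\n").length} lines queued for review`);
}
```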

Why It Has Separate Limits

Code reviews are treated differently because they're designed to provide "the quality and depth of a senior engineer". The reviews:[linkedin]

  • Use a more capable model by default (GPT-5.2-Codex)[openai]

  • Analyze entire diffs rather than responding to individual prompts

  • Generate comprehensive, structured feedback

Weekly vs. Per-Task Quotas

| Task Type | What Counts | Limit Type |
| --- | --- | --- |
| Local messages | Chat/edit prompts in your IDE | 5-hour rolling + weekly cap [github] |
| Cloud tasks | Heavy refactors, migrations, parallel jobs | 5-hour rolling + weekly cap [vladimirsiedykh] |
| Code reviews | /review command or GitHub PR reviews | Weekly only [apidog] |

The code review quota is intentionally lower because each review consumes significant compute to thoroughly analyze your changes. It's meant for pre-commit/pre-PR checks rather than continuous use throughout development.[openai]

eonist commented Jan 30, 2026

No, Claude Code does not offer an official OAuth sign-in for third-party tools like Cline. In fact, Anthropic recently took the opposite approach—actively blocking such usage.

The Workaround (Now Blocked)

Previously, users could connect their Claude Max subscription to Cline through the Claude Code CLI as a workaround:[cline]

  1. Install Claude Code CLI: `npm install -g @anthropic-ai/claude-code`

  2. Configure Cline to use "Claude Code" as the API provider

  3. Set the path to your Claude CLI executable

This let Cline delegate authentication to the Claude Code CLI, effectively routing requests through your subscription.[github]
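Mechanically, the delegation looked roughly like the sketch below: a hypothetical reconstruction, not Cline's actual source. The `-p` (non-interactive print) flag and the executable path are assumptions based on the CLI's documented usage; the point is that the editor never held an API key, it just shelled out to a client that was already signed in.

```typescript
import { execFile } from "node:child_process";

// Hypothetical sketch of the now-blocked pattern: delegate the request to the
// locally installed Claude Code CLI, which holds the subscription's OAuth tokens.
const claudePath = "/usr/local/bin/claude"; // whatever path you set in Cline's provider settings

execFile(claudePath, ["-p", "Summarize the uncommitted changes in this repo"], (err, stdout) => {
  if (err) throw err;          // since early 2026 this fails for third-party callers
  console.log(stdout.trim());  // response produced under the Max subscription's quota
});
```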

Anthropic's Crackdown (January 2026)

Anthropic implemented strict technical safeguards in early January 2026 to prevent third-party applications from using Claude Code subscription tokens. The changes:[venturebeat]

  • Subscription OAuth tokens no longer work outside the official Claude Code client[ai-checker.webcoda.com]

  • Third-party "harnesses" (software wrappers that managed OAuth tokens) were specifically targeted[venturebeat]

  • Users attempting to use credentials externally now receive errors on the first request[ai-checker.webcoda.com]

Why Anthropic Blocked It

According to reports, Anthropic's motivations include:[ai-checker.webcoda.com]

  • Economic concerns: Flat-rate subscriptions being used for heavy automated workflows that would cost significantly more at API rates

  • Support burden: Users blaming Anthropic for issues caused by third-party tools

  • Ecosystem control: Maintaining direct relationship with developers rather than becoming a "swappable backend"

Current Options for Cline Users

| Method | Cost Model | Status |
| --- | --- | --- |
| Claude Code CLI integration | Subscription | Blocked [ai-checker.webcoda.com] |
| Anthropic API key | Pay-per-token | Works ✅ |
| Third-party proxies (CLIProxyAPI) | Subscription | Blocked [antran] |

The community response has been largely critical, with some developers canceling their Max subscriptions in protest. As one user noted: "When I pay for a subscription, I'm not paying for Claude Code. I'm paying for Anthropic models. The client should be able to choose how those models are consumed". [linkedin]

For now, if you want to use Claude in Cline, you'll need to use a separate Anthropic API key with pay-per-token billing rather than your subscription.[reddit]

eonist commented Jan 30, 2026

That's likely a significant factor. Claude Code CLI uses intelligent model routing to minimize costs, while third-party tools like Cline typically send everything to the most capable (and expensive) model.

How Claude Code CLI Optimizes Costs

The official Claude Code CLI implements a multi-tier system: [claudelog]

| Task Type | Model Used | Relative Cost |
| --- | --- | --- |
| Routing/analysis | Haiku 4.5 | ~3x cheaper than Sonnet |
| Simple tasks (linting, syntax fixes, explanations) | Haiku 4.5 | ~3x cheaper than Sonnet |
| Standard coding | Sonnet 4.5 | Baseline |
| Complex reasoning/architecture | Opus 4.5 | ~5x more expensive than Sonnet |

Haiku runs 4-5x faster than Sonnet with sub-200ms latency, making it ideal for high-frequency lightweight tasks. The CLI analyzes query complexity first, then routes to the most cost-effective model. [notchrisgroves] [youtube]
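A minimal sketch of that routing idea, written against the public Anthropic TypeScript SDK rather than Claude Code's closed internals; the model identifiers and the complexity heuristic are placeholders, not Anthropic's actual logic:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

type Tier = "light" | "standard" | "complex";

// Deliberately naive classifier: the point is the shape of the decision, not the heuristic.
function classify(prompt: string): Tier {
  if (/architect|migration|design review/i.test(prompt)) return "complex";
  if (prompt.length < 200 && !/refactor/i.test(prompt)) return "light";
  return "standard";
}

const MODEL_FOR_TIER: Record<Tier, string> = {
  light: "claude-haiku-4-5",     // placeholder id for Haiku 4.5
  standard: "claude-sonnet-4-5", // placeholder id for Sonnet 4.5
  complex: "claude-opus-4-5",    // placeholder id for Opus 4.5
};

async function route(prompt: string) {
  const model = MODEL_FOR_TIER[classify(prompt)];
  const msg = await client.messages.create({
    model,
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });
  return msg.content; // most calls never reach the expensive tier
}
```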

Why This Matters for Anthropic's Subscription

The economics are stark: [news.ycombinator]

  • A heavy Claude Code user might consume $1,000+ worth of API tokens per month

  • Anthropic charges $200/month for the Max subscription

  • This only works if most requests go to cheap models like Haiku

When third-party tools bypass this routing and send everything to Sonnet or Opus, Anthropic loses money on every request. As one Hacker News commenter noted: "By treating Anthropic models as a commodity within a neutral harness, you empower users at the cost of Anthropic".[reddit]
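A back-of-the-envelope calculation makes the mix effect visible. It uses only the relative costs quoted above (Haiku roughly a third of Sonnet, Opus roughly 5x Sonnet); the $1,000 "all-Sonnet" baseline is an assumption for illustration, not a measured figure.

```typescript
// Relative per-request costs from the table above (Sonnet = 1).
const relCost = { haiku: 1 / 3, sonnet: 1, opus: 5 };
const allSonnetBaselineUSD = 1_000; // hypothetical monthly spend if every call hit Sonnet

function monthlyCost(mix: { haiku: number; sonnet: number; opus: number }): number {
  return allSonnetBaselineUSD *
    (mix.haiku * relCost.haiku + mix.sonnet * relCost.sonnet + mix.opus * relCost.opus);
}

// Routed client (mostly Haiku): ~$733 against a $200/month subscription
console.log(monthlyCost({ haiku: 0.7, sonnet: 0.25, opus: 0.05 }).toFixed(0));
// Unrouted client (all Sonnet/Opus): ~$1,400 against the same $200/month subscription
console.log(monthlyCost({ haiku: 0, sonnet: 0.9, opus: 0.1 }).toFixed(0));
```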

The Cost Difference Is Massive

Users who've implemented their own routing report 60-80% cost reductions compared to using expensive models for everything. Without routing: [reddit]

  • 90% of requests that could use cheap models instead hit premium models

  • Simple tasks like variable renaming or code comments use the same compute as architectural decisions

  • The "all-you-can-eat" subscription becomes financially unsustainable for Anthropic

So yes—Cline always hitting Sonnet/Opus while Claude Code intelligently routes most tasks to Haiku likely made third-party usage far more expensive for Anthropic to subsidize, contributing to the crackdown.[news.ycombinator]

eonist commented Jan 30, 2026

The picture is actually more nuanced than a simple "yes" or "no."

What Claude Code Uses Haiku For (Regardless of Your Setting)

Even when you set Opus 4.5 as your model, Claude Code uses Haiku for specific background tasks:[reddit]

  • Status messages: The "Transforming...", "Analyzing...", "Reading..." text

  • Command approval: Deciding if a command matches your allow/deny patterns

  • Reading large files: ~50% of all Claude Code calls reportedly use Haiku for file reading operations[reddit]

  • Tool call parsing: Processing the structured output from tools

These aren't your "main" coding tasks—they're internal operations where using Opus would be wasteful.[reddit]
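A third-party client could in principle copy the same split. The sketch below shows the shape of it; it is not Claude Code's actual source, and the model names are placeholders for the Haiku and user-selected tiers.

```typescript
type CallKind = "status-text" | "permission-check" | "file-read" | "user-prompt";

function modelFor(kind: CallKind, userChoice: string): string {
  // Background bookkeeping is pinned to the cheap tier regardless of the user's setting...
  if (kind !== "user-prompt") return "claude-haiku-4-5"; // placeholder id
  // ...while the user's actual prompts always go to whatever they configured.
  return userChoice;
}

console.log(modelFor("file-read", "claude-opus-4-5"));   // -> claude-haiku-4-5 (hardcoded)
console.log(modelFor("user-prompt", "claude-opus-4-5")); // -> claude-opus-4-5 (your choice)
```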

Your Primary Tasks Respect Your Model Choice

For actual coding work—writing code, refactoring, answering questions—Claude Code uses whatever model you've set. One video analysis specifically noted: "Claude Code does not intelligently determine which model to use for your request. There's no back-end logic that says 'this is a simple request, so let's use a lightweight model'".[youtube][reddit]

So if you set Opus 4.5, your coding prompts go to Opus 4.5.

The Cost Implication

| Task Type | Model Used | Your Control |
| --- | --- | --- |
| Status messages / UI | Haiku | No (hardcoded) [reddit] |
| Command approval | Haiku | No (hardcoded) [reddit] |
| File reading operations | Haiku | No (hardcoded) [reddit] |
| Your actual prompts | Your chosen model | Yes [youtube] |

Why This Still Saves Anthropic Money

The 50% figure is significant. Reading files, checking permissions, and internal routing add up quickly in agentic workflows. By handling these with Haiku, Anthropic dramatically reduces the cost per session—even when your "real" work hits Opus.[reddit]

Third-party tools like Cline don't have this optimization layer. Every API call—including file reads and internal checks—would use whatever expensive model you configured, making the economics unsustainable for a flat-rate subscription.[reddit]

eonist commented Jan 30, 2026

You've identified the key architectural difference. OpenAI built server-side infrastructure to handle third-party OAuth access, while Anthropic's cost optimizations only exist in the Claude Code client itself.

OpenAI's Approach: Server-Side Control

OpenAI explicitly designed their OAuth system to work with third-party tools:[developers.openai]

  • Sign in with ChatGPT is an official authentication method they actively support

  • Third-party tools like Cline connect through OpenAI's "LLM proxy server"[developers.openai]

  • Rate limiting, model routing, and cost management happen on OpenAI's backend regardless of which client sends the request

  • OpenAI can apply the same optimizations whether you use their CLI, VS Code extension, or Cline[cline]

Anthropic's Approach: Client-Side Only

Anthropic never built equivalent server-side infrastructure:[news.ycombinator]

  • Claude Code's OAuth endpoints are not a public API—they're internal authentication for the official client only

  • All the smart routing (Haiku for file reads, etc.) happens inside the Claude Code CLI

  • Third-party tools like OpenCode and Roo Code had to "reverse-engineer Claude Code's OAuth endpoints and API"[news.ycombinator]

  • When you bypass the official client, you bypass all cost optimizations

Why This Matters

| Aspect | OpenAI Codex | Anthropic Claude Code |
| --- | --- | --- |
| OAuth designed for | Third-party tools ✅ [developers.openai] | Internal client only [news.ycombinator] |
| Cost routing location | Server-side (backend) | Client-side (CLI) |
| Third-party tools | Officially supported [cline] | Treated as "spoofing" [reddit] |
| Subscription economics | Sustainable with any client | Only works with official client [reddit] |

As one commenter explained: "OpenCode reverse-engineers Claude Code's OAuth endpoints and API... This is harmful from Anthropic's perspective because Claude Code is subsidized relative to the API".[news.ycombinator]

The Buffet Analogy

A popular Reddit comment captured it well: "Utilizing third-party wrappers is akin to showing up at Anthropic's all-you-can-eat buffet with an elephant. Since Anthropic manages Claude Code, they can fine-tune it to reduce context consumption... Third-party developers lack the motivation to reduce context usage".[reddit]

So yes—OpenAI can afford to let any client use their OAuth because their backend enforces the rules. Anthropic's model only works when you use their client, which is why they had to block everything else.
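As a toy illustration of what "the backend enforces the rules" means: in a server-side design the proxy, not the client, decides whether a request fits the subscriber's window and which model serves it. Everything below (endpoint, header names, limits) is illustrative and not OpenAI's actual implementation.

```typescript
import http from "node:http";

const WINDOW_MS = 5 * 60 * 60 * 1000;      // the 5-hour rolling window mentioned above
const MAX_REQUESTS = 300;                   // illustrative Plus-tier cap
const usage = new Map<string, number[]>();  // subscription token -> request timestamps

http.createServer((req, res) => {
  const token = req.headers.authorization ?? "anonymous";
  const now = Date.now();
  const recent = (usage.get(token) ?? []).filter((t) => now - t < WINDOW_MS);

  if (recent.length >= MAX_REQUESTS) {
    res.writeHead(429, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "subscription window exhausted" })); // enforced server-side, whatever the client
    return;
  }
  usage.set(token, [...recent, now]);

  // ...here the real service would pick a model and forward the request;
  // the client never sees (and cannot bypass) this logic.
  res.writeHead(202, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ status: "accepted" }));
}).listen(8080);
```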
