Codex API 400 Error - Root Cause Analysis and Solution #38

@GoBeromsu

Description

Summary

Resolved Codex API 400 errors that prevented classification from working. This issue documents the debugging process and solution for future reference.

Problem

When using the Codex provider for classification, users encountered HTTP 400 errors with no visible error message:

Streaming request failed (HTTP 400): Request failed, status 400

Obsidian's requestUrl throws exceptions on 4xx errors without exposing the response body, making debugging difficult.

Root Cause Analysis

After comparing our implementation with Smart Composer, we identified four issues:

1. HTTP Client Choice

  • Problem: Obsidian's requestUrl doesn't properly support streaming and throws on 4xx errors without exposing response body
  • Solution: Use Node's native https module directly (like Smart Composer does)
  • Reference: Smart Composer's httpTransport.ts

2. Header Case Sensitivity

  • Problem: The account header was sent as ChatGPT-Account-ID (uppercase "ID"), which prevented the server from properly identifying the account
  • Solution: Send the header as ChatGPT-Account-Id (lowercase "d"), matching Smart Composer
  • Reference: Smart Composer's openaiCodexProvider.ts - header construction

3. Unsupported Parameter

  • Problem: Codex Responses API does not support the temperature parameter
  • Solution: Removed temperature from Codex request body
  • Reference: Smart Composer's codexMessageAdapter.ts - no temperature in buildRequestBody
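
One way to keep the temperature parameter from creeping back in is to build the request body from an explicit allow-list of fields. The buildCodexBody helper below is a hypothetical sketch, not code from the plugin:

```javascript
// Hypothetical helper: construct the Codex request body from explicit
// fields only, so unsupported sampling parameters (like temperature)
// can never leak into the payload by accident.
function buildCodexBody({ model, instructions, input }) {
  // Only the fields the Codex Responses API is known to accept, per this issue.
  return { model, instructions, input, stream: true, store: false };
}
```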

4. Model Availability

  • Finding: Only gpt-5.2 works consistently; gpt-5-mini, gpt-4.1-mini, o4-mini return errors
  • Solution: Updated presets to only show gpt-5.2 as the recommended model

Solution Implementation

Before (Problematic)

// Using Obsidian's requestUrl
const response = await requestUrl({
  url,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
    'ChatGPT-Account-ID': accountId,  // Wrong case
  },
  body: JSON.stringify({
    model,
    instructions,
    input: [...],
    stream: true,
    store: false,
    temperature: 0.6,  // Not supported
  }),
});

After (Working)

// Using Node's https module like Smart Composer
const https = require('https');
const payload = JSON.stringify(body);
const request = https.request({
  hostname: 'chatgpt.com',
  path: '/backend-api/codex/responses',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload).toString(),
    Authorization: `Bearer ${token}`,
    'ChatGPT-Account-Id': accountId,  // Correct case (lowercase 'd')
  },
}, callback);
request.end(payload);

// Request body without temperature
{
  model: 'gpt-5.2',
  instructions,
  input: [...],
  stream: true,
  store: false,
  // No temperature parameter
}
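
Because the request sets stream: true, the response body arrives incrementally. Here is a minimal sketch of consuming it, assuming the endpoint emits standard server-sent events ("data: {...}" lines, with a "data: [DONE]" sentinel); the exact event format is an assumption here, not something this issue confirms:

```javascript
// Feed raw chunks from the socket into this parser; it buffers partial
// lines across chunk boundaries and invokes onEvent once per parsed event.
function parseSseChunk(buffer, chunk, onEvent) {
  buffer += chunk;
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep any incomplete trailing line for the next chunk
  for (const line of lines) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const data = trimmed.slice('data:'.length).trim();
    if (data === '[DONE]') continue; // end-of-stream sentinel
    onEvent(JSON.parse(data));
  }
  return buffer; // caller passes this back in on the next 'data' event
}
```

Wired to the request above, this would look like response.on('data', chunk => { buf = parseSseChunk(buf, chunk.toString(), handleEvent); }).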

Why This Solution Works

  1. Node's https module bypasses CORS restrictions and provides full access to response bodies (including error messages)
  2. Correct header case ensures the server properly identifies the account
  3. Removing temperature avoids "invalid parameter" errors from the Codex API
  4. Using gpt-5.2 ensures model availability (other models may require higher subscription tiers)

Commits

  • fix: remove temperature parameter from Codex API requests
  • fix: use Node's https module for Codex API like Smart Composer
  • fix: ChatGPT-Account-Id header case (lowercase 'd')
  • chore: update Codex presets to only include gpt-5.2

References

  • Smart Composer Plugin: https://github.com/glowingjade/obsidian-smart-composer
  • Key files referenced:
    • src/utils/llm/httpTransport.ts - Node https implementation
    • src/core/llm/openaiCodexProvider.ts - Header construction
    • src/core/llm/codexMessageAdapter.ts - Request body format

Lessons Learned

  1. When Obsidian's requestUrl throws on errors, consider using Node's native http/https modules for better debugging
  2. Custom headers like ChatGPT-Account-Id may be case-sensitive even though HTTP headers are traditionally case-insensitive
  3. Always compare implementations with working reference projects when debugging API integration issues
