@ansh-info ansh-info commented Dec 11, 2025

Summary

Issue #474

  • Make init_chat_model provider-agnostic by honoring passed args, allowing a supplied chat_model, and avoiding forced ChatOpenAI rebinding.
  • Add configurable timeout to the logging wrapper and preserve timeout/factory through tool/structured bindings.
  • Keep logging behavior intact while letting custom chat models (e.g., Ollama, OpenRouter, NVIDIA) plug in.
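The provider-agnostic factory pattern described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `FakeChatModel` and the `LoggingLLM` stub are hypothetical stand-ins for the real chat-model classes, and the exact signature of `init_chat_model` in the repository may differ.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

# Hypothetical stand-in for a provider chat model (e.g. ChatOpenAI, ChatOllama).
@dataclass
class FakeChatModel:
    model: str = "gpt-4o"
    base_url: Optional[str] = None
    temperature: float = 0.0

class LoggingLLM:
    """Minimal wrapper sketch: keeps the model plus a rebuild factory."""
    def __init__(self, model: Any, factory: Optional[Callable[[], Any]] = None):
        self.model = model
        self._factory = factory  # stored so with_config can rebuild later

def init_chat_model(
    model: str = "gpt-4o",
    base_url: Optional[str] = None,
    temperature: float = 0.0,
    chat_model: Optional[Any] = None,
    **kwargs: Any,
) -> LoggingLLM:
    """Honor caller overrides; accept any pre-built chat model."""
    if chat_model is not None:
        # A supplied model (e.g. Ollama, OpenRouter, NVIDIA) is wrapped
        # as-is, never rebound to ChatOpenAI.
        return LoggingLLM(chat_model, factory=lambda: chat_model)

    def factory() -> FakeChatModel:
        # Passed args are honored instead of being silently dropped.
        return FakeChatModel(model=model, base_url=base_url,
                             temperature=temperature, **kwargs)

    return LoggingLLM(factory(), factory=factory)

llm = init_chat_model(model="llama3", base_url="http://localhost:11434")
print(llm.model.model)     # llama3
print(llm.model.base_url)  # http://localhost:11434
```

The key design point is that the factory closure captures the caller's overrides, so any later rebuild reproduces the same provider configuration rather than falling back to a hardcoded ChatOpenAI.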

Motivation

  • The current implementation ignores arguments passed to init_chat_model, always instantiates ChatOpenAI, and enforces a hardcoded 10-minute timeout, which breaks local Ollama and other providers.

Details

  • init_chat_model now:

    • Accepts chat_model to plug in any LangChain-compatible chat model.
    • Honors overrides for model, base_url, api_key, temperature, and extra kwargs.
    • Stores a factory so with_config can rebuild without forcing ChatOpenAI.
  • LoggingLLM:

    • Adds timeout_seconds (defaults to 10 minutes, configurable).
    • Preserves timeout/factory when binding tools or structured output.
    • with_config rebuilds via the stored factory or delegates to the underlying with_config instead of hard rebinding to OpenAI.
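The LoggingLLM behavior listed above can be sketched like this. Again a hedged illustration rather than the PR's code: `StubModel`, the method bodies, and the attribute names are assumptions chosen to show how the timeout and factory survive rebinding.

```python
from typing import Any, Callable, Optional

class StubModel:
    """Hypothetical minimal chat model used only for illustration."""
    def bind_tools(self, tools: list) -> "StubModel":
        return StubModel()
    def with_config(self, **config: Any) -> "StubModel":
        return StubModel()

class LoggingLLM:
    DEFAULT_TIMEOUT = 600  # 10 minutes, now a default rather than hardcoded

    def __init__(self, model: Any,
                 factory: Optional[Callable[[], Any]] = None,
                 timeout_seconds: Optional[int] = None):
        self.model = model
        self._factory = factory
        self.timeout_seconds = (timeout_seconds if timeout_seconds is not None
                                else self.DEFAULT_TIMEOUT)

    def bind_tools(self, tools: list) -> "LoggingLLM":
        # Re-wrap the bound model, carrying timeout and factory along.
        return LoggingLLM(self.model.bind_tools(tools),
                          factory=self._factory,
                          timeout_seconds=self.timeout_seconds)

    def with_config(self, **config: Any) -> "LoggingLLM":
        # Rebuild via the stored factory when one exists; otherwise delegate
        # to the underlying model instead of hard rebinding to OpenAI.
        base = self._factory() if self._factory else self.model.with_config(**config)
        return LoggingLLM(base, factory=self._factory,
                          timeout_seconds=self.timeout_seconds)
```

Because every rebinding path re-wraps through the LoggingLLM constructor, logging stays intact while the configured timeout and provider factory propagate through tool and structured-output bindings.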

Co-authored-by: Apoorva Gupta <apoorvaagupta.info@gmail.com>
@ansh-info ansh-info changed the title feat: provider-agnostic configuration and timeout changes feat: provider-agnostic configuration and timeout changes for init_chat_model Dec 11, 2025