Conversation


@keenranger keenranger commented Jan 13, 2026

LangGraph supports not only message streams but also custom streams, and the custom stream mode is the only way to control what gets streamed in the livekit-langgraph integration.

To support that, this PR:

  • Adds a stream_mode parameter supporting "messages" and "custom" modes
  • Enables multi-mode streaming for LangGraph's StreamWriter output
  • Extends _to_chat_chunk() to handle dict and object inputs
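
For illustration, here is a minimal usage sketch. It assumes LLMAdapter is exported from livekit.plugins.langchain and accepts the compiled graph as its first argument; the real signature may differ slightly.

```python
from langgraph.graph import END, START, MessagesState, StateGraph
from livekit.plugins.langchain import LLMAdapter

def node(state: MessagesState) -> dict:
    # Trivial no-op node; a real graph would call an LLM or a StreamWriter.
    return {}

builder = StateGraph(MessagesState)
builder.add_node("node", node)
builder.add_edge(START, "node")
builder.add_edge("node", END)
graph = builder.compile()

# Default: stream LLM tokens only ("messages"), preserving current behavior.
adapter = LLMAdapter(graph)

# Stream only StreamWriter payloads emitted by graph nodes.
custom_adapter = LLMAdapter(graph, stream_mode="custom")

# Multi-mode: LangGraph yields (mode, data) tuples, and the stream routes
# each item to the matching chunk conversion.
multi_adapter = LLMAdapter(graph, stream_mode=["messages", "custom"])
```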

Summary by CodeRabbit

  • New Features

    • StreamMode configuration for LangGraph streaming with "messages" (default) and "custom" modes
    • Multi-mode streaming to mix and emit different payload types
    • Broader input acceptance: strings, dicts with content, and message-like objects; an empty mode list disables streaming
  • Tests

    • Comprehensive tests for messages, custom, and multi-mode streaming, plus edge cases and validation
  • Bug Fixes

    • Validation now rejects unsupported stream modes and enforces allowed values



CLAassistant commented Jan 13, 2026

CLA assistant check
All committers have signed the CLA.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7b0639d783

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@keenranger keenranger marked this pull request as draft January 13, 2026 15:09
@keenranger keenranger marked this pull request as ready for review January 14, 2026 02:48

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6eb8409988



keenranger commented Jan 14, 2026

@davidzhao I saw you’ve reviewed the related PR #3112 before.
If you have time, could you take a look at this one as well? Thanks. :)

@keenranger keenranger changed the title from "Add custom stream mode support in LangChain LLMAdapter" to "feat(langgraph): add custom stream mode support in LangChain LLMAdapter" on Jan 15, 2026

coderabbitai bot commented Jan 19, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough


Adds StreamMode-based streaming to LangGraph: introduces supported modes, validates and stores stream_mode in LLMAdapter and LangGraphStream, propagates the mode through chat/stream creation, and updates runtime streaming to handle single-mode and multi-mode (["messages", "custom"]) payloads and dict content extraction.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Core changes: `livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py` | Import `StreamMode`; add `_SUPPORTED_MODES = {"messages", "custom"}`; add a `stream_mode` param to `LLMAdapter.__init__` and `LangGraphStream.__init__`; validate and store `_stream_mode`. |
| Streaming control flow: `.../langgraph.py` | Update `LangGraphStream._run` to detect list vs. single `stream_mode`, support multi-mode `(mode, data)` items and single-mode flows, and branch emission for `"custom"` vs. `"messages"`. |
| Chunk conversion & compatibility: `.../langgraph.py` | Extend `_to_chat_chunk` to accept dicts with `"content"`, support objects with a `content` attribute, and use the `BaseMessageChunk.text` attribute. |
| Tests: `tests/test_langgraph.py` | Add tests and helpers (`build_messages_graph`, `build_custom_graph`, `build_combined_graph`, `collect_chunks`) covering the messages default, custom string/dict payloads, multi-mode mixing, validation rejections/acceptance, and empty/mode-isolation cases. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client as Client
  participant Adapter as LLMAdapter
  participant Stream as LangGraphStream
  participant Graph as GraphNode
  participant Consumer as Consumer

  Client->>Adapter: chat(..., stream_mode)
  Adapter->>Stream: create stream (propagate stream_mode)
  Stream->>Graph: run graph (yield items)
  alt multi-mode tuples
    Graph-->>Stream: ("custom", payload) or ("messages", token)
    Stream->>Consumer: emit custom chunk (payload) or message token chunk
  else single-mode flow
    Graph-->>Stream: payload or token
    Stream->>Consumer: emit according to configured stream_mode ("custom"/"messages")
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hop, hop — modes both new and neat,
Messages and custom now may meet.
Tuples, dicts, and token streams in tow,
I nibble chunks as they come and go.
Tiny rabbit cheers — let pipelines flow!

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 61.54%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly and specifically summarizes the main change: adding custom stream mode support to LangChain's LLMAdapter for LangGraph. |




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`:
- Around lines 141-175: In the multi-mode loop (is_multi_mode), unexpected items that are not 2-tuples with a string mode currently fall through into the single-mode checks and are silently dropped, because self._stream_mode is a list. Fix this by adding an explicit else/guard in the async for loop for unexpected tuple shapes or non-string modes: when an item arrives in multi-mode but does not satisfy isinstance(item, tuple) and len(item) == 2 and isinstance(mode, str), take a diagnostic path (e.g., log a warning via the same logger/context or send an error chunk) and continue, and ensure the single-mode handling runs only when not is_multi_mode. Update the references around is_multi_mode, _stream_mode, _to_chat_chunk, _extract_message_chunk, and _event_ch.send_nowait to implement this defensive branch.
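
A sketch of what that defensive branch could look like. The surrounding names (stream, logger, is_multi_mode, the helpers, and self._event_ch) are taken from the review text above and stand in for the real implementation; this is a loop-body fragment, not a drop-in patch.

```python
# Sketch only: `stream`, `logger`, `is_multi_mode`, `_to_chat_chunk`,
# `_extract_message_chunk`, and `self._event_ch` are assumed from the
# surrounding implementation.
async for item in stream:
    if is_multi_mode:
        if not (isinstance(item, tuple) and len(item) == 2 and isinstance(item[0], str)):
            # Defensive branch: surface unexpected shapes instead of letting
            # them fall through to the single-mode checks and vanish.
            logger.warning("unexpected multi-mode stream item: %r", item)
            continue
        mode, data = item
        if mode == "custom":
            chunk = _to_chat_chunk(data)
        else:  # "messages"
            chunk = _to_chat_chunk(_extract_message_chunk(data))
    else:
        # Single-mode handling runs only when not is_multi_mode.
        chunk = _to_chat_chunk(item)
    if chunk is not None:
        self._event_ch.send_nowait(chunk)
```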
🧹 Nitpick comments (1)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (1)

50-55: Consider validating non-empty stream_mode.

An empty list (stream_mode=[]) passes validation (there are no unsupported modes to flag) but would set is_multi_mode=True in _run() with no modes to match, so every item would be silently dropped.

💡 Suggested validation

```diff
         modes = {stream_mode} if isinstance(stream_mode, str) else set(stream_mode)
+        if not modes:
+            raise ValueError("stream_mode must specify at least one mode.")
         unsupported = modes - _SUPPORTED_MODES
```
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 80f2e33 and 15f7e53.

📒 Files selected for processing (2)
  • .claude/settings.json
  • livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py:

  • Format code with ruff
  • Run ruff linter and auto-fix issues
  • Run mypy type checker in strict mode
  • Maintain line length of 100 characters maximum
  • Ensure Python 3.9+ compatibility
  • Use Google-style docstrings

Files:

  • livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: unit-tests
  • GitHub Check: type-check (3.9)
  • GitHub Check: type-check (3.13)
🔇 Additional comments (3)
.claude/settings.json (1)

1-10: LGTM!

The Claude settings file appropriately scopes permissions to development tooling (mypy, pytest, make) and restricts web fetches to the LangChain documentation domain. This follows the principle of least privilege.

livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (2)

34-36: LGTM!

Good design: the broad StreamMode type alias enables future extensibility and type-checker compatibility, while _SUPPORTED_MODES enforces runtime constraints. This separation is clean.


247-254: LGTM!

The extended _to_chat_chunk correctly handles dict and object inputs from StreamWriter. The defensive isinstance(raw, str) checks ensure only valid string content is processed.
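
For readers skimming the thread, a simplified stand-in for the extraction order described here; it returns the extracted string rather than the plugin's chat-chunk type, a simplification to keep the example self-contained.

```python
from typing import Any, Optional

from langchain_core.messages import BaseMessageChunk

def extract_content(item: Any) -> Optional[str]:
    """Simplified stand-in for _to_chat_chunk's content extraction."""
    if isinstance(item, str):
        return item
    if isinstance(item, BaseMessageChunk):
        raw: Any = item.text  # the PR reads the .text attribute
    elif isinstance(item, dict):
        raw = item.get("content")
    else:
        raw = getattr(item, "content", None)
    # Defensive check: only valid string content is processed.
    return raw if isinstance(raw, str) else None
```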



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`:
- Around lines 46-54: In the LangGraph constructor (__init__), add validation to reject an empty list for the stream_mode parameter: detect when stream_mode is an empty list (so that modes becomes an empty set) and raise a ValueError with a clear message, in the same style as the existing _SUPPORTED_MODES check. This prevents downstream logic that checks is_multi_mode (e.g., is_multi_mode = isinstance(self._stream_mode, list)) from treating an empty list as multi-mode and stalling event emission. Place the check before computing unsupported = modes - _SUPPORTED_MODES so it fails fast on empty lists.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 15f7e53 and 04b4d26.

📒 Files selected for processing (1)
  • livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py:

  • Format code with ruff
  • Run ruff linter and auto-fix issues
  • Run mypy type checker in strict mode
  • Maintain line length of 100 characters maximum
  • Ensure Python 3.9+ compatibility
  • Use Google-style docstrings

Files:

  • livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: type-check (3.9)
  • GitHub Check: type-check (3.13)
  • GitHub Check: unit-tests
🔇 Additional comments (4)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (4)

79-90: Config propagation looks good.

Passing stream_mode through the adapter keeps stream construction consistent.


94-118: Storing stream_mode on the stream is clear.

Keeps _run() logic straightforward and avoids re-deriving configuration.


119-175: Multi vs single mode handling is clean and readable.

The branching logic is easy to follow and matches the intended behavior.


236-253: Nice normalization for dict/object payloads.

Handling "content" in dicts/objects expands compatibility with custom stream payloads.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.


@davidzhao davidzhao left a comment


the change looks reasonable. can you suggest a way that we could test the plugin's behavior across these different modes?


keenranger commented Jan 23, 2026

Thanks for the review, @davidzhao.

  • For messages, we can use GenericFakeChatModel to assert token streaming without calling a real LLM.
  • For custom, a small LangGraph node with StreamWriter should be enough to verify payloads pass through unchanged.
  • Then for multi-mode, we can combine both and check routing + ordering.

If that sounds reasonable, I can add a small pytest example or follow whatever testing style you prefer here.
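
For concreteness, the graph builders could look roughly like this (assuming current LangGraph APIs such as get_stream_writer; the helper names match the ones I'd add):

```python
from langchain_core.language_models import GenericFakeChatModel
from langchain_core.messages import AIMessage
from langgraph.config import get_stream_writer
from langgraph.graph import END, START, MessagesState, StateGraph

def build_messages_graph():
    # GenericFakeChatModel streams canned content token by token,
    # so message streaming can be asserted without a real LLM.
    llm = GenericFakeChatModel(messages=iter([AIMessage(content="hello world")]))

    def chat(state: MessagesState) -> dict:
        return {"messages": [llm.invoke(state["messages"])]}

    builder = StateGraph(MessagesState)
    builder.add_node("chat", chat)
    builder.add_edge(START, "chat")
    builder.add_edge("chat", END)
    return builder.compile()

def build_custom_graph():
    def emit(state: MessagesState) -> dict:
        # StreamWriter payloads should pass through unchanged in "custom" mode.
        writer = get_stream_writer()
        writer("custom payload")
        writer({"content": "dict payload"})
        return {}

    builder = StateGraph(MessagesState)
    builder.add_node("emit", emit)
    builder.add_edge(START, "emit")
    builder.add_edge("emit", END)
    return builder.compile()
```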

@davidzhao

@keenranger that sounds great!

@keenranger

@davidzhao I made some test cases that fulfill the requirements.

They cover both the "messages" and "custom" cases for stream_mode, and a combined graph in which an LLM provides the messages stream and a StreamWriter provides the custom stream in the same run.

| Test | stream_mode | Graph | Verifies |
| --- | --- | --- | --- |
| test_messages_mode | "messages" | LLM | Token streaming works |
| test_messages_mode_is_default | (default) | LLM | Backward compatibility |
| test_custom_mode_string | "custom" | StreamWriter | String payloads work |
| test_custom_mode_dict | "custom" | StreamWriter | Dict {"content": ...} works |
| test_multi_mode | ["messages", "custom"] | Both | Multi-mode routing |
| test_empty_stream_mode_disables_streaming | [] | Both | Opt-out of streaming |
| test_custom_mode_no_messages_output | "custom" | LLM | Mode isolation (no leak) |
| test_messages_mode_no_custom_output | "messages" | StreamWriter | Mode isolation (no leak) |
| test_validation_rejects_unsupported_mode | "values" | - | Invalid mode rejected |
| test_validation_rejects_unsupported_in_list | ["messages", "updates"] | - | Invalid entry in list rejected |
| test_validation_accepts_supported_modes | various | - | Valid modes accepted |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tests/test_langgraph.py`:
- Around lines 22-28: Update the TypedDict message-list annotations to use a concrete element type so mypy strict mode passes: replace Annotated[list, add_messages] in both MessagesState and CustomState with Annotated[list[BaseMessage], add_messages]. Ensure BaseMessage is imported where these TypedDicts are defined, and keep the add_messages metadata unchanged.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5aad8b5 and 2b12bc6.

📒 Files selected for processing (1)
  • tests/test_langgraph.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py:

  • Format code with ruff
  • Run ruff linter and auto-fix issues
  • Run mypy type checker in strict mode
  • Maintain line length of 100 characters maximum
  • Ensure Python 3.9+ compatibility
  • Use Google-style docstrings

Files:

  • tests/test_langgraph.py
🧠 Learnings
📓 Common learnings
Learnt from: keenranger
Repo: livekit/agents PR: 4511
File: livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py:46-54
Timestamp: 2026-01-19T07:59:42.108Z
Learning: In the LiveKit LangChain LangGraph integration (`livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`), passing an empty list for the `stream_mode` parameter (i.e., `stream_mode=[]`) is valid and intentional behavior: it allows users to opt out of streaming modes.

Applied to files:

  • tests/test_langgraph.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: type-check (3.13)
  • GitHub Check: type-check (3.9)
  • GitHub Check: unit-tests
🔇 Additional comments (15)
tests/test_langgraph.py (15)

34-46: Good, deterministic messages graph setup.

The fake LLM and simple state transitions make this test graph stable and easy to reason about.


49-62: Solid custom StreamWriter test harness.

Covers both string and dict payloads cleanly.


65-83: Nice combined graph for multi-mode coverage.

Clear separation between chat and custom streaming nodes.


89-95: Helper is concise and reusable.

The chunk collection logic is simple and aligned with the tests’ expectations.


101-116: Messages-mode test is focused and stable.

Assertions match the fake model’s tokenization behavior well.


119-133: Default mode coverage looks good.

Confirms the implicit behavior without extra complexity.


138-151: Custom string payload test is clear.

Good direct assertions on emitted chunks.


153-165: Dict payload conversion is exercised well.

Covers the content-extraction path for custom streams.


171-185: Multi-mode mixing validation is solid.

The combined assertions ensure both streams contribute output.


191-197: Unsupported mode rejection test is appropriate.

Covers the failure path cleanly.


199-205: Invalid entry in list is properly exercised.

Good negative coverage for list validation.


207-215: Positive validation coverage looks good.

Covers all supported options succinctly.


220-231: Opt‑out behavior test is useful.

Verifies that empty stream modes suppress output.


234-245: Custom-only isolation test is clear.

Confirms that messages are excluded when only custom mode is requested.


248-259: Messages-only isolation test is clear.

Confirms custom outputs are excluded when only messages mode is requested.

