feat(langgraph): add custom stream mode support in LangChain LLMAdapter #4511
base: main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7b0639d783
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6eb8409988
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
@davidzhao I saw you’ve reviewed the related PR #3112 before.
6eb8409 → 15f7e53
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds StreamMode-based streaming to LangGraph: introduces supported modes, validates and stores …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Adapter as LLMAdapter
    participant Stream as LangGraphStream
    participant Graph as GraphNode
    participant Consumer as Consumer
    Client->>Adapter: chat(..., stream_mode)
    Adapter->>Stream: create stream (propagate stream_mode)
    Stream->>Graph: run graph (yield items)
    alt multi-mode tuples
        Graph-->>Stream: ("custom", payload) or ("messages", token)
        Stream->>Consumer: emit custom chunk (payload) or message token chunk
    else single-mode flow
        Graph-->>Stream: payload or token
        Stream->>Consumer: emit according to configured stream_mode ("custom"/"messages")
    end
```
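The flow in the diagram can be sketched in plain Python. Helper names here (`to_chat_chunk`, `consume`, `fake_graph`) are illustrative stand-ins for the plugin's internals, not its actual API:

```python
import asyncio

def to_chat_chunk(raw):
    """Normalize a stream payload into text (hypothetical helper)."""
    if isinstance(raw, str):
        return raw
    if isinstance(raw, dict) and isinstance(raw.get("content"), str):
        return raw["content"]
    return None

async def consume(graph_items, stream_mode):
    """Collect chat chunks, dispatching on multi- vs. single-mode output."""
    is_multi_mode = isinstance(stream_mode, list)
    chunks = []
    async for item in graph_items:
        if is_multi_mode:
            # multi-mode: items arrive as (mode, payload) 2-tuples
            if isinstance(item, tuple) and len(item) == 2 and isinstance(item[0], str):
                mode, payload = item
                if mode in stream_mode:
                    text = to_chat_chunk(payload)
                    if text is not None:
                        chunks.append(text)
            # malformed items are skipped here (the review below suggests logging them)
        else:
            # single-mode: items are raw payloads/tokens
            text = to_chat_chunk(item)
            if text is not None:
                chunks.append(text)
    return chunks

async def fake_graph():
    # simulated LangGraph multi-mode output
    yield ("messages", "Hello")
    yield ("custom", {"content": " world"})
    yield "stray"  # malformed shape in multi-mode: dropped

chunks = asyncio.run(consume(fake_graph(), ["messages", "custom"]))
print(chunks)
```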
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`:
- Around lines 141-175: In the multi-mode loop (`is_multi_mode`), unexpected items that are not 2-tuples with a string mode currently fall through into the single-mode checks and get silently dropped, because `self._stream_mode` is a list. Fix by adding an explicit else/guard in the `async for` loop to handle unexpected tuple shapes or non-string modes: when an item is in multi-mode but does not satisfy `isinstance(item, tuple) and len(item) == 2 and isinstance(mode, str)`, take a diagnostic path (e.g., log a warning via the same logger/context or send an error chunk) and continue, and ensure the single-mode handling is used only when not `is_multi_mode`. Update references around `is_multi_mode`, `_stream_mode`, `_to_chat_chunk`, `_extract_message_chunk`, and `_event_ch.send_nowait` to implement this defensive branch.
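A minimal sketch of the defensive guard this comment asks for. The `dispatch` helper and logger name are hypothetical; the plugin's actual code emits chunks inline rather than returning tuples:

```python
import logging

logger = logging.getLogger("langgraph-adapter")  # assumed logger name

def dispatch(item, stream_mode):
    """Return a (mode, payload) pair for well-formed items, or None after logging."""
    is_multi_mode = isinstance(stream_mode, list)
    if is_multi_mode:
        if isinstance(item, tuple) and len(item) == 2 and isinstance(item[0], str):
            return item  # well-formed (mode, payload) tuple
        # defensive branch: surface malformed items instead of dropping them silently
        logger.warning("unexpected stream item shape: %r", item)
        return None
    # single-mode: pair the raw payload with the configured mode
    return (stream_mode, item)

result_ok = dispatch(("custom", {"content": "hi"}), ["custom"])
result_bad = dispatch("stray", ["custom"])
print(result_ok, result_bad)
```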
🧹 Nitpick comments (1)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (1)
50-55: Consider validating non-empty stream_mode.

An empty list `stream_mode=[]` passes validation (no unsupported modes) but would set `is_multi_mode=True` in `_run()` with no modes to match, potentially causing unexpected behavior where all items are silently dropped.

💡 Suggested validation

```diff
 modes = {stream_mode} if isinstance(stream_mode, str) else set(stream_mode)
+if not modes:
+    raise ValueError("stream_mode must specify at least one mode.")
 unsupported = modes - _SUPPORTED_MODES
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.claude/settings.json
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Format code with ruff
Run ruff linter and auto-fix issues
Run mypy type checker in strict mode
Maintain line length of 100 characters maximum
Ensure Python 3.9+ compatibility
Use Google-style docstrings
Files:
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: unit-tests
- GitHub Check: type-check (3.9)
- GitHub Check: type-check (3.13)
🔇 Additional comments (3)
.claude/settings.json (1)
1-10: LGTM! The Claude settings file appropriately scopes permissions to development tooling (mypy, pytest, make) and restricts web fetches to the LangChain documentation domain. This follows the principle of least privilege.
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (2)
34-36: LGTM! Good design: the broad `StreamMode` type alias enables future extensibility and type-checker compatibility, while `_SUPPORTED_MODES` enforces runtime constraints. This separation is clean.
247-254: LGTM! The extended `_to_chat_chunk` correctly handles dict and object inputs from StreamWriter. The defensive `isinstance(raw, str)` checks ensure only valid string content is processed.
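The alias/constraint split praised above could look roughly like this. The `validate` helper is hypothetical; the `modes`/`unsupported` lines echo the snippet quoted elsewhere in this review:

```python
from typing import List, Union

StreamMode = Union[str, List[str]]  # broad alias for type-checker compatibility
_SUPPORTED_MODES = {"messages", "custom"}  # runtime constraint

def validate(stream_mode: StreamMode) -> StreamMode:
    """Reject modes outside _SUPPORTED_MODES at construction time (sketch)."""
    modes = {stream_mode} if isinstance(stream_mode, str) else set(stream_mode)
    unsupported = modes - _SUPPORTED_MODES
    if unsupported:
        raise ValueError(f"unsupported stream_mode(s): {sorted(unsupported)}")
    return stream_mode

print(validate("messages"))
print(validate(["messages", "custom"]))
try:
    validate("updates")  # not in _SUPPORTED_MODES: rejected
except ValueError as exc:
    print("rejected:", exc)
```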
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`:
- Around lines 46-54: In the LangGraph constructor (`__init__`), add validation to reject empty list values for the `stream_mode` parameter: detect when `stream_mode` is an empty list (so that `modes` becomes an empty set) and raise a `ValueError` with a clear message, similar in style to the existing check that uses `_SUPPORTED_MODES`. This prevents downstream logic that checks `is_multi_mode` (e.g., `is_multi_mode = isinstance(self._stream_mode, list)`) from treating an empty list as multi-mode and stalling event emission. Ensure the check occurs before computing `unsupported = modes - _SUPPORTED_MODES` and references `stream_mode`/`self._stream_mode` and `_SUPPORTED_MODES` so it fails fast on empty lists.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
🔇 Additional comments (4)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (4)
79-90: Config propagation looks good. Passing `stream_mode` through the adapter keeps stream construction consistent.
94-118: Storing `stream_mode` on the stream is clear. Keeps `_run()` logic straightforward and avoids re-deriving configuration.
119-175: Multi vs. single mode handling is clean and readable. The branching logic is easy to follow and matches the intended behavior.
236-253: Nice normalization for dict/object payloads. Handling `"content"` in dicts/objects expands compatibility with custom stream payloads.
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
04b4d26 → c9bd9b2
davidzhao left a comment
The change looks reasonable. Can you suggest a way that we could test the plugin's behavior across these different modes?
Thanks for the review, @davidzhao.
If that sounds reasonable, I can add a small pytest example or follow whatever testing style you prefer here.
@keenranger that sounds great!
@davidzhao I made some test cases that fulfill the requirements. We have …
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/test_langgraph.py`:
- Around lines 22-28: Update the TypedDict message-list annotations to use a concrete message element type so mypy strict mode passes: replace `Annotated[list, add_messages]` in both `MessagesState` and `CustomState` with `Annotated[list[BaseMessage], add_messages]` (the built-in generic syntax). Ensure `BaseMessage` is imported where these TypedDicts are defined and keep the `add_messages` metadata unchanged.
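The suggested annotation change could look like this. The `BaseMessage` class and `add_messages` reducer below are stdlib stand-ins for the langchain/langgraph originals, so the sketch runs without those packages:

```python
from typing import Annotated, List, TypedDict

class BaseMessage:
    """Stand-in for langchain_core.messages.BaseMessage."""
    def __init__(self, content: str) -> None:
        self.content = content

def add_messages(left: List[BaseMessage], right: List[BaseMessage]) -> List[BaseMessage]:
    """Stand-in for langgraph's add_messages reducer: append new messages."""
    return left + right

class MessagesState(TypedDict):
    # Before: Annotated[list, add_messages] — a bare list fails mypy --strict.
    # After: a concrete element type satisfies strict mode; the reducer metadata
    # in Annotated is unchanged. (CustomState would get the same treatment.)
    messages: Annotated[list[BaseMessage], add_messages]

state: MessagesState = {"messages": [BaseMessage("hello")]}
print(state["messages"][0].content)
```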
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/test_langgraph.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: keenranger
Repo: livekit/agents PR: 4511
File: livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py:46-54
Timestamp: 2026-01-19T07:59:42.108Z
Learning: In the LiveKit LangChain LangGraph integration (`livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`), passing an empty list for `stream_mode` parameter (i.e., `stream_mode=[]`) is valid and intentional behavior—it allows users to opt out of streaming modes.
Applied to files:
tests/test_langgraph.py
🔇 Additional comments (15)
tests/test_langgraph.py (15)

34-46: Good, deterministic messages graph setup. The fake LLM and simple state transitions make this test graph stable and easy to reason about.
49-62: Solid custom StreamWriter test harness. Covers both string and dict payloads cleanly.
65-83: Nice combined graph for multi-mode coverage. Clear separation between chat and custom streaming nodes.
89-95: Helper is concise and reusable. The chunk collection logic is simple and aligned with the tests’ expectations.
101-116: Messages-mode test is focused and stable. Assertions match the fake model’s tokenization behavior well.
119-133: Default mode coverage looks good. Confirms the implicit behavior without extra complexity.
138-151: Custom string payload test is clear. Good direct assertions on emitted chunks.
153-165: Dict payload conversion is exercised well. Covers the content-extraction path for custom streams.
171-185: Multi-mode mixing validation is solid. The combined assertions ensure both streams contribute output.
191-197: Unsupported mode rejection test is appropriate. Covers the failure path cleanly.
199-205: Invalid entry in list is properly exercised. Good negative coverage for list validation.
207-215: Positive validation coverage looks good. Covers all supported options succinctly.
220-231: Opt-out behavior test is useful. Verifies that empty stream modes suppress output.
234-245: Custom-only isolation test is clear. Confirms that messages are excluded when only custom mode is requested.
248-259: Messages-only isolation test is clear. Confirms custom outputs are excluded when only messages mode is requested.
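The opt-out and multi-mode-mixing behaviors reviewed above could be exercised with a pytest-style sketch like this. The `collect_chunks` helper and fake graph are hypothetical, not taken from the actual test file:

```python
import asyncio

async def collect_chunks(items, stream_mode):
    """Collect payloads whose mode is enabled; an empty list enables nothing."""
    chunks = []
    async for mode, payload in items:
        if mode in stream_mode:
            chunks.append(payload)
    return chunks

async def fake_graph():
    # simulated (mode, payload) pairs from a multi-mode LangGraph run
    yield ("messages", "hello")
    yield ("custom", "status update")

def test_empty_stream_mode_opts_out():
    # stream_mode=[] is intentional opt-out behavior: nothing is emitted
    assert asyncio.run(collect_chunks(fake_graph(), [])) == []

def test_multi_mode_mixes_both_streams():
    chunks = asyncio.run(collect_chunks(fake_graph(), ["messages", "custom"]))
    assert chunks == ["hello", "status update"]

test_empty_stream_mode_opts_out()
test_multi_mode_mixes_both_streams()
print("ok")
```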
LangGraph supports not only `messages` streams but also `custom` streams, and this is the only way to modify what gets streamed in the livekit-langgraph integration.

So, this PR adds:

- a `stream_mode` parameter supporting `"messages"` and `"custom"` modes
- an extended `_to_chat_chunk()` to handle dict and object inputs

Summary by CodeRabbit
- New Features
- Tests
- Bug Fixes
✏️ Tip: You can customize this high-level summary in your review settings.