Feature/element data classes+id+dataclasses #611
FBumann wants to merge 29 commits into feature/element-data-classes+id from
Conversation
- Piece: 2 init fields (start, end) + 1 internal field (has_time_dim with init=False)
- Piecewise: 1 field (pieces) + property has_time_dim with propagation setter
- PiecewiseConversion: 1 field (piecewises) + property has_time_dim with propagation setter
- PiecewiseEffects: 2 fields (piecewise_origin, piecewise_shares) + property has_time_dim with propagation setter
- InvestParameters: 9 fields with defaults + __post_init__ for None→{} and None→epsilon normalization
- StatusParameters: 11 fields with defaults + __post_init__ for None→{} normalization
All classes:
- Use @dataclass(eq=False) to avoid issues with numpy/DataArray equality
- Keep Interface inheritance (serialization still works)
- Keep transform_data() and link_to_flow_system() (to be removed in later phases when *Data classes handle alignment)
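As a sketch, the dataclass pattern described above looks roughly like this; the field names, defaults, and the EPSILON constant are illustrative stand-ins, not the actual flixopt definitions:

```python
from __future__ import annotations
from dataclasses import dataclass, field

EPSILON = 1e-5  # assumed placeholder for the library's epsilon default


@dataclass(eq=False)  # eq=False: identity comparison, avoids numpy/DataArray __eq__ ambiguity
class InvestParametersSketch:
    fixed_size: float | None = None
    minimum_size: float | None = None
    effects_of_investment: dict = field(default_factory=dict)

    def __post_init__(self):
        # None -> {} normalization
        if self.effects_of_investment is None:
            self.effects_of_investment = {}
        # None -> epsilon normalization
        if self.minimum_size is None:
            self.minimum_size = EPSILON


p = InvestParametersSketch(effects_of_investment=None)
```

With `eq=False`, two instances with identical field values still compare unequal (identity semantics), which sidesteps ambiguous truth values when fields hold numpy arrays or DataArrays.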
⏺ Task #11 complete. 389 test_math tests pass + 136 IO tests pass. The leaf interface classes (Piece, Piecewise, PiecewiseConversion, PiecewiseEffects,
InvestParameters, StatusParameters) are now @dataclass with auto-generated __init__ and __repr__, storing raw types in their fields.
StatusData, InvestmentData, and ConvertersData now handle their own effect alignment via coords/normalize_effects params, removing the need for transform_data()/link_to_flow_system() calls on StatusParameters, InvestParameters, and PiecewiseConversion from Element classes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…terfaces Remove now-unused methods from Piece, Piecewise, PiecewiseConversion, PiecewiseEffects, InvestParameters, and StatusParameters since alignment is now handled by *Data classes in batched.py. Also remove the has_time_dim propagation mechanism that was only used by transform_data. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Flow attributes now stay as raw user data after connect_and_transform(). Alignment to model coordinates happens lazily in FlowsData at model-build time via a new _align() helper. Also extends align_to_coords to handle Python lists (for IO roundtrip) and fixes serialization of unnamed DataArrays. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
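The lazy-alignment idea can be illustrated with a simplified, dependency-free stand-in. The real align_to_coords returns an xarray.DataArray aligned to model coordinates; this sketch only mimics the scalar/list handling and the length validation:

```python
from __future__ import annotations


def align_to_coords_sketch(value, coords: dict[str, list], dim: str = 'time') -> list:
    """Illustrative stand-in: broadcast a scalar or validate a per-step list."""
    n = len(coords[dim])
    if isinstance(value, (int, float)):
        return [float(value)] * n  # scalar -> broadcast over the dimension
    values = list(value)  # list/array-like -> validate length against the coord
    if len(values) != n:
        raise ValueError(f'Expected {n} values along "{dim}", got {len(values)}')
    return [float(v) for v in values]


coords = {'time': ['t0', 't1', 't2']}
```

Because alignment only happens at model-build time, the Flow attributes can stay as raw user data until then.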
Remove Interface inheritance from Piece, Piecewise, PiecewiseConversion, PiecewiseEffects, InvestParameters, StatusParameters. Add dataclass serialization support in _extract_dataarrays_recursive for plain dataclasses. Add standalone _has_value function and __repr__ methods. Convert Piece values to DataArray in __post_init__. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Effect attributes are now aligned lazily in EffectsData via align_to_coords/align_effects_to_coords instead of eagerly in Effect.transform_data(). Move validate_config checks to EffectsData.validate(). Effect.transform_data() is now a no-op. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bus.imbalance_penalty_per_flow_hour is now aligned lazily in BusesData via align_to_coords. Move validate_config checks to BusesData.validate(). Bus.transform_data() is now a no-op. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rsData Conversion factors are now aligned lazily in ConvertersData via align_to_coords. Move validate_config checks to ConvertersData.validate(). LinearConverter.transform_data() now only calls super for status propagation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Storage attributes are now aligned lazily in StoragesData via align_to_coords. Move validate_config checks to StoragesData.validate(). Storage.transform_data() now only calls super for status propagation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…nsData Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the element.transform_data() loop in connect_and_transform() with explicit _propagate_all_status_parameters(). Remove transform_data() from all element classes and Interface base. Remove unused _fit_coords and _fit_effect_coords from Interface. All data alignment now happens lazily in *Data classes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move all validation checks from element validate_config() methods into the corresponding *Data.validate() in batched.py. Remove validate_config() and _plausibility_checks() from Flow, Component, EffectsCollection, and the abstract _plausibility_checks from Element base class. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…th-based naming Replace Interface's ~500 lines of IO infrastructure with standalone functions (create_reference_structure, resolve_reference_structure, etc.) that use deterministic path-based DataArray keys (e.g., components.Boiler.size). Interface methods become thin wrappers. FlowSystemDatasetIO uses standalone functions via lazy imports. This enables future removal of Interface inheritance. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
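The path-based keying can be sketched with a small recursive flattener. The function name and structure are illustrative; the real create_reference_structure also extracts DataArrays and class markers:

```python
def flatten_to_paths(obj, prefix=''):
    """Recursively flatten nested dicts into deterministic dotted-path keys."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f'{prefix}.{key}' if prefix else str(key)
            flat.update(flatten_to_paths(value, path))
    else:
        flat[prefix] = obj  # leaf value keyed by its full path
    return flat


structure = {'components': {'Boiler': {'size': 450, 'bus': 'Heat'}}}
refs = flatten_to_paths(structure)
```

Deterministic keys like `components.Boiler.size` make the serialized dataset stable across runs and independent of object identity.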
Move handle_deprecated_kwarg() and validate_kwargs() to module-level standalone functions. Remove dead wrapper methods (_create_reference_structure, _resolve_dataarray_reference, _resolve_reference_structure, _has_value) from Interface. Remove Interface inheritance from Carrier since it only needs @register_class_for_io. Update FlowSystem to build datasets directly via its own _create_reference_structure instead of super().to_dataset(). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch.
📝 Walkthrough
This PR redesigns the Flow constructor API from positional to keyword-based arguments, converts multiple core classes to dataclasses with post-initialization validation, refactors serialization into standalone utilities, introduces coordinate-based alignment for batched data, and updates all documentation and tests to reflect these breaking changes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/Tests
    participant FS as FlowSystem
    participant Element as Element (Component/Bus/Flow)
    participant Structure as structure (new utils)
    participant IO as io (serialization)
    User->>FS: build model with elements
    FS->>Element: add components/buses/flows (dataclass init)
    Element->>Element: __post_init__ validation & normalization
    Element->>FS: register in element registries
    User->>FS: optimize/connect_and_transform
    FS->>Structure: _propagate_all_status_parameters
    FS->>Structure: _prepare_effects with alignment
    Note over Structure: New alignment logic
    Structure->>Structure: align_to_coords (data→DataArray)
    Structure->>Structure: align_effects_to_coords (dict→aligned dict)
    User->>IO: to_dataset/from_netcdf
    IO->>Structure: create_reference_structure (extract refs)
    Structure->>IO: return path-based refs + DataArrays
    IO->>IO: serialize/deserialize via paths
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ❌ 4 failed checks (3 warnings, 1 inconclusive)
The base method was a one-liner and all 6 overrides were either no-ops (just calling super) or propagating to flows (which _connect_network already handles). Replace with direct _flow_system assignment at the 4 call sites and in _connect_network for flows. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Merge Interface into Element, remove Interface class (~240 lines)
- Convert Effect, Bus, Component to @dataclass(eq=False, repr=False)
- Convert Flow to @dataclass(eq=False, repr=False, init=False), preserving the deprecation bridge
- Convert LinearConverter, Storage, Transmission, Source, Sink, SourceAndSink to @dataclass
- Add _io_exclude class attribute for excluding computed fields from serialization
- Remove dead ContainerMixin, FlowContainer, ElementContainer, ResultsContainer (~244 lines)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove Element.__init__(), validate_kwargs(), and handle_deprecated_kwarg() as dead code now that all element classes are @DataClass. Simplify Flow's custom __init__ to use a positional-only param with clean bus/flow_id resolution, dropping all deprecation bridges (label=, id=, *args form). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Extract valid_id() and replace_references_with_stats() to module-level functions
- Remove _short_id indirection from all element classes
- Remove unused IO methods from Element (to_dataset, from_dataset, to_netcdf, from_netcdf, to_json, get_structure, copy) — only FlowSystem uses these
- Remove Element.id property/setter, __repr__, _valid_id, _valid_label
- Remove nonsensical Flow.id setter (computed property can't round-trip)
- Add Flow.label override returning flow_id with deprecation warning
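A module-level valid_id() of the kind extracted here might look like the following sketch; the actual sanitization rules in flixopt may differ:

```python
import re


def valid_id(label: str) -> str:
    """Hypothetical sketch: sanitize a label into an identifier-safe id."""
    cleaned = re.sub(r'[^0-9A-Za-z_()\-]', '_', label.strip())
    if not cleaned:
        raise ValueError(f'cannot derive a valid id from {label!r}')
    return cleaned
```

Making it a standalone function (rather than an Element method) lets both element classes and the batched `*Data` classes share one normalization rule.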
…ersion Make create_reference_structure() coords-aware so numpy arrays and pandas objects are converted to DataArrays during serialization. Remove Piece __post_init__ that eagerly converted start/end to DataArray. Remove unused obj_to_dataset/obj_from_dataset standalone serialization functions. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rename _is_alignable_numeric to _is_numeric_array, remove unnecessary bool guard, improve docstrings. Only convert arrays when coords is available (avoids dim_0 dimension conflicts). Minor formatting. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove dead model references (_flows_model, _buses_model, _storages_model)
- Move runtime state (_flow_system, _variable_names, _constraint_names) to FlowSystem registry
- Convert Flow to keyword-only dataclass init (bus= required kwarg)
- Prefer dataclasses.fields() over inspect.signature() for IO serialization
- Remove Element.solution and Element.flow_system properties
…t Storage flows
- Remove unnecessary flow_id= from docs, docstrings, and examples
- Update conversion_factors keys to match bus names where flow_id was removed
- Storage auto-defaults flow_id to 'charging'/'discharging' when not set
- Defer Flow.flow_id defaulting to Component._connect_flows for composability
- Fix effects_per_flow_hour type annotation to Numeric_TPS | dict | None
- Keep meta_data as field(default_factory=dict)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…_init__ Avoids Python dataclass ordering issues (required fields after defaults from parent) by using explicit __init__ that calls super().__init__() directly. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
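A minimal sketch of the ordering issue and the init=False workaround described above (class names are illustrative):

```python
from dataclasses import dataclass


@dataclass
class Base:
    size: int = 100  # parent field with a default


# A child adding a required field would normally fail at class creation:
# @dataclass
# class Naive(Base):
#     bus: str  # TypeError: non-default argument 'bus' follows default argument


@dataclass(init=False)  # suppress the generated __init__ to sidestep the ordering rule
class Child(Base):
    bus: str

    def __init__(self, bus: str, size: int = 100):
        self.bus = bus
        super().__init__(size=size)  # parent's generated __init__ sets self.size
```

The ordering check only fires when the dataclass machinery generates `__init__`, so `init=False` plus an explicit constructor keeps the field declarations and the call signature both sensible.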
@coderabbitai review

✅ Actions performed: Review triggered.
…tputs to list[Flow]
- Remove fake '' defaults from mandatory fields (Component.id, Flow.bus)
- Serialize IdList as list instead of dict (keys are derivable from objects)
- Remove dict-to-list conversion code from Component.__post_init__ and subclasses
- Simplify type annotations: list[Flow] instead of list[Flow] | dict[str, Flow]

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
flixopt/elements.py (1)
164-172: ⚠️ Potential issue | 🟠 Major
Fix flow_id uniqueness check order to avoid false duplicates.
With multiple flows lacking flow_id, the uniqueness check runs before defaults are applied and will raise even when bus names are distinct.
✅ Suggested fix (default flow_id before uniqueness check)

```diff
 _inputs = self.inputs or []
 _outputs = self.outputs or []
+# Default flow_id to bus name before uniqueness check
+for flow in _inputs + _outputs:
+    if flow.flow_id is None:
+        flow.flow_id = valid_id(flow.bus if isinstance(flow.bus, str) else str(flow.bus))
+
 # Check uniqueness on raw lists (before connecting)
 all_flow_ids = [flow.flow_id for flow in _inputs + _outputs]
 if len(set(all_flow_ids)) != len(all_flow_ids):
     duplicates = {fid for fid in all_flow_ids if all_flow_ids.count(fid) > 1}
     raise ValueError(f'Flow names must be unique! "{self.id}" got 2 or more of: {duplicates}')
```

Also applies to: 219-223
docs/home/quick-start.md (1)
54-83: ⚠️ Potential issue | 🟡 Minor
Quick-start example: add explicit flow_id for storage charge/discharge to avoid relying on defaults
The storage example now uses:

- charging=fx.Flow(bus='electricity', size=50)
- discharging=fx.Flow(bus='electricity', size=50) (Lines 77-78)

If flow_id defaults to the bus name, having two flows on the same bus in the same component can become ambiguous unless Storage assigns defaults internally. To keep the quick-start copy/paste-safe and to illustrate the new API clearly, I'd recommend adding flow_id explicitly.
Suggested doc change

```diff
 battery = fx.Storage(
     'battery',
-    charging=fx.Flow(bus='electricity', size=50),
-    discharging=fx.Flow(bus='electricity', size=50),
+    charging=fx.Flow(bus='electricity', flow_id='charge', size=50),
+    discharging=fx.Flow(bus='electricity', flow_id='discharge', size=50),
     capacity_in_flow_hours=100,  # 100 kWh capacity
```

flixopt/batched.py (1)
577-636: ⚠️ Potential issue | 🟠 Major
StoragesData: fail fast if coords is missing (otherwise _align will crash)
StoragesData.__init__ accepts coords: dict[str, pd.Index] | None = None, but the _align() method unconditionally passes self._coords to align_to_coords(), which requires a non-None dict. If any caller instantiates StoragesData(..., coords=None) and then accesses an aligned property (e.g., eta_charge, capacity_upper), this will raise at runtime with a cryptic error from align_to_coords.
Current instantiations always pass coords=self._fs.indexes, but the signature allows None, creating a footgun for future usage. Add validation in __init__ to fail fast:

```diff
 def __init__(
     self,
     storages: list,
     dim_name: str,
     effect_ids: list[str],
     timesteps_extra: pd.DatetimeIndex | None = None,
     coords: dict[str, pd.Index] | None = None,
     normalize_effects: Any = None,
 ):
     """Initialize StoragesData. ..."""
     self._storages = storages
     self._dim_name = dim_name
     self._effect_ids = effect_ids
     self._timesteps_extra = timesteps_extra
+    if coords is None:
+        raise ValueError('StoragesData requires coords for alignment (got None).')
     self._coords = coords
     self._normalize_effects = normalize_effects
     self._by_id = {s.id: s for s in storages}
```

docs/notebooks/03-investment-optimization.ipynb (1)
157-168: ⚠️ Potential issue | 🔴 Critical
Unmigrated positional argument will cause TypeError at runtime.
Line 158 passes 'Heat' as the first positional argument to Flow. With the new dataclass definition where bus is the first field, Flow('Heat', bus='Heat', ...) assigns 'Heat' to bus positionally and via keyword, resulting in TypeError: __init__() got multiple values for argument 'bus'.
🐛 Proposed fix

```diff
 "    fx.Source(\n",
 "        'SolarCollectors',\n",
 "        outputs=[\n",
 "            fx.Flow(\n",
-"                'Heat',\n",
-"                bus='Heat',\n",
+"                bus='Heat',\n",
+"                flow_id='Heat',\n",
```

docs/notebooks/10-transmission.ipynb (2)
461-486: ⚠️ Potential issue | 🔴 Critical
Unmigrated positional arguments in Example 3 will cause TypeError at runtime.
Lines 464 and 474 pass 'from_A' and 'from_B' as the first positional argument to Flow. With the new dataclass where bus is the first field, Flow('from_A', bus='Heat_A', ...) assigns 'from_A' to bus positionally and 'Heat_A' via keyword, raising TypeError: __init__() got multiple values for argument 'bus'.
Note: The out1/out2 flows on lines 472 and 482 were correctly migrated with the flow_id= keyword, but the in1/in2 flows were missed.
🐛 Proposed fix

```diff
 "    fx.Transmission(\n",
 "        'Pipe_AB',\n",
 "        in1=fx.Flow(\n",
-"            'from_A',\n",
-"            bus='Heat_A',\n",
+"            bus='Heat_A',\n",
+"            flow_id='from_A',\n",
 "            size=fx.InvestParameters(\n",
 "                effects_of_investment_per_size={'costs': PIPE_COST_PER_KW * 7},  # Weekly cost\n",
 "                minimum_size=0,\n",
 "                maximum_size=300,\n",
 "            ),\n",
 "        ),\n",
 "        out1=fx.Flow(bus='Heat_B', flow_id='to_B'),\n",
 "        in2=fx.Flow(\n",
-"            'from_B',\n",
-"            bus='Heat_B',\n",
+"            bus='Heat_B',\n",
+"            flow_id='from_B',\n",
 "            size=fx.InvestParameters(\n",
 "                effects_of_investment_per_size={'costs': PIPE_COST_PER_KW * 7},\n",
 "                minimum_size=0,\n",
 "                maximum_size=300,\n",
 "            ),\n",
 "        ),\n",
```
541-587: ⚠️ Potential issue | 🟡 Minor
Documentation code examples in Key Concepts still use old positional Flow API.
The markdown code blocks at lines 543–549 and 578–586 show Flow('from_A', bus='Bus_A', ...) syntax, which is inconsistent with the new Flow(bus=..., flow_id=...) API. While these won't cause runtime errors (they're markdown, not executable), they'll confuse readers.
Lines 545–549 are already marked as changed and use the new API; the issue is specifically in the Investment Optimization example at lines 577–586:

```diff
 "```python\n",
 "in1=fx.Flow(\n",
-"    'from_A', \n",
-"    bus='Bus_A',\n",
+"    bus='Bus_A',\n",
+"    flow_id='from_A',\n",
 "    size=fx.InvestParameters(  # Pass InvestParameters as size\n",
```
🤖 Fix all issues with AI agents
In `@docs/user-guide/building-models/index.md`:
- Around line 255-266: The example in flow_system.add_elements is missing a
fx.Bus('Gas') declaration while fx.Source('Gas') and fx.LinearConverter
reference bus='Gas'; add fx.Bus('Gas') alongside fx.Bus('Heat') in the same
flow_system.add_elements call so the Gas bus exists before the Source and
LinearConverter (look for fx.Bus('Heat'), fx.Source('Gas'), and
fx.LinearConverter to locate the snippet).
In `@flixopt/batched.py`:
- Around line 1720-1777: The validate() method rejects numpy scalar types for
flow.previous_flow_rate; update the type check inside validate (the block that
currently tests isinstance(flow.previous_flow_rate, np.ndarray) and
isinstance(..., (int, float, list))) to also accept numpy scalar types by
including np.number in the isinstance tuple (so numpy scalars like np.float64
pass), ensuring consistency with np.isscalar() usage elsewhere and with
assignments in optimize_accessor.py that may set a scalar via .item().
In `@flixopt/flow_system.py`:
- Around line 1694-1698: The id()-only guard in the FlowSystem register path
allows an element to be added to different FlowSystem instances; switch to a
weak-ref owner registry to enforce exclusive ownership: add a module-level or
FlowSystem-class attribute like a weakref.WeakKeyDictionary mapping element ->
FlowSystem owner, update the register path (where id(element) and
_registered_elements are checked) to first consult the weak owner registry and
raise or refuse registration if another FlowSystem is recorded, set the weak
owner entry to self when registration succeeds, and clear the weak owner entry
on element removal/deregistration (or let it vanish when the element is
collected); keep the _check_if_element_is_unique call but use the weak registry
as the authoritative cross‑FlowSystem protection.
In `@flixopt/structure.py`:
- Around line 770-780: The serialization branch in _extract_recursive currently
tags any dataclass with a '__class__' marker but resolve_reference_structure
only reconstructs classes listed in CLASS_REGISTRY, causing unregistered
dataclasses to fail round-trip; update _extract_recursive to detect when
obj.__class__.__name__ is not present in CLASS_REGISTRY and either (a) remove
the '__class__' marker and serialize the dataclass as a plain dict of its
non-empty fields, or (b) raise an explicit serialization error with a clear
message so callers fail fast; reference _extract_recursive,
resolve_reference_structure, and CLASS_REGISTRY when implementing the guard.
🧹 Nitpick comments (5)
tests/superseded/math/test_linear_converter.py (1)
17-25: Use .flow_id instead of deprecated .label in conversion factor dicts.
Throughout this file, input_flow.label and output_flow.label are used as keys in conversion_factors dicts (Lines 25, 59, 92-94, 127, 173-177, 211, 244, 282). The .label property is deprecated and emits a DeprecationWarning per flixopt/elements.py. Since this PR migrates to the new flow_id API, these should be updated to .flow_id for consistency and to avoid warnings during test runs.
Example fix (apply same pattern throughout)

```diff
-    conversion_factors=[{input_flow.label: 0.8, output_flow.label: 1.0}],
+    conversion_factors=[{input_flow.flow_id: 0.8, output_flow.flow_id: 1.0}],
```

tests/test_clustering/test_cluster_reduce_expand.py (1)
18-31: Flow(bus=..., flow_id=...) migration in clustering tests looks consistentThe updated Flow constructions (e.g.,
HeatDemand(Q),Boiler(Q_th),Boiler(Q_fu)) match the newcomponent(flow_id)naming that the tests later select from solution/statistics (e.g.,sel(flow='Boiler(Q_th)')). Lines 23-30, 252-260, and 756-763 look coherent.Optional: consider also setting
flow_idon flows where it’s currently omitted (e.g.,GasSourceoutput, storage charge/discharge flows) to avoid relying on defaults and keep dataset keys stable if defaults change.Also applies to: 245-262, 391-405, 577-633, 751-765, 946-1003, 1283-1317, 1539-1667
flixopt/batched.py (1)
2272-2317: ConvertersData: consider raising PlausibilityError (not KeyError) for unknown conversion_factors keys
aligned_conversion_factors() currently does converter.flows[flow_label] (Line 2309), which can throw a raw KeyError before validate() has a chance to produce the nicer message (Lines 2553-2557). A small guard here would improve the UX for misconfigured models.
Proposed tweak (friendlier error)

```diff
 for idx, conv_factor in enumerate(converter.conversion_factors):
     aligned_dict = {}
     for flow_label, values in conv_factor.items():
-        flow_id = converter.flows[flow_label].id
+        try:
+            flow_id = converter.flows[flow_label].id
+        except KeyError as e:
+            raise PlausibilityError(
+                f'{converter.id}: Flow "{flow_label}" in conversion_factors is not in inputs/outputs'
+            ) from e
         aligned = align_to_coords(values, self._coords, name=f'{flow_id}|conversion_factor{idx}')
```

Also applies to: 2373-2392
docs/notebooks/04-operational-constraints.ipynb (1)
127-147: Drop the deprecated positional Flow argument for thermal_flow.
The main boiler's thermal_flow still uses the legacy positional flow name plus bus=.... With the new Flow signature, this is deprecated and can emit warnings. Prefer the keyword-only bus (and an optional flow_id if needed) for consistency.
♻️ Suggested cleanup

```diff
-    thermal_flow=fx.Flow(
-        'Steam',
-        bus='Steam',
-        size=500,
-        relative_minimum=0.3,  # Minimum load: 30% = 150 kW
-    ),
+    thermal_flow=fx.Flow(
+        bus='Steam',
+        size=500,
+        relative_minimum=0.3,  # Minimum load: 30% = 150 kW
+    ),
```

tests/test_scenarios.py (1)
62-80: Inconsistent positional vs keyword bus usage within the test_system fixture.
Lines 62 and 66 pass the bus as a positional argument (Flow(electricity_bus.label_full, ...)), while the rest of this PR consistently uses the keyword form (Flow(bus=..., ...)). Consider updating these untouched lines for consistency:

```diff
-demand = Flow(electricity_bus.label_full, flow_id='Demand', fixed_relative_profile=demand_profiles)
+demand = Flow(bus=electricity_bus.label_full, flow_id='Demand', fixed_relative_profile=demand_profiles)
 ...
-power_gen = Flow(
-    electricity_bus.label_full,
+power_gen = Flow(
+    bus=electricity_bus.label_full,
     flow_id='Generation',
     ...
-storage_charge = Flow(electricity_bus.label_full, size=10)
-storage_discharge = Flow(electricity_bus.label_full, size=10)
+storage_charge = Flow(bus=electricity_bus.label_full, size=10)
+storage_discharge = Flow(bus=electricity_bus.label_full, size=10)
```
```diff
 flow_system.add_elements(
     fx.Bus('Heat'),
     fx.Effect('costs', '€', is_standard=True, is_objective=True),
-    fx.Source('Gas', outputs=[fx.Flow('gas', bus=None, size=500, effects_per_flow_hour=0.05)]),
+    fx.Source('Gas', outputs=[fx.Flow(bus='Gas', size=500, effects_per_flow_hour=0.05)]),
     fx.LinearConverter(
         'Boiler',
-        inputs=[fx.Flow('gas', bus=None, size=500)],  # Inline source
-        outputs=[fx.Flow('heat', bus='Heat', size=450)],
-        conversion_factors=[{'gas': 1, 'heat': 0.9}],
+        inputs=[fx.Flow(bus='Gas', size=500)],
+        outputs=[fx.Flow(bus='Heat', size=450)],
+        conversion_factors=[{'Gas': 1, 'Heat': 0.9}],
     ),
-    fx.Sink('Demand', inputs=[fx.Flow('heat', bus='Heat', size=1, fixed_relative_profile=demand)]),
+    fx.Sink('Demand', inputs=[fx.Flow(bus='Heat', size=1, fixed_relative_profile=demand)]),
 )
```
Missing Bus('Gas') in Pattern 1 example.
This pattern example adds Bus('Heat') but not Bus('Gas'), even though the Source and LinearConverter reference bus='Gas'. This would fail at runtime if a user copies the snippet directly. This appears to be a pre-existing omission, but worth fixing since the example is now being updated anyway.
Proposed fix
```diff
 flow_system.add_elements(
     fx.Bus('Heat'),
+    fx.Bus('Gas'),
     fx.Effect('costs', '€', is_standard=True, is_objective=True),
     fx.Source('Gas', outputs=[fx.Flow(bus='Gas', size=500, effects_per_flow_hour=0.05)]),
```
🤖 Prompt for AI Agents
In `@docs/user-guide/building-models/index.md` around lines 255 - 266, The example
in flow_system.add_elements is missing a fx.Bus('Gas') declaration while
fx.Source('Gas') and fx.LinearConverter reference bus='Gas'; add fx.Bus('Gas')
alongside fx.Bus('Heat') in the same flow_system.add_elements call so the Gas
bus exists before the Source and LinearConverter (look for fx.Bus('Heat'),
fx.Source('Gas'), and fx.LinearConverter to locate the snippet).
```python
def validate(self) -> None:
    """Validate all flows (config + DataArray checks).

    Performs both:
    - Config validation via Flow.validate_config()
    - DataArray validation (post-transformation checks)

    Raises:
        PlausibilityError: If any validation check fails.
    """
    if not self.elements:
        return

    for flow in self.elements.values():
        flow.validate_config()
        # Size is required when using StatusParameters (for big-M constraints)
        if flow.status_parameters is not None and flow.size is None:
            raise PlausibilityError(
                f'Flow "{flow.id}" has status_parameters but no size defined. '
                f'A size is required when using status_parameters to bound the flow rate.'
            )

        if flow.size is None and flow.fixed_relative_profile is not None:
            raise PlausibilityError(
                f'Flow "{flow.id}" has a fixed_relative_profile but no size defined. '
                f'A size is required because flow_rate = size * fixed_relative_profile.'
            )

        # Size is required for load factor constraints (total_flow_hours / size)
        if flow.size is None and flow.load_factor_min is not None:
            raise PlausibilityError(
                f'Flow "{flow.id}" has load_factor_min but no size defined. '
                f'A size is required because the constraint is total_flow_hours >= size * load_factor_min * hours.'
            )

        if flow.size is None and flow.load_factor_max is not None:
            raise PlausibilityError(
                f'Flow "{flow.id}" has load_factor_max but no size defined. '
                f'A size is required because the constraint is total_flow_hours <= size * load_factor_max * hours.'
            )

        # Validate previous_flow_rate type
        if flow.previous_flow_rate is not None:
            if not any(
                [
                    isinstance(flow.previous_flow_rate, np.ndarray) and flow.previous_flow_rate.ndim == 1,
                    isinstance(flow.previous_flow_rate, (int, float, list)),
                ]
            ):
                raise TypeError(
                    f'previous_flow_rate must be None, a scalar, a list of scalars or a 1D-numpy-array. '
                    f'Got {type(flow.previous_flow_rate)}. '
                    f'Different values in different periods or scenarios are not yet supported.'
                )

        # Warning: fixed_relative_profile + status_parameters is unusual
        if flow.fixed_relative_profile is not None and flow.status_parameters is not None:
            logger.warning(
                f'Flow {flow.id} has both a fixed_relative_profile and status_parameters. '
                f'This will allow the flow to be switched active and inactive, '
                f'effectively differing from the fixed_flow_rate.'
            )
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# First, let's find the structure of the file and confirm the line numbers
wc -l flixopt/batched.py
# Find usages of previous_flow_rate with assignments
echo "=== previous_flow_rate assignments ==="
rg -n "previous_flow_rate\s*=" flixopt/
# Find the validate method and the specific line mentioned
echo "=== Line 1568 context (np.isscalar usage) ==="
sed -n '1560,1575p' flixopt/batched.py
# Find the validation block mentioned (1758-1763)
echo "=== Validation block (1758-1763) ==="
sed -n '1755,1770p' flixopt/batched.py
# Check if np.number is referenced anywhere
echo "=== np.number references ==="
rg -n "np\.number" flixopt/
```

Repository: flixOpt/flixopt
Length of output: 2617
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# Find test files that might test previous_flow_rate
echo "=== Test files with previous_flow_rate ==="
find . -name "test*.py" -o -name "*test.py" | xargs grep -l "previous_flow_rate" 2>/dev/null || echo "No matches found"
# Check all previous_flow_rate usages in test files
echo "=== previous_flow_rate in tests ==="
rg -n "previous_flow_rate" tests/ --type py 2>/dev/null || echo "No tests directory or no matches"
# Check numpy scalar usage patterns
echo "=== numpy scalar usage patterns ==="
rg -n "np\.float64|np\.int64|np\.number" flixopt/batched.py | head -20
```

Repository: flixOpt/flixopt
Length of output: 12420
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify np.isscalar behavior with numpy types
python3 << 'EOF'
import numpy as np
# Test np.isscalar with various types
test_values = [
(10, "int"),
(10.5, "float"),
(np.int64(10), "np.int64"),
(np.float64(10.5), "np.float64"),
(np.array([10]), "np.array([10])"),
([10], "list"),
]
print("=== np.isscalar() behavior ===")
for val, name in test_values:
print(f"np.isscalar({name:20s}) = {np.isscalar(val)}")
print("\n=== isinstance with np.number ===")
for val, name in test_values:
is_np_number = isinstance(val, np.number)
print(f"isinstance({name:20s}, np.number) = {is_np_number}")
print("\n=== values.item() return type ===")
arr = np.array([42.5])
scalar = arr.item()
print(f"type(np.array([42.5]).item()) = {type(scalar)}")
print(f"isinstance(scalar, np.number) = {isinstance(scalar, np.number)}")
print(f"isinstance(scalar, (int, float)) = {isinstance(scalar, (int, float))}")
EOF
Length of output: 176
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Look at optimize_accessor.py:352 in full context
echo "=== optimize_accessor.py context around line 352 ==="
sed -n '340,365p' flixopt/optimize_accessor.py
# Check the method signature and what values could be
sed -n '300,365p' flixopt/optimize_accessor.py | head -70
Length of output: 4059
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Let's verify the behavior by looking at numpy documentation and checking
# if the codebase already uses np.number elsewhere
echo "=== Checking if np.number is imported/used in batched.py ==="
head -100 flixopt/batched.py | grep -E "^import|^from"
echo "=== Check if np is imported ==="
grep -n "^import numpy\|^from numpy" flixopt/batched.py | head -5
echo "=== Verify line 1568 again for full context ==="
sed -n '1560,1575p' flixopt/batched.py
echo "=== Full validation method ==="
sed -n '1720,1777p' flixopt/batched.py
Length of output: 4287
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check what numpy scalar types are mentioned in project docs
echo "=== Documentation of np.number in codebase ==="
grep -B2 -A2 "np.number" flixopt/core.py
# Check if there are any other uses of np.number in isinstance checks
echo "=== Other isinstance checks with np.number ==="
rg -n "isinstance.*np\.number" flixopt/
Length of output: 671
Fix validate() to accept numpy scalar types for previous_flow_rate
The validation at lines 1758-1763 rejects numpy scalar types (e.g., np.float64) but should accept them. This creates an inconsistency:
- Line 1568 uses np.isscalar(), which returns True for numpy scalars
- optimize_accessor.py:352 assigns numpy scalars via .item(): target_flow.previous_flow_rate = values.item() if values.size == 1 else values
- Project documentation in core.py explicitly lists np.number as a supported scalar type
Add np.number to the isinstance check:
Proposed fix
if not any(
[
isinstance(flow.previous_flow_rate, np.ndarray) and flow.previous_flow_rate.ndim == 1,
- isinstance(flow.previous_flow_rate, (int, float, list)),
+ isinstance(flow.previous_flow_rate, (int, float, np.number, list)),
]
):

🤖 Prompt for AI Agents
In `@flixopt/batched.py` around lines 1720 - 1777, The validate() method rejects
numpy scalar types for flow.previous_flow_rate; update the type check inside
validate (the block that currently tests isinstance(flow.previous_flow_rate,
np.ndarray) and isinstance(..., (int, float, list))) to also accept numpy scalar
types by including np.number in the isinstance tuple (so numpy scalars like
np.float64 pass), ensuring consistency with np.isscalar() usage elsewhere and
with assignments in optimize_accessor.py that may set a scalar via .item().
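The mismatch can be reproduced in isolation. Below is a minimal sketch of the check with the proposed fix applied; the function name is illustrative, not the actual method from flixopt/batched.py:

```python
import numpy as np

def is_valid_previous_flow_rate(value) -> bool:
    """Sketch of the fixed validate() condition: accept None, plain scalars,
    numpy scalars, lists of scalars, and 1D numpy arrays."""
    if value is None:
        return True
    return any(
        [
            isinstance(value, np.ndarray) and value.ndim == 1,
            isinstance(value, (int, float, np.number, list)),
        ]
    )

# np.isscalar() already treats numpy scalars as scalars...
assert np.isscalar(np.float64(10.5))
# ...and with np.number in the isinstance tuple, they now validate:
assert is_valid_previous_flow_rate(np.float64(10.5))
assert is_valid_previous_flow_rate(np.array([42.5]).item())  # Python float
assert is_valid_previous_flow_rate([1.0, 2.0])
assert is_valid_previous_flow_rate(np.ones(3))
assert not is_valid_previous_flow_rate(np.ones((2, 2)))  # 2D arrays still rejected
```

Note that np.int64 does not subclass Python int, so without np.number in the tuple it would be rejected even though np.isscalar() accepts it.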
if id(element) in self._registered_elements:
    return  # Already registered to this FlowSystem
# Check if any other FlowSystem has claimed this element — not possible to detect
# with id()-based tracking alone, but duplicates are caught by _check_if_element_is_unique
|
Reintroduce cross‑FlowSystem ownership protection.
The new id()-based tracking only prevents re‑adding an element to the same FlowSystem and no longer detects elements already attached to a different FlowSystem. This can lead to shared mutable state across systems and subtle cross‑contamination. Consider a weak‑ref owner registry to enforce exclusivity.
🛠️ Possible fix using a weak owner registry
+import weakref
+
+_ELEMENT_OWNERS = weakref.WeakKeyDictionary()
+
def _check_if_element_already_assigned(self, element: Element) -> None:
- if id(element) in self._registered_elements:
- return # Already registered to this FlowSystem
- # Check if any other FlowSystem has claimed this element — not possible to detect
- # with id()-based tracking alone, but duplicates are caught by _check_if_element_is_unique
+ owner = _ELEMENT_OWNERS.get(element)
+ if owner is not None and owner is not self:
+ raise ValueError(f'Element {element.id} is already assigned to another FlowSystem.')
+    _ELEMENT_OWNERS[element] = self

🤖 Prompt for AI Agents
In `@flixopt/flow_system.py` around lines 1694 - 1698, The id()-only guard in the
FlowSystem register path allows an element to be added to different FlowSystem
instances; switch to a weak-ref owner registry to enforce exclusive ownership:
add a module-level or FlowSystem-class attribute like a
weakref.WeakKeyDictionary mapping element -> FlowSystem owner, update the
register path (where id(element) and _registered_elements are checked) to first
consult the weak owner registry and raise or refuse registration if another
FlowSystem is recorded, set the weak owner entry to self when registration
succeeds, and clear the weak owner entry on element removal/deregistration (or
let it vanish when the element is collected); keep the
_check_if_element_is_unique call but use the weak registry as the authoritative
cross‑FlowSystem protection.
if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
    structure: dict[str, Any] = {'__class__': obj.__class__.__name__}
    for field in dataclasses.fields(obj):
        value = getattr(obj, field.name)
        if value is None:
            continue
        processed, field_arrays = _extract_recursive(value, f'{path}.{field.name}', coords)
        arrays.update(field_arrays)
        if processed is not None and not _is_empty(processed):
            structure[field.name] = processed
    return structure, arrays
Prevent round‑trip failures for unregistered dataclasses.
_extract_recursive emits a __class__ marker for any dataclass, but resolve_reference_structure only reconstructs classes in CLASS_REGISTRY. Any dataclass not explicitly registered will serialize and then fail to deserialize. Consider failing fast at serialization time (or serialize as a plain dict without __class__).
🛠️ Proposed guard to fail fast
- if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
- structure: dict[str, Any] = {'__class__': obj.__class__.__name__}
+ if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
+ if obj.__class__.__name__ not in CLASS_REGISTRY:
+ raise ValueError(
+ f'Dataclass {obj.__class__.__name__} is not registered for IO. '
+ 'Use `@register_class_for_io` or convert to basic types.'
+ )
+        structure: dict[str, Any] = {'__class__': obj.__class__.__name__}

🤖 Prompt for AI Agents
In `@flixopt/structure.py` around lines 770 - 780, The serialization branch in
_extract_recursive currently tags any dataclass with a '__class__' marker but
resolve_reference_structure only reconstructs classes listed in CLASS_REGISTRY,
causing unregistered dataclasses to fail round-trip; update _extract_recursive
to detect when obj.__class__.__name__ is not present in CLASS_REGISTRY and
either (a) remove the '__class__' marker and serialize the dataclass as a plain
dict of its non-empty fields, or (b) raise an explicit serialization error with
a clear message so callers fail fast; reference _extract_recursive,
resolve_reference_structure, and CLASS_REGISTRY when implementing the guard.
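The fail-fast variant can be sketched in isolation. CLASS_REGISTRY, register_class_for_io, and serialize_dataclass below are simplified stand-ins for the structure.py internals, shown only to illustrate the guard:

```python
import dataclasses
from typing import Any

# Stand-in registry: only registered dataclasses may carry a '__class__' marker.
CLASS_REGISTRY: dict[str, type] = {}

def register_class_for_io(cls: type) -> type:
    CLASS_REGISTRY[cls.__name__] = cls
    return cls

def serialize_dataclass(obj: Any) -> dict[str, Any]:
    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
        if obj.__class__.__name__ not in CLASS_REGISTRY:
            # Fail at serialization time instead of at deserialization time.
            raise ValueError(
                f'Dataclass {obj.__class__.__name__} is not registered for IO. '
                'Use `@register_class_for_io` or convert to basic types.'
            )
        structure: dict[str, Any] = {'__class__': obj.__class__.__name__}
        for field in dataclasses.fields(obj):
            value = getattr(obj, field.name)
            if value is not None:
                structure[field.name] = value
        return structure
    raise TypeError('not a dataclass instance')

@register_class_for_io
@dataclasses.dataclass
class Registered:
    x: int = 1

@dataclasses.dataclass
class Unregistered:
    x: int = 1

print(serialize_dataclass(Registered()))  # {'__class__': 'Registered', 'x': 1}
try:
    serialize_dataclass(Unregistered())   # raises before anything is written
except ValueError as exc:
    print(exc)
```

Erroring here surfaces the missing registration at save time, when the offending class is easy to identify, rather than as an opaque lookup failure on reload.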
…n paths
- Serialize scalars as 0-d DataArrays (no broadcasting) alongside arrays
- Use flow ids instead of numeric indices for IdList serialization paths
- Unwrap 0-d DataArrays back to Python scalars on deserialization
- Move flow_id uniqueness check after _connect_flows() to avoid false collisions on unresolved None flow_ids
- Simplify FlowSystem restoration with _resolve helper for ::: references

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
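The 0-d round-trip described in that commit can be sketched with plain numpy (the real code uses xarray DataArrays; the helper names here are illustrative):

```python
import numpy as np

def serialize_scalar(value):
    # Store a scalar as a 0-d array instead of broadcasting it over coords.
    return np.asarray(value)

def deserialize(value):
    # Unwrap 0-d arrays back to Python scalars; pass real arrays through.
    if isinstance(value, np.ndarray) and value.ndim == 0:
        return value.item()
    return value

stored = serialize_scalar(42.5)
assert stored.ndim == 0 and stored.shape == ()
restored = deserialize(stored)
assert restored == 42.5 and isinstance(restored, float)
# Dimensional data is left untouched:
arr = np.array([1.0, 2.0])
assert deserialize(arr) is arr
```

Keeping scalars 0-d preserves the "was this a scalar or an array?" distinction through a save/load cycle, which broadcasting at serialization time would destroy.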
Summary by CodeRabbit
New Features
Bug Fixes & Improvements
bus, flow_id) for better clarity and flexibility.

Documentation