```python
        plt.xlim(x_min, x_max)
        plt.savefig(os.path.join(figs_path, fig_name))
        plt.close()
    except (AttributeError, ValueError):
```
Check notice: Code scanning / CodeQL: Empty except (Note)

Copilot Autofix (AI, 1 day ago):
General fix: replace empty `except` blocks with handling that preserves observability (typically warning or exception logging), while keeping control-flow behavior unchanged where appropriate.

Best fix here: in `t3/simulate/cantera_IDT.py`, in `compute_idt(...)`, change the `except (AttributeError, ValueError): pass` block to log a warning that includes the exception info. This keeps the current behavior (do not fail the IDT computation if plotting fails) but avoids silent suppression. No new imports are needed because `py_logger` is already defined in this file.
```diff
@@ -905,8 +905,8 @@
             plt.xlim(x_min, x_max)
             plt.savefig(os.path.join(figs_path, fig_name))
             plt.close()
-        except (AttributeError, ValueError):
-            pass
+        except (AttributeError, ValueError) as e:
+            py_logger.warning(f'Could not generate IDT figure "{fig_name}": {e}', exc_info=True)
         return idt
```
```python
        ax.set_yscale('log')
        ax.legend(loc='lower right')
        fig.savefig(os.path.join(figs_path, fig_name))
    except (AttributeError, ValueError):
```
Check notice: Code scanning / CodeQL: Empty except (Note)

Copilot Autofix (AI, 1 day ago):
To fix this without changing functionality, keep catching `AttributeError` and `ValueError` (so one bad plot does not stop the rest), but replace `pass` with a warning log that includes context (`reactor_index`, `phi`, `p`, and `fig_name`) and the stack trace (`exc_info=True`).

Edit `t3/simulate/cantera_IDT.py`, inside `plot_idt_vs_temperature`, at the `except (AttributeError, ValueError):` block around lines 977–978. No new imports are needed because `py_logger` is already defined in this file.
```diff
@@ -975,7 +975,11 @@
                 fig.savefig(os.path.join(figs_path, fig_name))
                 plt.close(fig)
             except (AttributeError, ValueError):
-                pass
+                py_logger.warning(
+                    f"Skipping IDT plot due to invalid data/plotting error "
+                    f"(reactor_index={reactor_index}, phi={phi}, p={p}, fig='{fig_name}')",
+                    exc_info=True,
+                )
```
```python
def perturb_enthalpy(original_path: str,
```
Force-pushed from `8d5b067` to `1b80838`.
Force-pushed from `4bb147e` to `25d4202`.
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main     #150      +/-   ##
==========================================
- Coverage   72.37%   70.31%    -2.06%
==========================================
  Files          35       37        +2
  Lines        4727     5721      +994
  Branches      998     1238      +240
==========================================
+ Hits         3421     4023      +602
- Misses        968     1306      +338
- Partials      338      392       +54
```
```python
        # If we changed the radical for max_radical_dt, re-extract SA at idt_idx
        if self.idt_criterion == 'max_radical_dt' and radical != obs_name:
            obs_name = radical
```
Check notice: Code scanning / CodeQL: Unused local variable (Note)

Copilot Autofix (AI, 1 day ago):
To fix this without changing functionality, remove the no-op assignment `obs_name = radical` and keep the condition block only as documentation/placeholder if desired, or remove it entirely. Since the assignment result is never used, deleting it is behavior-preserving.

In `t3/simulate/cantera_IDT.py`, edit the block around lines 600–603:
- Keep the explanatory comment (optionally updated).
- Remove the inner `if ...` block that assigns to `obs_name`.

No imports, methods, or new definitions are needed.
```diff
@@ -597,9 +597,8 @@
         if idt_idx > len(times) - 10 or times[idt_idx] < 1e-12:
             return None, dict(), dict()

-        # If we changed the radical for max_radical_dt, re-extract SA at idt_idx
-        if self.idt_criterion == 'max_radical_dt' and radical != obs_name:
-            obs_name = radical
+        # If we changed the radical for max_radical_dt, SA is still taken from sa_history[idt_idx]
+        # computed during integration.

         idt = float(times[idt_idx])
         sa_at_idt = sa_history[idt_idx]
```
```python
        ax.xaxis.get_major_formatter().set_scientific(True)
        plt.savefig(os.path.join(figs_path, fig_name))
        plt.close()
    except (AttributeError, ValueError):
```
Check notice: Code scanning / CodeQL: Empty except (Note)

Copilot Autofix (AI, 1 day ago):
The safest fix is to preserve current functionality (do not fail the IDT computation if plotting fails) while making the exception handling non-empty and diagnosable.

Best change in `t3/simulate/cantera_IDT.py` (around lines 859–871):
- Replace the empty `except (AttributeError, ValueError): pass` with logging via the existing module logger (`py_logger`), including exception details.
- Use `py_logger.warning(..., exc_info=True)` so the stack trace is captured for debugging, while execution continues and returns `idt` as before.

No new imports or dependencies are needed since `logging` and `py_logger` already exist in this file.
```diff
@@ -868,7 +868,10 @@
             plt.savefig(os.path.join(figs_path, fig_name))
             plt.close()
         except (AttributeError, ValueError):
-            pass
+            py_logger.warning(
+                f'Failed to save IDT plot to {os.path.join(figs_path, fig_name)}.',
+                exc_info=True,
+            )
         return idt

     # max_dOHdt or max_radical_dt: use radical concentration
```
Docs: Updated reference and examples with IDT additions
Pull request overview

Note: Copilot was unable to run its full agentic suite in this review.
Adds a Cantera-based ignition delay time (IDT) simulation + sensitivity-analysis (SA) pathway, including role-driven fuel/oxidizer/diluent mixtures and schema/docs updates, plus new example inputs and extensive tests.
Changes:
- Introduces `CanteraIDT` (IDT simulation, brute-force + adjoint SA, experimental comparison) and `CanteraRCM` (constant-pressure reactor variant).
- Adds role-based mixture specification (`role`, `equivalence_ratios`, oxidizer/diluent parameters) and new sensitivity schema fields for IDT SA.
- Improves Cantera model "fix-up" utilities, updates main execution/restart/SA plumbing, and adds new tests/data and documentation/examples.
Reviewed changes

Copilot reviewed 22 out of 28 changed files in this pull request and generated 13 comments.

Summary per file:
| File | Description |
|---|---|
| tests/test_simulate/test_cantera_IDT.py | New end-to-end and unit tests for IDT simulation, SA utilities, modes, criteria, experiment comparison, and failure handling. |
| tests/test_schema.py | Adds enum + validation coverage for new schema fields (IDT criterion/method, role fields, max workers). |
| tests/test_main.py | Updates expected defaults/paths and restart tuple semantics; includes new sensitivity/species/reactor defaults. |
| tests/test_common.py | Adds tests for new helper utilities (species label cleanup, stoich, φ sweeps). |
| tests/data/sa_idt_2.yaml | New SA fixture for get_top_sa_coefficients() tests. |
| tests/data/models/eA_units.yaml | New Cantera YAML fixture to validate activation-energy unit parsing. |
| t3/utils/fix_cantera.py | Adds logging, dedup-mark tracking, and new handling for invalid rate coefficient errors. |
| t3/simulate/rmg_constant_tp.py | Updates adapter SA API signature to match the new base adapter interface. |
| t3/simulate/cantera_rcm.py | New adapter subclass using constant-pressure reactors for RCM-like IDT. |
| t3/simulate/cantera_pfr_t_profile.py | Updates SA API signature forwarding to base. |
| t3/simulate/cantera_base.py | Updates SA API signature and documents unused params for signature parity. |
| t3/simulate/cantera_IDT.py | New main Cantera IDT adapter + SA + rate utilities + plotting + experiment comparison. |
| t3/simulate/adapter.py | Expands abstract SA method signature to include top-N pruning, max workers, save-to-disk flag. |
| t3/simulate/__init__.py | Exposes the new CanteraRCM adapter. |
| t3/schema.py | Adds enums + schema fields for IDT SA and role-based φ sweeps; relaxes T/P list length rules for row mode. |
| t3/runners/rmg_runner.py | Adds optional post-run Cantera model fixing and improves folder creation + error message typo. |
| t3/main.py | Integrates IDT SA execution, adds IDT SA-based refinement path, updates restart logic, adds figures+SA paths, computes φ sweep concentrations at init. |
| t3/common.py | Adds helpers for removing numeric suffixes, numpy conversion, atom counts, oxidizer stoich, and φ-driven concentration generation. |
| examples/idt_with_experiment/input.yml | New example showing IDT SA with experiment comparison wiring. |
| examples/idt_with_experiment/experimental_idt.yaml | Example experimental IDT YAML format + placeholder data. |
| docs/input_reference.md | Documents new adapters, new IDT SA fields, role-based mixtures, and idt_mode. |
| docs/examples.md | Adds a new documented example entry for IDT + experiment comparison. |
```python
                    if idt is None:
                        continue
```
This drops failed working points entirely (`idt is None`), but the docstring and downstream logic imply `None` should be stored for failures. Skipping keys makes it impossible to distinguish "failed" from "never simulated", and can break consumers that expect a fixed T-grid (including plotting/SA alignment). Store `None` values at `idt_dict[phi][P][T]` instead of continuing.
```diff
-                    if idt is None:
-                        continue
```
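A minimal, self-contained sketch of the suggested behavior; `collect_idts`, `compute_idt`, and the loop variables are hypothetical stand-ins for the adapter's actual code:

```python
# Hypothetical sketch: keep failed points as None instead of skipping them,
# so every (phi, P, T) key exists and the T-grid stays complete for
# plotting/SA alignment.
def collect_idts(compute_idt, phis, pressures, temperatures):
    idt_dict = {}
    for phi in phis:
        idt_dict[phi] = {}
        for p in pressures:
            idt_dict[phi][p] = {}
            for t in temperatures:
                idt = compute_idt(phi, p, t)  # may return None on failure
                idt_dict[phi][p][t] = idt  # store None rather than `continue`
    return idt_dict
```

With this shape, a consumer can tell a failed point (`None`) apart from a point that was never requested (missing key).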
```python
        concentration = np.asarray([row[0] for row in time_history(radical_label).X], dtype=np.float32)
        if all(c == 0 for c in concentration):
            return None
        dc_dt = np.diff(concentration) / np.diff(times)
```
Unlike the `max_dTdt` branch, the radical-based branch divides by `np.diff(times)` without guarding against zero time steps. If Cantera returns repeated times (or a very small dt), this can produce inf/NaN and a bogus IDT index. Apply the same `dt = np.where(dt == 0, tiny, dt)` pattern here before division.
```diff
-        dc_dt = np.diff(concentration) / np.diff(times)
+        dt = np.diff(times)
+        dt = np.where(dt == 0, np.finfo(float).tiny, dt)
+        dc_dt = np.diff(concentration) / dt
```
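A self-contained demonstration of the suggested guard (the array values here are made up): a repeated time stamp yields a zero `dt`, and the `np.where` clamp keeps the derivative finite.

```python
import numpy as np

# Made-up history with a repeated time stamp (zero dt between samples 1 and 2).
times = np.array([0.0, 1e-6, 1e-6, 3e-6])
concentration = np.array([0.0, 0.1, 0.2, 0.4])

dt = np.diff(times)
dt = np.where(dt == 0, np.finfo(float).tiny, dt)  # clamp zero steps
dc_dt = np.diff(concentration) / dt

# Without the clamp, the zero step would produce inf here.
assert np.all(np.isfinite(dc_dt))
```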
```python
    i, p_low, p_high = 0, 0.0, 0.0
    for i in range(len(reaction_data['rate-constants']) - 1):
        p_low = get_pressure_from_cantera(reaction_data['rate-constants'][i]['P'])
        p_high = get_pressure_from_cantera(reaction_data['rate-constants'][i + 1]['P'])
        if p_low <= P <= p_high:
            break
    k_low = calculate_arrhenius_rate_coefficient(A=reaction_data['rate-constants'][i]['A'],
                                                 n=reaction_data['rate-constants'][i]['b'],
                                                 Ea=reaction_data['rate-constants'][i]['Ea'],
                                                 T=T, Ea_units=Ea_units)
    k_high = calculate_arrhenius_rate_coefficient(A=reaction_data['rate-constants'][i + 1]['A'],
                                                  n=reaction_data['rate-constants'][i + 1]['b'],
                                                  Ea=reaction_data['rate-constants'][i + 1]['Ea'],
```
For PLOG, if `P` is outside the tabulated pressure range, the loop never breaks and `i` ends up at the last segment, causing out-of-range pressures below the minimum to incorrectly interpolate using the highest-pressure entries. Implement the standard behavior: if `P <= P_min`, use the first rate; if `P >= P_max`, use the last rate; otherwise interpolate between the bracketing pair.
```diff
-    i, p_low, p_high = 0, 0.0, 0.0
-    for i in range(len(reaction_data['rate-constants']) - 1):
-        p_low = get_pressure_from_cantera(reaction_data['rate-constants'][i]['P'])
-        p_high = get_pressure_from_cantera(reaction_data['rate-constants'][i + 1]['P'])
-        if p_low <= P <= p_high:
-            break
-    k_low = calculate_arrhenius_rate_coefficient(A=reaction_data['rate-constants'][i]['A'],
-                                                 n=reaction_data['rate-constants'][i]['b'],
-                                                 Ea=reaction_data['rate-constants'][i]['Ea'],
-                                                 T=T, Ea_units=Ea_units)
-    k_high = calculate_arrhenius_rate_coefficient(A=reaction_data['rate-constants'][i + 1]['A'],
-                                                  n=reaction_data['rate-constants'][i + 1]['b'],
-                                                  Ea=reaction_data['rate-constants'][i + 1]['Ea'],
+    rate_constants = reaction_data['rate-constants']
+    pressures = [get_pressure_from_cantera(rate_constant['P']) for rate_constant in rate_constants]
+    if P <= pressures[0]:
+        return calculate_arrhenius_rate_coefficient(A=rate_constants[0]['A'],
+                                                    n=rate_constants[0]['b'],
+                                                    Ea=rate_constants[0]['Ea'],
+                                                    T=T, Ea_units=Ea_units)
+    if P >= pressures[-1]:
+        return calculate_arrhenius_rate_coefficient(A=rate_constants[-1]['A'],
+                                                    n=rate_constants[-1]['b'],
+                                                    Ea=rate_constants[-1]['Ea'],
+                                                    T=T, Ea_units=Ea_units)
+    i, p_low, p_high = 0, pressures[0], pressures[1]
+    for i in range(len(rate_constants) - 1):
+        p_low = pressures[i]
+        p_high = pressures[i + 1]
+        if p_low <= P <= p_high:
+            break
+    k_low = calculate_arrhenius_rate_coefficient(A=rate_constants[i]['A'],
+                                                 n=rate_constants[i]['b'],
+                                                 Ea=rate_constants[i]['Ea'],
+                                                 T=T, Ea_units=Ea_units)
+    k_high = calculate_arrhenius_rate_coefficient(A=rate_constants[i + 1]['A'],
+                                                  n=rate_constants[i + 1]['b'],
+                                                  Ea=rate_constants[i + 1]['Ea'],
```
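For illustration, here is a toy version of the standard PLOG behavior the comment describes (clamp outside the table, interpolate ln k vs ln P inside it). The function and data are hypothetical, not the project's API:

```python
import math

def plog_k(pressures, ks, P):
    """Toy PLOG evaluation: clamp below/above the tabulated pressure range,
    otherwise log-interpolate between the bracketing pressure pair."""
    if P <= pressures[0]:
        return ks[0]
    if P >= pressures[-1]:
        return ks[-1]
    i = 0
    for i in range(len(pressures) - 1):
        if pressures[i] <= P <= pressures[i + 1]:
            break
    # Linear interpolation of ln(k) in ln(P), the conventional PLOG rule.
    frac = (math.log(P) - math.log(pressures[i])) / (math.log(pressures[i + 1]) - math.log(pressures[i]))
    return math.exp(math.log(ks[i]) + frac * (math.log(ks[i + 1]) - math.log(ks[i])))
```

Below `pressures[0]` the low-pressure rate is returned unchanged, matching the `P <= P_min` case the comment asks for.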
```python
                logger.info(f'Removing rxn {rxns[0]} from reaction list:\n({content["reactions"][rxns[0]]})')
                content['reactions'] = remove_rxn(reactions=content['reactions'], index=rxns[0])
```
The `rxns` indices returned by `get_dup_rxn_indices()` are 1-based (the earlier code marks `i - 1`). In the removal path you index/remove using `rxns[0]` without converting to 0-based, so it removes/logs the wrong reaction (off-by-one). Convert to `rxns[0] - 1` for both the log access and the `remove_rxn(..., index=...)` call.
```diff
-                logger.info(f'Removing rxn {rxns[0]} from reaction list:\n({content["reactions"][rxns[0]]})')
-                content['reactions'] = remove_rxn(reactions=content['reactions'], index=rxns[0])
+                rxn_index = rxns[0] - 1
+                logger.info(f'Removing rxn {rxns[0]} from reaction list:\n({content["reactions"][rxn_index]})')
+                content['reactions'] = remove_rxn(reactions=content['reactions'], index=rxn_index)
```
```python
            tb (str): The traceback.
        """
        content = read_yaml_file(model_path)
        rxn = get_rxn_to_remove(model_path=model_path, tb=tb)
```
`get_rxn_to_remove()` can return `None` (and may also fail to find an exact equation match). In that case `content['reactions'][rxn]` raises immediately and prevents the fixer from proceeding. Add a guard: if `rxn` is `None`, log the inability to identify the reaction and return without modifying the file.
```diff
         rxn = get_rxn_to_remove(model_path=model_path, tb=tb)
+        if rxn is None:
+            logger.warning('Could not identify the reaction with an invalid rate coefficient; '
+                           'leaving the Cantera file unchanged.')
+            return
```
```python
                    future_to_task[future] = (task, task_delta_h, task_delta_k)

                for future in cf.as_completed(future_to_task):
                    task, task_delta_h, task_delta_k = future_to_task[future]
                    try:
                        idt_dict = future.result()
                        sa_dict[task[0]]['IDT'][task[1]] = idt_dict
                        succeeded += 1
                    except Exception as e:
                        # Retry once with halved perturbation
                        kind, index = task
                        retry_delta_h = task_delta_h / 2 if kind == 'thermo' else task_delta_h
                        retry_delta_k = task_delta_k / 2 if kind == 'kinetics' else task_delta_k
                        py_logger.warning(f"Task {task} failed: {e}. Retrying with halved perturbation.")
                        retried += 1
                        try:
                            retry_future = executor.submit(worker,
                                                           task,
                                                           self.paths['cantera annotated'],
                                                           self.paths['SA'],
                                                           self.t3,
                                                           self.paths,
                                                           self.rmg,
                                                           self.logger,
                                                           1.0,
                                                           retry_delta_h,
                                                           retry_delta_k,
                                                           )
                            idt_dict = retry_future.result()
                            sa_dict[task[0]]['IDT'][task[1]] = idt_dict
                            succeeded += 1
                        except Exception as e2:
                            failed += 1
                            if self.logger is not None:
                                self.logger.error(f"Task {task} permanently failed after retry:\n{e2}\n"
                                                  f"{traceback.format_exc()}")
```
The retry path submits a new task but immediately blocks on `retry_future.result()` inside the `as_completed` loop, which serializes retries and reduces parallel throughput (especially when multiple tasks fail). Consider collecting retry futures and processing them via `as_completed` as well, or implement the retry logic inside `worker()` so each task uses a single process slot until it either succeeds or definitively fails.
```diff
-                    future_to_task[future] = (task, task_delta_h, task_delta_k)
-
-                for future in cf.as_completed(future_to_task):
-                    task, task_delta_h, task_delta_k = future_to_task[future]
-                    try:
-                        idt_dict = future.result()
-                        sa_dict[task[0]]['IDT'][task[1]] = idt_dict
-                        succeeded += 1
-                    except Exception as e:
-                        # Retry once with halved perturbation
-                        kind, index = task
-                        retry_delta_h = task_delta_h / 2 if kind == 'thermo' else task_delta_h
-                        retry_delta_k = task_delta_k / 2 if kind == 'kinetics' else task_delta_k
-                        py_logger.warning(f"Task {task} failed: {e}. Retrying with halved perturbation.")
-                        retried += 1
-                        try:
-                            retry_future = executor.submit(worker,
-                                                           task,
-                                                           self.paths['cantera annotated'],
-                                                           self.paths['SA'],
-                                                           self.t3,
-                                                           self.paths,
-                                                           self.rmg,
-                                                           self.logger,
-                                                           1.0,
-                                                           retry_delta_h,
-                                                           retry_delta_k,
-                                                           )
-                            idt_dict = retry_future.result()
-                            sa_dict[task[0]]['IDT'][task[1]] = idt_dict
-                            succeeded += 1
-                        except Exception as e2:
-                            failed += 1
-                            if self.logger is not None:
-                                self.logger.error(f"Task {task} permanently failed after retry:\n{e2}\n"
-                                                  f"{traceback.format_exc()}")
+                    future_to_task[future] = (task, task_delta_h, task_delta_k, False)
+
+                while future_to_task:
+                    done, _ = cf.wait(future_to_task, return_when=cf.FIRST_COMPLETED)
+                    for future in done:
+                        task, task_delta_h, task_delta_k, is_retry = future_to_task.pop(future)
+                        try:
+                            idt_dict = future.result()
+                            sa_dict[task[0]]['IDT'][task[1]] = idt_dict
+                            succeeded += 1
+                        except Exception as e:
+                            if not is_retry:
+                                # Retry once with halved perturbation
+                                kind, index = task
+                                retry_delta_h = task_delta_h / 2 if kind == 'thermo' else task_delta_h
+                                retry_delta_k = task_delta_k / 2 if kind == 'kinetics' else task_delta_k
+                                py_logger.warning(
+                                    f"Task {task} failed: {e}. Retrying with halved perturbation."
+                                )
+                                retried += 1
+                                retry_future = executor.submit(worker,
+                                                               task,
+                                                               self.paths['cantera annotated'],
+                                                               self.paths['SA'],
+                                                               self.t3,
+                                                               self.paths,
+                                                               self.rmg,
+                                                               self.logger,
+                                                               1.0,
+                                                               retry_delta_h,
+                                                               retry_delta_k,
+                                                               )
+                                future_to_task[retry_future] = (task, retry_delta_h, retry_delta_k, True)
+                            else:
+                                failed += 1
+                                if self.logger is not None:
+                                    self.logger.error(f"Task {task} permanently failed after retry:\n{e}\n"
+                                                      f"{traceback.format_exc()}")
```
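The non-blocking retry pattern suggested here can be sketched stand-alone (all names below are illustrative, and a thread pool stands in for the real process pool): failed futures are resubmitted, and the loop keeps draining completed work instead of blocking on a single retry.

```python
import concurrent.futures as cf

def run_with_retry(tasks, work):
    """Run `work(task, is_retry)` for each task, retrying each failure once
    without ever blocking the result-draining loop on a single future."""
    results, failed = {}, []
    with cf.ThreadPoolExecutor(max_workers=4) as executor:
        pending = {executor.submit(work, t, False): (t, False) for t in tasks}
        while pending:
            done, _ = cf.wait(pending, return_when=cf.FIRST_COMPLETED)
            for fut in done:
                task, is_retry = pending.pop(fut)
                try:
                    results[task] = fut.result()
                except Exception:
                    if not is_retry:
                        # Resubmit instead of calling .result() inline,
                        # so other futures keep completing in parallel.
                        pending[executor.submit(work, task, True)] = (task, True)
                    else:
                        failed.append(task)
    return results, failed
```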
```python
        ax.scatter([1000 / t for t in exp_data[phi][p].keys()],
                   [e * 1e-6 for e in exp_data[phi][p].values()],
                   label='experiment', color='orange', marker='D')
```
The docstrings and the example experimental YAML describe IDT units as seconds, but the plot scales experimental values by `1e-6` (a microseconds conversion). This will misplot by a factor of 1e6 unless the experimental data is actually in µs. Either remove the scaling, or clearly enforce/document that experimental IDTs are in µs and keep the conversion consistent across `compare_with_experiment()` and the docs/examples.
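One way to keep the two series consistent, sketched under the assumption that all stored IDTs (model and experimental) are in seconds; the helper name is hypothetical:

```python
def idt_s_to_us(idt_s: float) -> float:
    """Convert an ignition delay time from seconds to microseconds,
    so model and experimental series share a single plotting unit."""
    return idt_s * 1e6
```

Routing both series through one explicit conversion helper avoids a bare scale factor appearing on only one of them.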
```python
        visited_species, species_keys = list(), list()
        visited_rxns, rxn_keys = list(), list()
```
`visited_species` mixes two different identifier domains: Cantera species indices (`index`) and T3 species keys (`spc_key`). This can cause incorrect dedup behavior (skipping needed work or duplicating work), and the integers can even accidentally collide across domains. Use separate sets (e.g., `visited_ct_species_indices` and `visited_t3_species_keys`) or normalize on one concept (e.g., species labels) throughout this function.
```python
            if token == 'thermo':
                if index in visited_species:
                    continue
                visited_species.append(index)
```
(Same comment as above: `visited_species` mixes Cantera species indices and T3 species keys here as well.)
```python
                for spc in list(reaction.r_species) + list(reaction.p_species):
                    spc_key = self.get_species_key(species=spc)
                    if spc_key in visited_species:
                        continue
                    visited_species.append(spc_key)
```
(Same comment as above: this loop also appends `spc_key` values to the shared `visited_species` list.)
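A stand-alone sketch of the separate-sets fix (the set names follow the reviewer's suggestion; the helper functions are hypothetical): identical integers in the two domains no longer collide.

```python
# Hypothetical dedup helpers: one set per identifier domain.
visited_ct_species_indices = set()
visited_t3_species_keys = set()

def first_visit_ct_index(index):
    """True only the first time a Cantera species index is seen."""
    if index in visited_ct_species_indices:
        return False
    visited_ct_species_indices.add(index)
    return True

def first_visit_t3_key(spc_key):
    """True only the first time a T3 species key is seen."""
    if spc_key in visited_t3_species_keys:
        return False
    visited_t3_species_keys.add(spc_key)
    return True
```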
Added the Cantera IDT adapter.

Allow users to assign "roles" (fuel, oxygen, nitrogen) to species, and compute equivalence ratios automatically.
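As a reminder of the definition behind the automatic computation (this is the generic combustion relation, not the project's actual helper), the equivalence ratio normalizes the fuel/oxidizer ratio by its stoichiometric value:

```python
def equivalence_ratio(n_fuel: float, n_o2: float, o2_per_fuel_stoich: float) -> float:
    """phi = (n_fuel / n_O2) / (n_fuel / n_O2)_stoich.
    o2_per_fuel_stoich is the moles of O2 required per mole of fuel."""
    return (n_fuel / n_o2) * o2_per_fuel_stoich

# Example: CH4 + 2 O2 -> CO2 + 2 H2O, so o2_per_fuel_stoich = 2 for methane.
```

phi = 1 is stoichiometric, phi > 1 is fuel-rich, and phi < 1 is fuel-lean.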