Add disc injection-moulding cooling simulation notebook #1279

Tuesdaythe13th wants to merge 6 commits into NVIDIA:main from
Conversation
Adds notebooks/disc_cooling_sim.ipynb with an Open-in-Colab badge, covering 2-D axisymmetric heat diffusion, Avrami crystallinity kinetics, warp-risk scoring, 2-D field visualisations, radial profile plots, and a mould-temperature parameter sweep.

https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB

Signed-off-by: Claude <noreply@anthropic.com>
📝 Walkthrough

A new GPU-accelerated disc cooling simulation notebook built on Warp is introduced, featuring configurable geometry and material properties, explicit finite-difference heat diffusion in cylindrical coordinates, Avrami-style crystallisation updates, warp-risk scoring, and parameter-sweep visualisation capabilities.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/Notebook
    participant Setup as Setup Phase
    participant GPU as GPU Memory
    participant Kernels as Warp Kernels
    participant Sim as ArtifexCoolingSim
    participant Viz as Visualization
    User->>Setup: Define DiscConfig, CoolingParams
    Setup->>GPU: Allocate temperature, crystallinity, warp_risk arrays
    GPU-->>Sim: Initialize arrays
    Sim->>Kernels: Call init_temperature
    Kernels->>GPU: Set initial T field
    Kernels->>GPU: Return initialized state
    loop Time-stepping loop
        Sim->>Kernels: Call step_temperature
        Kernels->>GPU: Compute heat diffusion (FD in cylindrical coords)
        Kernels->>GPU: Apply boundary conditions
        Kernels->>GPU: Update temperature field
        Sim->>Kernels: Call update_crystallinity
        Kernels->>GPU: Compute Avrami crystallisation update
        Kernels->>GPU: Update crystallinity field
        Sim->>Kernels: Call compute_warp_risk
        Kernels->>GPU: Score warp risk from gradients
        Kernels->>GPU: Update warp_risk field (mid-plane only)
    end
    Sim->>GPU: Fetch results (metrics, fields)
    GPU-->>Sim: Return results dictionary
    Sim->>Viz: Pass final fields and metrics
    Viz->>User: Display temperature, crystallinity, warp-risk plots and profiles
    User->>User: Execute parameter sweep (mould temp vs warp risk)
    User->>Viz: Generate bar plots and pass/fail analysis
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
🧹 Nitpick comments (5)
notebooks/disc_cooling_sim.ipynb (5)
638-660: Parameter sweep creates a new `ArtifexCoolingSim` instance per iteration.

Each sweep iteration allocates new GPU arrays. For memory efficiency and speed, consider reusing the simulation instance and just re-initializing the arrays.

♻️ Suggested improvement

```diff
 T_mold_range = np.linspace(288, 353, 12)  # 15 °C – 80 °C in K
 sweep_results = []
+sim = ArtifexCoolingSim(config, device=DEVICE)  # Reuse instance
 for T_mold_K in T_mold_range:
     p = CoolingParams()
     # ... parameter setup ...
-    s = ArtifexCoolingSim(config, device=DEVICE)
-    res = s.simulate_cooling(p)
+    res = sim.simulate_cooling(p)
     sweep_results.append({...})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 638 - 660, The loop creates a new ArtifexCoolingSim on each T_mold_K which allocates GPU arrays each iteration; instead, instantiate one ArtifexCoolingSim before the loop and reuse it by reinitializing per-run state via CoolingParams (set p.T_mold, p.T_init, etc.) and calling simulate_cooling on the same simulator instance, ensuring any simulator-level buffers or CUDA arrays are reset or reallocated only when shape changes; update the code to move "s = ArtifexCoolingSim(config, device=DEVICE)" outside the for-loop and keep the existing per-iteration creation of CoolingParams and call s.simulate_cooling(p), reusing sweep_results as before.
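The allocate-once, re-initialize-per-run pattern the comment describes can be illustrated with a small NumPy stand-in (this `ToySim` class is hypothetical and only mimics the shape of the real `ArtifexCoolingSim`, which holds Warp GPU arrays):

```python
import numpy as np

class ToySim:
    """Hypothetical stand-in for the notebook's simulator: allocates its
    field array once in __init__ and re-initialises it on each run,
    instead of re-allocating inside the sweep loop."""
    def __init__(self, shape):
        self.T = np.empty(shape)   # allocated once, reused across runs
        self.alloc_count = 1       # would stay 1 even after many sweeps

    def simulate_cooling(self, T_mold):
        self.T[:] = T_mold         # re-initialise instead of re-allocate
        return {"T_mold": T_mold, "mean_T": float(self.T.mean())}

# One instance outside the loop, reused for every sweep point.
sim = ToySim((8, 8))
sweep_results = [sim.simulate_cooling(T) for T in (288.0, 320.0, 353.0)]
```

With GPU arrays the saving is larger than in this toy version, since device allocation and kernel-module initialization dominate small-grid runs.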
686-689: Late import and fragile legend merging.

The `Patch` import on line 686 should be at the top of the notebook with other imports. The legend handle concatenation assumes `ax.legend()` was previously called; consider building the complete legend handles list upfront.

📝 Suggested improvement

```diff
+# Move to imports cell (line ~78)
+from matplotlib.patches import Patch
+
 # In the sweep cell:
-# Legend patch
-from matplotlib.patches import Patch
 legend_els = [Patch(facecolor="green", label="Pass"),
               Patch(facecolor="red", label="Fail")]
-for ax in axes:
-    ax.legend(handles=ax.get_legend().legend_handles + legend_els)
+for ax in axes:
+    handles, labels = ax.get_legend_handles_labels()
+    ax.legend(handles=handles + legend_els)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 686 - 689, The Patch import and legend construction are fragile: move "from matplotlib.patches import Patch" into the notebook's top imports, then stop relying on ax.get_legend() being present; instead for each axis (axes) build the full handles list by collecting existing handles via ax.get_legend_handles_labels() (or ax.get_legend_handles_labels()[0]) and concatenating your legend_els = [Patch(facecolor="green", label="Pass"), Patch(facecolor="red", label="Fail")] before calling ax.legend(handles=full_handles) so the legend is created deterministically even if no prior legend call exists.
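A self-contained sketch of why `get_legend_handles_labels()` is the robust choice: it returns whatever labelled artists exist on the axis, whether or not a legend has been created yet (the bar data below is purely illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

fig, ax = plt.subplots()
ax.bar([0, 1], [1.0, 2.0], label="warp risk")  # any labelled artist

# Collect existing handles deterministically, then append the extra patches.
# This works even though ax.legend() has never been called, whereas
# ax.get_legend() would return None here and raise on attribute access.
handles, labels = ax.get_legend_handles_labels()
legend_els = [Patch(facecolor="green", label="Pass"),
              Patch(facecolor="red", label="Fail")]
leg = ax.legend(handles=handles + legend_els)
```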
403: Hardcoded quality thresholds reduce reusability.

The `is_ok` check uses hardcoded values `0.15` and `15.0`. Consider making these configurable via `CoolingParams` or as method arguments for different quality requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` at line 403, The quality check currently hardcodes thresholds in the expression that sets is_ok (avg_chi_groove < 0.15 and max_warp_risk < 15.0); make these thresholds configurable by adding fields to CoolingParams (e.g., max_avg_chi_groove and max_warp_risk) or by accepting them as arguments to the function/method that computes is_ok, then replace the literals with references to those fields/arguments (e.g., use cooling_params.max_avg_chi_groove and cooling_params.max_warp_risk or the passed-in parameters) so different quality requirements can be supplied without changing the code.
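One way to lift the literals into configuration, sketched with a plain dataclass (`QualityLimits` and this standalone `is_ok` helper are hypothetical; the review suggests putting the fields on `CoolingParams` instead):

```python
from dataclasses import dataclass

@dataclass
class QualityLimits:
    # Hypothetical threshold container; defaults match the notebook's
    # hardcoded values so behaviour is unchanged unless overridden.
    max_avg_chi_groove: float = 0.15
    max_warp_risk: float = 15.0

def is_ok(avg_chi_groove: float, max_warp_risk: float,
          limits: QualityLimits = QualityLimits()) -> bool:
    return (avg_chi_groove < limits.max_avg_chi_groove
            and max_warp_risk < limits.max_warp_risk)

default_ok = is_ok(0.10, 12.0)                                    # passes defaults
strict_ok = is_ok(0.10, 12.0, QualityLimits(max_warp_risk=10.0))  # fails stricter limit
```

Callers with tighter (or looser) quality requirements then pass their own limits without touching the simulation code.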
266-291: Crystallinity update formula differs from standard Avrami.

Line 287: `chi = chi + params.dt * params.avrami_n * rate` doesn't match standard Avrami kinetics (which involves time explicitly). The docstring correctly notes this is a placeholder, but consider renaming `avrami_k0`/`avrami_n` to avoid confusion with the actual Avrami equation when real kinetics are implemented.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 266 - 291, The update_crystallinity kernel uses a simplified heuristic but keeps Avrami-like names that are misleading; rename the parameters avrami_k0 and avrami_n (and their uses in update_crystallinity) to explicit names like growth_prefactor and growth_exponent (or similar) in the CoolingParams dataclass, update the kernel to use those new names (e.g., rate = growth_prefactor * x * (1.0 - chi / params.chi_max) and chi += params.dt * growth_exponent * rate), and update the docstring to state these are heuristic growth prefactor/exponent rather than true Avrami kinetics so future readers won’t assume standard Avrami behavior.
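For contrast, the standard isothermal Avrami relation involves elapsed time explicitly, X(t) = 1 - exp(-(kt)^n), producing the characteristic sigmoidal crystallisation curve; a minimal sketch (the rate constant and exponent below are illustrative, not the notebook's parameters):

```python
import math

def avrami_fraction(t: float, k: float, n: float) -> float:
    """Isothermal Avrami relative crystallinity X(t) = 1 - exp(-(k*t)**n).
    Note the explicit t-dependence, unlike the notebook's heuristic
    per-step update chi += dt * n * rate."""
    return 1.0 - math.exp(-(k * t) ** n)

# Sigmoidal growth: near zero at the start, then rapid conversion,
# then saturation toward full relative crystallinity.
xs = [avrami_fraction(t, k=0.5, n=3.0) for t in (0.0, 2.0, 10.0)]
```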
463-467: Stability comment is slightly misleading but implementation is correct.

The comment references the 1D stability criterion (`dt < dr²/(2α)`), but for 2D diffusion the combined Fourier number matters. The implementation correctly uses `min(dr, dz)²` with a 0.4 safety factor, which ensures `Fo < 0.5` in the limiting direction.

📝 Suggested comment clarification

```diff
-# Fourier stability: dt < dr²/(2α) and dt < dz²/(2α)
+# Fourier stability for 2D explicit diffusion: Fo = α·dt/Δx² < 0.5
+# Using min(dr,dz)² ensures stability in the limiting direction.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@notebooks/disc_cooling_sim.ipynb` around lines 463 - 467, The comment above the stability calculation is misleadingly framed as 1D (dt < dr²/(2α)); update the comment to state the 2D diffusion stability context and clarify that you enforce the limiting direction by using dt_max = 0.4 * min(dr, dz)**2 / alpha (variables alpha, dt_max, dr, dz, config) so the Fourier number in the smallest grid spacing remains below 0.5 with a safety factor; keep the implementation (alpha = config.k/(config.rho*config.cp) and the dt_max computation) unchanged but replace the 1D formula text with a short note about using the minimum grid spacing for multi-dimensional stability.
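The limiting-direction Fourier bound can be checked numerically (material and grid values below are assumed for illustration and are not taken from the notebook):

```python
# Explicit FTCS diffusion: the Fourier number alpha*dt/h**2 in each grid
# direction must stay below 1/2. The notebook's timestep enforces this in
# the limiting (smallest-spacing) direction with a 0.4 safety factor.
k, rho, cp = 0.2, 1000.0, 2000.0   # W/(m*K), kg/m^3, J/(kg*K) -- assumed values
alpha = k / (rho * cp)             # thermal diffusivity, m^2/s
dr, dz = 1.0e-3, 2.0e-4            # radial and axial spacing, m -- assumed

dt_max = 0.4 * min(dr, dz) ** 2 / alpha
fo_limiting = alpha * dt_max / min(dr, dz) ** 2  # 0.4 by construction, < 0.5
```

Note that when dr and dz are comparable, the stricter combined 2D criterion alpha*dt*(1/dr**2 + 1/dz**2) < 1/2 can still be violated even though each direction individually satisfies Fo < 0.5, which is the concern the Greptile summary below raises about the stability formula.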
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@notebooks/disc_cooling_sim.ipynb`:
- Around line 396-403: max_delta_t currently compares Dirichlet boundary slices
T_np[:, 0] and T_np[:, -1] (both set to T_mold) so it is ~0; change the
computation in the block using T_np (and related variables) to use the first
interior cells adjacent to the boundaries (e.g., T_np[:, 1] and T_np[:, -2])
instead of indices 0 and -1 so the through-thickness thermal gradient uses
interior values; update the use of T_np, the expression computing max_delta_t,
and any dependent logic (is_ok thresholding) to reflect the new interior-indexed
gradient measurement.
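The effect of the interior-indexing fix can be seen on a toy field in which the Dirichlet boundary columns are clamped to `T_mold` (array shapes and values are illustrative only):

```python
import numpy as np

T_mold = 300.0
# 4 radial cells x 6 axial cells; the interior varies through the
# thickness while the boundary columns are clamped by the Dirichlet BC.
T_np = np.tile(np.arange(6, dtype=float), (4, 1)) + T_mold
T_np[:, 0] = T_mold
T_np[:, -1] = T_mold

# Comparing the clamped boundary slices is always ~0 (the reported bug);
# the first interior cells recover a meaningful through-thickness delta.
delta_boundary = float(np.abs(T_np[:, 0] - T_np[:, -1]).max())
delta_interior = float(np.abs(T_np[:, 1] - T_np[:, -2]).max())
```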
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 43722938-c610-4e52-8b7e-eeadd8243506
📒 Files selected for processing (1)
notebooks/disc_cooling_sim.ipynb
Greptile Summary

This PR adds a self-contained GPU-accelerated notebook simulating disc cooling after injection moulding, covering 2D axisymmetric heat diffusion, Avrami-based crystallinity evolution, and a warp-risk score. The notebook is well-structured and demonstrates Warp kernel patterns clearly. Several correctness issues flagged in prior review threads remain unaddressed in the current code.

Confidence Score: 4/5

Not yet safe to merge: three P1 issues flagged in prior review threads are still present in the committed notebook — dT_thickness is always 0, making the through-thickness thermal gradient metric dead; the 1D Fourier stability formula is too permissive for 2D grids; and the Colab badge points to the author's ephemeral fork branch. Until those are resolved the notebook produces a misleading risk score and will show a broken badge to downstream users. The only new finding in this pass is a missing CHANGELOG entry (P2).

Affected locations: notebooks/disc_cooling_sim.ipynb — compute_warp_risk kernel (dT_thickness bug), params cell (stability formula), badge markdown cell (Colab URL).

Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[ArtifexCoolingSim.__init__\nAllocate T_a, T_b, chi_a, chi_b, warp_risk] --> B[simulate_cooling]
    B --> C[init_temperature\nT_a = T_init everywhere]
    C --> D[init_scalar\nchi_a = 0 everywhere]
    D --> E{time step loop\nsteps = total_time / dt}
    E --> F[step_temperature\nT_a → T_b\nDirichlet BC top/bottom\nNeumann BC axis/outer edge]
    F --> G[swap T_a ↔ T_b]
    G --> H[update_crystallinity\nchi_a → chi_b\nAvrami kinetics inside T_g–T_m window]
    H --> I[swap chi_a ↔ chi_b]
    I --> E
    E --> J[compute_warp_risk\ndT_mid + dchi per radial cell\nnote: dT_thickness always 0 — reads Dirichlet nodes]
    J --> K[.numpy\nReturn T field, chi field, risk profile, scalar metrics]
    style J fill:#ffcccc,stroke:#cc0000
```
Reviews (4): Last reviewed commit: "Merge branch 'NVIDIA:main' into claude/d..."
…dXr3 Claude/disc config struct nd xr3
Merge pull request #1 from Tuesdaythe13th/claude/disc-config-struct-n…
Description
This PR adds a comprehensive GPU-accelerated physics simulation notebook for disc cooling after injection moulding, built with NVIDIA Warp. The notebook demonstrates:
The simulation uses two Warp structs (`DiscConfig` and `CoolingParams`) to keep kernel signatures concise, and includes an `ArtifexCoolingSim` class that manages GPU memory and orchestrates the time-stepping loop. The notebook is designed for Colab with GPU acceleration and includes stability analysis for the explicit time-stepping scheme.

Key features:
Future enhancements (documented in the notebook):
Checklist
Test plan
The notebook is self-contained and executable end-to-end in Google Colab or any Jupyter environment with Warp installed. Verification includes:
https://claude.ai/code/session_016zF8WWzQUxkQpC2hmiRkuB
Summary by CodeRabbit
Release Notes