
feat(spp_simulation): add simulation engine for program scenario modeling#46

Open
jeremi wants to merge 3 commits into 19.0 from feat/simulation-engine

Conversation

@jeremi jeremi (Member) commented Feb 17, 2026

Summary

  • spp_simulation (new): Simulation engine for modeling program targeting and entitlements
    • Scenario templates with pre-configured simulation parameters
    • Configurable scenarios with CEL-based eligibility criteria
    • Simulation runs with metric tracking (coverage, cost, fairness)
    • Scenario-to-program conversion to promote simulations to real programs
    • Side-by-side comparison wizard for evaluating scenarios
    • Targeting efficiency and distribution analysis
    • OWL components for results visualization (comparison tables, fairness tables)

Dependencies

Origin

From openspp-modules-v2 branch claude/global-alliance-policy-basket.

Test plan

  • spp_simulation installs successfully
  • Scenario creation and configuration works
  • Simulation runs execute and produce metrics
  • Scenario comparison wizard functions correctly
  • Convert-to-program creates valid spp.program records
  • Security rules restrict access appropriately

Note

Medium Risk
Large new module touching eligibility/expression execution and program creation flows; while it stores only aggregates, simulation/overlap recomputation against live registry data and conversion automation could have functional/performance implications.

Overview
Introduces a new Odoo addon spp_simulation that lets users define CEL-based targeting scenarios (with templates and live preview counts), run non-deletable simulations, and persist only aggregated results for coverage, cost, distribution (incl. Gini/percentiles), fairness/parity, geographic breakdown, targeting efficiency (leakage/undercoverage), and custom metrics.

Adds scenario-to-program conversion via the existing program creation wizard, plus run comparison features including recomputable side-by-side metrics, parameter snapshot tables, and current-registry overlap/Jaccard analysis, with new OWL widgets and a PDF report for viewing results. Includes new menus, security groups/access rules, seeded scenario templates, and a comprehensive test suite enforcing constraints, privacy guarantees, and access controls.

Written by Cursor Bugbot for commit b03812f. This will update automatically on new commits.

@gemini-code-assist

Summary of Changes

Hello @jeremi, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a powerful new spp_simulation module, empowering users to rigorously test and evaluate different program targeting and entitlement strategies. It provides tools to analyze the impact of these strategies on beneficiary coverage, overall cost, fairness across demographic groups, and targeting efficiency. By offering a robust simulation environment and the ability to compare scenarios, it facilitates informed decision-making before committing to actual program deployment, all while adhering to strict privacy principles by only storing aggregated data.

Highlights

  • New Simulation Engine: Introduced a new spp_simulation module to provide a comprehensive engine for modeling program targeting and entitlements, allowing users to simulate and analyze various scenarios.
  • Key Features: The module includes a scenario builder with CEL expressions, a template library, detailed distribution analysis (Gini coefficient, Lorenz curve), fairness analysis (disparity ratios), targeting efficiency metrics (leakage, undercoverage), budget simulation strategies, side-by-side scenario comparison, and custom metric definitions.
  • Privacy-Centric Design: Ensures privacy by storing only aggregated counts, percentages, and metrics, explicitly avoiding the persistence of individual beneficiary records in simulation results.
  • Program Conversion: Enables the conversion of successful simulation scenarios directly into real programs, streamlining the transition from modeling to implementation.
  • Comprehensive Testing and Documentation: Includes extensive test coverage for various components like scenarios, runs, comparisons, metrics, and security, alongside detailed methodology documentation.
Changelog
  • spp_simulation/README.rst
    • Added a new README file detailing the module's purpose, key features, privacy aspects, models, security groups, and menu paths.
  • spp_simulation/__init__.py
    • Added initialization file to import models, services, and wizards.
  • spp_simulation/__manifest__.py
    • Added manifest file defining the module's metadata, dependencies (e.g., spp_aggregation, spp_metrics_core), data files, assets, and application status.
  • spp_simulation/data/scenario_templates.xml
    • Added XML data for pre-configured scenario templates, such as Elderly Pension and Female-Headed Households.
  • spp_simulation/docs/methodology.md
    • Added a detailed Markdown document explaining the simulation methodology, metrics (coverage, distribution, fairness, efficiency), budget strategies, and CEL expression examples.
  • spp_simulation/models/__init__.py
    • Added initialization file for the models directory, importing all new simulation-related models.
  • spp_simulation/models/simulation_comparison.py
    • Added model for side-by-side comparison of simulation runs, including computation of comparison data, parameter comparison HTML, and overlap counts.
  • spp_simulation/models/simulation_entitlement_rule.py
    • Added model for defining amount calculation rules within a simulation scenario, supporting fixed, multiplier, and CEL-based amounts.
  • spp_simulation/models/simulation_metric.py
    • Added model for custom evaluation metrics (aggregate, coverage, ratio) using CEL expressions, inheriting from spp.metric.base.
  • spp_simulation/models/simulation_run.py
    • Added model for aggregated simulation results, including various metrics (beneficiary count, cost, Gini, equity score, leakage, undercoverage), HTML summaries, and a snapshot of the scenario configuration, with deletion prevented for audit compliance.
  • spp_simulation/models/simulation_scenario.py
    • Added core model for defining 'what if' targeting scenarios, including targeting expressions, entitlement rules, budget settings, custom metrics, and state management (draft, ready, archived).
  • spp_simulation/models/simulation_scenario_template.py
    • Added model for pre-built scenario templates, allowing non-technical users to quickly set up common targeting patterns.
  • spp_simulation/pyproject.toml
    • Added build system configuration file.
  • spp_simulation/readme/DESCRIPTION.md
    • Added a Markdown description file for the module, summarizing its features.
  • spp_simulation/report/simulation_report.xml
    • Added XML template for a PDF report of simulation runs.
  • spp_simulation/report/simulation_report_views.xml
    • Added report action for the simulation run PDF report.
  • spp_simulation/security/ir.model.access.csv
    • Added access control rules for all new simulation models, defining read, write, create, and unlink permissions for different security groups.
  • spp_simulation/security/simulation_security.xml
    • Added security groups (Viewer, Officer, Manager) and privileges for the simulation module, establishing a three-tier access control system.
  • spp_simulation/services/__init__.py
    • Added initialization file for the services directory, importing new simulation services.
  • spp_simulation/services/simulation_service.py
    • Added core service for orchestrating simulation execution, including targeting, entitlement calculation, budget adjustment, and metric computation, also handling conversion of scenarios to programs.
  • spp_simulation/services/targeting_efficiency_service.py
    • Added service for computing targeting efficiency metrics (true positives, false positives/leakage, false negatives/undercoverage) against an ideal population.
  • spp_simulation/static/description/index.html
    • Added HTML description file for the module.
  • spp_simulation/static/src/comparison_table/comparison_table.js
    • Added JavaScript component for displaying side-by-side comparison tables in the UI.
  • spp_simulation/static/src/comparison_table/comparison_table.xml
    • Added XML template for the comparison table UI component.
  • spp_simulation/static/src/fairness_table/fairness_table.js
    • Added JavaScript component for displaying fairness analysis tables in the UI.
  • spp_simulation/static/src/fairness_table/fairness_table.xml
    • Added XML template for the fairness table UI component.
  • spp_simulation/static/src/overlap_table/overlap_table.js
    • Added JavaScript component for displaying population overlap tables in the UI.
  • spp_simulation/static/src/overlap_table/overlap_table.xml
    • Added XML template for the overlap table UI component.
  • spp_simulation/static/src/results_summary/results_summary.js
    • Added JavaScript component for displaying a summary of simulation results in the UI.
  • spp_simulation/static/src/results_summary/results_summary.xml
    • Added XML template for the results summary UI component.
  • spp_simulation/tests/__init__.py
    • Added initialization file for the tests directory, importing all new test files.
  • spp_simulation/tests/common.py
    • Added common test fixtures and setup for simulation tests, including test users, registrants, and basic scenarios.
  • spp_simulation/tests/test_comparison.py
    • Added tests for the simulation comparison model, including creation, minimum runs validation, computation, and overlap analysis.
  • spp_simulation/tests/test_custom_metrics.py
    • Added tests for custom metric creation and validation.
  • spp_simulation/tests/test_distribution_service.py
    • Added tests for distribution statistics computation (Gini coefficient, percentiles, standard deviation).
  • spp_simulation/tests/test_entitlement_rule.py
    • Added tests for entitlement rule constraints, ensuring proper validation of amount modes and expressions.
  • spp_simulation/tests/test_fairness.py
    • Added tests for fairness analysis, including disparity detection and demographic group classification.
  • spp_simulation/tests/test_metric_constraints.py
    • Added tests for constraints on simulation metrics, ensuring required fields are present based on metric type.
  • spp_simulation/tests/test_privacy.py
    • Added tests to verify that no individual beneficiary data is stored in simulation run records, upholding privacy requirements.
  • spp_simulation/tests/test_scenario.py
    • Added tests for simulation scenario model, covering creation, state transitions, and duplication.
  • spp_simulation/tests/test_scenario_convert_to_program.py
    • Added tests for converting simulation scenarios into real programs, including validation, manager creation, and option handling.
  • spp_simulation/tests/test_scenario_template.py
    • Added tests for simulation scenario templates, covering creation, categories, and archiving.
  • spp_simulation/tests/test_security.py
    • Added tests for the three-tier security model (Viewer, Officer, Manager) for the simulation module.
  • spp_simulation/tests/test_simulation_run.py
    • Added tests for simulation run model, including creation, non-deletability, and HTML summary generation.
  • spp_simulation/tests/test_simulation_service.py
    • Added tests for the core simulation service, covering happy path execution, error handling, budget strategies, and geographic breakdown.
  • spp_simulation/views/menu.xml
    • Added menu items for Simulation under Social Protection, including Scenarios, Results, Comparisons, and Configuration (Templates, Custom Metrics).
  • spp_simulation/views/simulation_comparison_views.xml
    • Added list and form views for simulation comparisons, including a 'Recompute' button and custom widgets for tables.
  • spp_simulation/views/simulation_metric_views.xml
    • Added list and form views for custom simulation metrics.
  • spp_simulation/views/simulation_run_views.xml
    • Added list, form, and search views for simulation runs, displaying key metrics, summaries, and status.
  • spp_simulation/views/simulation_scenario_template_views.xml
    • Added list, form, and search views for simulation scenario templates.
  • spp_simulation/views/simulation_scenario_views.xml
    • Added list, form, and search views for simulation scenarios, including state management buttons, preview counts, and entitlement rules.
  • spp_simulation/wizard/compare_wizard.py
    • Added wizard model for comparing simulation runs, including validation for minimum runs and dynamic naming.
  • spp_simulation/wizard/compare_wizard_views.xml
    • Added form view for the compare simulation runs wizard.
Activity
  • The pull request introduces a new module, spp_simulation, indicating a significant new feature addition to the OpenSPP system.
  • The author, jeremi, has provided a detailed summary and test plan in the PR description, suggesting a well-thought-out and documented development process.
  • Extensive test files have been added, covering various aspects of the new module's functionality, including scenarios, runs, comparisons, metrics, and security, demonstrating a commitment to quality and reliability.
  • The module is explicitly marked as 'Alpha' in its manifest and README, signaling that it is an early version and may undergo further changes and refinements.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

_logger.info(
"Simulation completed for scenario '%s': %d beneficiaries, cost %.2f, " "equity %.0f, duration %.2fs",
scenario.name,
beneficiary_count,

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This expression logs sensitive data (private) as clear text.

Copilot Autofix (AI), about 10 hours ago

In general, to fix clear‑text logging of sensitive information, you should avoid writing sensitive values directly into logs. Either (a) remove them from log messages entirely, (b) replace them with more general or coarser-grained information, or (c) mask/obfuscate them so that logs remain useful but do not expose sensitive details.

Here, the minimum-impact fix is to keep the simulation behavior and database writes exactly the same (we still store beneficiary_count in the run record), but change the _logger.info call so that it no longer logs beneficiary_count. We can still log other high-level, presumably non-sensitive metrics (e.g., total cost, equity score, duration) or, if desired, replace the beneficiary count with a generic statement that does not reveal the exact number.

Concretely in spp_simulation/services/simulation_service.py:

  • Locate the _logger.info call around lines 131–138.
  • Remove the %d beneficiaries placeholder from the format string and the associated beneficiary_count argument.
  • Optionally, adjust the message to continue making sense, e.g., "Simulation completed for scenario '%s': cost %.2f, equity %.0f, duration %.2fs".

No new imports or helper functions are required; only that single logging call needs to be updated.

Suggested changeset 1
spp_simulation/services/simulation_service.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/spp_simulation/services/simulation_service.py b/spp_simulation/services/simulation_service.py
--- a/spp_simulation/services/simulation_service.py
+++ b/spp_simulation/services/simulation_service.py
@@ -129,9 +129,8 @@
                 }
             )
             _logger.info(
-                "Simulation completed for scenario '%s': %d beneficiaries, cost %.2f, equity %.0f, duration %.2fs",
+                "Simulation completed for scenario '%s': cost %.2f, equity %.0f, duration %.2fs",
                 scenario.name,
-                beneficiary_count,
                 total_cost,
                 equity_score,
                 duration,
EOF
Copilot is powered by AI and may make mistakes. Always verify output.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive new module, spp_simulation, for modeling and analyzing program targeting scenarios. While the overall structure is well-organized, a security audit identified several Cross-Site Scripting (XSS) vulnerabilities within the spp_simulation module, specifically in computed HTML fields and chatter messages. These issues arise from concatenating unescaped user-controlled strings, potentially allowing an attacker with 'Simulation Officer' privileges to execute arbitrary JavaScript. Additionally, the review highlights areas for improving maintainability by reducing code duplication, removing hardcoded values, and fixing inconsistencies between backend and frontend components, including refactoring duplicated logic in the comparison model, dynamically fetching selection labels, and addressing a bug in the simulation report.

Comment on lines 496 to 498
f"<tr><td>{mode}</td><td>{amount:,.2f}</td><td>{mult_field}</td>"
f"<td>{max_mult}</td><td><code>{cel_expr}</code></td>"
f"<td><code>{condition}</code></td></tr>"


security-high

Multiplier fields, CEL expressions, and condition expressions are included in the scenario snapshot HTML without escaping, posing an XSS risk.
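A minimal sketch of the escaping approach adopted in the follow-up commit ("use Markup.format() for all HTML computed fields"): markupsafe's Markup.format() HTML-escapes interpolated values, so the row can be built without manual escaping. Variable values below are made-up stand-ins for the rule fields in the excerpt above:

from markupsafe import Markup

# Example values standing in for the rule fields (assumptions, not module data).
mode, amount, mult_field, max_mult = "multiplier", 150.0, "<script>alert(1)</script>", 3
cel_expr, condition = "amount * household.size", "household.size > 2"

row = Markup(
    "<tr><td>{}</td><td>{:,.2f}</td><td>{}</td>"
    "<td>{}</td><td><code>{}</code></td>"
    "<td><code>{}</code></td></tr>"
).format(mode, amount, mult_field, max_mult, cel_expr, condition)
# Markup.format() escapes each interpolated value, so the <script> payload is
# rendered as text instead of executing.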

try:
dt = datetime.fromisoformat(executed_at.replace("Z", "+00:00"))
date_str = dt.strftime("%b %d, %Y %H:%M")
html_parts.append(f"<th>{name}<br/><small class='text-muted'>{date_str}</small></th>")


security-high

The scenario name is inserted into the HTML table header without escaping, leading to a potential Cross-Site Scripting (XSS) vulnerability. An attacker could craft a scenario name containing malicious scripts that would execute when another user views the comparison.

rules_html += f"<li>Multiplier: {amount:,.2f} × {field}</li>"
elif mode == "cel":
cel = rule.get("amount_cel_expression", "?")
rules_html += f"<li>CEL: <code>{cel}</code></li>"


security-high

The CEL expression in entitlement rules is inserted into the HTML without escaping, posing an XSS risk.

target_type_label = "households" if record.scenario_id.target_type == "group" else "individuals"
coverage_pct = f"{record.coverage_rate:.1f}" if record.coverage_rate else "0.0"
parts = [
f"<strong>{scenario_name}</strong> targets "


security-high

The scenario name is included in the executive summary HTML without escaping. This is a stored XSS vulnerability.

amount = area_info.get("amount", 0)
coverage = area_info.get("coverage_rate", 0)
html_parts.append(
f"<tr><td>{name}</td><td>{count:,}</td>" f"<td>{amount:,.2f}</td><td>{coverage:.1f}%</td></tr>"


security-high

Area names are included in the geographic breakdown HTML table without escaping, which can lead to XSS.

# Post chatter message
scenario.message_post(
body=_("Converted to program: %s") % program.name,
)


security-medium

The program name is formatted into the chatter message body without escaping. Since Odoo chatter messages are rendered as HTML, this allows for a stored XSS attack via the program name.
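A minimal sketch of the fix later described in the follow-up commit ("Escape program.name in simulation_service.py chatter message_post body"); markupsafe is assumed available, as it ships with Odoo, and scenario/program refer to the surrounding code:

from markupsafe import escape

# Escape the user-controlled program name before it is interpolated into the
# HTML-rendered chatter body.
scenario.message_post(
    body=_("Converted to program: %s") % escape(program.name),
)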

Comment on lines +218 to +260
def action_compute_comparison(self):
"""Compute the side-by-side comparison data."""
self.ensure_one()
comparison_data = {"runs": []}
for run in self.run_ids:
run_data = {
"run_id": run.id,
"scenario_name": run.scenario_id.name,
"beneficiary_count": run.beneficiary_count,
"total_cost": run.total_cost,
"coverage_rate": run.coverage_rate,
"equity_score": run.equity_score,
"gini_coefficient": run.gini_coefficient,
"has_disparity": run.has_disparity,
"leakage_rate": run.leakage_rate,
"undercoverage_rate": run.undercoverage_rate,
"budget_utilization": run.budget_utilization,
"executed_at": run.executed_at.isoformat() if run.executed_at else None,
}
comparison_data["runs"].append(run_data)
self.comparison_json = comparison_data

# Build parameters comparison from snapshots
parameters_data = {"runs": []}
for run in self.run_ids:
snapshot = run.scenario_snapshot_json or {}
param_data = {
"run_id": run.id,
"scenario_name": run.scenario_id.name,
"executed_at": run.executed_at.isoformat() if run.executed_at else None,
"target_type": snapshot.get("target_type"),
"targeting_expression": snapshot.get("targeting_expression"),
"budget_amount": snapshot.get("budget_amount"),
"budget_strategy": snapshot.get("budget_strategy"),
"entitlement_rules": snapshot.get("entitlement_rules") or [],
"ideal_population_expression": snapshot.get("ideal_population_expression"),
}
parameters_data["runs"].append(param_data)
self.parameters_comparison_json = parameters_data

# Compute overlap if possible
self._compute_overlap()


medium

There is significant code duplication between action_compute_comparison and _onchange_run_ids. Both methods build comparison_data and parameters_data using identical logic. To adhere to the DRY (Don't Repeat Yourself) principle and improve maintainability, this logic should be extracted into private helper methods.

For example, you could create _build_comparison_data(self) and _build_parameters_data(self) methods and call them from both places.
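A minimal sketch of that extraction (helper names taken from this review comment; the field list mirrors action_compute_comparison above, and _build_parameters_data would be analogous):

def _build_comparison_data(self):
    self.ensure_one()
    runs = []
    for run in self.run_ids:
        runs.append({
            "run_id": run.id,
            "scenario_name": run.scenario_id.name,
            "beneficiary_count": run.beneficiary_count,
            "total_cost": run.total_cost,
            # ... remaining metric keys, unchanged from the current implementation ...
            "executed_at": run.executed_at.isoformat() if run.executed_at else None,
        })
    return {"runs": runs}

def action_compute_comparison(self):
    """Compute the side-by-side comparison data."""
    self.ensure_one()
    self.comparison_json = self._build_comparison_data()
    self.parameters_comparison_json = self._build_parameters_data()
    self._compute_overlap()
    # _onchange_run_ids would call the same two builders instead of duplicating the logic.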

return {}

partner_model = self.env["res.partner"]
amount_map = dict(zip(beneficiary_ids, amounts, strict=False))


medium

The zip function is used here. In Python 3.10+ (which Odoo 17 supports), zip has a strict argument. Since beneficiary_ids and amounts are expected to have the same length, using strict=True is safer as it will raise a ValueError if their lengths differ. This can help catch potential bugs early.

Suggested change
amount_map = dict(zip(beneficiary_ids, amounts, strict=False))
amount_map = dict(zip(beneficiary_ids, amounts, strict=True))

{key: "gini_coefficient", label: "Benefit Equality (Gini)", format: "decimal"},
{key: "leakage_rate", label: "Leakage", format: "percent"},
{key: "undercoverage_rate", label: "Missed Population", format: "percent"},
{key: "targeting_accuracy", label: "Targeting Accuracy", format: "percent"},


medium

The metrics array includes 'targeting_accuracy', but the backend code in spp.simulation.comparison does not add this key to the comparison_json data. As a result, this metric will always appear as empty ('-') in the comparison table. You should either add the logic to compute and include this metric in the backend or remove it from the frontend component.

const values = this.state.runs.map((r) => r[metricKey] || 0);
const value = values[runIndex];
// For these metrics, lower is better
const lowerIsBetter = ["gini_coefficient", "leakage_rate", "undercoverage_rate", "total_cost"];


medium

The list of metrics where a lower value is better is hardcoded. This is fragile and requires updating this component if new 'lower-is-better' metrics are added. A more maintainable approach would be to include this information in the metric definition within the component's state.

For example:

{
    key: 'gini_coefficient',
    label: 'Benefit Equality (Gini)',
    format: 'decimal',
    lowerIsBetter: true
}

Then isBestValue can check this property dynamically.

@codecov

codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 0% with 1712 lines in your changes missing coverage. Please review.
✅ Project coverage is 42.44%. Comparing base (5ac7496) to head (b03812f).

Files with missing lines Patch % Lines
spp_simulation/services/simulation_service.py 0.00% 269 Missing ⚠️
spp_simulation/models/simulation_run.py 0.00% 243 Missing ⚠️
spp_simulation/models/simulation_comparison.py 0.00% 166 Missing ⚠️
...mulation/tests/test_scenario_convert_to_program.py 0.00% 150 Missing ⚠️
spp_simulation/models/simulation_scenario.py 0.00% 98 Missing ⚠️
spp_simulation/tests/test_simulation_service.py 0.00% 98 Missing ⚠️
spp_simulation/tests/test_fairness.py 0.00% 77 Missing ⚠️
spp_simulation/tests/test_distribution_service.py 0.00% 64 Missing ⚠️
spp_simulation/tests/test_scenario.py 0.00% 54 Missing ⚠️
spp_simulation/tests/test_comparison.py 0.00% 46 Missing ⚠️
... and 20 more

❗ There is a different number of reports uploaded between BASE (5ac7496) and HEAD (b03812f).

HEAD has 7 fewer uploads than BASE
Flag BASE (5ac7496) HEAD (b03812f)
fastapi 1 0
endpoint_route_handler 1 0
spp_alerts 1 0
spp_api_v2_cycles 1 0
spp_api_v2_change_request 1 0
spp_api_v2_data 1 0
spp_api_v2 1 0
Additional details and impacted files
@@             Coverage Diff             @@
##             19.0      #46       +/-   ##
===========================================
- Coverage   71.31%   42.44%   -28.88%     
===========================================
  Files         299      141      -158     
  Lines       23618    10866    -12752     
===========================================
- Hits        16844     4612    -12232     
+ Misses       6774     6254      -520     
Flag Coverage Δ
endpoint_route_handler ?
fastapi ?
spp_alerts ?
spp_api_v2 ?
spp_api_v2_change_request ?
spp_api_v2_cycles ?
spp_api_v2_data ?
spp_base_common 92.81% <ø> (ø)
spp_programs 49.56% <ø> (ø)
spp_security 51.08% <ø> (ø)
spp_simulation 0.00% <0.00%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

"ideal_population_expression": snapshot.get("ideal_population_expression"),
}
parameters_data["runs"].append(param_data)
self.parameters_comparison_json = parameters_data

Duplicated comparison-building logic in onchange and action

Low Severity

_onchange_run_ids and action_compute_comparison contain nearly identical code blocks for building comparison_json and parameters_comparison_json. If the data structure changes, both copies need to be updated in sync, risking divergence and inconsistent behavior.


"count": count,
"amount": area_amount,
"coverage_rate": count / total_count * 100 if total_count else 0,
}

Geographic "coverage_rate" actually computes distribution share

Medium Severity

The coverage_rate in the geographic breakdown is computed as count / total_count * 100 where total_count is the total number of beneficiaries — making it a distribution share (% of beneficiaries in this area), not a coverage rate (% of the area's population covered). The HTML column header labels this "Coverage," which is misleading in a social protection context where "coverage" has a specific per-area population meaning.
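A small numeric illustration of the difference (plain Python, not module code; the figures are made up):

beneficiaries_in_area = 120
total_beneficiaries = 1000   # all simulated beneficiaries, across every area
area_population = 600        # registrants who actually live in this area

distribution_share = beneficiaries_in_area / total_beneficiaries * 100  # 12.0% of beneficiaries
per_area_coverage = beneficiaries_in_area / area_population * 100       # 20.0% of the area's population

print(distribution_share, per_area_coverage)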


total_registry_count = self._get_total_registry_count(scenario)

beneficiary_count = len(beneficiary_ids)
coverage_rate = (beneficiary_count / total_registry_count * 100) if total_registry_count else 0.0

Beneficiary count not adjusted after cap_total budget

High Severity

beneficiary_count and coverage_rate are computed at line 58–59 before budget adjustment at line 65. When cap_total budget strategy zeroes out some beneficiaries' amounts, beneficiary_count still includes everyone who matched targeting—including those who receive nothing. The field's help text says "Number of registrants who would receive benefits," but this count is inflated. Additionally, the distribution statistics (Gini, percentiles) are computed on the post-cap amounts including the zeros, artificially inflating inequality measures.
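One possible shape of a fix, sketched under the assumption that the per-beneficiary amounts list is available after the budget adjustment (variable names follow the excerpt above):

# Recount after the budget adjustment so registrants capped to zero are excluded.
funded = [(bid, amt) for bid, amt in zip(beneficiary_ids, amounts) if amt > 0]
beneficiary_count = len(funded)
coverage_rate = (beneficiary_count / total_registry_count * 100) if total_registry_count else 0.0
# Distribution statistics (Gini, percentiles) would likewise be computed over
# [amt for _, amt in funded] so capped-out zeros do not inflate inequality measures.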


rule.multiplier_field or "unknown",
rule.amount,
)
)

Fixed entitlement rules missing explicit amount_mode in conversion

Medium Severity

In _create_wizard_entitlement_items, amount_mode is explicitly set for "cel" and "multiplier" rules but not for "fixed" rules — the most common case. When rule.amount_mode == "fixed", the item_vals dict is created without an amount_mode key, relying entirely on the wizard item's field default. If that default ever changes or differs from "fixed", all fixed-amount rules would be created with the wrong mode during scenario-to-program conversion.
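A one-line sketch of the suggested change, using the dict and field names from this finding (the surrounding wizard code is assumed):

if rule.amount_mode == "fixed":
    # Make the mode explicit instead of relying on the wizard item's field default.
    item_vals["amount_mode"] = "fixed"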


return value === Math.min(...values);
}
return value === Math.max(...values);
}

Comparison table highlights uncomputed metrics as best

Medium Severity

isBestValue uses r[metricKey] || 0 which converts null/undefined (meaning "not computed") to 0. For lower-is-better metrics like leakage_rate and undercoverage_rate, an uncomputed metric becomes 0 and gets highlighted green as "best value." Meanwhile, formatValue correctly displays null as "-", so a cell can show "-" text with misleading green "best" highlighting. This happens whenever one scenario has an ideal population expression and the other doesn't.


base_domain=[("disabled", "=", False)],
limit=0,
)
record.targeting_preview_count = result.get("count", 0)

Preview count uses different targeting path than simulation

Medium Severity

_compute_targeting_preview_count uses cel_service.compile_expression with a hardcoded base_domain=[("disabled", "=", False)], while the actual simulation in _execute_targeting uses spp.cel.executor.compile_for_batch with the profile's base domain loaded from spp.cel.registry.load_profile(). These are entirely different code paths with different base domains. The profile's base domain likely includes additional filters (e.g., is_registrant, is_group), so the preview count displayed in the scenario form can differ significantly from the actual simulation result, misleading users who rely on it to estimate targeting reach.
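A rough sketch of aligning the preview with the simulation's targeting path; the method name and call pattern are taken from this PR's review comments, but the exact signatures and the profile/base-domain handling are assumptions, not verified against the module:

def _compute_targeting_preview_count(self):
    for record in self:
        executor = record.env["spp.cel.executor"]
        matched = 0
        # Reuse the same batch targeting path the simulation uses, so the
        # preview inherits the same CEL profile and base domain behaviour.
        for batch_ids in executor.compile_for_batch(
            "res.partner", record.targeting_expression, batch_size=5000
        ):
            matched += len(batch_ids)
        record.targeting_preview_count = matched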


…ling

New module providing:
- Scenario templates and configurable simulation scenarios
- Simulation run execution with metric tracking
- Scenario-to-program conversion for promoting simulations to real programs
- Comparison wizard for side-by-side scenario analysis
- Targeting efficiency and coverage metrics
…ng targeting_accuracy field

- Add markupsafe Markup/escape imports and use Markup.format() for all
  HTML computed fields in simulation_run.py and simulation_comparison.py
  to safely escape user-controlled values (scenario names, area names,
  metric names, CEL expressions, targeting expressions, multiplier fields)
- Use Markup() for static HTML strings in html_parts lists so join works correctly
- Escape program.name in simulation_service.py chatter message_post body
- Replace non-existent doc.targeting_accuracy field in simulation_report.xml
  with a condition on the existing leakage_rate/undercoverage_rate fields
- Remove targeting_accuracy from comparison_table.js metrics array
…allback logic

- Load CEL profile in _get_ideal_population_ids so the executor runs with
  the correct base domain and model context (matches _execute_targeting)
- Fix _compute_metric_results_html to read "rate" and "ratio" keys for
  coverage and ratio metrics instead of always reading "value"
- Fix distribution stats to use explicit None check so zero values render
  as numbers instead of dashes
- Add "scenario_id.target_type" to @api.depends for _compute_summary_html
- Set amount_mode = "fixed" in the multiplier-to-fixed fallback branch so
  the wizard item is created with an explicit mode
- Fix aggregate metric to use metric.aggregation as the result key so that
  sum/avg/min/max aggregations read the correct value from the CEL result
@jeremi jeremi force-pushed the feat/simulation-engine branch from 3393dca to b03812f on February 18, 2026, 14:30

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

"res.partner", scenario.targeting_expression, batch_size=5000
):
all_ids.extend(batch_ids)
run_sets[run.id] = set(all_ids)

Overlap computation uses current expression instead of snapshot

Medium Severity

_compute_overlap reads run.scenario_id.targeting_expression (the current scenario state) instead of using the targeting expression from run.scenario_snapshot_json that was captured at simulation execution time. If a scenario's targeting expression is modified after a run completes, the overlap computation will use the new expression, producing results that don't match the actual simulation run data. The snapshot data is readily available on each run record but is not used here.
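A minimal sketch of reading the expression from the snapshot instead, falling back to the live scenario only when no snapshot exists (the key name matches the parameter snapshot built earlier in this PR):

snapshot = run.scenario_snapshot_json or {}
expression = snapshot.get("targeting_expression") or run.scenario_id.targeting_expression
# Use `expression` for the overlap recomputation so results reflect what was
# actually simulated, even if the scenario was edited afterwards.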

