feat: add LLM Opinion Dynamics example using mesa-llm#360

Open
abhinavk0220 wants to merge 8 commits into mesa:main from abhinavk0220:add/llm-opinion-dynamics

Conversation


@abhinavk0220 abhinavk0220 commented Mar 6, 2026

Summary

Adds a new example demonstrating LLM-powered opinion dynamics, where
agents debate a topic using natural language reasoning instead of
classical mathematical rules.

Motivation

The existing deffuant_weisbuch example models opinion change using
a fixed convergence parameter (μ). This example shows what becomes
possible when agents use an LLM to genuinely reason about their
neighbors' arguments — producing emergent consensus or polarization
that no hardcoded rule could replicate.

What's included

  • agent.py — OpinionAgent extending LLMAgent with CoT reasoning
  • model.py — LLMOpinionDynamicsModel on an OrthogonalMooreGrid
  • app.py — SolaraViz with opinion heatmap, trajectory lines, and population dynamics
  • README.md — explanation with comparison table vs Deffuant-Weisbuch
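The wiring between these pieces can be sketched with a minimal stand-in (no mesa or mesa-llm dependency; class names mirror the files above, but grid placement and the LLM call are omitted, and all details here are assumptions rather than the PR's actual code):

```python
import statistics

class OpinionAgent:
    """Stand-in for the LLMAgent-based agent in agent.py."""
    def __init__(self, unique_id, opinion):
        self.unique_id = unique_id
        self.opinion = opinion  # score in [0, 10]

class OpinionModel:
    """Stand-in for LLMOpinionDynamicsModel in model.py (grid omitted)."""
    def __init__(self, opinions):
        self.agents = [OpinionAgent(i, o) for i, o in enumerate(opinions)]

    def mean_opinion(self):
        return statistics.mean(a.opinion for a in self.agents)

    def opinion_variance(self):
        # Population variance, the quantity tracked in the Population
        # Dynamics panel (in the real model this goes through DataCollector)
        return statistics.pvariance(a.opinion for a in self.agents)
```

In the real example these statistics would be registered with Mesa's DataCollector so SolaraViz can plot them over time.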

How it works

Each step, agents observe their Moore neighborhood, construct a prompt
summarizing neighbor opinions, and let the LLM decide whether to update
their opinion score (0–10). The topic is fully configurable via the UI.
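The per-step prompt construction and response parsing described above could look roughly like this (a sketch only; `build_prompt` and `parse_new_opinion` are hypothetical names, not the PR's actual mesa-llm API, and the LLM call itself is elided):

```python
import re

def build_prompt(topic, my_opinion, neighbor_opinions):
    """Summarize the agent's Moore-neighborhood opinions for the LLM."""
    neighbors = ", ".join(f"{o:.1f}" for o in neighbor_opinions)
    return (
        f"Topic: {topic}\n"
        f"Your current opinion (0-10): {my_opinion:.1f}\n"
        f"Neighbor opinions: {neighbors}\n"
        "Considering these arguments, reply with your updated opinion "
        "as a single number between 0 and 10."
    )

def parse_new_opinion(reply, fallback):
    """Extract the first number in the LLM reply; keep the old opinion on failure."""
    match = re.search(r"\d+(?:\.\d+)?", reply)
    if match is None:
        return fallback
    # Clamp to the valid opinion range
    return min(10.0, max(0.0, float(match.group())))
```

Falling back to the previous opinion on an unparseable reply keeps the simulation robust when the model returns free-form text instead of a number.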

Visualization

Step 0 — Initial random opinions:

Initial State

Step 4 — After LLM-driven persuasion:

Step 4 — Convergence

Key emergent behaviors observed:

  • Agents 2 & 3 independently converged to 3.8 — emergent clustering, no hardcoded rule
  • Agent 4 started at 9.6, saw a neighbor at 0.5, and reasoned itself down to 2.0 in one step — genuine LLM persuasion
  • Agent 1 (spatially isolated) held firm at 9.8 — isolation preserves extreme opinions
  • Variance dropped from ~15 → ~7 across 4 steps (visible in Population Dynamics panel)

Testing

  • Runs end-to-end with groq/llama-3.1-8b-instant (free tier) and gemini/gemini-2.0-flash
  • Import check passes: from llm_opinion_dynamics.model import LLMOpinionDynamicsModel
  • Requires an API key — see .env.example

Related: mesa/mesa-llm#153


GSoC contributor checklist

Context & motivation

Built this model to explore how LLM reasoning changes opinion dynamics
compared to the classic rule-based version. Opinion dynamics is a
foundational ABM — adding LLM agents lets researchers study how
language-based persuasion affects consensus and polarization in ways
a fixed threshold rule never could.

What I learned

LLM agents are significantly more resistant to opinion change than
rule-based agents. They reason about why neighbors hold different
opinions before deciding to update — producing more stable minority
opinions than the classical model. The system prompt design has an
outsized effect on macro outcomes.

Learning repo

🔗 My learning repo: https://github.com/abhinavk0220/GSoC-learning-space
🔗 Relevant model: https://github.com/abhinavk0220/GSoC-learning-space/tree/main/models/llm_opinion_dynamics

Readiness checks

  • This PR addresses an agreed-upon problem (linked issue or discussion with maintainer approval), or is a small/trivial fix
  • I have read the contributing guide
  • I have performed a self-review
  • Another GSoC contributor has reviewed this PR
  • Tests pass locally
  • Code is formatted (ruff check . --fix)

AI Assistance Disclosure

This PR was developed with AI assistance (Claude) for code generation
and debugging. All code has been reviewed, tested, and understood by
the contributor.


Mesa Examples Review Checklist (#390)

Does it belong?

  • No significant overlap with existing examples
  • Well-scoped simplest model that demonstrates the idea
  • Showcases Mesa features not already well-covered
  • Showcases interesting ABM mechanics (LLM reasoning vs rule-based dynamics)

Is it correct?

  • Uses current Mesa APIs (OrthogonalMooreGrid, DataCollector)
  • Runs and visualizes out of the box (requires API key — see .env.example)
  • Scaffolding (agent placement, initial opinions) is deterministic with a fixed rng seed; LLM outputs themselves are non-deterministic by nature
  • Moved to llm/ directory

Is it clean?

  • No dead code or unused imports
  • Clear naming, logic readable
  • README explains what it does, what it demonstrates, how to run it
  • PR follows template, commits reasonably clean

abhinavKumar0206 and others added 4 commits March 6, 2026 09:33
…le model demonstrating LLM-powered opinion dynamics, where agents debate a topic using natural language reasoning instead of classical mathematical convergence rules (cf. Deffuant-Weisbuch).

Each agent:
- Holds an opinion score (0-10) on a configurable debate topic
- Observes neighboring agents' opinions each step
- Uses CoT reasoning via LLM to decide whether to update its opinion
- Produces emergent consensus or polarization from genuine reasoning

Visualization includes opinion trajectory plot, mean opinion, and variance over time via SolaraViz.

Depends on mesa-llm for LLMAgent and CoTReasoning.
@EwoutH
Member

EwoutH commented Mar 15, 2026

Thanks for the PR, looks like an interesting model.

Could you:

  • Check out the test failure
  • Request a peer-review (and maybe do one or two yourself)

All LLM PRs will go in a new llm folder, but that can be done later.

@abhinavk0220
Author

abhinavk0220 commented Mar 16, 2026

> Thanks for the PR, looks like an interesting model.
>
> Could you:
>
>   • Check out the test failure
>   • Request a peer-review (and maybe do one or two yourself)
>
> All LLM PRs will go in a new llm folder, but that can be done later.

Hi @EwoutH! Updates done:

Ready to merge!

- Replace placeholder dashboard screenshot with actual run (Step 4)
- Add initial state screenshot (Step 0) for before/after comparison
- Expand README with Visualization section documenting emergent behaviors:
  convergence to 3.8, spatial isolation preserving 9.8, variance drop

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
