feat: add LLM Opinion Dynamics example using mesa-llm #360
Open
abhinavk0220 wants to merge 8 commits into mesa:main from
Conversation
…le model demonstrating LLM-powered opinion dynamics, where agents debate a topic using natural language reasoning instead of classical mathematical convergence rules (cf. Deffuant-Weisbuch).

Each agent:
- Holds an opinion score (0–10) on a configurable debate topic
- Observes neighboring agents' opinions each step
- Uses CoT reasoning via LLM to decide whether to update its opinion
- Produces emergent consensus or polarization from genuine reasoning

Visualization includes opinion trajectory plot, mean opinion, and variance over time via SolaraViz.

Depends on mesa-llm for LLMAgent and CoTReasoning.
for more information, see https://pre-commit.ci
Member
Thanks for the PR, looks like an interesting model. Could you:
All LLM PRs will go in a new
Author
Hi @EwoutH! Updates done:
READY TO MERGE !!!!!
- Replace placeholder dashboard screenshot with actual run (Step 4)
- Add initial state screenshot (Step 0) for before/after comparison
- Expand README with Visualization section documenting emergent behaviors: convergence to 3.8, spatial isolation preserving 9.8, variance drop

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
Adds a new example demonstrating LLM-powered opinion dynamics, where
agents debate a topic using natural language reasoning instead of
classical mathematical rules.
Motivation
The existing `deffuant_weisbuch` example models opinion change using
a fixed convergence parameter (μ). This example shows what becomes
possible when agents use an LLM to genuinely reason about their
neighbors' arguments — producing emergent consensus or polarization
that no hardcoded rule could replicate.
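For contrast, the classical Deffuant-Weisbuch rule the motivation refers to can be sketched in a few lines; the parameter values below (μ and the confidence bound ε) are illustrative, not the ones used in Mesa's `deffuant_weisbuch` example:

```python
# Classical Deffuant-Weisbuch pairwise update, for contrast with the
# LLM-driven version. mu is the fixed convergence parameter this PR
# replaces with LLM reasoning; epsilon is the confidence bound.
def dw_update(x_i: float, x_j: float, mu: float = 0.5, epsilon: float = 2.0):
    """Return the opinions of agents i and j after one encounter."""
    if abs(x_i - x_j) < epsilon:  # interact only within the confidence bound
        return x_i + mu * (x_j - x_i), x_j + mu * (x_i - x_j)
    return x_i, x_j  # opinions too far apart: no change

print(dw_update(3.0, 4.0))  # within bound: both move toward 3.5
print(dw_update(1.0, 9.0))  # outside bound: unchanged
```

Whatever the parameter values, every agent follows this same deterministic rule, which is exactly the limitation the LLM-based example is meant to lift.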
What's included
- `agent.py` — `OpinionAgent` extending `LLMAgent` with CoT reasoning
- `model.py` — `LLMOpinionDynamicsModel` on an `OrthogonalMooreGrid`
- `app.py` — SolaraViz with opinion heatmap, trajectory lines, and population dynamics
- `README.md` — explanation with comparison table vs Deffuant-Weisbuch

How it works
Each step, agents observe their Moore neighborhood, construct a prompt
summarizing neighbor opinions, and let the LLM decide whether to update
their opinion score (0–10). The topic is fully configurable via the UI.
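The observe–prompt–decide loop above can be sketched as follows. This is a minimal, self-contained sketch, not the PR's actual code: `OpinionAgentSketch` and `ask_llm` are hypothetical names, and `ask_llm` is a stub standing in for the real mesa-llm CoT call so the snippet runs offline:

```python
# Sketch of one agent step with the LLM call stubbed out. In the real
# example this logic lives in an LLMAgent subclass from mesa-llm.
from dataclasses import dataclass

def ask_llm(prompt: str) -> float:
    # Stub: the real model sends `prompt` to an LLM and parses the reply.
    # Here we just echo back the current opinion (the prompt's last token)
    # so the sketch is runnable without an API key.
    return float(prompt.rsplit(" ", 1)[-1])

@dataclass
class OpinionAgentSketch:
    opinion: float  # score on the 0-10 scale

    def step(self, neighbor_opinions: list[float]) -> None:
        # 1. Summarize the Moore neighborhood into a natural-language prompt.
        prompt = (
            f"Topic: remote work. Your neighbors hold opinions "
            f"{neighbor_opinions}. Your current opinion (0-10) is {self.opinion}"
        )
        # 2. Let the (stubbed) LLM decide the new opinion, clamped to 0-10.
        self.opinion = max(0.0, min(10.0, ask_llm(prompt)))

agent = OpinionAgentSketch(opinion=7.0)
agent.step([2.0, 5.0, 8.0])
print(agent.opinion)  # stub leaves the opinion at 7.0
```

The key design point is that persuasion happens inside the prompt-and-reply exchange rather than in a numeric update rule, so clamping and parsing the LLM's answer back into the 0–10 scale is the model's only hardcoded step.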
Visualization
Step 0 — Initial random opinions:
Step 4 — After LLM-driven persuasion:
Key emergent behaviors observed:
Testing
- Tested with `groq/llama-3.1-8b-instant` (free tier) and `gemini/gemini-2.0-flash`
- Import check: `from llm_opinion_dynamics.model import LLMOpinionDynamicsModel`
- `.env.example`
- Related: mesa/mesa-llm#153
GSoC contributor checklist
Context & motivation
Built this model to explore how LLM reasoning changes opinion dynamics
compared to the classic rule-based version. Opinion dynamics is a
foundational ABM — adding LLM agents lets researchers study how
language-based persuasion affects consensus and polarization in ways
a fixed threshold rule never could.
What I learned
LLM agents are significantly more resistant to opinion change than
rule-based agents. They reason about why neighbors hold different
opinions before deciding to update — producing more stable minority
opinions than the classical model. The system prompt design has an
outsized effect on macro outcomes.
Learning repo
🔗 My learning repo: https://github.com/abhinavk0220/GSoC-learning-space
🔗 Relevant model: https://github.com/abhinavk0220/GSoC-learning-space/tree/main/models/llm_opinion_dynamics
Readiness checks
- `ruff check . --fix`

AI Assistance Disclosure
This PR was developed with AI assistance (Claude) for code generation
and debugging. All code has been reviewed, tested, and understood by
the contributor.
Mesa Examples Review Checklist (#390)
Does it belong?
Is it correct?
- `rng` seed — LLM outputs are non-deterministic by nature
- `llm/` directory
Is it clean?