The Central Problem
Normative role design for LLM agents is happening through rapid, experimental, and unsystematic prompt engineering. Developers, researchers, and systems designers embed ethical assumptions into agents' roles through trial and error, often focusing on outcomes rather than coherent principles. We lack a systematic vocabulary for describing the kinds of norms embedded in LLM agents.
Objective
Analyze roughly twenty LLM simulation studies to construct a preliminary ontology of normative role design in multi-agent systems (MAS).
Why it Matters
- AI agents are being deployed in critical domains: from energy grids and supply chains to healthcare and education.
- Design choices encode norms: How we prompt and structure agents shapes their decision-making and societal impact.
- Scale amplifies risk: These systems operate at massive scale, affecting millions of users and real-world outcomes.
Main Research Question
What types of normative roles are operationalized across different applications of multi-agent systems?
Research Approach
Approach: Constructing an ontology using qualitative content analysis.
Data: ~20 multi-agent LLM simulation papers and their GitHub repositories.
Units of Analysis:
- System prompts (core object) – natural-language role instructions embedded in each system's code base (an illustrative excerpt follows this list)
- Simulation papers – contextual data: simulation domain, architecture, evaluation, and outcomes
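The excerpt below is a hypothetical example of the kind of role-defining system prompt such repositories embed directly in code; the agent name, domain, and wording are illustrative and not drawn from any specific paper in the corpus.

```python
# Hypothetical example of a role-defining system prompt as it might appear
# in a multi-agent LLM simulation repository. The agent, domain, and wording
# are illustrative, not taken from any paper analyzed in the study.
GRID_OPERATOR_SYSTEM_PROMPT = """
You are a grid operator in a regional energy market.
Your duty is to keep supply and demand balanced at all times.
Never approve a trade that would violate safety limits.
Act fairly toward smaller producers and maximize overall welfare
when allocating surplus capacity.
"""
```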
Analytical Process:
1. Systematic Selection – Apply purposive sampling and explicit inclusion criteria to assemble a corpus of prominent multi-agent LLM simulation papers
2. Inductive Coding – Identify and tag normative content in system prompts and their surrounding context
3. Ontology Construction – Build a structured map of role types, mechanisms, intentions, and relationships (sketched in the code after this list)
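A minimal sketch of what the coding output and ontology might look like as data, assuming field names and category labels that are purely illustrative rather than the study's actual coding scheme:

```python
# Illustrative data structures for coded prompt segments and the resulting
# ontology. All field names and example values are assumptions for the sketch.
from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    """One tagged span from a system prompt, plus its paper context."""
    paper_id: str    # identifier of the simulation paper
    excerpt: str     # verbatim prompt text
    role_type: str   # e.g. "planner", "negotiator", "critic"
    mechanism: str   # how the norm is imposed, e.g. "hard constraint"
    intention: str   # designer's stated goal, e.g. "safety", "fairness"
    theory_lens: str # "deontological" | "consequentialist" | "virtue"

@dataclass
class Ontology:
    """Structured map of role types and their normative relationships."""
    segments: list[CodedSegment] = field(default_factory=list)
    # (role_type, relation, role_type), e.g. ("critic", "oversees", "planner")
    relations: list[tuple[str, str, str]] = field(default_factory=list)

    def roles(self) -> set[str]:
        return {s.role_type for s in self.segments}
```

Each coded segment links a verbatim prompt excerpt to a role type, the mechanism through which the norm is imposed, the designer's stated intention, and the ethical lens it best fits; relations between role types (for instance, a critic overseeing a planner) form the structural part of the map.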
Theoretical Framework
Three normative ethical theories guide the analysis:
- Deontological: rules, duties, obligations
- Consequentialist: outcomes, utility, optimization
- Virtue ethics: character traits, social roles
Based on Woodgate & Ajmeri (2024), Macro Ethics Principles for Responsible AI Systems
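To make the three lenses concrete, the sketch below shows a crude lexical heuristic for a first pass over prompt text; the cue lists are assumptions chosen for illustration, and the actual analysis relies on manual qualitative coding rather than keyword matching.

```python
# Illustrative keyword cues for each ethical lens; these lists are assumptions
# for the sketch, not the study's coding guide.
THEORY_CUES = {
    "deontological": ["must", "never", "duty", "obligation", "rule"],
    "consequentialist": ["maximize", "minimize", "utility", "outcome", "optimal"],
    "virtue": ["fair", "honest", "trustworthy", "cooperative", "role model"],
}

def suggest_lenses(prompt_text: str) -> list[str]:
    """Return the ethical lenses whose cue words appear in a prompt."""
    text = prompt_text.lower()
    return [lens for lens, cues in THEORY_CUES.items()
            if any(cue in text for cue in cues)]
```

Applied to the hypothetical grid-operator prompt above, all three lenses would be suggested, which is exactly the kind of mixed normative signal the ontology aims to disentangle.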