Hillary Elizabeth Segeren
MAP Research Programme
Independent Researcher · Ontario, Canada · 2026

I am an independent researcher based in Ontario, Canada. I am the developer of the MAP Research Programme — the first interaction-level AI governance framework — and the researcher who named the harm conditions that occur when AI systems take interpretive authority from users without consent.

I came to this research not through an institution but through a question that existing frameworks could not answer: what exactly is happening at the interaction level when an AI system produces a harm that feels like help? That question became a framework, a harm chain, an audit methodology, a live tool, and a growing body of published research. I built all of it independently, and I build it in public.

About the Programme

The MAP Research Programme develops a systematic methodology for identifying, naming, and governing AI harm conditions from the interaction record — without requiring access to model internals. Every named condition is documented through preserved interactions and cross-system replication studies.

Three papers are currently with journal editors. The audit tool is live and open to anyone. The research is being read across four continents.


Selected Papers

Top traffic · April 2026
Preprint · 2026
The Light at the Door: MAP and the Interaction-Visible Governance of the Black Box
177 unique hits · PhilPapers
Argues that interaction-visible auditing can govern AI harm without requiring access to model internals. Three cross-model case studies.
↗ SSRN Preprint
Live · Zenodo
Authority Inversion Failure (AIF): When Users Believe They Are Directing the Interaction While the System Has Already Taken Control
146 unique hits · PhilPapers
Introduces AIF. The canonical case: the system's fabrication of a person, Rian van der Meulen, together with a directive to contact him at 10:17 AM Pacific Time.
↗ doi.org/10.5281/zenodo.19268602
Live · Zenodo
Meaning Inversion Failure (MIF): The Loss Condition
New · April 2, 2026
Names MIF as the condition in which a user's meaning is displaced by the system's frame, felt as understanding rather than loss.
↗ doi.org/10.5281/zenodo.19378079