openEHR Assistant plugin set for AI agents

I’ve been using Cursor and Claude Code day to day, and I wanted the assistant in the editor to stop “inventing” openEHR and instead follow the same path I’d take if I were pair-modelling with someone: look in CKM when reuse matters, read the right guide before talking syntax, and treat diffs as semantic changes rather than line-by-line text comparisons. That’s what this project is: a plugin plus an MCP server that feed the model structured tools and guides, so the conversation stays anchored in archetypes, templates, AQL, and the published specs.

I’m sharing it here in case it’s useful to others who live in the openEHR space. It’s not a product pitch, and I’m curious what you’d change if you tried it.

What “plugin” means in this context

Cursor and Claude Code can load a plugin: slash-commands, skills, a few focused sub-agents, hooks, and configuration that points the client at an MCP (Model Context Protocol) server. That server is what actually talks to CKM, serves implementation guides, resolves terminology and type specs, and exposes worked examples. The plugin does not bundle a CDR, a full CKM mirror, or a magical editor or validator; the heavy data still comes over the wire.
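To make the wiring concrete, here is a minimal sketch of what “pointing the client at an MCP server” usually looks like. The server name and URL are hypothetical placeholders, and the exact file location and schema depend on your client (Cursor reads `.cursor/mcp.json`, Claude Code reads `.mcp.json`), so treat this as an illustration, not the plugin’s actual config:

```json
{
  "mcpServers": {
    "openehr-assistant": {
      "type": "http",
      "url": "https://mcp.example.org/openehr"
    }
  }
}
```

The client then discovers the server’s tools (CKM search, guide retrieval, type-spec lookup, and so on) over the protocol; nothing openEHR-specific lives in the config itself.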

The two parts live on GitHub under the same names and are versioned together.

What the tool actually does

  • Guide-first — the bundled prompts push the model to open the right guide (archetypes, templates, AQL, formats, spec digests, how-tos) before it answers, instead of free-associating from training data.

  • Commands you can type — CKM search/explain; archetype lint plus a longer review path; template explain; AQL help; instance formats (FLAT / STRUCTURED / CANONICAL); RM concepts in plain language; terminology and type-spec lookup; ADL idioms; syntax repair; translation; drafting rationale; sketching a form → template shape; seeing the impact of an archetype id across a repo; and semantic archetype/template diffs with a patch/minor/major hint (G1-style — the release decision is still a human call).

  • Small agents for focused jobs — e.g. working on files in the workspace, reuse-first CKM search, and spec lookup without burning your whole context window (the llms.txt / Markdown-twin pattern on the public spec site).

  • Offline cheatsheets in the repo for when MCP isn’t available — deliberately second-class to the live guides.

  • A hook that runs after edits to .adl files.
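As an illustration of how such a hook could be wired up in Claude Code’s settings: the client supports a `PostToolUse` hook that fires after file edits and passes the tool event as JSON on stdin to a command of your choice. The lint script name below is a hypothetical placeholder, not something the plugin ships under that name:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "your-adl-lint-hook.sh" }
        ]
      }
    ]
  }
}
```

Your script would read the event payload from stdin, check whether the edited file ends in `.adl`, and run whatever lint step you want before the conversation moves on.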

How I use it (and how you might)

I work in a normal git repo with archetypes, maybe templates, sometimes AQL files. I install the plugin, point the AI client at an MCP endpoint (self-hosted or hosted — details are in the server README), and then I talk to the repo the way I would to a colleague: “find something like this in CKM”, “lint this file”, “why is this path change nasty?”, “rough sketch for this form before I touch OET”, “what actually changed between these two ADL files for the release note?”, or “improve this archetype, align it with standards”.

None of that replaces clinical sign-off, national profiles, or your vendor’s constraints. I treat the model as a sparring partner that’s read the guides — not as an authority.

A few situations where it’s been worth it for me

  • Cleaning up inherited ADL — lint + review on each file, then CKM-leaning search before I use or specialize, and checking impact on my own templates and saved queries.

  • Integration sprints — shaping FLAT/STRUCTURED and AQL next to the worked examples so the first cut isn’t random JSON.
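For readers who haven’t worked with the FLAT format: it serializes a composition as a single flat key–value object, where each key is a path derived from the template. A minimal sketch, with a made-up template id and paths (the real keys come from your own template’s web-template structure):

```json
{
  "ctx/language": "en",
  "ctx/territory": "GB",
  "ctx/composer_name": "Jane Doe",
  "vitals/body_temperature:0/any_event:0/temperature|magnitude": 37.2,
  "vitals/body_temperature:0/any_event:0/temperature|unit": "°C"
}
```

Getting these paths right by hand is exactly the tedious part the worked examples help with.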

  • Before a tag — a semantic diff between two revisions to structure release notes; I still read every line that matters.
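To give a feel for what a “semantic diff with a patch/minor/major hint” means, here is a toy Python illustration of the underlying idea — not the plugin’s actual algorithm. It compares two sets of archetype node paths: removing a path breaks existing data and queries (major), adding one is backwards-compatible (minor), and anything else is treated as patch-level:

```python
def classify_change(old_paths: set[str], new_paths: set[str]) -> str:
    """Crude semver hint from two sets of archetype node paths."""
    if old_paths - new_paths:
        return "major"   # a path was removed: breaking for data and AQL
    if new_paths - old_paths:
        return "minor"   # only additions: backwards-compatible
    return "patch"       # same structure; change must be metadata/text


old = {"/data[at0001]/items[at0002]", "/data[at0001]/items[at0003]"}
new = {"/data[at0001]/items[at0002]", "/data[at0001]/items[at0004]"}
print(classify_change(old, new))  # at0003 removed -> "major"
```

A real diff also has to weigh tightened occurrences, changed value constraints, and terminology bindings, which is why the tool only emits a hint and the release call stays human.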

  • Teaching — explaining concepts, and in the process often discovering new things myself.

What it isn’t

  • It won’t run your CDR or prove clinical safety.

  • It’s an LLM: everything still gets a human pass.

  • It’s skewed to people who already touch files in an editor; that’s a limitation if you need a pure clinical UI.


If you try it and it breaks in a way I didn’t expect, or if you think a different workflow would fit the community better, I’d genuinely like to hear it, either here or in the repo’s issues. The goal was always to reduce friction in the real workflow.
