I’d like to share something I’ve been working on for a while: an openEHR Assistant MCP Server.
As you all know, I am a very firm believer in what openEHR has to offer as a standard. However, I am very much aware that developing and modelling according to the openEHR Specifications is not easy, which may discourage developers and modellers who are new to the openEHR Community.
What does it do?
The openEHR Assistant MCP Server exposes openEHR knowledge artefacts and modelling workflows to LLM-based assistants, using the Model Context Protocol. By combining deterministic tools, canonical resources, and guided prompts, it supports exploration, comparison, explanation, and early-stage archetype or template design, while remaining aligned with openEHR semantics and governance.
I hope that this assistant may help those in need of a hand to get started. I have made the project available on GitHub. Do check it out and let me know what you think - thanks!
Thank you so much for publishing and hosting this very useful tool, Sebastian!
I’m wondering, and forgive me if it’s actually already doing this, but would it be possible to incorporate the kind of information contained in the child pages of https://openehr.atlassian.net/wiki/x/AQAqEg? That could be very useful when using an LLM to model archetypes. If possible, which format would be best for ingesting the information?
Thanks, good suggestion. It now contains a very small, condensed version of those pages, along with others, in the form of “guides” in Markdown format. The guides are exposed as resources and referenced in system prompts - that’s how they get used.
I am often tweaking them, trying to measure the impact and looking for improvements - but it is a slow-going process, so it will take a bit more time and a few more versions. I have already prepared some new materials for upcoming versions, and I will also reconsider the pages you indicated. I am also hoping others will help with this (hence the open-source state); as you know, it is not always easy to review the internals of the modelling process when you are not doing the modelling yourself…
Thanks, that’s good to hear! My thinking about those resources is that they are living documents and will continue receiving updates into the future. Is there a way to connect the MCP server directly to the documents (or a different version of them), so that you or someone else don’t have to update things manually all the time?
You can fork the repo, make your changes, and once you push to your GitHub you will have another Docker image, which (if you have Docker) you can run locally - there is some documentation about `--transport=stdio` in the readme.md file.
If you have Docker locally, you can also just clone the repo, make your changes, build the image, and then host it somewhere or run it with the same `--transport=stdio` - this is the way I tested and developed over the last weeks.
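For reference, the clone/build/run loop described above might look roughly like this - a sketch only, assuming a local Docker install; the repository URL and image tag are placeholders, and the exact build and run options should be checked against the project’s readme.md:

```shell
# Placeholder fork URL - substitute your own; see the repo's readme.md
git clone https://github.com/<your-fork>/openehr-assistant-mcp.git
cd openehr-assistant-mcp

# Build a local image from your edited sources (tag name is illustrative)
docker build -t openehr-assistant-mcp:local .

# Run it over stdio, as mentioned in the readme, so an MCP client can attach
docker run -i --rm openehr-assistant-mcp:local --transport=stdio
```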
Most of these MCP clients (Claude, Cursor, LibreChat) allow running it locally, sometimes labelled as dev mode.
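As an illustration of that local setup, a stdio-based MCP server is typically registered in the client’s configuration along these lines - the server name, image tag, and argument list here are assumptions, not the project’s actual values; the general `mcpServers` shape is what Claude Desktop-style clients expect:

```json
{
  "mcpServers": {
    "openehr-assistant": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "openehr-assistant-mcp:local", "--transport=stdio"]
    }
  }
}
```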
Hi, it’s nice to see the AI movement has reached openEHR.
We are doing some tests with embeddings, but for a different use case: understanding the models created (templates, archetypes) in order to derive other elements, like queries.
Mostly by summarizing the ADL/AOM Specifications (using ChatGPT and Claude), and then combining that with a similar summarization (by Rovo) of the relevant Wiki pages we have.
The initial structure (i.e. the way I chose to organize it in files, folders, etc.) is mostly mine, also aligned with how the application code is designed, trying to reuse some concepts and keep them maintainable.
Of course, not all of this went well right from the beginning; I had to try several prompts and tasks until I got something workable, and then I still needed to read the texts carefully and edit them myself, as the output is sometimes “not focused”, let’s put it that way. Nevertheless, the guides are still subject to my lack of knowledge/experience with the modelling process, so I would invite anyone more “connected” to review them and help improve them.
I’ve made some improvements, released a new version (0.11.0).
It now adheres much better to the guides in the repo: whatever the required workflow is, it will stick to the relevant rules and workflow much more closely; you may also see some new tool calls fetching the guides at certain moments. The output of tasks also looks generally better to me, though there might be some bias, or perhaps the model itself improved. I also managed to get it working with ChatGPT (although it is still not as easy to use there as with Claude).
In case you already have a previous version installed, I would suggest reinstalling or reconnecting, as some changes relate to the initialization and registration phase, besides the new tools provided.