What would you like to see in a new openEHR CDR?

Hi all, most of you here might not know that we are working on Atomik, a new openEHR CDR.

Soon we will release a set of demos, tutorials and documentation that will serve as educational materials for anyone interested. We would like to know if there is anything in particular you would like to see in those materials, for instance in the video demos.

About Atomik: it’s derived from EHRServer, which was the first open source implementation of an openEHR CDR. In Atomik we improved and optimized many areas of EHRServer, and made some changes. For instance, EHRServer is multi-tenant while Atomik is single-tenant, which removes complexity and improves performance.

One interesting feature is that Atomik will support storing and querying demographic data, so it can be used just as a CDR, just as a DDR (demographic data repository), or as both. So the CDR and the MPI could live in the same system. This configuration might be useful for small projects, PoCs or quick prototypes.

Atomik will support integration with the openEHR Toolkit for the storage of Operational Templates, since the Toolkit has better internal management of OPTs than EHRServer (with internal semver versioning, which is useful when developing OPTs). We might also use some tools from the Toolkit, like the data generators, to load test data automatically when trying the server out.

In EHRServer, clusters of servers were supported by a sync REST API, which enables HA and automatic backups for all data (EHRs, CONTRIBUTIONs, COMPOSITIONs, FOLDERs, templates and even queries). In Atomik we will have a faster TCP sync process, removing the HTTP overhead and increasing throughput, so servers can be synchronized in a fraction of the time it takes over HTTP. Note this is system-to-system, not database-to-database!
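To make the idea concrete, here is a minimal sketch of streaming records over a raw TCP-style socket with a simple length-prefixed framing, instead of one HTTP request/response per record. This is purely illustrative: Atomik's actual wire protocol is not published, and the record shape here is an assumption.

```python
import json
import socket
import struct

def send_record(sock, record):
    """Serialize a record and send it with a 4-byte big-endian length prefix."""
    payload = json.dumps(record).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_record(sock):
    """Read one length-prefixed record from the socket."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

# Demo over a local socket pair: one peer streams COMPOSITION records,
# the other receives them, with no per-record request/response round trip.
a, b = socket.socketpair()
records = [{"type": "COMPOSITION", "uid": f"comp-{i}"} for i in range(3)]
for r in records:
    send_record(a, r)
received = [recv_record(b) for _ in records]
print(received[0]["uid"])  # comp-0
```

The point of the framing is that the sender can keep pushing records back to back; the throughput win over HTTP comes from dropping the per-record headers and round trips.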

Finally, we will offer integrations with other formats, APIs and standards via Mirth Connect, a very popular integration engine with support for many communication protocols and message formats; it can even be used for ETL, though that is not Mirth’s main use case.

One last thing to clarify: Atomik will be commercially licensed and is not open source. We are trying a new business model for this tool, offered in parallel to EHRServer, which is open source with a support-based business model.

I personally want to hear what the community is looking for. Even if you are not interested in this server itself, what would you like to see in educational materials in general? Even though the materials will be focused on Atomik, we will talk about openEHR implementation in general, which I think will be valuable for the whole community.

Get in touch!

Thank you,


One thing I would be interested in is a simple way of running queries on a combination of linked Demographic/MPI and EHR data. For example: list all patients with breast cancer diagnosis of a certain type registered in the EHR that also have a biological mother with a breast cancer diagnosis registered in the EHR.

Perhaps something like AGQL hinted at in EHRs with different system_id in the same server? - #37 by erik.sundvall


Thanks @erik.sundvall, that is a wonderful idea; in fact it is a mid-term goal for our platform. Though it is not “simple” to implement, I think we can offer an easy interface for those kinds of queries: simplicity for the user, complexity hidden :slight_smile:

The EHRServer has something like that, though not full demographic queries: it can combine queries that filter COMPOSITIONs by certain criteria, then select all EHRs that have at least one COMPOSITION matching those criteria. That way, with minimal demographics stored in the EHR, we can select:

  • male patients
  • between 40 and 60 years old
  • with any type of diabetes (this uses SNOMED CT expressions to get all possible diabetes types)
  • with obesity
  • etc

Then get all EHRs that match all those criteria. Certainly, full demographic queries extending that approach could be helpful for many use cases, from familial inherited diseases to selecting patients for clinical trials.
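The selection logic above can be sketched as follows. This is an illustrative toy, not EHRServer's actual engine; the field names are assumptions, and the diabetes criterion is a pre-expanded SNOMED CT code set standing in for the result of evaluating a SNOMED expression over the diabetes hierarchy.

```python
# Pre-expanded SNOMED code set (parent concept plus type 1 / type 2 codes),
# standing in for the expansion of a SNOMED CT expression over diabetes types.
DIABETES_CODES = {"73211009", "44054006", "46635009"}

# Toy flattened COMPOSITION records; shape is an assumption for illustration.
compositions = [
    {"ehr_id": "ehr-1", "sex": "male", "age": 52, "diagnosis": "44054006"},
    {"ehr_id": "ehr-2", "sex": "female", "age": 45, "diagnosis": "46635009"},
    {"ehr_id": "ehr-3", "sex": "male", "age": 58, "diagnosis": "38341003"},
]

def matches(comp):
    """One COMPOSITION satisfies all criteria at once."""
    return (
        comp["sex"] == "male"
        and 40 <= comp["age"] <= 60
        and comp["diagnosis"] in DIABETES_CODES
    )

# Select EHR ids with at least one matching COMPOSITION.
matching_ehrs = sorted({c["ehr_id"] for c in compositions if matches(c)})
print(matching_ehrs)  # ['ehr-1']
```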


A while ago I had some thoughts/suggestion related to this subject: https://openehr.atlassian.net/wiki/spaces/spec/pages/1914732619/AQL+support+for+demographic+queries.
Have you seen it? It uses the Demographic RM and an AQL-style query (although not supported by the current specs). Can we work it out further conceptually?


Interesting, @sebastian.iancu, I hadn’t seen that page before.

From that I understand that, spec-wise, SYSTEM is the global openEHR spec level, so it can see/use/reference both EHR and DEMOGRAPHIC classes, right?

In terms of architecture, SYSTEM could be a virtual component with access to both the CDR and the DDR, or even an integration component allowed to access the CDR and DDR. That component could process those queries and do all the internal magic of distributing sub-queries to the CDR and DDR and integrating the results, sending back a single result structure to the client/user.
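A toy sketch of that SYSTEM-level distribution: run one sub-query against the DDR, one against the CDR, then join on a subject id and hand back a single merged result set. Everything here (data shape, `subject_id` as the join key) is a hypothetical simplification, not a proposed spec.

```python
# Stand-ins for the two repositories; record shapes are assumptions.
ddr = [  # demographic data repository
    {"subject_id": "p1", "name": "Ana", "sex": "female"},
    {"subject_id": "p2", "name": "Bob", "sex": "male"},
]
cdr = [  # clinical data repository
    {"subject_id": "p1", "diagnosis": "breast cancer"},
    {"subject_id": "p2", "diagnosis": "diabetes"},
]

def federated_query(demographic_filter, clinical_filter):
    """Distribute filters to DDR and CDR, then merge matches on subject id."""
    demo = {r["subject_id"]: r for r in ddr if demographic_filter(r)}
    results = []
    for rec in cdr:
        if rec["subject_id"] in demo and clinical_filter(rec):
            results.append({**demo[rec["subject_id"]], **rec})
    return results

rows = federated_query(lambda d: d["sex"] == "female",
                       lambda c: c["diagnosis"] == "breast cancer")
print(rows[0]["name"])  # Ana
```

The interesting design question is exactly the one in the post: whether this joining component is part of the server or a separate integration layer in front of two independent repositories.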

I would love to see more analysis at a conceptual level, though I would prefer AQL not to be involved in that, because the syntax can restrict what is actually possible/necessary to accomplish such complex processing/query distribution/result aggregation/etc.

For instance, I’m especially interested in querying families and navigating family trees to find patterns that would be useful for CDS, research and education. Or other complex things, like getting all the people involved in a certain episode of care and under which roles, etc., which relates to parties like PERSON, GROUP or ORGANIZATION, and ROLE.

Modeling is also important in this area, though demographic models are not as well developed as the ENTRY models available.

BTW, I also raised an issue about the demographic REST API; maybe we can draft something quickly, make some reference implementations, test them and then refine.


The idea with the Demographic RM in AQL is to leverage the archetype aspects of demographic models in query expressions. We already have that technology on the EHR side; we just need to “extend” it to the other part of the RM. There are indeed too few demographic archetypes available in CKM, but on the other hand they will not evolve if we don’t create use cases and support them with tooling and technology (REST & AQL).

About family trees: interesting topic, and I’m pretty sure a future AQL might be able to cope with that, but consider also that, at least in the EU, there are quite strong privacy and security regulations which might prevent or obstruct building such trees (in the absence of patient and family consent).


I know; I’m referring to analyzing demographic query requirements leaving AQL out, since these are two problems in one:

  1. Get demographic query requirements for different classes and associations of the DEMOGRAPHIC model.
  2. Mapping/representing those requirements in AQL syntax and in result sets.

For instance, with AQL we are stuck with the semantics of CONTAINS, while in DEMOGRAPHIC you can have a graph of associations that could be navigated up or down, and CONTAINS only navigates down a hierarchical structure. It’s like the case of querying LINKs, which is difficult in current AQL because LINKs can represent graphs of things. That is part of the reason to separate the two problems; the other part is that 1. should be implementable in any formalism, AQL among others. I know Code24 has a different query formalism, and we also have a different query formalism for EHRServer and Atomik.
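To illustrate why downward containment is not enough, here is a toy graph walk for the familial pattern mentioned earlier in the thread ("patients with breast cancer whose biological mother also has breast cancer"). The "mother of" association points upward in the family graph, which a CONTAINS-style hierarchy cannot express; the data and relationship names here are invented for illustration.

```python
# Toy data: diagnoses per subject, and an upward "biological mother" edge.
diagnoses = {"p1": {"breast cancer"}, "p2": {"breast cancer"}, "p3": {"diabetes"}}
mother_of = {"p1": "p2", "p3": "p2"}  # child -> biological mother

def familial_pattern(condition):
    """Find subjects where both the subject and their mother have `condition`."""
    hits = []
    for child, mother in mother_of.items():
        if (condition in diagnoses.get(child, set())
                and condition in diagnoses.get(mother, set())):
            hits.append(child)
    return sorted(hits)

print(familial_pattern("breast cancer"))  # ['p1']
```

A real demographic query language would need this kind of arbitrary-direction traversal over PARTY relationships, which is exactly what makes mapping it onto CONTAINS awkward.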

I think that privacy/consent is a requirement that sits on top of 1. and 2. I mean, it should be technically possible to do this; then, if some policy prevents doing it, that should be handled by a different component checking for that, like consents, etc. That is a problem/requirement that could be analyzed separately from #1 and #2, maybe as #3 on the list above?

I think adding consent checks directly at the query formalism level would have serious performance implications; those checks could instead run before or after query execution, as pre- or post-processors.
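A minimal sketch of the post-processor idea, assuming a hypothetical consent registry keyed by subject id (this is not a specified openEHR mechanism): the query engine runs unmodified, and a separate filter drops results for subjects without consent.

```python
# Hypothetical consent registry: subject id -> consent granted.
consents = {"p1": True, "p2": False}

def run_query():
    """Stand-in for the real query engine; returns unfiltered results."""
    return [{"subject_id": "p1", "value": 1}, {"subject_id": "p2", "value": 2}]

def consent_postprocessor(rows):
    """Drop rows whose subject has not granted consent (default: deny)."""
    return [r for r in rows if consents.get(r["subject_id"], False)]

visible = consent_postprocessor(run_query())
print(len(visible))  # 1
```

Keeping the check outside the formalism means the query engine stays fast and the consent policy can evolve independently, at the cost of the engine briefly touching data that is later filtered out.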

I’m currently changing how our queries resolve SNOMED expressions; when that is done, I’ll get back to demographic queries, and then I might have some analysis to discuss.

I’d like an example using Mirth or otherwise to ETL data out. It’s always a principal requirement in Wales to be able to drop data into some sort of DB (invariably MS SQL). If you could describe a few ways of accomplishing this, it would be useful.

Thanks @johnmeredith, that is not directly related to the CDR itself but to the integration/migration/data extraction side, for which the CDR may or may not offer supporting features, since it is something that could be implemented externally.

My idea in terms of integration is to provide full integration examples for specific cases like HL7 and DICOM to/from openEHR using Mirth Connect, so people can actually test them using the new CDR (Atomik) and their own Mirth Connect instance. That will take some time to build; I have many of the pieces, but I need to put them all together, along with the guides (any help is welcome!).

About ETL: Mirth Connect is good for transactional, message-oriented communication, while ETL is generally a bulk process. It could be implemented in Mirth, though it might take more time to process. However, you can make your transactions or messages include multiple data sets at a time and use threading to overcome performance issues, and it works really well with HL7 v2.x, v3, FHIR and DICOM.
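For the "drop data into some sort of DB" requirement, here is a minimal extract-and-load sketch using SQLite as a stand-in for MS SQL Server (in Wales the load target would be MS SQL via an ODBC driver instead). The flattened rows and table shape are assumptions; the extract step stands in for pulling COMPOSITIONs from the CDR via its API.

```python
import sqlite3

def extract():
    """Stand-in for querying the CDR (e.g. via its REST API or a query)."""
    return [
        ("comp-1", "ehr-1", "blood pressure"),
        ("comp-2", "ehr-2", "body weight"),
    ]

def load(rows):
    """Load flattened rows into a relational table (SQLite as stand-in)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE observations (uid TEXT, ehr_id TEXT, name TEXT)")
    conn.executemany("INSERT INTO observations VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

conn = load(extract())
count = conn.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
print(count)  # 2
```

In a Mirth channel the same shape applies: a source connector does the extract, a transformer flattens the openEHR structures, and a Database Writer destination does the load, batching multiple records per message for throughput.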

We are doing this for openEHR to OMOP and openEHR to CDISC with very simple Pentaho scripts for the Madrid Infobanco project. Also, I think you will be very interested in the EOS project by @SevKohler, which uses AQL directly to populate an OMOP database.


Yep, the Kettle component of Pentaho is more focused on ETL than Mirth is. When I worked with it many years ago it was a good tool, though the small transformations/processing of source data were really difficult to maintain as complexity grew. Maybe they have improved it since.

It can be a bit too much for some transformations; luckily it also supports Java and JavaScript for very complex data processing (e.g. generating XML is far easier with a Java call than doing the same in 30+ steps).

@pablo Thanks, great news. I would like to see more about integrations: what other formats are supported, the APIs, and any difficulties you encountered.


Thanks @chunlan.ma, once we have demographics fully implemented I’ll focus on integrations. Since we use the Mirth Connect integration engine, integration between different messaging or document formats, using different communication protocols, is very easy to do (it reduces the need to write code).