Conformance to CKM models?

Hi all,

Following on from some recent discussions about reuse of the CKM published models, I’m curious about whether we can explore conformance to the CKM published archetypes…

  1. Would vendors consider making public which archetypes they use, e.g. by some way of marking their use on CKM? This could be construed as endorsement of some models (and good from a CKM POV). In addition, using many of the models may be seen as a marketing advantage for the vendor. Obviously, if a vendor is not reusing many of the models, this could also be considered a threat.
  2. If this is worth pursuing, how can we make sure that this is kept up-to-date? Sustainability of this kind of data is often the point of failure for any index or directory. If it is not up-to-date, it potentially becomes meaningless.
  3. If we are still on this journey, is it possible to devise some kind of audit process to verify and certify the archetype reuse?

Clearly, conformance to the specifications is important as part of quality control to support openEHR procurement processes.

In parallel, as a community we publicly boast of the archetypes and the advantages of reuse, interop etc. How do we substantiate the rhetoric? If the archetypes in vendor systems are predominantly local or 'quick and dirty', currently hidden behind the openEHR branding but essentially replicating their previous non-openEHR systems, how can we benchmark this in order to provide some measure of ‘quality control’, or capacity for potential interop with other model-compliant openEHR systems?

Anticipating this might be an awkward conversation, but I haven’t seen this raised or discussed previously.

Any thoughts?


An increasingly important topic, Heather. I recall circulating a questionnaire a long while back, to try to tease out what seemed relevant issues. It was clearly too early, then, but looks timely now.


Should this conformance also apply to CKM models? I remember Jesualdo's team from Murcia University did some research on the conformance of models, and their methodology could also be applied to test conformance of local models against the CKM ones.

I’m curious to understand more @yampeku . Can you share more about the methodology and what the analysis means?

Theoretically, the tooling should pick up issues related to conformance to the RM. To be honest, I'm no longer sure whether this is the case. We rely on each archetype editor to ensure that any archetype built is compliant with the RM. CKM is a second point of checking; it used to throw up a huge number of technical errors, but I can't find them anymore. This is an example of a current CKM validation report - Family history validation - demonstrating a licence and included archetypes that need updating.

Conformance to clinical requirements including scope, requirements, data types, constraints etc is done during the review process, combined with change requests etc.

Happy to explore this further…

I think this is the paper where this comes from. The cool thing about it is that they transformed the archetypes with their terminology bindings into OWL, actually validating that both the archetypes and their bindings (across the whole hierarchy) made sense. Adding local sets of archetypes doesn't really change the overall process.


I have no strong feelings one way or another. If others, with more technical understanding than I, see this as contributing further to improving the technical aspects of archetype quality then it should be explored.

Maybe it can be used to confirm the current tooling validation or identify issues. My preference would always be that the tooling will support the modellers’ efforts so that we can’t build invalid archetypes or templates.

Hi @DavidIngram,

I’m curious to learn more about your conclusions at the time.

We considered building this into CKM in the earliest days but couldn’t work out how to ensure currency or confirm vendor claims.

Any vendors willing to comment?

I’ll have a hunt through my archive to see if I can unearth anything useful there. So long ago - don’t have any recollection now but the questionnaire itself might be useful. I do seem to recall that you were one who responded to it but maybe that’s my imagination…


I'm actually working on the conformance area and I've seen some misalignment between models created by different tools. Some issues are created by ambiguous details in the core specs, others by implementation decisions; some might come from different interpretations of the spec details, some from local technical decisions.

It’s difficult to ask vendors to fix their tools if the core specs are not clear enough, so my first concern is in fixing the specs from the SEC, which could take some time.

What I don't think is practically possible is to harmonize the models used by different vendors or projects with each other and with the openEHR published ones; IMHO that has a huge scope and cost with very little return.

At the same time, we are working on conformance for systems, mainly clinical data repositories, for which we need a fixed set of archetypes and templates as the basis of the test. So far I’ve created such artifacts for testing EHRBASE, and it seems the same artifacts will be used to test CDRs from other vendors.

When we talked with Thomas about conformance, though, I proposed different kinds of systems that could be validated and certified as “openEHR conformant” besides CDRs, like modeling tools and knowledge managers, because we need to ensure that any modeling tool generates valid artifacts, and that knowledge managers can consume and handle those artifacts.

There is another item that adds complexity to this whole conformance thing: version management. Right now we have several releases of the RM specs, AOM specs, and so on. Some CDRs and modeling tools can handle some RM and AOM versions. In the conformance architecture we are building, we need to consider different implementations might be compatible with the openEHR specs at different baselines.

As soon as we have a full spec for Service Model conformance and Data Validation conformance, I would like to move to Archetype/Template conformance. With those three elements, and the issues we are finding along the way, we might be able to have a good conformance architecture that covers validating several areas of several types of systems. I guess we will have more to show next year; this is all WIP.


Thanks Pablo,


Not suggesting this, especially in retrospect. It just isn't going to happen. The more important question for the Board, how the clinical modelling program can be enhanced to accelerate the publication of more models that vendors can implement, is an ‘elephant in the room’ issue that is not being addressed.

I trust that the other aspects of technical validation are being addressed by the Specifications group and others, and we'll see those discussions evolve here.


The Archetype Designer certainly does; it is RM-driven.

One small request I might make on this thread: can we define what ‘conformance’ means in this discussion?

Conformance can be verified between:

  • software to specifications, e.g. APIs
  • specialised archetypes to parent archetypes
  • templates and their underlying archetypes
  • data and generating templates, archetypes and the RM

Or is it something looser and less formal?

Initially I guessed Heather was referring to alignment/harmonization of vendor models to the openEHR CKM models, but she confirmed that wasn’t the case. That is why I mentioned the other part, related to formal conformance of models:

  • correctness of archetypes and templates against the RM they constrain, their object models, invariants, and serialization formalisms (artifact conformance)
  • tools dealing with those artifacts, services, features, etc. (system conformance, related to SM spec)

And as a side note, dealing with different version baselines.

BTW, to provide some details on things that IMO should be part of conformance validation of OPTs 1.4 (maybe as hard rules/invariants, or as technical template quality rules):

  1. is_integral shouldn’t appear in archetypes or templates where a DV_PROPORTION is used (this is an error in the AE/TD suite)

  2. DV_CODED_TEXT constraint code_list can’t be empty if the terminology_id is “local” (this could be valid in the archetype but not in the template)

  3. DV_CODED_TEXT constraint ref (acNNNN) should have a constraint_definition entry to be valid in ADL/OPT, and a constraint_binding to be valid in an OPT (without this there is no terminology associated with the DV_CODED_TEXT in the template, which is problematic for some implementations)

  4. Interesting: a CONSTRAINT_REF (acNNNN) in ADL is transformed to a C_CODE_REFERENCE in the OPT by the Template Designer, and if there was more than one constraint_binding in the ADL, only the first one appears in the OPT. Worse: the C_CODE_REFERENCE type is not defined in any spec (AOM or AOP 1.4 :scream:). I needed to check how I handle this in my own implementation (openEHR-OPT/OperationalTemplateParser.groovy at 24db995e0a5fe7984a14724191c1bb38641dcb46 · ppazos/openEHR-OPT · GitHub), and I manage C_CODE_REFERENCE as a C_CODE_PHRASE with an extra attribute referenceSetUri (not an elegant solution, since the types should be specified somewhere in UML, not only in a schema: openEHR-OPT/OperationalTemplateExtra.xsd at 24db995e0a5fe7984a14724191c1bb38641dcb46 · ppazos/openEHR-OPT · GitHub)

  5. DV_BOOLEAN constraint C_BOOLEAN shouldn’t have true_valid and false_valid both set to false ([SPECPR-376] Add invarianto to C_BOOLEAN: true_valid OR false_valid - openEHR JIRA)

  6. C_STRING pattern should comply with some regex flavor ([SPECPR-377] AOM C_STRING.pattern needs clarification on the regular expression language - openEHR JIRA)

  7. Missing formal definition of correspondence between the patterns defined in C_DATE/C_TIME/C_DATE_TIME constraints and the values contained in DV_DATE/DV_TIME/DV_DATE_TIME, which are ISO 8601, leaving this area to interpretation of the implementer ([SPECPR-374] Missing information about encoding C_DATE_TIME/C_DATE/C_TIME validity constraints and clarification about supported formats - openEHR JIRA)
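A few of these rules are simple enough to sketch as standalone validation functions. Below is a minimal illustration in Python of rules 2, 5 and 6, operating on plain dictionaries; the node structure, field names, and the choice of Python's regex flavor for rule 6 are my own illustrative assumptions, not any real openEHR library API:

```python
import re

def check_c_boolean(node):
    """Rule 5: a C_BOOLEAN where true_valid and false_valid are both
    false permits no value at all (see SPECPR-376)."""
    if node.get("type") != "C_BOOLEAN":
        return []
    if not node.get("true_valid", True) and not node.get("false_valid", True):
        return ["C_BOOLEAN: true_valid and false_valid are both false"]
    return []

def check_c_string(node):
    """Rule 6: a C_STRING pattern should be valid in some agreed regex
    flavor (SPECPR-377); Python's re is used here as a stand-in."""
    if node.get("type") != "C_STRING" or "pattern" not in node:
        return []
    try:
        re.compile(node["pattern"])
        return []
    except re.error as e:
        return [f"C_STRING: invalid pattern {node['pattern']!r} ({e})"]

def check_local_code_list(node):
    """Rule 2: in a template, a coded-text constraint bound to the
    'local' terminology must carry a non-empty code_list."""
    if node.get("type") != "C_CODE_PHRASE":
        return []
    if node.get("terminology_id") == "local" and not node.get("code_list"):
        return ["DV_CODED_TEXT: terminology 'local' with empty code_list"]
    return []

CHECKS = [check_c_boolean, check_c_string, check_local_code_list]

def validate(nodes):
    """Run every check over a flat list of constraint nodes and
    collect the error messages."""
    return [msg for node in nodes for check in CHECKS for msg in check(node)]
```

In a real conformance suite these checks would walk the parsed OPT tree rather than a flat list, but the shape is the same: each rule is a small predicate over one constraint node, so rules can be added, versioned, and reported independently.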

These are some points that came out of reviewing different specs from the point of view of creating a conformance specification definition. My concern is that, since everything is based on archetypes and templates, the rest of the conformance validation and testing depends on those artifacts, and if there are tiny details that make these artifacts incompatible between different implementations, we could have tests that pass for one vendor but not for another.

I know these details might be 5% of the whole thing, and that we have a common view of most of the openEHR platform components and how they should work; it's just that in my daily work I need to deal with these little details and make decisions based on them, while trying to be platform independent…

We'd certainly be willing to share which archetypes we use, but it's not that simple for us to find out, especially since each of our (hundreds of) customers can upload their own CKM archetypes, outside of our control. I often adopt archetypes we use in CKM. Maybe that could be a way to go?
If you want detailed info from us, a Zoom call would be easiest, I think, so I can show what data I can find on this :)
BTW, interop based on openEHR is of very little importance to us, since in our market it's all ZIBs on FHIR. Where it does help is that we can share the mapping work with openEHR NL.
