How to make mappings easier to query and to document in openEHR

Follow up for: [EHRbase] Storing and querying data without an standalone archetype - #34 by ian.mcnicoll

1 Like

We have the following:

  • Codings in DV_CODED_TEXT, which we are all familiar with.
  • Term mappings.

These are essential components, and we require a query logic/implementation that can process both of them to simplify formulating corresponding AQL queries.
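To make the two components concrete, here is a minimal Python sketch (class and field names loosely follow the openEHR RM; the example codes are real LOINC/at-codes for systolic BP, but the helper logic is hypothetical) showing why query logic has to look in both places:

```python
from dataclasses import dataclass, field

@dataclass
class CodePhrase:
    terminology_id: str
    code_string: str

@dataclass
class TermMapping:
    target: CodePhrase
    match: str = "="  # RM match operators: '=', '<', '>', '?'

@dataclass
class DvCodedText:
    value: str
    defining_code: CodePhrase
    mappings: list = field(default_factory=list)

# A systolic BP element coded with an internal atCode,
# carrying the LOINC equivalent as a term mapping
systolic = DvCodedText(
    value="Systolic",
    defining_code=CodePhrase("local", "at0004"),
    mappings=[TermMapping(CodePhrase("LOINC", "8480-6"))],
)

# Query logic must inspect BOTH defining_code and mappings
# to answer "is this coded as LOINC 8480-6?"
codes = [systolic.defining_code] + [m.target for m in systolic.mappings]
print(any(c.terminology_id == "LOINC" and c.code_string == "8480-6" for c in codes))
```

The point of the sketch is that neither attribute alone is enough: the LOINC code lives only in `mappings`, while the internal code lives only in `defining_code`.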

There is also the matter of terminology queries embedded in AQL, which needs some standardisation.
Perhaps it would also be useful to agree on FHIR terminology services as the standard in openEHR?

Secondly, there is the issue of validation. I believe that if a term mapping is derived from the bindings (as it should ideally be), it should also undergo validation. For instance, if the term mapping covers LOINC, the LOINC code should be validated to ensure that people don’t mistakenly assign a systolic code to a diastolic one, for example.
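A sketch of the kind of validation meant here, assuming a lookup table derived from the archetype's term bindings (the table and function are hypothetical; the LOINC codes shown are the real systolic/diastolic BP codes):

```python
# Hypothetical binding table derived from archetype term bindings:
# internal at-code -> expected LOINC code
LOINC_BINDINGS = {
    "at0004": "8480-6",  # Systolic blood pressure
    "at0005": "8462-4",  # Diastolic blood pressure
}

def validate_loinc_mapping(at_code, mapped_loinc):
    """Reject mappings that contradict the binding, e.g. a systolic
    element mistakenly carrying the diastolic LOINC code."""
    expected = LOINC_BINDINGS.get(at_code)
    return expected is None or expected == mapped_loinc

print(validate_loinc_mapping("at0004", "8480-6"))  # True: correct systolic code
print(validate_loinc_mapping("at0004", "8462-4"))  # False: diastolic code on systolic node
```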

Looking ahead, I think it would be wise to also enable binding possibilities within templates. This would allow individuals to annotate their own local terminologies without solely relying on pure archetype bindings. Adding these bindings directly to compositions is quite cumbersome, and I believe that such logic should typically be contained within the template.

Terminology queries are already standardised via AQL, with FHIR support; see Support for AQL MATCHES and TERMINOOLGY in EhrBase? - #4 by vidi42

and are supported in both EhrBase and Better, AFAIK.

Though note some slight confusion on exact syntax in the specs!

We can query both defining_code and mappings in AQL, but querying both for, e.g., a LOINC or SNOMED CT code is cumbersome.

I think treating the defining_code and mappings list as a single list for matching purposes, possibly with an AQL function, might be helpful, especially if we can combine the TerminologyID and term in a single token.

e.g. hasTerm('SNOMED-CT::123456', 'SNOMED-CT::45345'), which searches both defining_code and the mappings list.
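One possible reading of such a hasTerm function, sketched in Python (the 'TERMINOLOGY::code' token format follows the example above; the function itself is hypothetical, not an existing AQL feature):

```python
def has_term(defining_code, mappings, *tokens):
    """Match any 'TERMINOLOGY::code' token against the defining_code
    and the mappings list, treated as one flat list of codes."""
    candidates = {defining_code, *mappings}
    return bool(candidates & set(tokens))

# defining_code is an internal atCode; SNOMED CT and LOINC equivalents ride along as mappings
mappings = ["SNOMED-CT::271649006", "LOINC::8480-6"]
print(has_term("local::at0004", mappings, "SNOMED-CT::271649006", "SNOMED-CT::45345"))  # True
print(has_term("local::at0004", mappings, "ICD-10::I10"))  # False
```

Combining the terminology ID and code in one token keeps the call short and lets the same predicate cover both attributes without the caller caring which one matched.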

The other issue is how to document term mappings that the modeller wants enforced, i.e. carried at run-time. Archetype term bindings are design-time suggestions only, and may not actually be correct in local template use. There may be a place for template-level bindings, but these would still be different from mapping constraints, i.e. design-time recommendations, not actual mapping constraints.

@damoca - have you ever tried constraining mappings in LinkEHR, or do we need a new kind of construct (especially as some of the mappings may be conditional or lookup-driven)? Is this a separate layer, even?

1 Like

I think this has to change.
The at00* codes we suggest and the fields we provide in an archetype are not suggestions either, so why should the code bindings be?
Codings should be handled the same way as the rest of the model; otherwise, AQL “conformance” for terminologies (which is quite important) will never happen.
I think a shift is required here; other standards also enforce these codings, for obvious reasons.
Template bindings should give the option to add not-yet-provided codings to the term mappings (e.g. local ones).
Otherwise, we will have the same problems all over again, just in a lightweight version.
So far, terminologies are the weakest part of openEHR compared to e.g. FHIR; in my experience there is poor consistency between platforms and in how people persist them.
Better’s solution is the best take on this that I have seen so far.

1 Like

I think they have to remain as suggestions, because there will often be a difference of opinion between the ‘correct’ bindings at archetype level and those regarded as correct by e.g. a national standards organisation or an individual app developer. @siljelb may have information on this kind of issue arising in Norway, and we see the same thing in the UK re SNOMED CT.

The other issue is that the exact context of use of an archetype may potentially change the exact run-time mapping required.

I think there is general agreement that FHIR terminology server support is the way forward, and AFAIK both Better and EhrBase support FHIR valueset-based queries.

Exact terminology use is mostly driven by client requirements, TBH, and yes, it is a mess!!

Simplifying multiple term handling both in constraints and queries is a good first step to making this easier though.

1 Like

But isn’t that the same problem we have with archetype fields, descriptions etc., and why we have modelling discussions?
I think for some fields it’s quite clear, e.g. if you look at Observation.status in FHIR.
I’m not saying that all openEHR archetypes should have bindings, but for specific ones it should be quite clear.
Anyway, there should at least be a constraint option at template level for bindings, with the option to override archetype bindings if necessary (because of national requirements or similar).
I think some real-world samples would be nice, since you could also bind to e.g. parent codings at archetype level.
I mean, in the end, SNOMED CT and LOINC were created to have one exact code for X.

I think the recommendations should be rendered into the composition, unless another constraint at template level overrides them.
That would at least provide a common base of terms used across most of the platforms.
Maybe multiple codings from the same term set could also be allowed for a template, like a kind of specialisation?
That way interoperability is preserved term-wise, but the nationally required code is also added.

So the FHIR Observation codes are local FHIR codes, and therefore semantically identical to atCodes. I’m not against alignment with these, but OTOH there are still very many systems using other coding systems, v2 etc.

I’m also not against adding more LOINC bindings to existing archetypes, but this is not always that easy. We recently did some work for openEHR Finland on LOINC mappings that we can share, and there is a big piece of work happening in the US that will help, but again these may not exactly equate to local use.

Yes to template-level bindings but we need IMO a separate mechanism to actually constrain the mappings at run-time - definitely informed by any underlying bindings but not enforced by them.

The critical question for me is whether ‘mappings’ objects can be constrained to enforce population via ADL2, or whether a separate mechanism is preferable.

Does FHIR ConceptMap have a place here?

Basically we need to apply some lookup tables, and perhaps some conditional logic in places.
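For instance, a run-time mapping step driven by a lookup table plus one conditional branch might look like this sketch (the table, rule, and function are invented for illustration; the LOINC codes are the usual systolic/diastolic BP codes, with 8459-0 assumed as the pre-coordinated sitting variant):

```python
# Invented lookup table: (internal code, target terminology) -> external code
LOOKUP = {
    ("at0004", "LOINC"): "8480-6",  # Systolic blood pressure
    ("at0005", "LOINC"): "8462-4",  # Diastolic blood pressure
}

def map_code(at_code, target_terminology, patient_position=None):
    """Conditional logic: when position is known, prefer the
    pre-coordinated sitting-systolic code over the generic one."""
    if (at_code, target_terminology) == ("at0004", "LOINC") and patient_position == "sitting":
        return "8459-0"  # pre-coordinated: systolic BP, sitting
    return LOOKUP.get((at_code, target_terminology))

print(map_code("at0004", "LOINC"))             # 8480-6
print(map_code("at0004", "LOINC", "sitting"))  # 8459-0
```

This is the sense in which the mapping can be conditional: the correct external code depends on data elsewhere in the composition, not just on the node being mapped.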

1 Like

Sure, I just wanted to point out that some code bindings are easier and more obvious than others.

What do you mean by a separate mechanism?
I think the archetype bindings should act as the default, as long as nothing else is specified at template level.
For templates, if bindings are defined, they should be enforced (that’s something you do locally anyway, so you have full control).
They could be named e.g. ‘constrained bindings’ for templates, or whatever.
They also don’t necessarily have to override existing bindings, but I’m not sure about that; it may cause quite some problems (for edge cases where the national requirement differs).
In general, both should be validated (if a termMapping/Coded_Text is provided in the composition); if people want other codings, this should be done as part of the template.

Very good question.

Don’t we already have this logic in the term mappings, with the <, =, > match operators etc.? Or what exactly do you mean?
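For reference, the match attribute on a term mapping takes one of four operator characters in the openEHR RM; a small sketch of their meaning (descriptions paraphrase the data types spec, the helper function is hypothetical):

```python
# TERM_MAPPING.match operators in the openEHR RM
MATCH_OPERATORS = {
    "=": "target term is equivalent to the source term",
    "<": "target term is narrower than the source term",
    ">": "target term is broader than the source term",
    "?": "relationship is unknown",
}

def describe_mapping(source, target, match):
    if match not in MATCH_OPERATORS:
        raise ValueError(f"invalid match operator: {match!r}")
    return f"{source} -[{match}]-> {target}: {MATCH_OPERATORS[match]}"

print(describe_mapping("local::at0004", "LOINC::8480-6", "="))
```

These operators express *how close* a mapping is, but not *which* mappings must be present, which is the gap the discussion above is about.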

Then we should add this as a recommendation to the spec?

ADL2 allows you to do that.


https://archetype-editor.nedap.healthcare/advanced/joost/archetypes/nl.joostholslag::openEHR-EHR-COMPOSITION.Template_with_termapping.v0.0.1/edit_adl

1 Like

In LinkEHR you can certainly constrain both mappings and the defining code, although I think it has never been used.

That would be at modelling time. During instance generation (remember that LinkEHR is in fact an ETL tool), if we want to fill, for example, some mappings for the main code, we can call a REST method and select a result. This was implemented before FHIR was a thing, so it is just a generic call. Then in the expression you can add any additional ETL logic you need.

I agree. This is a very important subject if at some point we want to have a federated network of openEHR systems, as in OMOP. And that brings another element to the discussion: the role of terminology servers.

We are talking about representing mappings between SNOMED CT and LOINC, but as long as we are talking about standard codes (let’s keep extensions aside), we probably should not bother defining those mappings in templates or archetypes; they should be publicly available (and, more importantly, managed) by external, specialised systems. And maybe just store them as mappings in the instances for future reference, if there is a risk that they may change in the future. A terminology server is a fundamental piece of any EHR architecture.

On the other hand, if the mapping or defining code refers only to local codes, they might be useful only for local purposes and for defining local AQL queries. If those codes have to be used for interoperability purposes, they can’t be local codes. In the model definitions they should not be seen as just suggestions, but as part of the shared concept definition.

1 Like

I mean, you could link a term server via URL (which is what you usually do), but in general I think as a first step we should agree on common ground:
a governance model for how we want to approach terminologies overall and in the future, since currently everyone does this differently: name, LOCATABLE, replacing at** codes …

I like bindings for that: they are already there, they can be bound to values (which coded_text cannot), and we can annotate them (at least for ADL2) in the template.
The DV_CODED_TEXT name approach is also nice, but it’s missing tooling support, plus you need to agree which code system to use and add term mappings on top (or not).
Still, including a term server URL here is easier, and I think it’s a more elegant solution, but not an established one.

For the rest we need to agree on governance; for mappings I would suggest e.g. that we use the OMOP vocabularies for now. As for the future, I think we should maybe even reference vocabulary codes directly where they exist, or something similar.

I’m conscious that the title of this topic may be a bit misleading, and that we are in danger of confusing the scope, which is really about how/where to record and query additional term codes alongside internal codes like atCodes. It is not about mappings in general.

This is (IMO) about how to define and record multiple additional term codes, to support querying via those additional terms.

In scope

  1. Where to add additional terms, e.g. LOINC and/or SNOMED CT, to:
  • a node name (LOCATABLE.name/value), alongside the archetypeNodeId
  • a node value, usually DV_CODED_TEXT, in addition to any defining_code; e.g. an internal atCode is used as the value defining_code, but we also want to carry the SNOMED CT equivalent code in the patient record

  2. How do we constrain archetypes/templates to ‘force’ any additional codes to be carried?

  3. Can and should we constrain these additional codes as mandatory, or impose validation?

  4. Can we improve AQL to simplify the querying of terms?

Out of scope

  1. Structural mappings, e.g. FHIR Observation <> openEHR archetype. I think there is general agreement that this is best handled outside archetypes/templates.

However, there is overlap in that those structural mappings will almost certainly include term mappings, so we need to make sure we keep that broader requirement in mind.

  2. Complex conditional term mappings, e.g. how to handle a pre-coordinated code like “Sitting systolic blood pressure”. This probably needs to sit within the structural mapping space, but we do need to consider imported data with a pre-coordinated LOINC term like “Sitting systolic blood pressure”: where does that get stored, if anywhere?
2 Likes

First question: where to store additional terms on ELEMENTs

A. ELEMENT Values

For ELEMENT values this is pretty straightforward - we can use DV_CODED_TEXT.mappings

This example (Example of openEHR term mappings · GitHub) adds term mappings for an internal atCode to a DV_CODED_TEXT, but the approach would apply equally to DV_TEXT, DV_ORDINAL and DV_SCALE.

See NeoEHR for description of the class and attributes.
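As a minimal sketch of what such an instance could look like in canonical-JSON style, built here as a Python dict (field names follow the RM; the exact serialisation details and the example codes are illustrative):

```python
import json

# DV_CODED_TEXT carrying an internal atCode as defining_code,
# with SNOMED CT and LOINC equivalents carried as term mappings
element_value = {
    "_type": "DV_CODED_TEXT",
    "value": "Systolic",
    "defining_code": {"terminology_id": {"value": "local"}, "code_string": "at0004"},
    "mappings": [
        {
            "match": "=",
            "target": {"terminology_id": {"value": "SNOMED-CT"}, "code_string": "271649006"},
        },
        {
            "match": "=",
            "target": {"terminology_id": {"value": "LOINC"}, "code_string": "8480-6"},
        },
    ],
}
print(json.dumps(element_value, indent=2))
```

Note that nothing here constrains which mappings may appear; that enforcement question is exactly what the rest of the thread is about.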

B. ELEMENT Names

If we need to add an additional term to an ELEMENT name, we can again use the mappings attribute, but this time on the ELEMENT name/value attribute.

See (Example of openEHR term mappings · GitHub)


However there is an alternative approach which is to sub-class the name/value DV_TEXT attribute to carry a DV_CODED_TEXT.


It is something we looked at in relation to LOINC-based lab tests, but @borut.fabjan persuaded me it was a bad idea.

I now agree: although legal in the RM, it starts to get very confusing, especially as the name/value (as text) may well be overwritten in templating. It also cannot support more than one coded term, unless you use mappings for the others.

Much simpler to say ‘use mappings’ for this purpose, IMO.

1 Like

Bah. I thought we were discussing how to implement semantic transformations from archetypes to some other formalism :man_facepalming: I’ll need to re-read this. Thanks to everyone for their understanding during the SEC meeting but I prefer that you ask me “what the hell are you talking about?” the next time I talk about completely irrelevant stuff!

‘Term bindings’ vs. ‘term mapping directives’

Term bindings can be recorded in archetypes, but they are suggested mappings that you might want to apply at runtime as term_mappings, or to help with integration. Arriving at ‘consensus agreed’ bindings is actually very difficult in many cases, especially with SNOMED CT, but we are seeing more bindings being added now.

Even where they exist, local or national standard term usage may conflict or need new bindings, or the exact context of use of an archetype may mean that a more appropriate external term should be used.

In a local context we almost certainly do not want to apply every binding suggestion as term_mappings.

So we need the ability to add or overwrite ‘bindings’ at template level but also the ability to decide whether or not to apply those bindings on particular nodes, in particular templates.

We might be able to say that term_bindings applied (or not constrained out) at template level should be regarded as mapping_directives, i.e. actually force these terms to be carried as additional term_mappings, but I’m not sure whether this kind of constraint on bindings is possible, or whether it is sufficient, e.g. to force validation.

1 Like

Nah - we framed it badly!! An easy misunderstanding to make, and there is, of course, huge overlap, in that many of the challenging semantic transforms involve term mismatches.

1 Like

Could you give an example of where we wouldn’t please?

1 Like

Simples!!

  1. I might want to use an archetype that has LOINC bindings, but I don’t want LOINC mappings in patient data, as LOINC is not used in my country.

  2. I do use LOINC, but the LOINC bindings in the archetype have never been quality assured/published.

1 Like