# How to make mappings easier to query and to document in openEHR

**Category:** [Terminology](https://discourse.openehr.org/c/terminology/59)
**Created:** 2024-04-03 08:49 UTC
**Views:** 599
**Replies:** 33
**URL:** https://discourse.openehr.org/t/how-to-make-mappings-easier-to-query-and-to-document-in-openehr/5063

---

## Post #1 by @SevKohler

Follow-up to: https://discourse.openehr.org/t/ehrbase-storing-and-querying-data-without-an-standalone-archetype/5058/34

---

## Post #2 by @SevKohler

We have the following:

* Codings in DV_CODED_TEXT, which we are all familiar with.
* Term mappings.

These are essential components, and we need a query logic/implementation that can process both of them, to simplify formulating the corresponding AQL queries. There is also the matter of terminology queries embedded in AQL, which needs some standardization. Maybe it would also be useful to agree on using FHIR terminologies as the standard in openEHR?

Secondly, there is the issue of validation. I believe that if a term mapping is derived from the bindings (as it ideally should be), it should also undergo validation. For instance, if the term mapping covers LOINC, the LOINC code should be validated to ensure that people don't mistakenly assign, say, a systolic code to a diastolic field.

Looking ahead, I think it would be wise to also enable binding possibilities within templates. This would allow people to annotate their own local terminologies without relying solely on pure archetype bindings. Adding these bindings directly to compositions is quite cumbersome, and I believe such logic should typically live in the template.

---

## Post #3 by @ian.mcnicoll

Terminology queries are already standardised via AQL with FHIR support, see https://discourse.openehr.org/t/support-for-aql-matches-and-terminoolgy-in-ehrbase/3789/4, and this is supported in both EhrBase and Better, AFAIK. Though note there is some slight confusion about the exact syntax in the specs!
We can query both defining_code and mappings in AQL, but querying both at once, e.g. for a LOINC or SNOMED code, is cumbersome. I think treating the defining_code and the mappings list as a single list for matching purposes, possibly via an AQL function, might be helpful, especially if we can combine the terminology ID and the term in a single token, e.g. `hasTerm('SNOMED-CT::123456', 'SNOMED-CT::45345')`, which would search both the defining_code and the mappings list.

The other issue is how to document term mappings that the modeller wants enforced, i.e. carried at run-time. Archetype term bindings are design-time suggestions only, and may not actually be correct in local template use. There may be a place for template-level bindings, but these would still be different from mapping constraints: design-time recommendations, not actual mapping constraints.

@damoca, have you ever tried constraining mappings in LinkEHR, or do we need a new kind of construct (especially as some of the mappings may be conditional or lookup-driven)? Is this a separate layer, even?

---

## Post #4 by @SevKohler

[quote="ian.mcnicoll, post:3, topic:5063"]
The other issue is how to document term mappings that the modeller wants enforced to be carried at run-time. Archetype term bindings are design-time suggestions only, and may not actually be correct in local template use. There may be a place for template -level bindings but these would still be different from mapping constraints i.e design-time recommendations .not actual mapping constraints.
[/quote]

I think this has to change. The at00* codes we suggest and the fields we provide in an archetype are not suggestions either, so why should the code bindings be? Codings should be handled just like the rest of the model; otherwise, AQL "conformance" for terminologies (which is quite important) will never happen. I think a shift is required here; other standards also enforce these codings, for obvious reasons.
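To make the querying problem concrete, here is an editorial sketch in Python. The data structure is a simplified stand-in for a DV_CODED_TEXT, and the `has_term` function mimics the hypothetical `hasTerm(...)` AQL shorthand discussed above; the codes and field layout are illustrative assumptions, not platform behaviour.

```python
# Sketch of the querying problem: a coded value may carry a code either in
# defining_code or in the mappings list, so a matcher (and an AQL engine)
# has to check both paths. The hasTerm(...) shorthand is a proposal from
# this thread, not standard AQL.

# A DV_CODED_TEXT-like structure as a plain dict (illustrative only).
coded_value = {
    "value": "Systolic blood pressure",
    "defining_code": {"terminology_id": "local", "code_string": "at0004"},
    "mappings": [
        {"target": {"terminology_id": "LOINC", "code_string": "8480-6"}},
        {"target": {"terminology_id": "SNOMED-CT", "code_string": "271649006"}},
    ],
}

def has_term(dv, *tokens):
    """Return True if any 'TERMINOLOGY::code' token matches either the
    defining_code or any entry in the mappings list."""
    codes = set()
    dc = dv.get("defining_code")
    if dc:
        codes.add(f"{dc['terminology_id']}::{dc['code_string']}")
    for m in dv.get("mappings", []):
        t = m["target"]
        codes.add(f"{t['terminology_id']}::{t['code_string']}")
    return any(tok in codes for tok in tokens)

print(has_term(coded_value, "LOINC::8480-6"))      # True: found in mappings
print(has_term(coded_value, "SNOMED-CT::123456"))  # False: not present
```

A query engine implementing the proposed function could treat defining_code and mappings as one searchable list in exactly this way, which is what would make a single-token AQL predicate feasible.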
Template bindings should give the option to add not-yet-provided codings to the term mappings (e.g. local ones). Otherwise we will have the same problems all over again, just in a lighter-weight version. So far, terminologies are the weakest part of openEHR compared to e.g. FHIR; in my experience there is poor consistency between platforms and in how people persist them. Better's solution is the best take on this that I have seen so far.

---

## Post #5 by @ian.mcnicoll

[quote="SevKohler, post:4, topic:5063"]
The at00* we suggest and the fields we provide in an archetype are also not suggestions, so why should the code bindings be ?
[/quote]

I think they have to remain suggestions, because there will often be a difference of opinion between the 'correct' bindings at archetype level and those regarded as correct by e.g. a national standards organisation or an individual app developer. @siljelb may have information on this kind of issue arising in Norway, and we see the same thing in the UK re SNOMED. The other issue is that the exact context of use of an archetype may change the exact run-time mapping required.

I think there is general agreement that FHIR TS support is the way forward, and AFAIK both Better and Ehrbase support FHIR ValueSet-based queries. Exact terminology use is mostly driven by client requirements, TBH, and yes, it is a mess!! Simplifying multiple-term handling, both in constraints and in queries, is a good first step towards making this easier, though.

---

## Post #6 by @SevKohler

[quote="ian.mcnicoll, post:5, topic:5063"]
I think they have to remain as suggestions because there will often be a difference of opinion on the 'correct' bindings at archetype level and those that are regarded as correct by e.g a national standards organisation or individual app developer. @siljelb may have information on this kind of issue raising in Norway and we see the same thing in UK re SNOMED.
[/quote]

But isn't that the same problem we have with archetype fields, descriptions etc., and the reason we have modelling discussions? I think for some fields it's quite clear, e.g. if you look at Observation.status in FHIR. I am not saying that all openEHR archetypes should have bindings, but for specific ones it should be quite clear. In any case, there should at least be a constraint option at template level, as bindings, with the option to override archetype bindings if necessary (because of national requirements or similar).

I think some real-world samples would be nice, since at archetype level you could also bind e.g. parent codings. In the end, SNOMED and LOINC were created precisely to have one exact code for X. I think the recommendations should be rendered into the composition, unless some other constraint at template level overrides them. That would at least provide a common base of terms used across most platforms. Maybe multiple codings from the same termset could also be allowed in a template, as a kind of specialization? Then the interoperable term is present, but the nationally required code is added as well.

---

## Post #7 by @ian.mcnicoll

So the FHIR Observation codes are local FHIR codes, and therefore semantically identical to atCodes. I'm not against alignment with these, but OTOH there are still very many systems using other coding systems, v2 etc.

I'm also not against adding more LOINC bindings to existing archetypes, but this is not always that easy. We recently did some work for openEHR Finland on LOINC mappings that we can share, and there is a big piece of work happening in the US that will help, but again these may not exactly equate to local use.

Yes to template-level bindings, but IMO we need a separate mechanism to actually constrain the mappings at run-time: definitely informed by any underlying bindings, but not enforced by them.
The critical question for me is whether 'mappings' objects can be constrained via ADL2 to enforce population, or whether a separate mechanism is preferable. Does FHIR ConceptMap have a place here? Basically we need to apply some lookup tables, and perhaps some conditional logic in places.

---

## Post #8 by @SevKohler

[quote="ian.mcnicoll, post:7, topic:5063"]
So the FHIR Observation codes are local FHIR codes so semantically identical to atCodes. I'm not against alignment with these but OTOH there are still very many systems using other coding systems v2 etc.
[/quote]

Sure, I just wanted to point out that some code bindings are easier and more obvious than others.

[quote="ian.mcnicoll, post:7, topic:5063"]
I'm also not against adding more LOINC bindings to existing archetypes but this is not always that easy. We recently did some work for openEHR Finland on LOINC mappings that we can share and there is a big piece of work happening in the US that will help but again these many not exactly equate to local use. Yes to template-level bindings but we need IMO a separate mechanism to actually constrain the mappings at run-time - definitely informed by any underlying bindings but not enforced by them.
[/quote]

What do you mean by a separate mechanism? I think the archetype bindings should act as the default, as long as nothing else is specified at template level. For templates, if bound, they should be enforced (that is something you do locally anyway, so you have full control). They could be called e.g. constrained bindings for templates, or whatever. They don't necessarily have to override existing bindings, but I'm not sure about that; it may cause quite some problems (for edge cases where the national requirement differs). In general, both should be validated (if a term mapping / DV_CODED_TEXT is provided in the composition); if people want other codings, this should be done as part of the template.
[quote="ian.mcnicoll, post:7, topic:5063"]
The critical question for me is whether 'mappings' objects can be constrained to enforce population via ADL2, or whether separate mechanism is preferable.
[/quote]

Very good question.

[quote="ian.mcnicoll, post:7, topic:5063"]
Does FHIR ConceptMap have a place here?
[/quote]

Don't we already have this logic in the term mappings with the match operators (<, =, > etc.)? Or what exactly do you mean?

---

## Post #9 by @SevKohler

[quote="ian.mcnicoll, post:5, topic:5063"]
I think there is general agreement that FHIR TS support is the way forward and AFAIK both Better and Ehrbase support FHIR Valueset based queries.
[/quote]

Then we should add this as a recommendation to the spec?

---

## Post #10 by @joostholslag

[quote="SevKohler, post:2, topic:5063"]
I think it would be wise to also enable binding possibilities within templates
[/quote]

ADL2 allows you to do that.

![image|690x345](upload://mDzSJo8m3iLuc0GqZpMZ6GE6arW.png)

https://archetype-editor.nedap.healthcare/advanced/joost/archetypes/nl.joostholslag::openEHR-EHR-COMPOSITION.Template_with_termapping.v0.0.1/edit_adl

---

## Post #11 by @damoca

[quote="ian.mcnicoll, post:3, topic:5063"]
@damoca - have you ever tried constraining mappings in LinkEHR or do we need a new kind of construct (especially as some of the mappings may be conditional or lookup driven. Is this a seperate layer even?
[/quote]

In LinkEHR you can certainly constrain both the mappings and the defining code, although I think it has never been used.

![image|690x305](upload://dZcwBk3YaOgByJf8ZnFCpBNYTj4.png)

That would be at modelling time. During instance generation (remember that LinkEHR is in fact an ETL tool), if we want to fill e.g. some mappings for the main code, we can call a REST method and select a result. This was implemented before FHIR was a thing, so it is just a generic call. Then in the expression you can add any additional ETL logic you need.
![image|690x377](upload://A0BFM4D5mJmPznUCzeRKbDvRLn6.png)

---

## Post #12 by @damoca

[quote="SevKohler, post:4, topic:5063"]
Codings should be equally handled as the rest of the model, otherwise, AQL "conformance" for terminologies (which is quite important) will never happen.
[/quote]

I agree. This is a very important subject if at some point we want to have a federated network of openEHR systems, as in OMOP. And that brings another element to the discussion: the role of terminology servers. We are talking about representing mappings between SNOMED CT and LOINC, but as long as we are talking about standard codes (let's keep extensions aside), we should probably not bother defining those mappings in templates or archetypes; they should be publicly available (and, more importantly, managed) by external, specialized systems. And maybe just store them as mappings in the instances for future reference, if there is a risk that they may change in the future. A terminology server is a fundamental piece of any EHR architecture.

On the other hand, if the mapping or defining code points to purely local codes, they may only be useful for local purposes and for defining local AQLs. If those codes are to be used for interoperability purposes, they can't be local codes. In the model definitions they should not be seen as just suggestions, but as part of the shared concept definition.

---

## Post #13 by @SevKohler

I mean, you could link a terminology server via URL (which is what you usually do), but in general I think as a first step we should agree on a common ground: a governance model for how we want to approach terminologies overall and in the future, since currently everyone does this differently (name, locatable, replacing at** codes, ...). I like bindings for that; they are already there, they can be bound to values (which coded_text cannot), and we can annotate them (at least in ADL2) in the template.
DV_CODED_TEXT name is also nice, but it is missing tooling support, plus you need to agree on which code system to use and add term mappings on top (or not). Including a term server URL here is easier, and I think it is a more elegant solution, but not an established one. For the rest we need to agree on a governance model; for mappings I would suggest, for example, that we use the vocabularies of OMOP for now, and in the future I think we should maybe even reference vocabulary codes directly where they exist.

---

## Post #14 by @ian.mcnicoll

I'm conscious that the title of this topic may be a bit misleading, and that we are in danger of confusing the scope, which is really about how and where to record and query additional term codes alongside internal codes like atCodes. It is not about mappings in general. This is (IMO) about how to define and record additional, multiple term codes, to support querying via those additional terms.

In scope:

1. Where to add additional terms, e.g. LOINC and/or SNOMED, to:
   - a node name (LOCATABLE.name/value), alongside the archetypeNodeId
   - a node value, usually DV_CODED_TEXT, in addition to any defining_code, e.g. an internal atCode is used as the value's defining_code but we also want to carry the SNOMED equivalent code in the patient record
2. How do we constrain archetypes/templates to 'force' any additional codes to be carried?
3. Can and should we constrain these additional codes as mandatory, or impose validation?
4. Can we improve AQL to simplify the querying of terms?

Out of scope:

1. Structural mappings, e.g. FHIR Observation <> openEHR archetype. I think there is general agreement that this is best handled outside archetypes/templates. However, there is overlap, in that those structural mappings will almost certainly include term mappings, so we need to keep that broader requirement in mind.
2. Complex conditional term mappings, e.g.
how to handle a pre-coordinated code like "Sitting systolic blood pressure". This probably needs to sit within the structural-mapping space, but we do need to consider imported data with a pre-coordinated LOINC term like "Sitting systolic blood pressure": where does that get stored, if anywhere?

---

## Post #15 by @ian.mcnicoll

First question: **where** to store additional terms on ELEMENTs.

### A. ELEMENT values

For ELEMENT values this is pretty straightforward: we can use DV_CODED_TEXT.mappings. This example (https://gist.github.com/freshehr/7f33e3621812b5ae6ca99edbbdab3332) adds term mappings for an internal atCode to a DV_CODED_TEXT, but the approach would apply equally to DV_TEXT, DV_ORDINAL and DV_SCALE. See https://neoehr.com/openehr/uml/rm110/c/dv-coded-text for a description of the class and attributes.

### B. ELEMENT names

If we need to add an additional term to an ELEMENT name, we can again use the `mappings` attribute, but this time on the ELEMENT `name/value` attribute. See https://gist.github.com/freshehr/7f33e3621812b5ae6ca99edbbdab3332

![image|363x399](upload://1QlJ75P09BuAfZ7YmSzt5toPWqQ.png)

However, there is an alternative approach, which is to sub-class the name/value DV_TEXT attribute to carry a DV_CODED_TEXT.

![image|367x232](upload://oIThAkYh9n1s0blrsSV0f3iak8i.png)

This is something we looked at in relation to LOINC-based lab tests, but @borut.fabjan persuaded me it was a bad idea. I now agree: although legal in the RM, it starts to get very confusing, especially as the name/value (as text) may well be overwritten in templating. It also cannot support more than one coded term, unless you use mappings for the others. Much simpler to say 'use mappings' for this purpose, IMO.

---

## Post #16 by @Seref

Bah. I thought we were discussing how to implement semantic transformations from archetypes to some other formalism :man_facepalming: I'll need to re-read this.
Thanks to everyone for their understanding during the SEC meeting, but I would prefer that you ask me "what the hell are you talking about?" the next time I talk about completely irrelevant stuff!

---

## Post #17 by @ian.mcnicoll

### 'Term bindings' vs. 'term mapping directives'

Term bindings can be recorded in archetypes, but they are suggested mappings that you might want to apply at runtime as term_mappings, or to help with integration. Arriving at 'consensus agreed' bindings is actually very difficult in many cases, especially with SNOMED CT, though we are seeing more bindings being added now. Even where they exist, local or national standard term usage may conflict or need new bindings, or the exact context of use of an archetype may mean that a more appropriate external term should be used.

In a local context we almost certainly do not want to apply **every** binding suggestion as term_mappings. So we need the ability to add or override 'bindings' at template level, but also the ability to decide whether or not to apply those bindings on particular nodes, in particular templates.

We might be able to say that term_bindings applied (or not constrained out) at template level should be regarded as mapping directives, i.e. actually force these terms to be carried as additional term_mappings, but I'm not sure whether this kind of constraint on bindings is possible, or whether it is sufficient e.g. to force validation.

---

## Post #18 by @ian.mcnicoll

Nah, we framed it badly!! An easy misunderstanding to make, and there is, of course, huge overlap, in that many of the challenging semantic transforms involve term mismatches.

---

## Post #19 by @joostholslag

[quote="ian.mcnicoll, post:17, topic:5063"]
In a local context we almost certainly do not want to apply **every** binding suggestion as term_mappings.
[/quote]

Could you give an example of where we wouldn't, please?

---

## Post #20 by @ian.mcnicoll

Simples!!

1.
I might want to use an archetype that has LOINC bindings, but I don't want LOINC mappings in patient data because LOINC is not used in my country.
2. I do use LOINC, but the LOINC bindings in the archetype have never been quality assured/published.

---

## Post #21 by @joostholslag

[quote="ian.mcnicoll, post:20, topic:5063"]
1. I might want to use an archetype that has LOINC bindings but I don't want LOINC mappings in patient data as LOINC is not used in my country 2. I do use LOINC but the LOINC bindings in the archetype have never been quality assured/ published.
[/quote]

Ad 1: there is not much against storing it anyway, right? For example, policy might change to work with LOINC, and historical data with LOINC codes would then be a plus. Or you might share data internationally with an institution that does appreciate LOINC codings.

Ad 2: I think LOINC bindings that have not been published (you mean the code itself, right, not the binding? how does that work in LOINC?) should probably not end up in archetypes in the first place.

My gut says you're right that it's very useful to have term bindings as suggestions only. But I'd like to get a better feel for the cases. And I also expect we could do more term mappings at archetype level (at least if you disregard the licensing problem).

---

## Post #22 by @SevKohler

Big thanks for the whole summary, @ian.mcnicoll!

I think the term mappings are the way to go, since they also cover the values, as Ian pointed out. We should make this a best practice (some vendors already do that). @damoca, so instead of the defining code, using the term mapping seems to be a better fit.
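As an editorial illustration of the approach favoured here (an internal atCode as defining_code, with external codes carried as term mappings, per the gist linked in post #15), the following Python sketch builds a canonical-JSON-style DV_CODED_TEXT. The concrete codes and labels are assumptions for illustration, not taken from the thread's examples.

```python
import json

# Sketch of a DV_CODED_TEXT carrying an internal atCode as defining_code
# plus a LOINC code as a TERM_MAPPING, in canonical-JSON style.
# Codes and labels are illustrative assumptions.
dv_coded_text = {
    "_type": "DV_CODED_TEXT",
    "value": "Systolic",
    "defining_code": {
        "_type": "CODE_PHRASE",
        "terminology_id": {"_type": "TERMINOLOGY_ID", "value": "local"},
        "code_string": "at0004",
    },
    "mappings": [
        {
            "_type": "TERM_MAPPING",
            # TERM_MAPPING.match is one of '>', '=', '<', '?'
            "match": "=",
            "target": {
                "_type": "CODE_PHRASE",
                "terminology_id": {"_type": "TERMINOLOGY_ID", "value": "LOINC"},
                "code_string": "8480-6",
            },
        }
    ],
}

print(json.dumps(dv_coded_text, indent=2))
```

Because the external code lives in `mappings`, the same pattern works unchanged for DV_TEXT, DV_ORDINAL and DV_SCALE values, which is exactly why the thread converges on mappings rather than sub-classing the name attribute.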
---

## Post #23 by @SevKohler

[quote="ian.mcnicoll, post:17, topic:5063"]
We might be able to say that term_bindings applied (or not constrained out) at template level should be regarded as mapping_directives i.e actually force these terms to be carried additional term_mappings but I'm not sure if we this kind of constraint on bindings is possible, or whether it is sufficient e.g to force validation.
[/quote]

I think the important point here is that, in general, we should find a way to apply terms (e.g. term mappings) as part of the model; otherwise it will be hard to maintain and work with if this is always added later on, e.g. as part of the application that creates the compositions. I think that will lead to quite a few inconsistencies. Term bindings are something we have already established, and they carry the potential to do this, at least in theory. I think we either have to come up with something similar or extend their usage, as said above. As for the constraints: ADL2 already supports bindings at template level, right?

@joostholslag, I mean that if we end up having the option (mapping directives), using these bindings as suggestions or not is for me a simple setting in the platform config. Also, with directives you can edit them.

---

## Post #24 by @damoca

[quote="SevKohler, post:22, topic:5063"]
@damoca so instead of defining code, using the term mapping seems to be a better fit.
[/quote]

In fact, according to the specifications, defining_code should only be used if the DV_CODED_TEXT.value is exactly the rubric of the terminology code. Otherwise, DV_TEXT and mappings must be used.

![image|690x236](upload://7XMZDFd1lIS05Ev59dhfoQxPsZP.png)

---

## Post #25 by @SevKohler

Exactly. For e.g. a problem/diagnosis name we go for DV_CODED_TEXT; I don't think it makes sense there to save a plain text and add a term mapping to it. Maybe attach a term mapping for LOINC, for example. For anything else we go with term mappings, since they cover it, e.g. systolic BP, DV_ORDINALs etc.
Ah, I misunderstood what you wrote.

---

## Post #26 by @ian.mcnicoll

> In fact, according to the specifications, defining_code should only be used if the DV_CODED_TEXT.value is exactly the rubric of the terminology code. Otherwise, DV_TEXT and mappings must be used.

Actually, I think that is incorrect, especially now that we can also carry the preferred term. We frequently have situations where we have modelled e.g. a list of SNOMED terms, but the clinicians want a slightly different UI. As long as that is done under 'informatics control', I think it is more correct to carry the SNOMED code as the defining_code, alongside the text value. That is also identical (IMO!) to the CodeableConcept approach in FHIR.

---

## Post #27 by @damoca

Yes, it is probably too strict for most terminologies, but SNOMED CT tends to be more permissive, not thinking only of the FSN or the preferred term but accepting any synonym (which could be in the national/local extension) as the value. But of course the specifications could be clearer around this.

---

## Post #28 by @sebastian.iancu

[quote="ian.mcnicoll, post:26, topic:5063, full:true"]
> In fact, according to the specifications, defining_code should only be used if the DV_CODED_TEXT.value is exactly the rubric of the terminology code. Otherwise, DV_TEXT and mappings must be used.

Actually I think that is incorrect, especially now that we can also carry preferred term. We frequently have situations where we have modelled e.g a list of SNOMED terms but the clinicians want a slightly different UI. As long as that is done under 'informatics control' I think it is more correct to carry the SNOMED code as the defining_code but alongside the text value. That is also identical (IMO!) to the CodeableConcept approach in FHIR
[/quote]

I don't see exactly where it is so explicitly defined that the `value` must correspond to the `code`.
I do agree with @ian.mcnicoll that it should also consider the preferred term. In fact I read in the specs (at https://specifications.openehr.org/releases/RM/latest/data_types.html#_design_2) that:

> The model of `DV_CODED_TEXT` is designed to capture the *actual* term chosen by the user or software at runtime, i.e. preferred term or synonym (for terminologies supporting synonyms), or a post-coordination of underlying distinct terms if an expression was chosen as the term (such as an expression supported by SNOMED CT). A `DV_CODED_TEXT` instance is used if the final textual value chosen by the user is the exact term text (preferred or other) returned by the terminology service for the key (i.e. `code_string` value). If the user makes even the slightest modification during data entry, a mapping (see [Section 5.1.5](https://specifications.openehr.org/releases/RM/latest/data_types.html#_mappings)) to a `DV_TEXT` should be used instead.

---

## Post #29 by @sebastian.iancu

[quote="ian.mcnicoll, post:14, topic:5063"]
In scope ... 4. Can we improve AQL to simplifying the querying of terms?
[/quote]

For readability, I moved some AQL-relevant posts to discourse.openehr.org/t/improve-aql-to-simplifying-the-querying-of-terms/5120 to continue there on AQL specifics.

---

## Post #30 by @ian.mcnicoll

>> If the user makes even the slightest modification during data entry, a mapping (see [Section 5.1.5](https://specifications.openehr.org/releases/RM/latest/data_types.html#_mappings)) to a `DV_TEXT` should be used instead.

That is the part of the spec that I think should be reviewed.

---

## Post #31 by @sebastian.iancu

I am not sure I understand how you see this being used, then. Are you suggesting we 'decouple' the value from the code (or preferred term)? How should we ensure that the meaning of the code semantically matches the value entered? How should any validation work if the code says 'A' but the value is actually 'B'? Which one is 'wrong', if you get to the point of validating?
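The validation rule being debated here can be made concrete with a small editorial sketch. It implements the DV_CODED_TEXT design note quoted above: the textual value is valid only if it exactly matches a term text (preferred term or synonym) known for the code. The lookup table stands in for a real terminology server, and its contents are invented for illustration.

```python
# Sketch of the spec rule under discussion: a DV_CODED_TEXT value must be an
# exact term text returned by the terminology service for the code; anything
# else should be recorded as DV_TEXT plus a mapping. The RUBRICS table is a
# stand-in for a terminology server; codes and synonyms are assumptions.
RUBRICS = {
    ("SNOMED-CT", "271649006"): {"Systolic blood pressure", "Systolic BP"},
}

def coded_text_is_valid(value, terminology_id, code_string):
    """True if 'value' matches a known term text (preferred term or synonym)
    for the given code, per the DV_CODED_TEXT design note quoted above."""
    return value in RUBRICS.get((terminology_id, code_string), set())

print(coded_text_is_valid("Systolic blood pressure", "SNOMED-CT", "271649006"))  # True
print(coded_text_is_valid("Sys. BP (sitting)", "SNOMED-CT", "271649006"))        # False
```

Under the strict reading of the spec, the second case would have to become a DV_TEXT with a term mapping; the position argued in post #32 below is that a locally templated UI label should nevertheless be allowed to keep the code as defining_code.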
---

## Post #32 by @ian.mcnicoll

I think the decision on whether the local UI text is a semantic match for any associated underlying term has to be the responsibility of the implementer. Particularly as we make increasing use of SNOMED and LOINC (a good thing), there will also be a need to localise text terms in the UI. We obviously need to record exactly what the user entered, i.e. the DV_TEXT.value, but I would argue that if the system closely couples the UI value and an associated term, then it is legitimate for that term to be regarded as the defining_code. The user has no control over this.

So, in an End of Life scenario, the UI has a field: 'Does the patient wish to be resuscitated Y/N', but under the hood we use a CPR decision element in the Advance Intervention archetype, and let's say the answer is 'No'.

![image|631x500](upload://vfBS6KDfDguTvia7XAD3dqhdEDM.png)

So I definitely need to record 'No' as the text value, but where do I put the SNOMED CT code which was indirectly selected by the user, because of system design choices? I would argue that the template designer is forcing selection of the SNOMED code, and therefore this is legitimately the defining_code, and not just another mapping. I would also like to be able to assert those local terms in the template, i.e. constrain the local terms so that they are part of the templated definition, not left to agreement with the UI devs.

---

## Post #33 by @SevKohler

[quote="ian.mcnicoll, post:26, topic:5063"]
modelled e.g a list of SNOMED terms but the clinicians want a slightly different UI. As long as that is done under 'informatics control' I think it is more correct to carry the SNOMED code as the defining_code but alongside the text value. That is also identical (IMO!) to the CodeableConcept approach in FHIR
[/quote]

Yeah, we had this problem in several projects, where the terminology server in the openEHR platform validated the value and failed because we e.g. left out the **(finding)** suffix of the SNOMED term "fever".
In the FHIR Bridge, when we map a FHIR coding we always GET the term server and replace the existing displays provided by FHIR, because of that.

---

## Post #34 by @SevKohler

[quote="sebastian.iancu, post:31, topic:5063"]
How should we ensure that the meaning of the code matches semantically the value entered? how should any validation work if the code is saying 'A' but the value is rather a 'B' - which one is 'wrong', if you get to a point of validating?
[/quote]

Usually you have a fixed set of answers for these questions. They are provided by the model; if modellers make a mistake here, matching the "No" string to a "Yes" SNOMED code, that is a human error we cannot prevent. I think that's a risk we can take; human error on the modelling side isn't something we can prevent anyway.

Just a rough thought: what if we used term_mappings instead? When we annotate a text like "No" with a SNOMED code, in the end that is exactly the same as a term mapping, isn't it? Would that not be more consistent, make the AQL part easier, let us use the model bindings (as suggested above), and also allow annotating LOINC?

---

**Canonical:** https://discourse.openehr.org/t/how-to-make-mappings-easier-to-query-and-to-document-in-openehr/5063
**Original content:** https://discourse.openehr.org/t/how-to-make-mappings-easier-to-query-and-to-document-in-openehr/5063