DV_CODED_TEXT with open/extensible set of codes (value set)

Aha, I see. In those cases, you would modify the archetype to:

  • remove the DV_TEXT
  • change the DV_CODED_TEXT terminology constraint to have strength = extensible or strength = preferred

But in any case where the DV_TEXT is meant to stand for actual free text, it needs to be retained.

Note that it will take me a while to make the change to AOM1.4 and AOM2, and then come up with a way to represent it in ADL1.4, and ADL2, but I’ll work on it ASAP.

Agree! :+1::smile:

Hi Silje, yes, but this is also commonly needed where there is a need to fall back to free text when a coded entry is not available. One example is the causative agent in an allergy, where the drug that caused the allergy is not yet in the drug database (trial drug, foreign product, drug database not yet updated).

You are right that we can currently sub-class a DV_TEXT to DV_CODED_TEXT to make the list extensible, but this does get messy, and we should find a different approach if we can.

So agree with Thomas’s last statement.

Where we only require DV_CODED_TEXT, then ‘strength’ will clarify that purpose. More importantly, the same rules should apply in templates. We currently have a block in AD where, if a DV_TEXT is sub-classed to DV_CODED_TEXT, the DV_TEXT ‘choice’ is removed, so essentially we lose the ‘trick’ that allows us to extend the coded text list or add free text (where appropriate).

True, but those kinds of elements (almost?) never have internal codes in the archetype. Causative agent of an adverse reaction is a good example of this. This is a pure DV_TEXT element where we would like to code it, but it’s perfectly open for free text.

The cases I’m talking about above are the ones where we always want the elements to be coded, but we’d like templaters to be able to choose between the codes in the archetypes, or their own codes.

Sure, but we also need to understand how these rules flow through to specialisations and templates, and where external codes/value sets have been defined in the archetype.

I’m not sure I understand the points here, but looking at the initial issue https://openehr.atlassian.net/browse/SPECPR-302, what I see is a modeling problem, not an RM issue. If the codes initially set in the archetype are not for general use, then the node should have an ACNNNN constraint referencing an external terminology/subset. If the name of the subset is generic, implementers could use any subset they want just by configuring the right API to access the codes. At design time, what modelers could do is set the ACNNNN code for the coded text and provide a sample subset, instead of a local list of codes inside the archetype itself. I think the CKM allows the creation of subsets/termsets, so that could be a good use case for that feature in the CKM.

Of course I might not understand all the requirements in detail, so I might be missing something here.

@pablo It is both a modelling issue and an RM issue.

This is a good example of where there is a well-established real-world value set (HL7v2-based) which is widely adopted, so to give maximum value we want to supply it to potential users as-is, easily translatable, without any need to rely on an external API or terminology service. I agree there is a case for making more use of external termsets, and support for FHIR ValueSets is coming up, which may be useful, but there will still be a place for internal code lists, some of which, like this one, may be highly standardised but not universally so. In any case, the same rules would need to be applied to any external value set: required, extensible, etc.

So I agree that in some places we might make more use in the future of external valuesets but the same ‘semantic issue’ remains - is it acceptable to extend or even replace the list of terms that we have supplied?


Yeah, you are right about this. It doesn’t really matter if the list is defined within or outside the archetype/template. The need is to be able to express the suggestions, and to allow them to be expanded when used.

First, there should be some kind of design criteria for what should be modeled as a local term list and what as an external one, because the issue starts at design time.

Then, for the cases with local term lists, if there is still a need to modify the term list, I think:

  1. the archetype should be specialized, and the new term list should be defined in the specialization
  2. the modeler should be sure the new term list applies to the node semantics, and document that in the archetype (like in an ontology description or comment)
  3. then templates will use the specialized archetypes

I don’t see why this can’t be done with the current RM and modeling tools, that is why I don’t understand the proposed changes to the RM.

In both cases, the complete term list for a coded text could be replaced or expanded.

The problem is that if you specialise a terminology constraint, just like for any other constraint, you can only narrow / specialise it. That means only:

  • reducing it, e.g. going from {at1, at2, at3} to {at1, at2} in the child
  • adding proper child codes that are specialisations of existing codes, which has the effect of making the value set bigger, but only with more specialised codes, e.g. {at1, at1.1, at1.2, at1.3, at2, at3}
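To make the narrowing rule concrete, here is a small Python sketch (my own hypothetical helper, not from any openEHR library) that checks whether a child value set is a legal specialisation of a parent set, i.e. every child code is either already in the parent set or a specialised descendant of a parent code (at1.1 specialises at1, at1.1.2 specialises at1.1, and so on):

```python
def is_legal_narrowing(parent: set[str], child: set[str]) -> bool:
    """True if every code in `child` is either already in `parent`,
    or is a specialised descendant of a parent code.
    In openEHR at-codes, 'at1.1' specialises 'at1',
    'at1.1.2' specialises 'at1.1', etc."""
    def has_ancestor_in_parent(code: str) -> bool:
        parts = code.split('.')
        # check the code itself and every ancestor prefix:
        # at1.1.2 -> at1.1 -> at1
        return any('.'.join(parts[:i]) in parent
                   for i in range(len(parts), 0, -1))
    return all(has_ancestor_in_parent(c) for c in child)

# reducing the set is legal
print(is_legal_narrowing({'at1', 'at2', 'at3'}, {'at1', 'at2'}))          # True
# adding specialised child codes is legal
print(is_legal_narrowing({'at1', 'at2', 'at3'},
                         {'at1', 'at1.1', 'at1.2', 'at1.3', 'at2', 'at3'}))  # True
# introducing a genuinely new code is not
print(is_legal_narrowing({'at1', 'at2', 'at3'}, {'at9'}))                 # False
```

Exactly as the two bullets say, both legal operations leave every child code semantically covered by the parent set; anything else is ruled out.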

The need the modellers have is to sometimes be able to treat the initial code set as a suggestion, or preference, without it being a true constraint. In case you feel discomfort at this, join the club :wink: However, there is an unavoidable reality that this does happen pretty frequently.

So we need a way of dealing with it, which is the proposed modification of the C_TERMINOLOGY_CODE constraint type, to mark a value set as being required | extensible | preferred | example (the FHIR settings). If you analyse it carefully, it’s difficult to show how these would even work that well in reality - e.g. what happens if you set something to ‘required’, and later on, you want to break the constraint? And the difference between extensible and preferred sounds ok informally, but in terms of real modelling and real processing… not so much.
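To illustrate how loose these semantics are in processing terms, here is a sketch (a hypothetical validator of my own, using the FHIR binding-strength names) of what a machine could actually enforce for each strength; note how everything below ‘required’ collapses into “anything goes, with varying degrees of expectation”:

```python
def code_conforms(code: str, value_set: set[str], strength: str) -> bool:
    """Return True if `code` is acceptable under the given binding strength.
    Only 'required' is mechanically enforceable. In full FHIR, 'extensible'
    additionally obliges you to use a set code when an applicable concept
    exists, but checking that needs terminology reasoning, not set
    membership - which is why 'extensible' vs 'preferred' is hard to
    distinguish in real processing."""
    if strength == 'required':
        return code in value_set
    if strength in ('extensible', 'preferred', 'example'):
        # codes outside the set remain valid data; the strength only
        # signals how strongly the supplied set is recommended
        return True
    raise ValueError(f'unknown binding strength: {strength!r}')

vs = {'at1', 'at2'}
print(code_conforms('at9', vs, 'required'))    # False
print(code_conforms('at9', vs, 'extensible'))  # True - same answer as
print(code_conforms('at9', vs, 'preferred'))   # True - 'preferred'...
```

The sketch makes the point in the paragraph above concrete: once you leave ‘required’, the validator cannot tell the remaining strengths apart, and a later decision to “break” a required constraint forces a model change rather than a data-level exception.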

However, having read a lot of these requirements over the years, I don’t have a better proposal (well, I would probably collapse extensible and preferred, but that’s a detail), so I guess we will do this change.