DV_CODED_TEXT with open/extensible set of codes (value set)

That last bit was related to your second paragraph: explicit id4 specializations could be created, and id4 could even be prohibited. This would also give some actual meaning to the occurrences in DV (i.e. if it is defined as alternatives of 1..1 objects you wouldn’t be able to remove “terms” from the subset, but if they are defined as 0..1 then you probably can)

In any case I think the proposed change would be enough :slight_smile:

Sounds good to me.
It has the additional advantage (over my initial proposal with code_list_open in 1.4) that 1.4 and 2.0 are more aligned. (My intention to suggest code_list_open was to keep the 1.4 change minimal and aligned with C_STRING’s list_open)

OET’s “limit-to-list=false” in the described case would then probably best match “required = false / strength=extensible” (unless someone thinks strength=“preferred” is better?)

I think the default if not specified for existing archetypes should be required=true / strength=required.
That’s my understanding of what it means at the moment and I wouldn’t change that. No migration needed, no change in meaning.

Well that’s certainly what the formal meaning is. But is that compatible with @ian.mcnicoll’s requirements that internal code-sets can be ‘expanded’?

Yes, I’m ok with that - it reflects the current position in ADL1.4. It can be the default for existing archetypes where not expressly stated, but we can always change existing archetypes (I assume going from required = false to true would be a breaking change) or decide what is needed for new archetypes.

I agree on this.

As an alternative, could all elements with the combination of DV_CODED_TEXT with a value set plus DV_TEXT have required=false as the default?

I can see why you want this, but technically it would require interpreting a DV_CODED_TEXT with a certain definition differently depending on whether there was a DV_TEXT next to it. But I don’t think we need to do it quite like that - consider that the presence of the DV_TEXT enables you to use the DV_TEXT anyway, regardless of what the DV_CODED_TEXT says. So if the DV_CODED_TEXT says ‘required’ but there is a DV_TEXT as well, then I think the correct interpretation is:

  • if you code this, it must be from these codes
  • or you can use text
  • (but you can’t use some other code)
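That interpretation can be sketched as a small validation function. This is purely illustrative Python, not any actual RM/AOM API; the value set and function names are made up for the example:

```python
# Hypothetical sketch: validating a value against an element that offers
# both a DV_CODED_TEXT (with an internal value set) and a DV_TEXT alternative.
# Names and codes here are illustrative only.

ALLOWED_CODES = {"at1", "at2", "at3"}  # example internal value set

def is_valid(value, is_coded):
    """If the value is coded, it must come from the value set;
    plain text is always allowed because the DV_TEXT alternative exists."""
    if is_coded:
        return value in ALLOWED_CODES  # "it must be from these codes"
    return True                        # "or you can use text"

print(is_valid("at2", is_coded=True))          # True: code from the set
print(is_valid("at9", is_coded=True))          # False: some other code
print(is_valid("free text", is_coded=False))   # True: text is permitted
```

The point the sketch makes: the DV_TEXT branch lets any text through, so the coded branch only ever constrains values that claim to be coded.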

Does this achieve what you want?

But… the entire point of making this change was to get away from the DV_CODED_TEXT + DV_TEXT pattern?

Then I am confused: I read your previous question to be referring to archetypes that do contain a constraint of the form DV_CODED_TEXT + DV_TEXT (i.e. as currently exists)?

Technically (i.e. from a software processing point of view), if there is no DV_TEXT at all, then it means that you can’t create a DV_TEXT instance for that data item, and satisfy the archetype. I don’t see any way out of that…

But maybe I misunderstood the requirement.

The DV_CODED_TEXT + DV_TEXT pattern is generally (only?) used when we identify that an element needs to be coded and there is a set of terms that are generally used, but we can’t be sure that the codes we include in the DV_CODED_TEXT are the only ones for every use case. The idea is that the DV_TEXT would be made into a DV_CODED_TEXT, and new terms/codes added, in the template. We’ve been told that’s not a good pattern to use, for Technical Reasons.

Aha I see. In those cases, you would modify the archetype to:

  • remove the DV_TEXT
  • change the DV_CODED_TEXT terminology constraint to have strength = extensible or strength = preferred

But any cases where the DV_TEXT is meant to stand for an actual text, it needs to be retained.

Note that it will take me a while to make the change to AOM1.4 and AOM2, and then come up with a way to represent it in ADL1.4, and ADL2, but I’ll work on it ASAP.

Agree! :+1::smile:

Hi Silje, yes, but it’s also commonly needed where there is a need to fall back to free text when a coded entry is not available - one example being the causative agent in an allergy, where the drug that caused the allergy is not yet on the drug database (trial drug, foreign product, drug database not yet updated).

You are right that we can currently sub-class a DV_TEXT to DV_CODED_TEXT to make the list extensible but this does get messy and we should find different approaches if we can.

So agree with Thomas’s last statement.

Where we only require DV_CODED_TEXT, then ‘strength’ will clarify that purpose. More importantly, the same rules should apply in templates. We have a current block in AD where, if a DV_TEXT is sub-classed to DV_CODED_TEXT, the DV_TEXT ‘choice’ is removed, so essentially we lose the ‘trick’ that allows us to extend the coded_text list or add free text (where appropriate).

True, but those kinds of elements (almost?) never have internal codes in the archetype. Causative agent of an adverse reaction is a good example of this. This is a pure DV_TEXT element where we would like to code it, but it’s perfectly open for free text.

The cases I’m talking about above are the ones where we always want the elements to be coded, but we’d like templaters to be able to choose between the codes in the archetypes, or their own codes.

Sure, but we also need to understand how these rules flow through to specialisations and templates, and where external codes/valuesets have been defined in the archetype.

I’m not sure I understand the points here, but looking at the initial issue https://openehr.atlassian.net/browse/SPECPR-302, what I see is a modeling problem, not an RM issue. If the codes initially set in the archetype are not for general use, then the node should have an ACNNNN constraint referencing an external terminology/subset. If the name of the subset is generic, implementers could use any subset they want just by configuring the right API for accessing the codes. At design time, what modelers could do is set the ACNNNN code for the coded text and provide a sample subset, instead of a local list of codes inside the archetype itself. The CKM allows you to create subsets/termsets, I think, so that could be a good use case for that feature in the CKM.

Of course I might not understand all the requirements in detail, so I might be missing something here.

@pablo It is both a modelling issue and an RM issue.

This is a good example of where there is a well-established real-world valueset (HL7v2-based) which is widely adopted, so to give maximum value we want to supply that to potential users as-is, easily translatable, without any need to rely on an external API or terminology service. I agree there is a case for making more use of external termsets, and support for FHIR Valuesets is coming up which may be useful, but… there will still be a place for internal codelists, some of which, like this one, may be highly standardised but not universally so. In any case, the same rules would need to be applied to any external valueset - required, extensible etc.

So I agree that in some places we might make more use in the future of external valuesets but the same ‘semantic issue’ remains - is it acceptable to extend or even replace the list of terms that we have supplied?

Ian

Yeah, you are right about this. It doesn’t really matter if the list is defined within or outside the archetype/template. The need is to be able to express the suggestions, and to allow them to be expanded when used.

First, there should be some kind of design criteria for what should be modeled as a local term list and what as an external one, because the issue starts at design time.

Then on the cases with local term lists, if there is still the need of modifying the term list, I think:

  1. the archetype should be specialized, and the new term list should be defined in the specialization
  2. the modeler should be sure the new term list applies to the node semantics, and document that in the archetype (like in an ontology description or comment)
  3. then templates will use the specialized archetypes

I don’t see why this can’t be done with the current RM and modeling tools, that is why I don’t understand the proposed changes to the RM.

In both cases the complete term list for a coded text could be replaced or expanded.

The problem is that if you specialise a terminology constraint, just like for any other constraint, you can only narrow / specialise it. That means only:

  • reducing it, e.g. going from {at1, at2, at3} to {at1, at2} in the child
  • adding proper child codes that are specialisations of existing codes, which has an effect like making the value set bigger, but only with more specialised codes {at1, at1.1, at1.2, at1.3, at2, at3}
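The narrowing rule can be sketched as a simple conformance check: every code in the specialised value set must be either a code from the parent set or a dotted child (specialisation) of one. This is an illustrative sketch of the rule, not actual tooling code:

```python
def conforms(parent_set, child_set):
    """True if the child value set is a valid specialisation of the parent:
    each child code is a parent code, or a dotted specialisation of one
    (e.g. at1.1 specialises at1)."""
    def specialises(code, parent_codes):
        return any(code == p or code.startswith(p + ".") for p in parent_codes)
    return all(specialises(c, parent_set) for c in child_set)

parent = {"at1", "at2", "at3"}
print(conforms(parent, {"at1", "at2"}))                           # True: reduction
print(conforms(parent, {"at1", "at1.1", "at1.2", "at2", "at3"}))  # True: child codes added
print(conforms(parent, {"at1", "at4"}))                           # False: new sibling code
```

The third case is exactly the one modellers want but the rule forbids: adding a genuinely new code (at4) that is not a specialisation of anything in the parent set.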

The need the modellers have is to sometimes be able to treat the initial code set as a suggestion, or preference, without it being a true constraint. In case you feel discomfort at this, join the club :wink: However, there is an unavoidable reality that this does happen pretty frequently.

So we need a way of dealing with it, which is the proposed modification of the C_TERMINOLOGY_CODE constraint type, to mark a value set as being required | extensible | preferred | example (the FHIR settings). If you analyse it carefully, it’s difficult to show how these would even work that well in reality - e.g. what happens if you set something to ‘required’, and later on, you want to break the constraint? And the difference between extensible and preferred sounds ok informally, but in terms of real modelling and real processing… not so much.
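For illustration, the effect of the four FHIR-style strengths on a code outside the defined set could be reduced to a validation outcome. This is a sketch of the intended semantics only; the function and outcome names are made up, and the extensible/preferred distinction shown here is exactly the informal one questioned above:

```python
# Sketch: how the proposed C_TERMINOLOGY_CODE 'strength' settings might
# affect validation of a code against a bound value set.
# Strengths (from FHIR): required | extensible | preferred | example.

def check(code, value_set, strength):
    """Return 'ok', 'warning', or 'error' for a code against a bound set."""
    if code in value_set:
        return "ok"
    if strength == "required":
        return "error"    # codes outside the set are invalid
    if strength == "extensible":
        return "warning"  # allowed, but flagged: use the set if a code fits
    return "ok"           # preferred / example: the set is advisory only

vs = {"at1", "at2"}
print(check("at1", vs, "required"))    # ok
print(check("at9", vs, "required"))    # error
print(check("at9", vs, "extensible"))  # warning
print(check("at9", vs, "example"))     # ok
```

Note how preferred and example collapse to the same outcome here, which illustrates why collapsing extensible and preferred (or preferred and example) is a plausible simplification.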

However, having read a lot of these requirements over the years, I don’t have a better proposal (well, I would probably collapse extensible and preferred, but that’s a detail), so I guess we will do this change.
