Validation of coded_text values, especially with external terminology

I am seeing a difference in interpretation of whether and how a coded_text/value should be validated

e.g. EHRbase does full validation on


Better only validates

but NOT value

Full validation, particularly against a default value, can be problematic in international templates, because we generally want to add the coded value (rubric) for human meaning, but in international settings this text may well be translated (or not!).


The only issue in EHRbase is that if you don’t have the translations of the terminology into the current language, the value validation will fail. I faced that issue while defining test cases and data sets for it.

That is not really an issue, but it generates a big overhead of work, because if this is used in China or Uruguay we need to translate every single terminology into Chinese or Spanish. On the other hand, you still need to do that in any implementation, because at some GUI you need to actually select a value that will be in your local language. I guess this is only an issue for testing purposes, not for a production environment, and it is a stricter approach IMO.
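To make the difference between the two behaviours concrete, here is a minimal sketch of "lenient" (code-only) vs "strict" (code + rubric) validation of a coded text. The class names, the `ALLOWED` constraint table, and the rubrics are illustrative assumptions, not the actual RM API or either product's implementation:

```python
from dataclasses import dataclass

@dataclass
class CodePhrase:
    terminology_id: str
    code_string: str

@dataclass
class DvCodedText:
    value: str                 # the human-readable rubric
    defining_code: CodePhrase

# Hypothetical constraint taken from a template: one allowed code,
# bound to its English rubric.
ALLOWED = {("local", "at0005"): "Sitting"}

def validate_lenient(ct: DvCodedText) -> bool:
    """Check only terminology_id and code_string (Better-style, per this thread)."""
    key = (ct.defining_code.terminology_id, ct.defining_code.code_string)
    return key in ALLOWED

def validate_strict(ct: DvCodedText) -> bool:
    """Additionally require the rubric to match (EHRbase-style, per this thread)."""
    key = (ct.defining_code.terminology_id, ct.defining_code.code_string)
    return ALLOWED.get(key) == ct.value

english = DvCodedText("Sitting", CodePhrase("local", "at0005"))
spanish = DvCodedText("Sentado", CodePhrase("local", "at0005"))  # translated rubric

# Both pass the lenient check; the translated rubric fails the strict one.
assert validate_lenient(english) and validate_lenient(spanish)
assert validate_strict(english) and not validate_strict(spanish)
```

This is exactly the translation problem described above: the Spanish data set is semantically correct (same code, same terminology), but a strict rubric comparison against the template's default language rejects it.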


I would say value validation is desirable, but much harder than it seems at first sight. As you say, translations may occur, and that doesn’t mean the value is wrong. The same happens if a synonym is used: it’s not wrong, but some systems may want to enforce a preferred term. In theory all of this is doable in ADL, but it requires the terms to be resolved beforehand (and the alternatives to be declared explicitly), which not every tool out there can do. I’m not sure it can be done with domain types, though.
So for me it’s more a tooling thing than a specification thing.
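The synonym/preferred-term point above can be sketched as a value check against a table of acceptable designations per code. The `DESIGNATIONS` table, the SNOMED CT code, and the rubrics are hypothetical examples for illustration; real systems would get this from a terminology service:

```python
# Hypothetical designation table: for each code, the preferred term plus
# the set of acceptable rubrics (synonyms and translations).
DESIGNATIONS = {
    ("SNOMED-CT", "33586001"): {
        "preferred": "Sitting position",
        "acceptable": {"Sitting position", "Sitting", "Sentado"},
    },
}

def validate_value(terminology_id: str, code: str, value: str,
                   enforce_preferred: bool = False) -> bool:
    """Accept any known designation, or only the preferred term if required."""
    entry = DESIGNATIONS.get((terminology_id, code))
    if entry is None:
        return False  # unknown code
    if enforce_preferred:
        return value == entry["preferred"]
    return value in entry["acceptable"]

# A Spanish synonym is accepted in the lenient mode...
assert validate_value("SNOMED-CT", "33586001", "Sentado")
# ...but rejected when the system enforces the preferred term.
assert not validate_value("SNOMED-CT", "33586001", "Sentado", enforce_preferred=True)
```

The hard part, as the post says, is not the check itself but having the designation table resolved and distributed beforehand.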

Agree - it is the internationalisation that starts to create issues, particularly if we add local SNOMED-coded term sets and default values. For human sense-making we really have to add the text values, but I think we should suggest that the rubrics of any non-‘local’ codes should not be validated. This is really the job of an external terminology service, if it can be done at all.

I don’t think EHRbase is actually validating against “external” terminologies; I think it is only validating values from internal terminologies, that is, the openEHR terminology and coded texts defined in archetypes/templates. It actually forced me to fix some test data sets I had created that were not so strict 🙂

For that scope, I don’t see an issue. External terminologies are a different problem.

It is validating where we set default values or hard-wired SNOMED CT value sets in the template - see

As said: if codes are defined in the template, it will validate the value. Though I don’t think hardwired codes should be in the OPT; they should reference an external subset via a terminology constraint node (acNNNN), even if that is a one-code subset.

I guess we need to discuss whether that is good modelling practice and what is preferred. I guess the current OPT was a quick solution, since defining subsets might require other systems/services. But for testing purposes those could just be defined as CSV files distributed alongside the OPT, so implementers can load all the files on their respective platforms without problems.
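The CSV-alongside-the-OPT idea could be sketched as below. The column layout (`code,terminology,rubric`) and the idea of keying the file by an ac-code are assumptions for illustration; there is no defined openEHR artefact for this:

```python
import csv
import io

# Hypothetical one-file-per-acNNNN subset, shipped next to the OPT.
SUBSET_CSV = """code,terminology,rubric
at0005,local,Sitting
at0006,local,Standing
"""

def load_subset(text: str) -> dict:
    """Load an acNNNN-style value set from CSV into a lookup dict."""
    reader = csv.DictReader(io.StringIO(text))
    return {(row["terminology"], row["code"]): row["rubric"] for row in reader}

def code_in_subset(subset: dict, terminology: str, code: str) -> bool:
    """Code-level validation against the distributed subset (rubric not compared)."""
    return (terminology, code) in subset

subset = load_subset(SUBSET_CSV)
assert code_in_subset(subset, "local", "at0005")
assert not code_in_subset(subset, "local", "at9999")
```

A platform could resolve each terminology constraint node against such a file at load time, which keeps hardwired codes out of the OPT while still allowing offline test setups.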