Archetypes for modelling yes/no/unknown questions

Hello everyone. I would like to take this opportunity to share our experience.

We are also working on modelling a clinical registry in OpenEHR. Many of the questions are “yes” or “no” questions. Our intention is that the registry can be fed from other standard OpenEHR databases, such as the patient’s medical record.
So we are trying to use the same archetypes that a medical record would use.

We are finding it difficult to model simple questions, such as the variable “XYZ diagnosis: yes/no”.

We think that if we want to feed this variable from data previously collected in a patient’s medical record, we should use the EVALUATION archetype “Problem/Diagnosis” to specify the answer “yes”, and the archetype “Exclusion” to specify the answer “no”. But this seems a very convoluted solution.

On the other hand, using OBSERVATION archetypes, such as “Problem/Diagnosis screening questionnaire”, makes modelling much easier, but they don’t seem to have the meaning we want, and may cause difficulties in feeding the registry if the medical record records diagnoses as EVALUATIONs while our registry stores them as OBSERVATIONs.

Any ideas or comments are welcome!


Great question Hugo.

We struggled quite a lot with this in the early days of openEHR, as we came at the issue with openEHR’s prime function being to support the primary ‘frontline’ recording of patient records, and in truth, with a bit of a bias towards doctors’ ways of recording.

In that world, we generally do not record negatives like ‘No diabetes’ as part of, for instance, a Problem list. We only record the positive diagnoses, other than occasional statements of Exclusion like ‘No evidence of diabetes’ after some investigations. But this is about a ‘ruled-out’ diagnosis, and even then would not appear on a problem list.

However, Yes/No questions do appear in front-line records, especially in nursing records, ‘clerking’ sheets by junior docs, or protocol-driven procedures.

In these cases the statement ‘Diabetes - Yes’ is generally not ‘diagnostic’, in the sense of a patient suddenly being given that diagnosis for the first time. It is about information gathering and sometimes safety checking, where often the person recording the information is not trained or empowered to ‘make a diagnosis’. It is also not where or how we would expect decision support to be triggered. If a nursing admission document said ‘Diabetes: yes’, and there was no ‘Problem/Diagnosis: Diabetes mellitus type 2’ entry in the patient’s problem list, I would expect that to be reviewed by a more senior staff member and entered.

So that is why, after battling for some time to try to make these two modes of recording seamless, we recognised that they are actually quite different and do need to be represented differently, and importantly not confused when querying. So we have the Problem/diagnosis archetype and an equivalent Screening Yes/No archetype.

Registries generally use the screening pattern because historically they have been disconnected from any source EHR data, so the information has had to be curated from multiple sources, mostly paper. Because the registry has a specific focus, the explicit negation is important, as it tells the research community definitively (at the point of registry update) that the person completing that record could not find any evidence of diabetes. So there is a curation and protocol-adherence aspect.

A further issue is that registry-type questions often ‘munge’ diagnoses, e.g. ‘Angina/MI or other heart disease Y/N’, or ask for further detailed info in response to a ‘Yes’, e.g. a specific cancer grade. So there is a gap between the primary record and the registry, which right now is annoying.

Can you give an example?

Ultimately, this problem is fixable once the source system data is fully available and queryable, and registries start asking for positive problems only. In many respects UK GP systems work like this for reporting purposes.

Right now, I would model the registry using questionnaires but use AQL on the Problem/Diagnosis entries to pre-fill/suggest the responses. However, you cannot use Exclusions to pre-populate, as a statement like ‘no evidence of diabetes’ is only true at the time it was made. The patient might develop diabetes the following week.
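As a rough sketch of that pre-fill step: a query along these lines could pull the positive diagnoses recorded as Problem/Diagnosis EVALUATIONs, which an application could then map onto suggested ‘Yes’ answers in the screening questionnaire. The archetype ID and node paths follow the published CKM Problem/Diagnosis archetype, but the `$ehrId` parameter name and the surrounding application logic are assumptions, not a definitive implementation.

```aql
-- Fetch the names of all recorded diagnoses for one patient,
-- to suggest 'Yes' responses in the registry screening form.
SELECT d/data[at0001]/items[at0002]/value/value AS diagnosis_name
FROM EHR e
  CONTAINS COMPOSITION c
    CONTAINS EVALUATION d[openEHR-EHR-EVALUATION.problem_diagnosis.v1]
WHERE e/ehr_id/value = $ehrId
```

Diagnoses not returned by the query would simply be left for the clinician to answer rather than pre-filled as ‘No’, in line with the point above about not using Exclusions to pre-populate.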

@siljelb @heather.leslie and @bna may have other views?

Phew - glad I got that ‘off my chest’


I think you covered it well, @ian.mcnicoll.

This is a complex problem to solve. Currently the screening archetypes are a very popular option in use cases here in Norway. Many clinical situations follow the screening workflow and then finally a statement of some problem definition.

In a clinical session or trajectory the precaution archetype will also be used to cover comorbidity. And there will also be comorbidity indexes involved.

For a registry, the screening archetypes will cover most of the semantics needed to model the traditional registry models. It might also be worth investigating the need for domain-specific archetypes to describe the legacy information model. Sometimes you don’t have the time, resources or money to model a perfect solution.

As Ian said: the exclusion archetypes are not used that much. They tend to make too strong a statement.


I agree with Ian and Bjørn, and I’m also curious about in what way the current screening questionnaire archetypes don’t match your semantics. They’re intentionally very broad in order to cover most semantic variations.


Thank you all for your feedback, it is very useful for inexperienced people like me.

In our case, the registry we are modelling aims to collect real-life data from patients prescribed “high complexity” drugs: for example for HIV, HCV, chemotherapies, immunotherapies…

These are patients who already have a diagnosis and treatment recorded in their medical records. Doctors or pharmacists consult the data in the medical record and fill in the register. Therefore, the questions asked do not fit into the concept of “screening”.

In the recommended use for the archetype “Problem/Diagnosis screening questionnaire” it is mentioned that “…has been designed to be used as a screening tool or to record simple questionnaire-format data for use in situations such as a disease registry. If the screening questionnaire identifies the presence of a problem or diagnosis, it is recommended that clinical system record and persist the specific details about the problem or diagnosis (such as the date of clinical recognition) using the EVALUATION.problem_diagnosis archetype”.

And specifically, its use is not recommended for “…record details about the presence or absence of a problem or diagnosis, outside of a screening context. Use EVALUATION.problem_diagnosis or EVALUATION.exclusion_specific for these purposes”.

This is why I commented that this archetype does not seem to be aligned with the meaning of what we intend to model.


This is an example of how we are thinking about modelling “yes/no” variables, but it seems confusing.


Okay, I think the problem may be the word “screening”. The word isn’t in this context intended to mean only screening for a particular disease for example, but any use case where you’re asking this kind of “yes/no/unknown” question. In the Norwegian translation, the archetypes are called “Kartleggingsspørsmål om XYZ”, literally “Survey questions about XYZ”. We use the screening questionnaire archetypes for all sorts of use cases where a “yes/no/unknown” question is required.

You can also combine the screening questionnaire archetype with its corresponding “positive presence” archetype in a template, for example to more properly record a problem or diagnosis in the event of a “yes”:


Agree Silje.

Also, the mix of the Problem/diagnosis archetype plus a screening archetype might work the other way around, i.e. if diabetes has been recorded in Problem/diagnosis then record ‘Diabetes: Y’, otherwise record it in the OBSERVATION.

We did something similar but split it between two templates: an ‘input’ problem list using the EVALUATION to record the patient’s known positive problems, and then a different template using the OBSERVATION to capture the necessary registry-output Y/N format. This was partly because we envisaged that the original problem list might be incomplete, and that a second step of checking and curation was necessary, after querying the EVALUATIONs and pre-populating the screening OBSERVATION.

I would definitely not use the EVALUATION exclusion archetype in this context.


Ah yes @silje - and I now see how you were using the EVALUATION to capture more detail in the event of a ‘Yes’ - good plan!!


We had a wide discussion in Norway on the semantics of the screening archetypes. The name ‘screening’ is too specific for many of the use cases we wanted that pattern for. When we translated it into the Norwegian term “Kartlegging” we found it very useful. The Norwegian word is a metaphor (more or less). Directly translated it is “to build a map”. When building a map you seek knowledge through surveys, screenings and so on.


We think it is very important that we model the registry in a way that preserves the meaning of the electronic health record data, as the electronic health record is the source of the data.
It is also very important to us that the registry has fields with answers pre-filled with the electronic health record data, ready to be validated by the pharmacist or physician.

If the community agrees that the screening questionnaire archetypes can be used in an expanded way, beyond screening situations, and there is experience in pairing OBSERVATION archetypes with EVALUATION archetypes to pre-populate form fields, then I think this is how we should proceed.

This approach seems smart and elegant:

Thanks again everyone, it has been very helpful!


Glad to help. If this seems messy, it just reflects a practical reality.

For quite a long time we tried to find more elegant approaches, but eventually figured out that the yes/no pattern does reflect a different style of recording which is correct in many circumstances.

So we are now happy to use both patterns though it does sometimes need quite detailed conversations to figure out optimal use.


Just as an addendum, there are actual use cases for the exclusion and absence of information archetypes too, but they aren’t the yes/no/unknown questions. One example we’ve seen is an allergy list where we use Adverse reaction risk to record the positive presence of an allergy, Global exclusion to record the absence of any allergy, Specific exclusion for exclusions of specific allergies, and Absence of information to record a lack of information about any allergies.

Note that the currency of an exclusion statement will always be unknown once it’s been committed. To reflect this, we’re looking at remodelling the exclusion archetypes as OBSERVATIONs at some point in the future.


Thanks Silje,

As part of our training course we were explaining this, and TBH I think I would now find it hard to justify the use of the Specific exclusion archetype, other than for a diagnostic exclusion like ‘No evidence of meningitis’, or perhaps ‘No evidence of penicillin allergy’, i.e. an actual evidenced ruled-out evaluation.

I think the Y/N pattern is very likely to be more appropriate for ‘Penicillin allergy Y/N’ or ‘Hip surgery Y/N’.

Yup - I agree. If this whole area is being reviewed, I do like the idea of Global exclusions as part of a Section/list structure. FHIR has this idea. I also wonder if we can simplify the split between Global exclusion and Absence of information into a single archetype, especially for ‘empty list’ scenarios.