Is a Score an Observation or an Evaluation (philosophy fun)?

While recently reviewing the various obstetric archetypes from @danielle.santosalves (and many other archetypes over the years), and perusing CKM, I had a thought about whether scores are Observations or Evaluations.

Just to start a little fun controversy, I thought I would suggest how this should be decided and see what modellers think.

Firstly, the interesting thing about a score is that it is intended to generate some level of ‘assessment’ beyond the phenomena it observes. An Apgar generates a number that is quickly understood to mean the newborn is good to go home, or needs to go to the ICU. GCS gives an initial assessment of head injury severity. Etc.

Since scores have a built-in interpretation, they seem like they could be openEHR Evaluations, i.e. the same general category as diagnoses and other kinds of assessments. However, there is a difference. The general category of ‘assessment’ (including diagnosis) is intended for assessments that are understood as true for some time (often permanently, as in the case of a Dx for CF or diabetes type I). Even a diagnosis of severe strep throat remains true for a few days until the penicillin has kicked in.

I would argue that many scores provide more of a real-time interpretation of the patient situation than a stable assessment of underlying disease or other process. They mostly convert underlying patient observables to a severity number, i.e. they provide an immediate interpretation of the significance of what is being observed.

I would argue that this isn’t the same thing as a diagnosis of underlying causal process. An Apgar of 5 doesn’t tell you what is wrong with the newborn, only that something is wrong right now. In some cases, the situation can be sorted quickly with a simple intervention, and the baby goes back to being ‘healthy’. A low GCS doesn’t tell you anything about what’s going on with the patient, only that it is probably urgent.

This is an argument as to why scores generally should not be Evaluations.

On the other hand, most scores are time-linked. You have to keep doing them to get the current interpretation. Barthel is an example - it might be good this month, and bad next month.

By that argument, scores should mostly be Observations. Nearly all scores in CKM are in fact Observations, so either the authors agree with the above, or have other reasons.

As I said, a bit of philosophy fun…!

4 Likes

That is pretty well my view. Many Observations may have an interpretive aspect or data point, but it is an interpretation of the observed ‘raw data’, not something that is about the patient as a whole:

OBSERVATION
  systolic 180
  diastolic 110
  interpretation 'blood pressure high'

and after several confirmatory readings

EVALUATION
  diagnosis 'Essential hypertension'

Scores are (?) always Observations.

https://pubmed.ncbi.nlm.nih.gov/21440086/ is a must-read

3 Likes

Scores are always #openehr OBSERVATIONs. The execution of a score protocol is an ACTION, and you might order a score using an INSTRUCTION. Based on the score, you might interpret it and make some recommendations as EVALUATION entries.
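To make that split concrete, here is a minimal AQL sketch of how the classes might sit together at query time. The archetype ids and at-code paths below are illustrative placeholders, not verified against CKM:

```
-- Sketch only: archetype ids and at-code paths are illustrative placeholders.
-- The score lives in an OBSERVATION; any resulting assessment is a separate
-- EVALUATION recorded in the same COMPOSITION.
SELECT obs/data[at0001]/events[at0002]/data[at0003]/items[at0026]/value/magnitude AS total_score,
       ev/data[at0001]/items[at0002]/value/value AS assessment
FROM EHR e
  CONTAINS COMPOSITION c
    CONTAINS (OBSERVATION obs[openEHR-EHR-OBSERVATION.glasgow_coma_scale.v1]
              AND EVALUATION ev[openEHR-EHR-EVALUATION.clinical_synopsis.v1])
```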

Whether you use the whole battery of classes or not depends on the use-case.

That’s my view on this topic.

5 Likes

Observation: everything that documents what is observed using the senses (seeing, hearing, touching, smelling, tasting).
Assessment/evaluation: everything that documents what is assessed/evaluated as an interpretation of Observations.

An APGAR score can be both.
When an APGAR score is reported, read, and documented, it is clearly an Observation.
When the APGAR score is calculated based on a series of Observations, then it is an Assessment/evaluation.

1 Like

Hi Gerard

“I’ve been expecting you, Dr Freriks” :wink:

For everyone lurking - Gerard takes a perspective that is not generally how we model archetypes. He and I and others have spent, and will continue to spend, many happy hours in philosophical (preferably beer-fuelled) sessions debating this. But…

When the APGAR score is calculated based on a series of Observations then it is an Assessment/evaluation

is not how we model a calculated score or an interpretation of an Observation; either would still be an Observation.

I don’t mind having the debate (again!!). I just want those new to the discussion to be clear that, at least for those of us doing most of the practical modelling work, it is a settled argument.

An interpretation of a score (calculated or not) would be modelled (pretty well universally, I think) within the same Observation, not as a separate Evaluation archetype.
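As a rough sketch of what that pattern looks like (the archetype id and at-code paths below are illustrative, not the published Apgar paths): if a score carries an interpretation element, it sits alongside the total inside the same OBSERVATION and comes back in one query, with no separate Evaluation involved:

```
-- Sketch only: archetype id and at-code paths are illustrative.
-- Total score and its interpretation are both elements of the same OBSERVATION instance.
SELECT o/data[at0001]/events[at0002]/data[at0003]/items[at0025]/value/magnitude AS total,
       o/data[at0001]/events[at0002]/data[at0003]/items[at0030]/value/value     AS interpretation
FROM EHR e CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.apgar.v2]
```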

Just have a look at the way that things are modelled in CKM, for examples.

3 Likes

Dear Ian,
I fully agree with you. I’m from another ‘planet’.
Thomas was expecting some philosophical fun. It triggered me, and I felt my response fitted that.
I know that openEHR modelling practices have evolved over time. Some practices were followed, and we can find them in CKM.
But must it stay the same just because of some not well thought through ways to model? I think not.
I provided some definitions at the start of my small reaction and laid out the consequences.
Even if just for philosophical reasons, they are worth considering.

I wonder what the counter-arguments to my way of thinking are. Just for fun!

2 Likes

Thank you @thomas.beale , @ian.mcnicoll and everybody for your explanations.

I was about to start modelling some ‘score’ archetypes, reviewing others at CKM and having this philosophical debate with Thomas (unfortunately not beer/cider-fuelled - yet).
So, I’ll send the CKM Editors a suggestion for a new score archetype (OBSERVATION.Capurro).
Thank you for the explanation!

2 Likes

Are you sure that Capurro is a score? - it sounds like a cocktail!!

3 Likes

I thought it was a pizza ingredient at first, but no, it’s a real thing. And might not even be an Observation…

1 Like

It is definitely an Observation :slight_smile:

I keep quoting this paper by [Feblowitz - AORTIS](https://core.ac.uk/download/pdf/82455827.pdf). It really helped me understand that all observational data goes through a set of potentially transformative lenses but remains ‘observations’ until that very last stage that they call ‘Synthesis’, when the finding is applied to the patient, not the observation itself.

2 Likes

Ah - first of all I realise I was talking about another ‘score’ (some kind of satisfaction-with-antenatal-care thing) that I wondered about being an Evaluation. @danielle.santosalves will probably explain at some point.

Secondly, I agree with your explanation of why scores are observations.

Yep, great quote.

2 Likes

It’s exactly this, @ian.mcnicoll.
Unfortunately it’s not a cocktail or a pizza ingredient!! kkkkk
Obstetrics has funny names like this, mostly because of the men who ‘discovered’ them (Hodge, De Lee, Falópio and so on).
At least we have Apgar to save the girls! :wink:

Anyway…
Soon I’ll send you the Capurro method one!

But what @thomas.beale is talking about was another question (about another guy) - the Kotelchuck scale.

  1. Kotelchuck M. The Adequacy of Prenatal Care Utilization Index: Its US distribution and association with low birthweight. American Journal of Public Health. 1994;84(9):1486-1489. doi:10.2105/AJPH.84.9.1486
  2. Kotelchuck M. Overview of Adequacy of Prenatal Care Utilization Index. The University of North Carolina at Chapel Hill. https://www.mchlibrary.org/databases/HSNRCPDFs/Overview_APCUIndex.pdf. Published 1994. Accessed July 12, 2020.

In fact, this is (maybe) not a score but a scale (I think).

Another related question was: OBSERVATION vs EVALUATION when we are talking about quality evaluation.
And also about patient satisfaction (surveys). How should we model that? Evaluation? Observation? E.g. pregnant women’s satisfaction with prenatal care.

Here is the mind map with these two together:

Any suggestion?

2 Likes

As a general observation (pun intended) about this topic, almost all scores and scales are currently modelled as OBSERVATION archetypes. This is because they need to be repeated at multiple points in time, and in some cases also graphed. Some also need to be performed for an interval of time.

The few which aren’t OBSERVATIONs are CLUSTERs, rather than EVALUATIONs. This is because they’re classifications that apply to lab results, diagnoses, etc, so the intent is for the CLUSTER archetypes to be nested within an appropriate parent ENTRY archetype.
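A hedged sketch of what that nesting implies for querying (the CLUSTER id below is a made-up placeholder; the parent is the problem/diagnosis EVALUATION): a classification CLUSTER is only ever reached through the ENTRY it is nested inside:

```
-- Placeholder CLUSTER id: the point is that a classification CLUSTER has no standalone
-- existence and is always queried via the parent ENTRY that contains it.
SELECT cl
FROM EHR e
  CONTAINS EVALUATION dx[openEHR-EHR-EVALUATION.problem_diagnosis.v1]
    CONTAINS CLUSTER cl[openEHR-EHR-CLUSTER.some_classification.v1]
```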

4 Likes

Hi All,
Sorry to bring this up again.

I wanted to check where we store the score of a scored survey, i.e. a questionnaire in which we assign points or scores to answer options.

Is this an Observation or an Evaluation? If an Evaluation, can the Health risk assessment archetype be used?

Hi @chaya ! Is this a standardised score or scale you want to represent? If so, it should probably be its own OBSERVATION archetype, including all of the questions etc.

2 Likes

@siljelb It is a simple score calculated from the questions and answers. We have designed the questionnaire according to option 3 in this wiki link, except we used the questionnaire archetypes instead of the actual clinical archetypes.
https://openehr.atlassian.net/wiki/spaces/spec/pages/284590129/Questionnaires+and+the+RM

But option 3 does not mention how to capture the scores. So can we create an OBSERVATION archetype (like option 5) with the scores, and link it to the original questionnaire composition using the LINK class, instead of including all the questions?

Generally, when modelling scores and scales, we model them as separate archetypes, including using DV_ORDINAL or DV_SCALE data types where we need to associate a score component with a question/answer, and one or more “total score” or similar element(s) as needed. The Mayo score archetype is a good example of this.
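As a rough sketch of that pattern (the archetype id and at-code paths below are illustrative, not copied from the published Mayo score archetype): each answer is a DV_ORDINAL carrying the coded label and its score component together, with a separate element for the total:

```
-- Illustrative archetype id and paths only.
-- DV_ORDINAL pairs the answer label (symbol) with its score component (value);
-- the total is a separate element in the same OBSERVATION.
SELECT q/data[at0001]/events[at0002]/data[at0003]/items[at0004]/value/symbol/value AS answer_label,
       q/data[at0001]/events[at0002]/data[at0003]/items[at0004]/value/value        AS answer_score,
       q/data[at0001]/events[at0002]/data[at0003]/items[at0010]/value/magnitude    AS total_score
FROM EHR e CONTAINS OBSERVATION q[openEHR-EHR-OBSERVATION.mayo_score.v0]
```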

The mention of the LINK class in the Mayo score discussion was referring to the potential connection between the original recording of the clinical findings and their corresponding score components in the Mayo score.

Thanks Chaya,

Could you share a screenshot or original documentation? I agree with Silje but am struggling slightly to understand what you are working with.

2 Likes

@ian.mcnicoll the use case is as below:

  1. Capture the whole questionnaire as part of a template according to option 3. Attached is a sample (sample_questionnaire.en.v0.opt). Note we cannot use DV_ORDINAL or DV_SCALE data types, as we do not want to use name/value pairs for the questions.
  2. Capture the score for the above questionnaire (health_risk_score.en.v0.opt). As part of this, link it to the questionnaire composition.

This can be a general question - can we link two compositions using the LINK class, and if so, can we get some example AQL for this use case?

sample_questionnaire.en.v0.opt (42.4 KB)
sample_questionnaire-composition-1.json (7.0 KB)
health_risk_score.en.v0.opt (18.6 KB)
health_risk_score-flat.json (1.2 KB)

Thanks,

This makes more sense now!!

So a baseline questionnaire of the type

Do you have depression Y/N?
Do you have anxiety Y/N?

and then a separate score based on positive responses,
using LINKs to document the source of the original questionnaire responses.

I’m pretty sure no-one supports joining compositions in AQL via LINK.

I suspect most folks are retrieving the LINK details via 2 separate queries.
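For what it’s worth, a minimal sketch of that two-query approach, assuming the score composition carries a LINK whose target is an EHR URI pointing at the questionnaire composition. The template id is taken from the attachment name above, and resolving the LINK target happens in application code, not in AQL:

```
-- Query 1: fetch the score composition whole; read the LINK (target is an EHR URI
-- pointing at the questionnaire composition) from the returned composition in
-- application code, since joining compositions via LINK inside AQL is generally not supported.
SELECT c
FROM EHR e CONTAINS COMPOSITION c
WHERE c/archetype_details/template_id/value = 'health_risk_score.en.v0'

-- Query 2: fetch the original questionnaire composition by the uid extracted from that LINK target.
SELECT q
FROM EHR e CONTAINS COMPOSITION q
WHERE q/uid/value = '<composition uid extracted from the LINK target in query 1>'
```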