Extra data elements in archetypes representing clinical scores and scales

It might be interesting to share that our product (Nedap ons clinimetrics) adds a free text evaluation archetype at the bottom of our composition template. Our app is currently mainly used for ‘questionnaire style’ validated measurement instruments.
I think this is a reasonable compromise: it doesn’t change the original instrument, but it still offers the user the opportunity to describe the circumstances under which the measurement took place.
Otherwise I would also err on the side of caution and not add too much opportunity to taint the validity of the measurement, because of unforeseen consequences.

@nuno.abreu There is an RM element that can be used to document the information provider. It has to be made available in the user interface, though. Another option is to add a separate “comment” archetype in the template, which will then be linked to the scale through the same composition. Or… we can allow additional comments within the scale archetype and risk misleading users into documenting irrelevant info, or info that should be documented elsewhere.

I don’t have the answer. My mind is split.

@joostholslag You mean at the bottom of the template?

1 Like

@varntzen I thought that RM element could only be used to record the information provider for all data points included in the archetype, not for individual ones. So is it possible to record different sources of information in the same archetype, within the same composition, using the RM provider? I agree that the concept “Comment” is very generic, and it’s difficult to define what type of information is relevant. Maybe a content analysis of the data recorded in this element would help.

@joostholslag Evaluation archetype? Why Evaluation?

My 2c. The value of being able to annotate a score may be critical in reviewing an incident that led to harm. With the raw score, you don’t necessarily get the detail alluded to above - the example of recording GCS in a patient who is blind is a good one. Would it really upset the authors of NEWS2 to know that in an implementation there was an option to add comments or qualifiers? I doubt it. More likely, they will be delighted that their score has been adopted as an archetype. Whether the comments would ever be used is another question, but I can imagine instances where they might be invaluable to a reviewer.

@nuno.abreu Sorry, you are right. It’s not possible to define who the source of information is at element level.

Looking at things from a semantic and structural point of view, if I saw, say, GCS, Barthel, Apgar etc. modelled with some extra fields for things like ‘confounding factors’, I would not automatically think that such fields were part of the score design, since they are general medicine concepts. Some scores may document such things anyway, but ultimately, what matters in modelling any score is how it tends to be used in real life. So, if GCS typically has fields added to it in (for example) NHS EDs, that tells you that a usable form of GCS may need such fields.

Additional fields can always be added, with documentation indicating whether they were part of the original design. But again, in every score/scale I have ever seen, the formal part of the thing is the set of items with numbers and classifiers attached, plus a method of generating the final score (if applicable). So it seems safe enough to add extra fields that are not part of that structure. Adding more score/numeric items, on the other hand, would definitely be ‘messing with the design’ (and maybe with the copyright in some way).

Having said all that, I’m not stating a view on what to do (that’s for the clinicians out there), and it may well be that most HCPs agree with Heather’s last statement that you would just use the score as intended and move on. The whole point of a score, in nearly all situations, is after all to classify (i.e. triage) patients quickly and send those with certain scores to ICU, refer others internally and send the rest home (or whatever the options are). They are also intended as tools that well trained non-MDs can use to get a pretty correct answer and know what to do next; e.g. even a junior nurse can probably get Waterlow or Barthel right (I’m reasonably sure even I could, which tells you they are pretty well designed).

In terms of a structural approach to modelling, we could think of making all scores & scales specialise a generic score archetype that has e.g. a Cluster called (say) ‘Additional items’, under which we put all the ‘confounding factors’ etc. This will obviously break a certain number of existing models / UIs / applications that assume that scores are top level archetypes, but it may be worth exploring the real costs of that, if there is any consensus that there is a real need.

If there isn’t a real general need, I’d probably stick to adding additional fields only to any specific scores where there is a clamour for such fields, and leave the rest alone.

Or just specialize the “pure” scale archetype with suitable additional fields if needed. Make these Clusters into semantic patterns that you add when you need them. This breaks nothing, as far as I understand.
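To make these two suggestions a bit more concrete, here is a minimal cADL sketch (ADL 1.4 syntax) of what a slot for such additional, non-score items could look like inside a score OBSERVATION. The node ids, names and the generic parent concept are purely illustrative, not taken from any published archetype:

    OBSERVATION[at0000] matches {    -- Generic score (hypothetical)
        data matches {
            HISTORY[at0001] matches {
                events cardinality matches {1..*; unordered} matches {
                    EVENT[at0002] occurrences matches {0..*} matches {    -- Any event
                        data matches {
                            ITEM_TREE[at0003] matches {
                                items cardinality matches {0..*; unordered} matches {
                                    allow_archetype CLUSTER[at0004] occurrences matches {0..*} matches {    -- Additional items
                                        include
                                            archetype_id/value matches {/openEHR-EHR-CLUSTER\..*\.v1/}
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

Whether this sits in a generic parent that every score specialises, or is added per score via specialisation, the slot itself leaves the formal score items untouched.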

Wow, thanks for the excellent input everyone! :star_struck:

To try and summarise, I’m reading this as:

  • It’s often necessary to add free text annotations to provide context for scores and scales. In paper documentation (which for some reason still seems to be the default mode for most scores), these would be jotted down in the margins or on the back of the sheet, which of course is impossible in structured electronic documentation.
  • It’s unlikely that adding free text elements would harm the validity of the scores and scales, but adding structured elements more likely would.
  • One way of adding annotations could be to add a free text archetype to the same template as the score archetype. This would keep the score archetype “clean”, but could lead to separation of the score and its annotation, especially if there are other archetypes in the same template.
  • Adding the annotation elements to the score archetypes runs a relatively minor risk of “tainting” the score, but ensures that the annotation is kept in context of the score or scale to which it belongs.
  • A data element can have a comment added to it, denoting that it isn’t part of the original score or scale, but was added to allow free text annotations to be made.

Based on this, I’m inclined to go with my original gut feeling, which was to add additional elements as we have been doing until now, but maybe add comments as outlined in the final bullet point.

3 Likes

Nope, there are tons of scores that are performed multiple times, during the day or days/weeks/months apart. Comparing the scores and following a trend is a possible (and likely) use case for many of them.

Most of the scores are made for paper, and do not contain extra fields for confounding factors, comments or information provider. But in practice that info is documented somewhere: elsewhere in the health record, or in the margin of the paper form.

The structure of scores is extremely heterogeneous. Having a wide open generic one and specialising it is not a very good idea, and would break all score designs so far. Please don’t. Scores are entitled to be concepts of their own, and some of them are so familiar that their names have become nouns.

Ah, good to know. I am mainly familiar with <10 very well known scores which I think are of the triage variety (Apgar, GCS etc). What are some examples of scores used repeatedly to generate a trend over time (now that I think of it, there are some in obstetrics …).

My suggestion about a generic score archetype a) wouldn’t contain anything specific to score structure or content in the parent (only the possible additional meta-data), but b) would break existing archetypes to some extent. So I am not proposing it in a practical sense, just pointing out how you could model these kind of requirements if starting fresh. Generally we should be using specialisation much more in archetypes, but doing so routinely requires moving to ADL2…

Hi Silje,

Great summary.

I agree - if people find they are having to add comments external to the score archetype (whether by specialisation or a different template), they are ‘tainting’ it in any case. Let’s have that discussion ‘up front’ in the core archetype review: do we need comments or not, and does adding a comment introduce risk? If so, we leave it out of the archetype and explain why.

I think I stand at:

  1. It probably remains good advice to add a comment field to a scale or score, and an extension slot (often needed for alignment with other standards, local infrastructure, or indeed local variants, which are a local responsibility).

  2. Confounding factors are probably generally not needed at state/event level.

  3. There will be rare situations where the original copyright holders may have very strict rules about representation, or where inclusion of a comment makes no sense clinically or clearly adds a safety risk - in which case leave it out, or argue our case. I only know of one archetype so far where this might be an issue (and it remains unpublished for that reason!!).

So, by default add a comment and extension slot.

I don’t think we need anything new in terms of RM/Spec.

A lot of the ones for psychiatry are meant to show how the patient is doing over time. MADRS is one, only in Norwegian for now: Observation Archetype: MADRS [Nasjonal IKT Clinical Knowledge Manager]

1 Like

There is massively increasing use of Patient Reported Outcome Measures (PROMs) in every field.

e.g. Orthopaedics

https://ckm.apperta.org/ckm/projects/1051.61.33

1 Like

Time for me to get up to date :wink:

I think it largely depends what you are trying to achieve.

If you are trying to document the actual information provider / provenance for a single element, you could use the RM feeder_audit attribute which is available to every Element.

https://specifications.openehr.org/releases/UML/latest/#Architecture___18_1_83e026d_1433773263740_183956_5413

However my reading of Nuno’s use-case is that this is much more about commenting on the interpretation of the score, e.g. “Diet information provided by elderly mother - not sure this is entirely reliable”, rather than “Information provider: Vera McNicoll” (both may be used, of course).

After further discussing this issue, and referencing our current modelling guidelines, we’ve decided on the guidance that free text elements like Comment or Confounding factors should not be included in archetypes for scores and scales unless there are very specific real world use cases where they are needed.

This is in accordance with the modelling principle that data elements should be added to an archetype only if a real world use case for them can be presented. The examples that we’ve been able to come up with for the use of free text additions to scores and scales are almost exclusively information that belongs in other documentation, not alongside every instance of a score or scale.

However, as a compromise, we will leave the Extension SLOT in these archetypes to allow for annotations to be added in CLUSTER archetypes.
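For reference, the Extension SLOT mentioned here typically looks something like the following cADL fragment in published archetypes (the at-code and occurrences vary per archetype; this is the common pattern rather than a quote from any specific score archetype):

    allow_archetype CLUSTER[at0021] occurrences matches {0..*} matches {    -- Extension
        include
            archetype_id/value matches {/.*/}
    }

Local or vendor-specific annotation CLUSTERs can then be slotted in via templates without modifying the published score archetype itself.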

As removing these elements would constitute a breaking change, we will not make this change to currently published archetypes unless there are other breaking changes that need to be made.

The Archetype content & style guide will be updated accordingly.

Thank you all for good arguments and discussion! :raised_hands:

6 Likes

Had to check with a colleague: Evaluation because it’s meant for the observer’s conclusions or interpretations, not for comments. Although it’s probably being misused for that at the moment. This discussion is making us rethink this process. Thanks for that :)

1 Like

Hi @joostholslag. This has always been a tricky area and in a technical sense whether you choose Evaluation or Observation is ultimately not critical.

My own policy is to regard an ‘opinion’ about a single observation as being about the test, not the person, and therefore an Observation. We tend to use the term ‘Interpretation’. So, for example, some versions of BMI allow the raw calculation to be graded as ‘underweight’, ‘normal’, ‘overweight’ or ‘obese’ - to me these are all just reworkings of the original data. Just because the BMI interpretation in my scales app says I am obese, it does not mean I am obese. That requires someone to evaluate the whole person and decide whether or not I should be labelled as obese as a problem.

One can see the same issue with ‘blood pressure hypertensive’ (based on the observation alone) vs. ‘Essential hypertension’ as an evaluation, taking into account all the circumstances, other readings, confounding factors.

To come back to the scales/scores issue, quite a few of these scores have some sort of grading/interpretation based on the raw score. These should definitely be in the score Observation IMO, e.g. Clinical risk category in NEWS2.
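As a rough illustration of that last point, the raw total and its derived risk category can sit side by side in the same score OBSERVATION. The node ids, score range and category codes below are invented for the example (ADL 1.4 syntax), not copied from the published NEWS2 archetype:

    ELEMENT[at0028] occurrences matches {0..1} matches {    -- Total score
        value matches {
            DV_COUNT matches {
                magnitude matches {|0..20|}
            }
        }
    }
    ELEMENT[at0056] occurrences matches {0..1} matches {    -- Clinical risk category
        value matches {
            DV_CODED_TEXT matches {
                defining_code matches {
                    [local::
                    at0057,    -- Low
                    at0058,    -- Low-medium
                    at0059,    -- Medium
                    at0060]    -- High
                }
            }
        }
    }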

3 Likes