Extra data elements in archetypes representing clinical scores and scales

In the recent review of the ACVPU scale archetype (https://ckm.openehr.org/ckm/archetypes/1013.1.3317), a question has arisen about whether we should add any extra data elements to this archetype or any other archetypes that represent clinical scores or scales. Other examples of these are Glasgow Coma Scale (https://ckm.openehr.org/ckm/archetypes/1013.1.137), qSOFA score (https://ckm.openehr.org/ckm/archetypes/1013.1.3813) and ECOG performance status (https://ckm.openehr.org/ckm/archetypes/1013.1.1317).

The argument against adding elements like “Comment” or “Confounding factors” is that these are generally not part of the original, often validated, scores or scales, and therefore we are misrepresenting the score or scale in the archetype.

The argument for adding them is that there may be use cases where it’s necessary to add qualifying free text to the recording of these scores and scales.

Since this is a question about a larger modelling pattern, we’d like to get some wider community input.

I’m very much in favour of adding ‘standard’ elements like confounding factors, comment and slots. Although we need to stay true to the original score, we are also modelling the data capture contexts of those scores, and we know that in practice people do need to add comments, or other aspects covered by the RM like timing and composer, or exclusions such as ‘Not done/reason’ etc.

We will also find that we need to accommodate sensible adaptations, e.g. the original GCS was helpfully adapted to take account of missing data, such as a blind patient. This is now in the latest GCS spec, but the clinical requirement moved ahead of formal review. We absolutely must be sensitive to the original spec, but I think we have to be true to our own mission.

We need to make a decision and record it as an ongoing editorial decision for modellers and ensure that all scales/scores are aligned with the final decision.

I’ve become increasingly uncomfortable with adding these additional data elements, as it effectively changes the score as it was designed. If the score/scale has been validated, we are messing with it. Are there consequences? Probably not, but do we know? Do we know for sure that adding these won’t confuse users (especially if these archetypes are used as the source for other work) or affect the score’s validity? No.

Adding a ‘Comment’ will likely cause little harm, but every unnecessary data element adds overheads to governance and implementation, and we teach modellers to add data elements to archetypes only if there is a use case. The notion of a maximal data set is not relevant here, because at the moment we are imagining that the data element might be useful to someone one day, rather than adding a data element that we know is underpinned by a clinical requirement.

There are no/none/zero scores that already contain ‘Confounding factors’ to my knowledge - it is a pure openEHR archetype construct.

If the score/scale is copyrighted, we are messing with someone else’s copyrighted material. In this situation we have agreed that we will represent the material faithfully and incorporate the copyright requirements in the archetype. In this case the archetype is just the means to implement the copyrighted material in an EHR, but it is up to the user/vendor/recorder to use it according to the copyright. I think it is fair to assume that we won’t be adding confounding factors or comments here.

I feel strongly that we have a responsibility to represent other people’s work faithfully in our archetypes, questioning our proposed ‘enhancements’ as we transition their work from paper records into the digital environment. In fact, we increasingly have that responsibility as our model library evolves towards becoming a de facto standard for atomic data. Others may take our models and represent ‘Comment’ and ‘Confounding factors’ etc., and we don’t understand the downstream effects, if any.

I think we should err on the side of not adding them to other people’s work, including the scores/scales, but with the caveat that if we identify clinical requirements for them, these should be considered through an investigation and peer review process, and if they are added, we justify them and annotate the model to indicate how we have modified it.

Personally I can’t imagine why someone filling out an AVPU/ACVPU in amongst the chaos of an acute emergency assessment involving many other parameters and data recording, would also NEED to record a comment against a single, deliberately simple/quick assessment. Just saying.

My 2c

Cheers

Heather

Hi
I can definitely see both sides here.

In the Glasgow Coma Scale, if the patient’s eye or face is swollen, that should be recorded elsewhere in the EHR. But will it be? On paper that would probably be handwritten somewhere in the margins (and thereby not be true to the copyrighted version, perhaps).

So maybe this is more a question of whether we should replicate the practice in the paper world, or stay pure to the scales and let implementers apply other archetypes in the template to document confounding factors, the fact that (and reason why) one element in the score wasn’t performed, etc.?

Happy to get others’ input on this!

Vebjørn

I also see both sides.

I think we can all agree that we should not be interfering with the representation, meaning or algorithm of the published score.

However I think it is important to be clear that we are not re-publishing the score; we are developing health record components that faithfully represent that score in an implementable way, conformant to the openEHR ecosystem. So we are already adding all sorts of data points to the score via the RM: null flavours, ‘not done’ exclusions via the exclusion archetype, multiple events, etc.
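
To make that concrete, here is a minimal, hypothetical ADL 1.4 fragment (node ids and names invented for illustration, not taken from any published archetype) showing the RM scaffolding that wraps even the simplest score:

```adl
-- Illustrative only: the HISTORY/EVENT structure that every OBSERVATION-based
-- score already gets "for free" - repeatable, timestamped recordings that
-- never existed on the paper form
OBSERVATION[at0000] matches {    -- Some simple score
    data matches {
        HISTORY[at0001] matches {    -- Event Series
            events cardinality matches {1..*; unordered} matches {
                EVENT[at0002] occurrences matches {0..*} matches {    -- Any event
                    data matches {
                        ITEM_TREE[at0003] matches {    -- Tree
                            items cardinality matches {0..*; unordered} matches {
                                ELEMENT[at0004] occurrences matches {0..1} matches {    -- Total score
                                    value matches {DV_COUNT matches {*}}
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```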

Sometimes we have to re-build the published ‘paper’ score to make it sensibly implementable, in a way that might not be easily recognisable to the original author. The prime example is Waterlow, but even NEWS2 has been (IMO sensibly) reworked to split e.g. high and low pulse rate scores into separate ordinal values, where in the original design these are combined.
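
As a hedged illustration of that kind of rework (node ids are invented, and while the bands below are the published NEWS2 pulse thresholds, this is not a copy of the actual CKM archetype), splitting a combined paper cell into separate ordinal values might look like:

```adl
-- The paper chart puts "<=40 or >=131 = 3" in a single cell; the archetype
-- gives each band its own ordinal value so exactly one can be recorded
ELEMENT[at0005] occurrences matches {0..1} matches {    -- Pulse
    value matches {
        3|[local::at0006],     -- ≤40 /min
        1|[local::at0007],     -- 41-50 /min
        0|[local::at0008],     -- 51-90 /min
        1|[local::at0009],     -- 91-110 /min
        2|[local::at0010],     -- 111-130 /min
        3|[local::at0011]      -- ≥131 /min
    }
}
```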

I agree @heather.leslie that we should think carefully about including comments, confounding factors, or indeed even extension slots in score-type archetypes. It need not be routine, but for the reasons that @varntzen gave, I also don’t think we should routinely ‘ban’ their use. In practice, paper forms often carry scribbled comments or local adaptations.

There is definitely a discussion to be had about best practice, but I don’t think this should be conflated with copyright/representational issues. We are not doing anything different from any other software development of a score or scale - under the hood, a representation of NEWS2 is going to look very different from the paper original.

There are some score authors who place very specific requirements on implementers on look and feel in an application but that’s not our problem.

I’ll come back to the core philosophy that we are not publishing the NEWS2 score, we are publishing a technical representation of NEWS2 that allows it to be easily implemented and used in a real system, inside the context of the RM which introduces a whole lot more ‘stuff’ by default. We know from real systems that it is very common to need to add some sort of comment. I doubt if this will ever conflict with the veracity of a score, other than clarifying some aspect of the patient’s circumstance or context that makes interpretation of that score more nuanced, and therefore actually helps.

Just to add to @varntzen’s comment “should we replicate what is in the paper world”: absolutely not as a routine, but adding comments which reflect confounding factors or methodological problems simply reflects the reality of clinical care, nothing to do with paper. This is often useful information. The problem is that system designers only see the original paper forms, not the completed ones with all of their annotations, and score designers rarely have any informatics input that might suggest adding these to a formal record component definition. We are filling that gap - I’d much prefer it if the original score designers thought through real-world usage a little more, and had proper informatics input.

Just to clarify, I didn’t intend to ask whether to replicate the paper world or not, but the practice in the paper world. Which we know is dirty and full of local adjustments, scribbled notes, and parts of forms simply crossed out with XXXX’es and replaced with handwritten content.

(We found notes written on sandwich paper in the archived paper health record once… Very dirty 🙂)

If I use a scale, it’s because it is suitable for that clinical condition, but I think there is information related to the context that might explain some variations in the score over a period of time. For example, when using the Braden scale at a patient admission, there are elements I can observe (mobility), and others that I need to ask the patient or caregivers about, like nutrition. In this case, comments would be useful. In my opinion, this doesn’t interfere with scale validity, but would help to interpret why a variation in the score exists.

@nuno.abreu If you use the Comment element in the Braden scale to document the patient’s nutrition, how will others find that information? I’m not saying using the comment element is wrong per se, but if we do, we must be sure to document only what’s relevant to the score in that particular comment, and not something that’s more relevant (or that anyone would expect to find) somewhere else in the EHR. This is what is problematic with the design as it stands.

@varntzen Not to document nutrition, but only to record that the source of information is different from the other elements, and therefore possibly not as consistent. The final score is based on the best information available, but that comment adds value to clinical interpretation.

Might be interesting to share that our product (Nedap ons clinimetrics) adds a free text evaluation archetype at the bottom of our composition template. Our app is currently mainly used for ‘questionnaire style’ validated measurement instruments.
I think this is a reasonable compromise: it leaves the original instrument unchanged, but offers the user the opportunity to describe the circumstances under which the measurement has taken place.
Otherwise I would also err on the side of caution and not add too much opportunity to taint the validity of the measurement, because of unforeseen consequences.

@nuno.abreu There is an RM element for documenting the information provider. It has to be made available in the user interface, though. Another option is to add a separate “comment” archetype in the template, which will then be linked to the scale through the same composition. Or… we can allow additional comments within the scale archetype and risk misleading users into documenting irrelevant info, or info that should be documented elsewhere.

I don’t have the answer. My mind is split.

@joostholslag You mean at the bottom of the template?

@varntzen I thought that RM element could only be used to record the source for all data points included in the archetype, not individually. So is it possible to record different sources of information in the same archetype within the same composition using the RM provider? I agree that the concept “Comment” is very generic, and it is difficult to define what type of information is relevant. Maybe a content analysis of the data recorded in this element would help.

@joostholslag Evaluation archetype? Why Evaluation?

My 2c. The value of being able to annotate a score may be critical in reviewing an incident that led to harm. With the raw score, you don’t necessarily get the detail alluded to above - the example of recording GCS in a patient who is blind is a good one. Would it really upset the authors of NEWS2 to know that in an implementation there was an option to add comments or qualifiers? I doubt it. More likely they would be delighted that their score has been adopted as an archetype. Whether the comments would ever be used is another question, but I can imagine instances where they might be invaluable to a reviewer.

@nuno.abreu Sorry, you are right. It’s not possible to define who the source of information is at element level.

Looking at things from a semantic and structural point of view: if I saw, say, GCS, Barthel, Apgar etc. modelled with some extra fields for things like ‘confounding factors’, I would not automatically think that such fields were part of the score design, since they are general medicine concepts. Some scores may document such things anyway, but ultimately, what matters in modelling any score is how it tends to be used in real life. So, if GCS typically has fields added to it in (say, for example) NHS EDs, that tells you that a usable form of GCS may need such fields.

Additional fields can always be added, with documentation indicating whether they were part of the original design. But again, in every score/scale I have ever seen, the formal part is the set of items with numbers and classifiers attached, plus a method of generating the final score (if applicable). So it seems safe enough to add extra fields that are not part of that structure. Adding more score/numeric items, on the other hand, would definitely be ‘messing with the design’ (and maybe with the copyright in some way).

Having said all that, I’m not stating a view on what to do (that’s for the clinicians out there), and it may well be that most HCPs agree with Heather’s last statement that you would just use the score as intended and move on. The whole point of a score in nearly all situations, after all, is to classify (i.e. triage) patients quickly and send those with certain scores to ICU, refer others internally and send the rest home (or whatever the options are). They are also intended as tools that well-trained non-MDs can use to get a pretty correct answer and know what to do next; e.g. even a junior nurse can probably get Waterlow or Barthel right (I’m reasonably sure even I could, which tells you they are pretty well designed).

In terms of a structural approach to modelling, we could think of making all scores & scales specialise a generic score archetype that has e.g. a Cluster called (say) ‘Additional items’, and under that we put all the ‘confounding factors’ etc. This will obviously break a certain number of existing models / UIs / applications that assume that scores are top-level archetypes, but it may be worth exploring the real costs of that, if there is any consensus that there is a real need.
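
As a sketch of what that generic parent’s extra container might look like (an ADL 1.4 fragment; the node ids, names and data types are all hypothetical):

```adl
-- A hypothetical 'Additional items' cluster in a generic parent score
-- archetype; concrete scores would inherit it rather than each redefining
-- their own comment / confounding factors elements
CLUSTER[at0010] occurrences matches {0..1} matches {    -- Additional items
    items cardinality matches {1..*; unordered} matches {
        ELEMENT[at0011] occurrences matches {0..*} matches {    -- Confounding factors
            value matches {DV_TEXT matches {*}}
        }
        ELEMENT[at0012] occurrences matches {0..1} matches {    -- Comment
            value matches {DV_TEXT matches {*}}
        }
    }
}
```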

If there isn’t a real general need, I’d probably stick to adding additional fields only to any specific scores where there is a clamour for such fields, and leave the rest alone.

Or just specialise the “pure” scale archetype with suitable additional fields if needed, making these Clusters semantic patterns that you add when you need them. This breaks nothing, as far as I understand.
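
For example, a hedged ADL 1.4 sketch (the archetype ids follow the 1.4 hyphenation convention for specialisations; the parent’s internal node ids are invented, not copied from the real ACVPU archetype):

```adl
-- Hypothetical specialisation that adds a Comment element without touching
-- the parent's score definition; at0.1 is a new code at specialisation level 1
archetype (adl_version=1.4)
    openEHR-EHR-OBSERVATION.acvpu-annotated.v0
specialise
    openEHR-EHR-OBSERVATION.acvpu.v1

definition
    OBSERVATION[at0000.1] matches {    -- ACVPU with annotation
        data matches {
            HISTORY[at0001] matches {
                events cardinality matches {1..*; unordered} matches {
                    EVENT[at0002] occurrences matches {0..*} matches {
                        data matches {
                            ITEM_TREE[at0003] matches {
                                items cardinality matches {0..*; unordered} matches {
                                    ELEMENT[at0.1] occurrences matches {0..1} matches {    -- Comment (new here)
                                        value matches {DV_TEXT matches {*}}
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
```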

Wow, thanks for the excellent input everyone! 🤩

To try and summarise, I’m reading this as:

  • It’s often necessary to add free text annotations to add context to scores and scales. In paper documentation (which for some reason still seems to be the default mode for most scores), these would be jotted down in the margins or on the back of the sheet, which of course is impossible in structured electronic documentation.
  • It’s unlikely that adding free text elements would harm the validity of the scores and scales, but adding structured elements more likely would.
  • One way of adding annotations could be to add a free text archetype to the same template as the score archetype. This would keep the score archetype “clean”, but could lead to separation of the score and its annotation, especially if there are other archetypes in the same template.
  • Adding the annotation elements to the score archetypes runs a relatively minor risk of “tainting” the score, but ensures that the annotation is kept in context of the score or scale to which it belongs.
  • A data element can have a comment added to it, denoting that it isn’t part of the original score or scale, but was added to allow free text annotations to be made (see the sketch after this list).
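
For instance, the provenance note could be carried in the added element’s description in the archetype ontology (a hypothetical dADL fragment; the wording is only a suggestion):

```adl
ontology
    term_definitions = <
        ["en"] = <
            items = <
                ["at0.1"] = <
                    text = <"Comment">
                    description = <"Additional narrative about the score, not captured in other fields. NB: this element is not part of the original published score or scale; it was added to allow free text annotations.">
                >
            >
        >
    >
```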

Based on this, I’m inclined to go with my original gut feeling, which was to add additional elements as we have done until now, but maybe add comments as outlined in the final bullet point.
