We need to make a decision, record it as an ongoing editorial decision for modellers, and ensure that all scales/scores are aligned with the final decision.
I have become increasingly uncomfortable with adding these extra data elements, because doing so effectively changes the score as it was designed. If the score/scale has been validated, we are tampering with it. Are there consequences? Probably not, but do we know? Do we know for certain that adding these elements will not confuse users (especially if these archetypes are used as the source for other work) or affect the score's validity? No.
Adding a ‘Comment’ will likely do little harm, but every unnecessary data element adds overhead to governance and implementation, and we teach modellers to add data elements to archetypes only when there is a use case. The notion of a maximal data set is not relevant here, because at the moment we are imagining that the data element might be useful to someone one day, rather than adding a data element that we know is underpinned by a clinical requirement.
To my knowledge, no existing score contains ‘Confounding factors’; it is purely an openEHR archetype construct.
If the score/scale is copyrighted, we are altering someone else’s copyrighted material. In this situation we have agreed to represent the material faithfully and to incorporate the copyright requirements in the archetype. The archetype is then simply the means of implementing the copyrighted material in an EHR, and it is up to the user/vendor/recorder to use it in accordance with the copyright. I think it is fair to assume that we will not be adding the confounding factors or comments here.
I feel strongly that we have a responsibility to represent other people’s work faithfully in our archetypes, and to question our proposed ‘enhancements’ as we transition that work from paper records into the digital environment. In fact, this responsibility grows as our model library evolves towards becoming a de facto standard for atomic data. Others may take our models, complete with ‘Comment’, ‘Confounding factors’ and so on, and we do not understand the downstream effects, if any.
I think we should err on the side of not adding them to other people’s work, including the scores/scales, with the caveat that if we identify clinical requirements for them, these should be considered through an investigation and peer-review process; and if they are added, we should justify them and annotate the model to indicate how we have modified it.
Personally, I cannot imagine why someone completing an AVPU/ACVPU, amid the chaos of an acute emergency assessment involving many other parameters and data recording, would also need to record a comment against a single, deliberately simple and quick assessment. Just saying.