Nope, there are tons of scores that are performed multiple times, during the day or days/weeks/months apart. Comparing the scores between occasions and following a trend is a possible (and likely) use case for many of them.
Most of the scores are made for paper, and do not contain extra fields for confounding factors, comments or information provider. But in practice that info is documented somewhere else: elsewhere in the health record, or in the margin of the paper form.
The structures of scores are extremely heterogeneous. Having a wide-open generic one and specialising it is not a very good idea, and would break all the score designs so far. Please don't. Scores are entitled to be concepts of their own, and some of them are so familiar in their naming that they've become nouns.
Ah, good to know. I am mainly familiar with <10 very well known scores, which I think are of the triage variety (Apgar, GCS, etc.). What are some examples of scores used repeatedly to generate a trend over time? (Now that I think of it, there are some in obstetrics…)
My suggestion about a generic score archetype a) wouldn't contain anything specific to score structure or content in the parent (only the possible additional metadata), but b) would break existing archetypes to some extent. So I am not proposing it in a practical sense, just pointing out how you could model these kinds of requirements if starting fresh. Generally we should be using specialisation much more in archetypes, but doing so routinely requires moving to ADL2…
I agree. If people find they are having to add comments external to the score archetype (whether by specialisation or a different template), they are "tainting" it in any case. Let's have that discussion "up front" in the core archetype review: do we need comments or not, and does adding a comment introduce risk? If so, we leave it out of the archetype and explain why.
I think I stand at:
It probably remains good advice to add a comment field to a scale or score, plus an extension slot (often needed for alignment with other standards, local infrastructure, or indeed local variants (a local responsibility)).
Confounding factors are probably not generally needed at state/event level.
There will be rare situations where the original copyright holders may have very strict rules about representation, or where inclusion of a comment makes no sense clinically or clearly adds a safety risk - in which case leave it out, or argue our case. I only know of one archetype so far where this might be an issue (and it remains unpublished for that reason!!).
So, by default add a comment and extension slot.
I don't think we need anything new in terms of RM/Spec.
I think it largely depends what you are trying to achieve.
If you are trying to document the actual information provider / provenance for a single element, you could use the RM feeder_audit attribute which is available to every Element.
However my reading of Nuno's use-case is that this is much more about commenting on the interpretation of the score ("Diet information provided by elderly mother - not sure this is entirely reliable"), rather than "Information provider: Vera McNicoll" (both may be used, of course).
After further discussing this issue, and referencing our current modelling guidelines, we've decided on the guidance that free text elements like Comment or Confounding factors should not be included in archetypes for scores and scales, unless there are very specific real world use cases where they are needed.
This is in accordance with the modelling principle that data elements should be added to an archetype only if a real world use case for them can be presented. The examples that we've been able to come up with for the use of free text additions to scores and scales are almost exclusively information that belongs in other documentation, not alongside every instance of a score or scale.
However, as a compromise, we will leave the Extension SLOT in these archetypes to allow for annotations to be added in CLUSTER archetypes.
As removing these elements would constitute a breaking change, we will not make this change to currently published archetypes unless there are other breaking changes that need to be made.
Had to check with a colleague: Evaluation, because it's meant for conclusions or interpretations of the observer, not for comments. Although it's probably being misused for that at the moment. This discussion is making us rethink this process. Thanks for that :)
Hi @joostholslag. This has always been a tricky area and in a technical sense whether you choose Evaluation or Observation is ultimately not critical.
My own policy is to regard an "opinion" about a single observation as being about the test, not the person, and therefore an Observation. We tend to use the term "Interpretation". So, for example, some versions of BMI allow the raw calculation to be graded as 'underweight', 'normal', 'overweight' and 'obese' - to me these are all just reworkings of the original data. Just because the BMI interpretation in my scales app says I am obese, it does not mean I am obese. That requires someone to evaluate the whole person and decide whether or not I should be labelled as obese as a problem.
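To make that concrete: the graded BMI categories are a pure function of the raw value. A minimal Python sketch (function names are mine; the cutoffs are the standard WHO adult bands):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Raw BMI: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """WHO adult bands - a rework of the raw datum, not a judgement about the person."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"
```

Whatever this function returns, deciding that the person actually has obesity as a problem is a separate, human Evaluation.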
One can see the same issue with "blood pressure hypertensive" (based on the observation alone) vs. "Essential hypertension" as an evaluation, taking into account all the circumstances, other readings, confounding factors.
To come back to the scales/scores issue: quite a few of these scores have some sort of grading/interpretation based on the raw score. These should definitely be in the score Observation IMO, e.g. the Clinical risk category in NEWS2.
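NEWS2 illustrates this well: the clinical risk category is a deterministic rework of the aggregate score (plus the "score of 3 in any single parameter" rule), so it naturally belongs alongside the raw score in the Observation. A hedged Python sketch (names are mine; thresholds are the published RCP NEWS2 bands):

```python
def news2_risk(aggregate_score: int, red_score: bool = False) -> str:
    """Map a NEWS2 aggregate score to its clinical risk category.

    red_score: True if any single parameter scored 3 ("red score" rule).
    """
    if aggregate_score >= 7:
        return "high"
    if aggregate_score >= 5:
        return "medium"
    if red_score:
        return "low-medium"
    return "low"
```

Because the mapping is fixed by the score's definition, recording it in the Observation adds no interpretation beyond what the raw data already carries.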