# Revisiting symptom/sign

**Category:** [Clinical](https://discourse.openehr.org/c/clinical/5)
**Created:** 2021-09-15 09:16 UTC
**Views:** 3788
**Replies:** 48
**URL:** https://discourse.openehr.org/t/revisiting-symptom-sign/1867

---

## Post #1 by @siljelb

The current [Symptom/sign](https://ckm.openehr.org/ckm/archetypes/1013.1.195) archetype was first published in October 2015 following a seven-month review process. Since then the archetype has had several breaking changes applied, and has been sitting in the 'Reassess' state since April 2018.

The COVID-19 related work of early 2020 led to the creation of the new 'screening questionnaire' family of archetypes, of which [Symptom/sign screening questionnaire](https://ckm.openehr.org/ckm/archetypes/1013.1.4432) is one of the most central members. The creation of this archetype also means we can rectify one of the most awkward modelling choices of the original Symptom/sign archetype: the 'Nil significant' element. This was intended to support structured questioning where the subject could respond whether they experienced a symptom or not. In effect it was a kind of negation element in an otherwise positive presence archetype, which leads to potential safety issues when querying the data. In the latest revision of the 'Symptom/sign' archetype the 'Nil significant' element has been removed, to be replaced by the 'Symptom/sign screening questionnaire' archetype where appropriate.

Other significant changes include:

* Revision of the 'Occurrence' element (breaking change)
* Separation of the 'Precipitating/Resolving factor' cluster into two separate clusters, removing the need for the run-time name constraint (breaking change)
* Addition of the 'Character' element (non-breaking change)
* Various non-breaking updates and corrections
* Corrections to SNOMED CT bindings

We'd like to publish the current trunk revision as v2 of the archetype, but would like community input first, to make sure we catch any errors or other suggestions that would need breaking changes.

---

## Post #2 by @ian.mcnicoll

I think these are good changes. 'Nil significant' was always a bit tricky, and the new screening questionnaire is better suited for that 'closed questioning' type of situation.

The only other change to Symptom (I have just submitted a CR) is to widen the Severity rating limit to 0..100 from 0..10, as we have come across a few places where 0 to 100 is the range used.

---

## Post #3 by @siljelb

[quote="ian.mcnicoll, post:2, topic:1867"]
The only other change to Symptom (I have just submitted a CR) is to widen the Severity rating limit to 0..100 from 0..10 as we have come across a few places where 0 to 100 is the range used.
[/quote]

Could leaving it unconstrained be another option? It's not inconceivable that other requirements may pop up?

---

## Post #4 by @ian.mcnicoll

I'd be happy with that - you can be sure someone will want 0..101 :frowning:

---

## Post #5 by @thomas.beale

[quote="siljelb, post:3, topic:1867"]
Could leaving it unconstrained be another option? It's not inconceivable that other requirements may pop up?
[/quote]

Which means values in the data will not be computably comparable; e.g. graphing the severity of chronic arthritis symptoms over time would no longer work (well, it might work by accident, but it would not be reliable).

If it had to be unconstrained, I'd say you should require a numerator and a denominator, i.e. a ratio. Then you can compare severities across Observations...
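As a rough sketch of the numerator/denominator idea in the post above (hypothetical values; plain Python rather than any openEHR API), carrying the denominator with each severity value lets recordings made against different ranges be normalised and compared or graphed:

```python
# Sketch of the "ratio" idea: if each severity value carries its own
# denominator, values recorded against different scales can be
# normalised to a common 0..1 range for comparison or graphing.
# Hypothetical values; illustrative only, not an openEHR API.

def normalised_severity(numerator: float, denominator: float) -> float:
    """Express a severity rating as a fraction of its scale's maximum."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return numerator / denominator

# A 7 on a 0..10 NRS and a 71 on a 0..100 mm VAS line up once normalised:
nrs = normalised_severity(7, 10)    # 0.7
vas = normalised_severity(71, 100)  # 0.71
```

Without the denominator, a stored '7' or '71' on its own cannot be compared reliably, which is the substance of the objection to leaving the range unconstrained.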
---

## Post #6 by @ian.mcnicoll

In theory that should be possible (though probably not as useful as you might think), but it requires everyone to be working off the same hymn sheet, and like it or not there are a million different approaches to recording 'severity'. The best we can do is mandate it in national/regional/condition-specific contexts where we can get clinical agreement.

The use of proportion might be worth considering, but will be counter-intuitive to a lot of clinical folks - I can see the value, but I do know of one score which allows for >100%.

---

## Post #7 by @siljelb

How about keeping the DV_QUANTITY at 0..10, and adding DV_PROPORTION as an alternative data type for other ranges?

---

## Post #8 by @ian.mcnicoll

That would work technically, but it would need quite a good explanation of how to use proportion, let's say to model a score from 0 to 153, just to be awkward! That could be done in description and comment, of course.

---

## Post #9 by @colinbro

I strongly support making values in an individual reliably computable: time-serialisation is central to practice.

I have researched 20 commonly-used scoring systems, and would observe:

- Those that are indexes are expressed as %, so values never exceed 100.
- Of the others, the maximum value range is 27, for the PHQ-9, except for the Minnesota Multiphasic Personality Inventory 2 with a max score of 567!

Resolution is another matter: can a mental health measure really be valid to 3 decimal places? Such high-resolution data is very rare in clinical practice IMO.

So I suggest that nearly all use cases can be quantified as up to 100, and edge cases such as the MMPI could be converted to % of max value, limited to 2 decimal places? Or is there another way to directly model those few scales that use higher-resolution data?

---

## Post #10 by @siljelb

[quote="colinbro, post:9, topic:1867"]
Or is there another way to directly model those few scales that use higher-resolution data?
[/quote]

Most standardised scores and scales would be modelled as separate OBSERVATION archetypes, I think.

---

## Post #11 by @ian.mcnicoll

Agree they are their own entities with their own very specific definitions.

---

## Post #12 by @colinbro

So for generic "severity" there are currently 3 semantically-progressive texts that align with SCT codes, and also refer externally to "activity". That is low resolution.

Likert scales are most often 5-point, ranging from 2 to 10 - so resolution is to 1 decimal. The downside of high resolution is to magnify inter-user variation, whereas low resolution forces choices that may be wrong.

% is suggested, and @ian.mcnicoll wrote "we have come across a few places where 0 to 100 is the range used." Did these use this generic archetype, but ideally would have a separate OBSERVATION archetype?

As others have suggested, SCT has 2 further terms: Trivial (162466003) and Very severe (162471005). So adding these would give a 5-point text scale at 1-decimal resolution. As these extend the current range, data already recorded would be compatible?

Suggest to define Trivial as "The intensity ... is *not present during normal* activity", and Very severe as "The intensity ... *prevents any* activity".

(In Severe, the definition seems to have a typo: "The intensity ... *causes prevents* normal activity" - "causes" is redundant.)

---

## Post #13 by @siljelb

Could this work?

![image|666x269](upload://qTeXpdLOrJP0gM8T9cIWCcppUHj.png)

(from this branch: https://ckm.openehr.org/ckm/archetypes/1013.1.5770)

---

## Post #14 by @heather.leslie

[quote="ian.mcnicoll, post:2, topic:1867"]
The only other change to Symptom (I have just submitted a CR) is to widen the Severity rating limit to 0..100 from 0..10 as we have come across a few places where 0 to 100 is the range used.
[/quote]

Hi Ian,

Can you share those examples? I'm curious to understand the requirement better.
If we are recording the result of asking a patient to describe the severity of a symptom they are experiencing, I don't understand how they can differentiate between 70 and 71 out of 100 with any precision.

Thanks

Heather

---

## Post #15 by @ian.mcnicoll

Hi Heather,

This came from the Dutch ZIB models: https://zibs.nl/wiki/PainScore-v4.0(2020EN)

> The score is a general measurement of pain experience, not a description of specific, localized pain.
>
> Depending on the measuring method used, it indicates the level of pain experienced by the patient on a scale of 0 to 10: 0 = no pain and 10 = the worst pain imaginable. No descriptions are used for the intermediate values, so that the value is displayed as a number and not as a code.
>
> Sometimes a value range of 0-100 is used instead of 0-10.

and digging a little further: https://www.britishpainsociety.org/static/uploads/resources/files/Outcome_Measures_January_2019.pdf

For what it's worth, I share your clinical view that 71 vs 72 is pretty meaningless; nevertheless 0..100 does seem to be fairly established.

Tom's suggestion of using Proportion initially seemed counter-intuitive, but I can see how it does solve the problem, though I would question whether comparing different VAS scores with different ranges is going to be a real-world requirement. A lot will depend on the content of the question, not just the response.

---

## Post #16 by @heather.leslie

Thanks. Very helpful.

The archetype has constraints of 0-10 to support the Numerical Pain Rating Scale (NPRS), exactly as described in the BPS document.

The 0-100 range requirement seems to come into play with the VAS - measuring a mark made by the patient somewhere along a 10 cm scale. Interestingly, it's not described as a 100 mm scale. From memory, we did discuss this at the time of archetype review and publication, but decided that it was measuring the same thing, just in a slightly different way.
In response, we changed the data type from a DV_COUNT (which is all the NPRS requires) to DV_QUANTITY (using a Qualified real property allowing a single decimal place) so that a point recorded at 7.1 cm along the 10 cm scale could be recorded as '7.1' rather than '71'. In this way, both rating scales could utilise the same data element. We clearly haven't captured this logic adequately in the archetype.

Rather than changing anything, is it possible to cater for the ZIBS/VAS requirement in this way? It makes more sense to me to record the actual '7.1' rather than recording '71'. The level of accuracy of where a patient makes that mark on the 10 cm scale will never be significant to the millimetre, but the clinical intent is really to achieve the same outcome.

My less favoured alternative would be to extend the existing data element constraints to 100. I don't see any reason to go down the Proportion path.

While I doubt that we would ever query this directly, there may be situations where we want to graph pain levels, e.g. an NPRS scale in a patient app followed by a VAS while in hospital. Keeping constraints at 0-10 means we could graph these measurements without transformation.

My 10c worth...

BTW

[quote="ian.mcnicoll, post:15, topic:1867"]
https://zibs.nl/wiki/PainScore-v4.0(2020EN)
[/quote]

It might be worth letting the ZIBS people know that the link references a specialised archetype for pain that no longer exists :face_with_raised_eyebrow:

---

## Post #17 by @heather.leslie

Hi Colin,

This data element is intended to carry the patient's estimate of the severity of a single symptom, usually as part of ad hoc history-taking in a consultation, or perhaps in the context of a screening questionnaire for any symptoms.
I agree that many of the thousands of indexes and scores have different ways of doing things related to recording symptom severity, including text categories and specific timeframes or contexts, e.g. 'rate the severity of x during the past 3 nights'. We have example archetypes faithfully representing the exact content of many scores and scales as standalone OBSERVATIONs.

I'd suggest that the MMPI would be best represented as its own archetype, or family of archetypes, that carefully represents all of the questionnaire, including the validated value options, rather than trying to reuse this CLUSTER for that purpose.

Regards

Heather

---

## Post #18 by @joostholslag

[quote="heather.leslie, post:16, topic:1867"]
It makes more sense to me to record the actual '7.1' rather than recording '71'.
[/quote]

The challenge is that zibs are data exchange models. So every implementer would have to support processing a received zib to fit the archetype, which means every implementer would have to build the logic to transform 71 to 7.1. Currently, with 2-3 vendors, that's manageable. But I'm not sure it's desirable from a safety perspective, especially since the zib is usually already transformed by another vendor with a (non-openEHR) data model.

Unifying for graphing is interesting, but also has its risks. The data is probably not comparable enough to present as first-order data alongside the care organisation's own data.

---

## Post #19 by @joostholslag

[quote="heather.leslie, post:16, topic:1867"]
It might be worth letting the ZIBS people know that the link references a specialised archetype for pain that no longer exists
[/quote]

A lot of their CKM references are way outdated. I'm sad to think they lost interest in openEHR some time ago.
(And even sadder that they don't just use the openEHR stuff.)

---

## Post #20 by @siljelb

[quote="ian.mcnicoll, post:2, topic:1867"]
The only other change to Symptom (I have just submitted a CR) is to widen the Severity rating limit to 0..100 from 0..10 as we have come across a few places where 0 to 100 is the range used.
[/quote]

We're looking for concrete examples of where 0-100 (or other ranges) are used. Can you help us? :smile:

---

## Post #21 by @ian.mcnicoll

Hi Silje,

See this post https://discourse.openehr.org/t/revisiting-symptom-sign/1867/15?u=ian.mcnicoll for some examples.

---

## Post #22 by @siljelb

Okay, reading the British Pain Society document, I can find the numerical representation of the VAS (p 9), where the distance between the "no pain" end of the line and the patient's mark is measured in mm, leading to a total of 100 mm. Do we have any others? I see 0-20 referenced a lot regarding the NRS (although not in this document), but I haven't found any actual examples of use.

---

## Post #23 by @erik.sundvall

Regarding the combined use of [Symptom/sign](https://ckm.openehr.org/ckm/archetypes/1013.1.195) and [Symptom/sign screening questionnaire](https://ckm.openehr.org/ckm/archetypes/1013.1.4432): could it be a good idea to encourage reusing exactly the same content (the same text or coded text) in the symptom name fields when the two archetypes are used together? (See images below.) If so, an AQL query for any of those fields in an EHR would give a hit.

In a GUI/form, the good thing to do would then be to hide one of the fields from the user and automatically, "under the hood", fill it with a copy of the content from the other.

Perhaps such advice is best placed as a comment in the [Symptom/sign screening questionnaire](https://ckm.openehr.org/ckm/archetypes/1013.1.4432) archetype (by the "Symptom or sign name" field).
![image|690x342](upload://SteLS2ESyuQfU0DrUkOgeQ3ICB.png)

and

![image|690x167](upload://wm3WEHYavfPoFzEzTmWGJQSEO3r.png)

---

## Post #24 by @siljelb

[quote="erik.sundvall, post:23, topic:1867"]
could it be a good idea to encourage reusing exactly the same content (use the same text or coded text) in the symptom name fields when the two archetypes are used together?
[/quote]

Definitely. Could you add change requests about this?

---

## Post #25 by @siljelb

Would this work?

![image|690x268](upload://mTc44hWaFZtvahhvLEGj1adA5Pv.png)

(Edit: Disregard the comment, it needs an update after the addition of the 1-100 mm unit)

---

## Post #26 by @ian.mcnicoll

That's cunning!! Yup, it would work for the Zibs example. TBH I'm not all that bothered about proportion for now - if someone does come up with a 0..95.6 range then it can be added later!!

---

## Post #27 by @siljelb

[quote="ian.mcnicoll, post:26, topic:1867"]
TBH I'm not all that bothered about proportion for now - if someone does come up with a 0..95.6 range then it can be added later!!
[/quote]

The trouble is if someone tries to use the 0..100 mm Quantity, which is intended for a 100 mm visual analogue scale, for a 0..20 numerical rating scale. I've read about the 0..20 NRS in several places, but I haven't seen an actual use case presented.

---

## Post #28 by @joostholslag

The element description reads "numeric rating scale". The zib has 3 potential value sets for the equivalent element:

- 0-10 (integers, I assume) NRS
- 0-10 (cm, I assume) VAS
- 0-100 (mm, I assume) VAS

So the concepts don't match currently. I'm also unsure about the effect of having 0-10 without units and 0-100 with mm units. What are your thoughts here?

I'm leaning towards making the element unconstrained, and making specialised archetypes/templates for pain score. That would indeed break computability across observations, but I think the concept of symptom severity is currently too broad to allow for that anyway.
But maybe I'm missing the usefulness of that computability? https://discourse.openehr.org/t/revisiting-symptom-sign/1867/5?u=joostholslag

---

## Post #29 by @siljelb

[quote="joostholslag, post:28, topic:1867"]
I'm also unsure about the effect of having 0-10 without units and 0-100 with mm units. What are your thoughts here?
[/quote]

My idea was to allow the 0-10 without units for "normal" NRS recording, but on second thought I guess that would be better as a DV_COUNT. I'd be happy to make it a choice of:

* DV_COUNT 0..10 (for NRS)
* DV_QUANTITY 0.0..10.0 cm and 0..100 mm (for VAS)
* DV_PROPORTION (unconstrained) for any other NRS scales, such as 0..20 or 42..111 or whatever

---

## Post #30 by @heather.leslie

[quote="siljelb, post:29, topic:1867"]
DV_PROPORTION (unconstrained) for any other NRS scales, such as 0..20 or 42..111 or whatever.
[/quote]

I'm still not clear we have an actual use case for Proportion, or is it only theoretical? If theoretical, it could be added when a concrete use case is identified. It feels quite uncomfortable in principle.

Can DV_COUNT be left with max unconstrained?

---

## Post #31 by @siljelb

[quote="heather.leslie, post:30, topic:1867"]
I'm still not clear we have an actual use case for Proportion, or is it only theoretical? If theoretical, it could be added when a concrete use case is identified. It feels quite uncomfortable in principle.
[/quote]

To my knowledge it's theoretical. I'm happy to leave it out for now.

[quote="heather.leslie, post:30, topic:1867"]
Can DV_COUNT be left with max unconstrained?
[/quote]

Sure, but why?

---

## Post #32 by @heather.leslie

[quote="siljelb, post:31, topic:1867"]
Sure, but why?
[/quote]

We have a definite use case for an integer-based score, without units, usually 0..10. Leaving it unconstrained would enable anyone to make any integer score, and while we know that many are 0..10, a lot of this thread is about there possibly being other scores that will exceed 10.
If we leave it open, even open at both min and max, then we support maximal reuse, even (as yet imaginary) scores of -10 to +10. At the moment we have it modelled for the majority use case that we know about, which is 0..10 and is what you possibly should expect in a template.

---

## Post #33 by @siljelb

[quote="heather.leslie, post:32, topic:1867"]
We have a definite use case for an integer-based score, without units, usually 0..10.
[/quote]

Agreed.

[quote="heather.leslie, post:32, topic:1867"]
Leaving it unconstrained would enable anyone to make any integer score and while we know that many are 0..10, a lot of this thread is about there possibly being other scores that will exceed 10.

If we leave it open, even open at both min and max, then we support maximal reuse, even (as yet imaginary) scores of -10 to +10.
[/quote]

My thinking was that this would be covered by the DV_PROPORTION data type. Otherwise, you'll never know what the value you find in the DV_COUNT means. What does this recorded '9' represent? 9/10? 9/20? 9/100?

---

## Post #34 by @thomas.beale

You can have more than one DV_COUNT alternative; showing what the range was would rely on tools at runtime displaying the archetype interval (i.e. the 0..10 or whatever) on the form, or using it somehow to visualise a form control. If data are being committed via forms not driven by the archetype (i.e. the OPT) then there can be problems. Clearly, it must be the case that the data enterer knows whether '2' is in a range of '10' or '100', since it is valid in both.

---

## Post #35 by @ian.mcnicoll

[quote="thomas.beale, post:34, topic:1867"]
You can have more than one DV_COUNT alternative
[/quote]

Not in ADL 1.4, AFAIK.

---

## Post #36 by @thomas.beale

They are alternatives - [see here in the spec](https://specifications.openehr.org/releases/AM/latest/ADL1.4.html#_single_valued_attributes).

---

## Post #37 by @siljelb

This doesn't work for DV_COUNT in tools, only for DV_QUANTITY.
Edit: It doesn't work for DV_QUANTITY either. Adding units works, but it ends up as a single DV_QUANTITY with a set of units, not a single AT code with a set of DV_QUANTITYs.

---

## Post #38 by @colinbro

Thank you Heather (belatedly), I had misunderstood. So the proposal for a 10-point, single-digit scale looks good. Is that intended to replace, or would it co-exist with, the 5-point text scale suggested previously to align with the 5 SCT codes, with the intermediate values intended to be interpolated? If so, I'd suggest that "interpolation" is troublesome for ordinal values, and we should craft text statements for the intermediate ranks too.

I think there is a problem with ordinal ranks being marked by number, as if integers: they are not rational numbers, so arithmetic is not valid. For instance, a rank of 2 is not half as bad as a rank of 4, nor is 3 halfway between them. We "know" this, but the use of numbers traps people into thinking like this. Textbooks state that ordinal numbers always need statistical processing, not arithmetic. See [The Statistical Evaluation of Medical Tests for Classification and Prediction](https://www.google.co.uk/books/edition/The_Statistical_Evaluation_of_Medical_Te/UHQoAgAAQBAJ?hl=en&gbpv=1&pg=PP1&printsec=frontcover).

I have researched this cognitive risk, e.g. see [The ABC of cardinal and ordinal number representations (Trends in Cognitive Sciences)](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(07)00333-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1364661307003336%3Fshowall%3Dtrue), and real-world misuse of ordinal numbers, e.g. in [Use of the Palliative Performance Scale (PPS)](https://www.jpsmjournal.com/article/S0885-3924(08)00660-X/fulltext).

I suggest that other serial symbols, such as alphabetic ones, should be used: if the above 3 ranks were denoted as a/b/c, there would be no quantitative inference of meaning from the different symbols.
However, this does not seem compatible with the discussion at [Clinical scales - ordinal or coded text? - Clinical - openEHR](https://discourse.openehr.org/t/clinical-scales-ordinal-or-coded-text/709/32). Sorry to say I was further confused by [Text, descriptions and comments for value set items - mandatory or optional? - Clinical - openEHR](https://discourse.openehr.org/t/text-descriptions-and-comments-for-value-set-items-mandatory-or-optional/728/37).

Is it that this is not our problem, but that of all those clinical scale authors that have (mis)used numbers as ordinals, which openEHR can only seek to represent? Or should openEHR at least alert developers that ordinal numbers are trouble?

As a beginner in openEHR, please excuse me if this is off-point, but I would be grateful for your advice and corrections.

---

## Post #39 by @heather.leslie

Hi Colin,

This is a tricky space indeed. There is no absolutely right answer. In reality, we are all bumbling along as best we can in the circumstances, trying to model these concepts faithfully to the original, often well-validated, scores and scales, but most of all ensuring that each archetype is as clinically safe for implementation as possible. All contributions are welcome, especially ideas that come from a slightly different direction to provide checks and balances to our assumptions.

The notion of a generic representation of a severity scale (outside of the formal score/scale territory) is tricky. My advice when modelling severity using SNOMED as a drop-down list (not an ordinal, for reasons that you outline quite rightly) is to keep it to 3 values - mild/moderate/severe. Back in the day, I saw lists that included trivial/mild/mild-to-moderate/moderate/moderate-to-severe/severe/very severe/fatal. Yes, a fatal symptom! And with a list like that, it is absolutely not possible to get any inter-rater consistency, because everyone's definitions of, and criteria for, each term will be different.
My severe could well be someone else's mild-to-moderate. So with the KISS principle in mind, we usually strip this kind of subjective severity assessment down to the 3 values, hoping that clinicians can reasonably differentiate between them - unless there is good reason and explicit definitions to justify otherwise.

In the past, we have dabbled in interpolation for some models, but it felt quite unsafe, and we now avoid it as a CKM modelling approach, again for the reasons that you outline.

You may well be right that the authors are designing scores without a correct understanding of how they should be used in statistics. I'm certainly no expert on that, and you've clearly explored this area more than I have. However, it is a CKM Editor responsibility to represent the score/scale faithfully, according to copyright etc. If an existing, validated, frequently used score or scale represents values with a score, which are often used as part of a calculation for a total score or for graphing trends etc, it will often be modelled as a DV_ORDINAL. BUT if it is not clinically safe to use (which is why clinical informaticians should be modelling archetypes), then I'd advise a different data type be used, usually DV_CODED_TEXT, to supply the list of options alone. Ordering a value set without numeric ranking - hmmm, not sure we've seen a use case yet. So we do try to model ordinals where ordinals are appropriate - otherwise, we run the risk of inappropriate implementation. Is there a use case/archetype you think we have modelled incorrectly or inappropriately?

This thread is a natural follow-on to the content, but not aligned with the topic. Perhaps we should look at creating another thread for the purpose if this conversation continues. I don't see incompatibility with the other threads that you mention, but I'm curious to understand more of what you are thinking. Perhaps we should continue discussions in each of those respective threads if you are seeking further clarity?
Cheers

Heather

---

## Post #40 by @thomas.beale

[quote="siljelb, post:37, topic:1867"]
This doesn't work for DV_COUNT in tools, only for DV_QUANTITY...

Edit: It doesn't work for DV_QUANTITY either
[/quote]

OK, so this means the tools don't implement that part of the spec.

[quote="colinbro, post:38, topic:1867"]
I think there is a problem with ordinal ranks being marked by number, as if integers: they are not rational numbers so arithmetic is not valid. For instance, a rank of 2 is not half as bad as a rank of 4, nor is 3 halfway between them.
[/quote]

The reason they are numbers isn't based on the assumption that their values accurately reflect comparable magnitudes of real things (or maybe a natural log or other transform...), but just to treat them as being 'ordered', which is the whole point of using ordinal scales - they are a ranking mechanism for a cohort, which is then usually sliced up into a few groups for the purpose of triage. That slicing up is almost always done on the basis of the '>' operator, i.e. numeric comparison. So Apgar >= 8 = baby is fine, 7 = observe for a while, <= 6 = send to PICU.

[quote="colinbro, post:38, topic:1867"]
I suggest that other serial symbols such as alphabetic should be used: if the above 3 ranks were denoted as a/b/c there would be no quantitative inference of meaning from the different symbols.
[/quote]

I'm not sure how this would work - the Apgar numbers (0-2 and 0-10 for the total) were not invented by us; they are international definitions. The same goes for ?all scales I have ever seen, e.g. Barthel, Waterlow etc. I think it is better to educate people that ordinal = 'ordered', and is not quantitative - which is why we have another type for this. The [reference model illustrates this pretty well](https://specifications.openehr.org/releases/UML/latest/index.html#Diagrams___18_1_83e026d_1433773263789_448306_5573), in fact. If clinical modellers and developers alike understand that, I think there will be no problems.
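The triage "slicing" described in the Apgar example above amounts to ordered comparison only, never arithmetic on the ranks. A minimal sketch (thresholds taken from the post; illustrative Python, not openEHR tooling):

```python
# Ordinal values support ordering ('>' comparisons), not arithmetic.
# Sketch of the Apgar triage slicing from the post above; illustrative only.

def apgar_triage(total: int) -> str:
    """Map an Apgar total (0..10) to the triage bands named in the post."""
    if not 0 <= total <= 10:
        raise ValueError("Apgar total must be in 0..10")
    if total >= 8:
        return "baby is fine"
    if total == 7:
        return "observe for a while"
    return "send to PICU"  # total <= 6
```

Note that the only operation applied to the score is comparison, which is exactly what "ordinal = 'ordered', not quantitative" implies.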
Agree with all your other comments about ordinals only being statistically analysable etc.

---

## Post #41 by @colinbro

Heather, thanks for the full reply. Yes, it looks like it's the scale/score authors that I am criticising, such as the Borg scale.

But I am unclear about the new DV_SCALE, as it "allows us to handle real numbers rather than just integers" (https://discourse.openehr.org/t/clinical-scales-ordinal-or-coded-text/709/18). I understand that integers are already real numbers; the issue was to support fractional numbers like 2.5. But I think that both 2 and 2.5 here (in Borg) are not real numbers either: these ordinal numbers are not simply calculable, despite appearances. So should the data type be non-calculable, i.e. DV_CODED_TEXT? If the scale author is using ordinal numbers badly, should openEHR persist this error by allowing it to compute as if a real number?

---

## Post #42 by @colinbro

@thomas.beale Yes, the (mis)use of numbers for ordinal ranks is thoroughly embedded now. Thank you for posting https://specifications.openehr.org/releases/UML/latest/diagrams/diagram_Diagrams___18_1_83e026d_1433773265063_214646_8610.svg

I see that DV_ORDINAL shows value Integer, and that DV_SCALE shows value Real. These *display* the "number" from the original scale correctly, but don't they also make both of these calculable - which for ordinals they should not be?

---

## Post #43 by @ian.mcnicoll

> If the scale author is using ordinal numbers badly, should openEHR persist this error by allowing it to compute as if a real number?

Yes - we are here to build systems and applications. The most common use of ordinals is as the sub-elements of a score, calculated from these sub-scores. Is there an argument that these are pseudo-science - sure, but it is not our job (or within our gift) to correct that by imposing technical constraints. Heather summed up the realities nicely.
DV_SCALE was introduced precisely because of the Borg scale and some others - I agree it is dumb, but what do we do when we are asked to implement it? A better approach IMO is to embed informatics when these things are created or added to guidance like NICE. Stop the nonsense at source.

---

## Post #44 by @thomas.beale

[quote="colinbro, post:42, topic:1867"]
I see that DV_ORDINAL shows value Integer, and that DV_SCALE shows value Real. These *display* the "number" from the original scale correctly, but don't they also make both these calculable - which for ordinals they should not be?
[/quote]

Well they do, if you don't know what a 'scale' or 'score' or an 'ordinal' is in health / healthcare IT ;) Which of course no-one knows until someone tells them over coffee at some conference... So it's sort of arcane knowledge in our domain. I don't think we can do much about it.

BTW, we only have both DV_ORDINAL and DV_SCALE because originally we didn't realise there were scores with decimal numbers in them. Obviously the authors of such things should have been arrested for crimes against mathematics and health informatics, but since they weren't, we live with a) 'scales' as well, and b) the implication that because real numbers are used, they really could be computed with in a quantitative sense - neither of which is necessary or desirable.

[quote="ian.mcnicoll, post:43, topic:1867"]
Is there an argument that these are pseudo-science - sure but not our job (or within our gift) to correct that by imposing technical constraints.
[/quote]

Right... this is what we have to live with.

[quote="ian.mcnicoll, post:43, topic:1867"]
A better approach IMO is to embed informatics when these things are created or added to guidance like NICE. Stop the nonsense at source
[/quote]

As I said, arrests should have been made...

---

## Post #45 by @colinbro

OK, thanks for the discussion. So if we acknowledge that some of these scales are garbage, it's a GIGO issue.
The scores are valid if the scales are used as designed and validated, i.e. low-resolution scales (not more than 5 data points, so 1 digit) suitable for mental arithmetic by clinicians in a live context for safety-checking.

Is it agreed that the inadvertent transform of these ordinal symbols into computable real numbers may be harmful? I'm thinking of further "calculations" using incorrect maths, so the essentially unpredictable output is a loss of precision - as we all suspect for these scores. I will need to ponder whether a clinical risk assessment approach is feasible.

Please do move this elsewhere as you would know best.

---

## Post #46 by @thomas.beale

[quote="colinbro, post:45, topic:1867"]
Is it agreed that the inadvertent transform of these ordinal symbols into computable real numbers may be harmful?
[/quote]

It possibly could be. I think those managing operational clinical contexts need to take some responsibility for procuring or otherwise developing solutions, apps etc. that don't contain 'inadvertent' misuse of data. That would be a clinical safety issue. So vendors and devs need to know some basic health informatics, and have access to health informaticians and clinicians at various points in the development process.

---

## Post #47 by @ian.mcnicoll

Whilst I agree that there is a lot of pseudo-science involved in many of these scores and scales, and the clinical community definitely needs to impose more rigour and informatics input, I'm not sure they represent a true patient risk, unless there is an attempt to over-engineer and make assumptions that their individual or computed scores have some kind of real, independent biological meaning. They are (or so we are led to believe) useful compressions of multiple real-world inputs that guide treatment and decision-making. I don't see the use of numerics inside them as being inherently risky.

---

## Post #48 by @heather.leslie

The Editorial work and community review process has input from a clinical safety POV.
If a score or scale has been validated with numerics, we represent this as per the validated paper/evidence. We do endeavour to understand the academic intent and represent it in the archetype to support appropriate and safe implementation.

Should they be represented using the new data type, or should published ones be retrofitted? Maybe. That is a decision for Editors, as a policy, if and when the data type is made available in the tooling.

---

## Post #49 by @siljelb

8 posts were split to a new topic: [How to use the "Symptom/sign screening questionnaire" archetype](/t/how-to-use-the-symptom-sign-screening-questionnaire-archetype/1984)

---

**Canonical:** https://discourse.openehr.org/t/revisiting-symptom-sign/1867