NEWS2 is made up of a series of categories assigned from real-life, raw data; there is no 1:1 duplication of vital-sign measurements. Perhaps you're thinking of another score?
In any case, if we follow the idea of creating reusable CLUSTERs for every data element that might be represented in multiple archetypes, then we also have to ask where we stop. A CKM full of ELEMENT archetypes that can be configured in unlimited permutations and combinations? For example, we haven't modelled a generic, reusable CLUSTER/SLOT pair for every instance of 'Description' or 'Comment', despite the fact that these data elements exist in almost every archetype. We need to avoid reuse for reuse's sake and instead opt for utility, clarity and simplicity of data representation. Filling SLOTs to include a description or a comment in every archetype adds zero value, only modelling and governance overhead. We need the capacity for a free-text description and a comment in most archetypes - but I can assure you that there is also zero clinical value in querying 'all descriptions' or 'all comments'.
In addition, if we start to create a multitude of single-data-element archetypes, we quickly end up in governance and querying hell, where each measurement may have been captured in a variety of ENTRY contexts - with the potential for weights to be recorded within non-OBSERVATION contexts.
Other modelling paradigms have previously gone down this ELEMENT modelling road to maximise the flexibility of reuse, but have not proved scalable or governable outside the original system silo - we definitely don't want to replicate this mistake. My discussions with them at the time were about the governance nightmare they had with their artefacts and their difficulty in training modellers on appropriate use and, more importantly, on the wrong ways to use an artefact. They agreed that openEHR ENTRY/CLUSTER groupings seemed to provide a sweet spot between reusability, flexibility and governance.
With that background in mind: BMI is often calculated in isolation from the act of recording an actual weight measurement, so the system will need to reference a pre-existing measurement as part of its calculation and presumably record some kind of traceability to the source. If the BMI/score/scale is calculated at the time body weight is recorded, I think the best utility comes from recording the measurement using the ubiquitous OBSERVATION.body_weight, so it can be queried as part of 'all body weights' (including state and event/math contexts), and then applying a business/technical/system solution for including the weight value within other archetype/template contexts - whether as a 'citation', a 'reference', a copy, or whatever technical magic you want to use to access the original, single-source OBSERVATION. This should not be a design issue within a coherent data ecosystem, but one solved by business logic or technical/engineering artistry.
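As a rough illustration of that "reference the source measurement" pattern, here is a minimal sketch in Python. The class and field names are entirely hypothetical (they are not openEHR reference-model types): the point is simply that the derived BMI result carries an identifier pointing back to the single source weight record, rather than duplicating the weight value.

```python
from dataclasses import dataclass

# Hypothetical, minimal sketch - the types and field names are illustrative,
# not the openEHR reference model.

@dataclass
class WeightMeasurement:
    id: str            # unique id of the source body-weight OBSERVATION
    weight_kg: float

@dataclass
class BmiResult:
    bmi: float
    source_weight_id: str  # traceability back to the single source measurement

def calculate_bmi(weight: WeightMeasurement, height_m: float) -> BmiResult:
    """BMI = weight (kg) / height (m) squared, linked to its source weight."""
    return BmiResult(
        bmi=round(weight.weight_kg / (height_m ** 2), 1),
        source_weight_id=weight.id,
    )

# Example: 80 kg at 1.80 m, traceable to the referenced weight record
w = WeightMeasurement(id="obs-weight-001", weight_kg=80.0)
result = calculate_bmi(w, 1.80)
print(result.bmi, result.source_weight_id)
```

Whether the link is implemented as a citation, a reference, or a copy with provenance metadata is exactly the kind of engineering decision this leaves to the system, as argued above.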
While not all heart rates are equal, given the dynamic (minute-to-minute) changes that come with exercise or fear etc., I think the opposite almost universally applies to body weight. I'm firmly in the camp that most weights should be recorded and queried in a simple, ubiquitous way. But I'll also acknowledge that recording weight measurements during episodes (weeks/months/years) of pregnancy or heart failure may clearly add some complexity, and I'd suggest that these scenarios need to be dealt with slightly differently - and definitely not solved by a CLUSTERed representation of body weight.