RM Change request: Add time_asserted to EVALUATION class?

Can we get some feedback on this RM proposal currently being discussed by the SEC?

See [SPECRM-121] - openEHR JIRA

Add a time_asserted property to EVALUATION to represent the historical (or contemporary) date at which the content of the EVALUATION was asserted, e.g. a historical diagnosis.
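Purely to make the proposal concrete, here is a minimal Python-style sketch; the class is a simplified stand-in for the RM EVALUATION, and the type and optionality shown are my assumptions, not agreed spec:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Evaluation:
    """Simplified stand-in for the RM EVALUATION class; existing attributes omitted."""
    # Proposed: the historical (or contemporary) date/time at which the content
    # of the EVALUATION was asserted, e.g. the date of a historical diagnosis.
    time_asserted: Optional[datetime] = None
```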

I commented …

‘Date last updated’ is routinely added to new EVALUATION archetypes, so it will be important to get feedback from the clinical community.

One advantage is that it gives us a consistent way of displaying the most recent ‘clinical date’, which is becoming really important in shared care planning, where single Evaluations in a persistent composition may be updated by multiple users. We have such a date for OBSERVATIONs and ACTIONs but not for INSTRUCTIONs or EVALUATIONs.

1 Like

The idea of time_asserted (meaning date/time asserted) is to record when some Dx or other evaluation was originally asserted, e.g. a doctor receives a new patient who tells her that he has been asthmatic since age 16, so time_asserted would be set to whatever (approximate) date that implies. The value of this field for a newly made Dx is just today’s date/time.
This field is also equivalent to ‘Date clinically recognised’ from the Problem/Diagnosis archetype.

‘Date last updated’ (which I would have thought would be called ‘date last reviewed’) is supposed to record the most recent time the condition was reviewed, as far as I know.

So these are distinct data items, are they not?

This was discussed (I think!) when we decided to add ‘Date last updated’ routinely to Evaluations.

The assertion is that the Evaluation should exist and is correct, rather than some kind of ‘date first clinically recognised’ per se. That element exists in the Problem/Diagnosis archetype, and not in other evaluations, as it is a very specific requirement for hospital reporting. The ‘Date last updated’ is closer in meaning to the composition start_time in a persistent composition, i.e. when the record was authored/updated; it is not about any specific content of the record as such.

Perhaps ‘time_asserted’ is not the best term to use.

Review date is something else again, because e.g. an allergies list could be reviewed but nothing updated.

1 Like

Well the literal ‘date last updated’ of anything is just its Version’s AUDIT_DETAILS.time (which is a date/time). Everything has that. It sounds like this attribute is just replicating the system audit… but I am sure I am missing something here…!

Hey,

We are currently looking into registering evaluations of care plans, so a date_asserted would actually be very useful for recording when such an evaluation happened.

What would be the next step to make this happen?

A persistent composition may get a new version where only some of its contents are updated, i.e. some of the contents may have a “last updated” time older than its version time.

1 Like

Yup - we are facing this right now in the context of complex care planning, where there are a lot of persistent compositions which may be partially updated by different users, or even different applications. It can become difficult to see exactly which parts of the compositions have actually been updated, particularly as there is a growing mismatch between the templated compositions and the exact UI that handles the data.

‘Date last updated’ does work fairly well to meet the clinical need, but (a) it is not in every Evaluation archetype (easy-ish to fix), and (b) it has a completely different at-code, and therefore path, in every composition.

This makes it tricky to create UI widgets that can surface this detailed ‘provenance’ info without some sort of custom build for every composition. It is easy for Observations and Actions, as they have a natural/mandatory time that can be exposed; not so for Instructions and Evaluations.
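To make the pain point concrete, here is a rough sketch of what a generic widget ends up doing today. Only the archetype-ID scheme is real; the at-codes, paths and the item_at_path helper are invented for illustration:

```python
# Illustrative only: 'problem_diagnosis' is a real CKM archetype ID, the second ID is a
# placeholder, and the at-codes/paths are invented - the point is that each archetype's
# 'Date last updated' element has its own at-code, hence its own path.
LAST_UPDATED_PATHS = {
    "openEHR-EHR-EVALUATION.problem_diagnosis.v1":  "/data[at0NNN]/items[at0XXX]/value",
    "openEHR-EHR-EVALUATION.some_other_summary.v1": "/data[at0NNN]/items[at0YYY]/value",
    # ... one hand-maintained entry per archetype ...
}

def last_updated(evaluation_node):
    """A generic widget has to know every archetype-specific path up front.
    item_at_path() is a hypothetical helper for path-based retrieval."""
    path = LAST_UPDATED_PATHS.get(evaluation_node.archetype_node_id)
    return evaluation_node.item_at_path(path) if path else None
```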

@thomas.beale makes a good point that we could use the feeder_audit time for this purpose, and we did start exploring that as a way of powering a Better Forms widget. But it feels like an omission that neither the Instruction nor the Evaluation has a natural ‘asserted time’ at Entry level that could be used in the same way as for Observations and Actions, reserving feeder_audit for the rare occasions where we need to capture provenance at Element level.

If nothing else, the need for ‘Date last updated’ points to something that should be pushed to RM level, and I know @Seref feels that we should have something similar for INSTRUCTION.

1 Like

We need to be very careful about the requirement here. If ‘last updated’ means when last committed to the system, that’s a version control question. A new version of any content may change something small, e.g. adjust a goal in a care plan. Displaying the diff of the change is an application / visualisation question. Storing extra dates in the data is completely unnecessary and will be confusing. Let version control do its job.

If on the other hand, there are pieces of content in say a care plan that are reviewed by different people, and ‘last updated’ really means ‘last (clinically) reviewed’, this would best be done by the addition of attestations that document the event of reviewing (say) goals by Dr xyz or whatever it is. The current attestation model (where attestations are attached to Versions) should accommodate this.
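As a rough sketch of that idea, with heavily simplified stand-ins for the RM classes and an invented review scenario:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Attestation:
    """Very simplified stand-in for the RM ATTESTATION, which is attached to a Version."""
    committer: str                              # a PARTY_PROXY in the real RM
    time_committed: datetime
    reason: str                                 # e.g. a coded 'review' reason
    items: list = field(default_factory=list)   # URIs of the content that was reviewed

# Hypothetical scenario: Dr X reviews the goals section of a care plan without
# changing any data; the review event is recorded as an attestation on the
# current version rather than as extra dates inside the data.
goal_review = Attestation(
    committer="Dr X",
    time_committed=datetime.now(timezone.utc),
    reason="review",
    items=["ehr:/.../care_plan/content[... goals section ...]"],
)
```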

Putting such dates in the data and managing them will head towards reinventing the version control system, inside chunks of content, in a clumsy way.

Independently of all that, a ‘date asserted’ for Evaluation does make sense, since it’s just recording a clinically relevant time for arriving at an assessment of some kind - the equivalent of the Observation time. This could be added to the RM (I think there is already an issue for this), and Entry could also have a common date/time stamp that picks up the clinically relevant time of any Entry subtype (which @ian.mcnicoll has argued for previously).

My suggestion would be to look at the latter idea, and if it covers the need for identifying the time / person for review of segments of a care plan, then we analyse it and solve it properly. I would counsel against trying to recreate versioning features inside the data. That way be dragons…

1 Like

That is indeed, from my PoV, the requirement. It’s not about commit provenance (already handled) or about Element-level provenance; it is really about Entry-level provenance, and filling the gap in INSTRUCTION and EVALUATION (probably also GENERIC_ENTRY and ADMIN_ENTRY) of not having a natural ‘date asserted’. It is exactly the same assertion that is handled at composition level by context.start_time: “Dr X said this was true at xxx”.

I don’t like the term ‘assertion’ but struggle to find something else that captures the idea that you believe something to be true at a particular point in time.

Also, I don’t think we need to worry about total accuracy of that time (as per start_time) - the value of the data is to orient others in the future about when the statement was made or last reasserted. Indeed, if empty and not set explicitly, I would expect it to simply mirror the composition start_time.

This fixes the problem that @Silje identified: if a different Entry is updated, the Composition start_time will be updated and give the false impression that every data point in the composition now has a different ‘date asserted’.
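A minimal sketch of that fallback behaviour, under the assumption that the proposed Entry-level attribute exists; all names are illustrative, not current RM:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Entry:
    name: str
    asserted_time: Optional[datetime] = None   # proposed attribute, illustrative only

def effective_asserted_time(entry: Entry, composition_start_time: datetime) -> datetime:
    """If no asserted time was set explicitly, mirror the composition start_time."""
    return entry.asserted_time or composition_start_time

# A persistent composition gets a new version (new start_time) because one Entry
# changed; the untouched Entry keeps its older asserted time instead of appearing
# 'refreshed' along with everything else.
new_start = datetime(2024, 6, 1, 10, 0)
updated   = Entry("Care plan goals", asserted_time=new_start)
untouched = Entry("Asthma diagnosis", asserted_time=datetime(2019, 3, 12))

assert effective_asserted_time(updated, new_start) == new_start
assert effective_asserted_time(untouched, new_start) == datetime(2019, 3, 12)
```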

1 Like

‘Assertion’ is what they historically used in the Intermountain CEM model system, and although it sounds a bit technical, it’s pretty much the right term I think.

We need to keep Composition.context.start_time for the start time of the ‘situation’, which is an encounter or other business activity with / by / for the patient. So it’s not the same as the ‘time asserted’ of an assessment or other kind of Entry anyway.

The last time I analysed this, my conclusion was (from memory) that we should add an attribute called ‘effective_time’ to Entry (though ‘asserted_time’ would be a better name in my view) and use it as ‘the’ time on EVALUATIONs, INSTRUCTIONs and ADMIN_ENTRYs, and for:

  • OBSERVATION, we copy in data.origin (from the HISTORY object). NB: for labs, the time to copy is the sample-taking time, not the result time.
  • ACTION, we copy in the time attribute.

The idea is that this single attribute, ENTRY.asserted_time, becomes the one easy place to get the time at which the contents of any kind of ENTRY were true.
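From memory of that analysis, a minimal sketch of the rule; the classes are simplified stand-ins for the RM types, and asserted_time is the proposed attribute, not existing RM:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Simplified stand-ins for the RM ENTRY subtypes; only the time-related parts are shown.

@dataclass
class Observation:
    data_origin: datetime            # HISTORY.origin; for labs, the sample-taking time

@dataclass
class Action:
    time: datetime                   # ACTION.time

@dataclass
class Evaluation:
    asserted_time: Optional[datetime] = None   # proposed attribute

@dataclass
class Instruction:
    asserted_time: Optional[datetime] = None   # proposed attribute

def entry_asserted_time(entry) -> Optional[datetime]:
    """One place to get 'the' clinically relevant time of any kind of ENTRY:
    copied from data.origin for OBSERVATION, from time for ACTION, and recorded
    directly on EVALUATION / INSTRUCTION (and ADMIN_ENTRY, not shown)."""
    if isinstance(entry, Observation):
        return entry.data_origin
    if isinstance(entry, Action):
        return entry.time
    return entry.asserted_time
```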

Does that solution ring a bell?

1 Like

I might sound a little naive, but what about time_evaluated? It sounds like a simple term for the time when the Evaluation is done.

Otherwise, I’m all in favour of starting the implementation of this specific time field. I would love to help with this, but it would be nice if someone more experienced could pick this up and perhaps show me parts of the process.

Way too obvious! Yes, indeed, that would be a better match, given the class is called EVALUATION. That is, assuming the likes of @ian.mcnicoll don’t have issues with it.

‘Evaluation’ is a fraught word in informatics and, in particular, the naming of the class has confused everyone I teach.

For the past 15 years, in every course I have run, I have had to disambiguate the clinical notion of an evaluation or assessment in terms of scores or scales or findings or conclusions from a measurement… the list goes on… and explain that these are recorded in OBSERVATIONs. Head scratching ensues.

If you look at CKM, nearly all EVALUATION archetypes are actually persistent summaries or overviews, designed to be recorded once and to evolve over time. They are not considered an assessment in clinician-speak.

‘Date asserted’ is much more fit for purpose.

1 Like

That made me think… Naming things has been one of the most common and difficult problems in informatics, and we end up in countless hours of discussion trying to find good names. I believe in the end every name assigned is a compromise, and at some point all names are wrong, in the same way all models are wrong. I think the most important part is the definition of a term in a context or domain. So if we agree to call something a certain way in our context, that is a compromise, and we know that term is defined by a certain formal definition. In our domain, EVALUATION and the other entries have a good part of the specs describing/defining the ontology/taxonomy, and I think that’s the important part. I guess if we want to avoid any confusion (which is natural when learning the terminology of an unfamiliar domain) we might need to use more complex phrases and not just one-word names, though that will complicate other things, like software development (we tend to prefer short and meaningful names, which might not be perfect, for development, and longer/formal definitions as documentation).

Yikes – naming!!

I still have nightmares over the early Alkmaar ‘Observation vs. Evaluation’ discussions!!

I think we are creeping towards a better ontological view of the world, and at some point a better openEHR v2 ontology might emerge!!

Having said that, I would not rush to change the RM just for that purpose, because, as you implied, Heather, the archetypes currently in CKM are a really useful set of exemplars for ‘newbies’.

I don’t really like the phrase ‘date asserted’, as it feels a bit ‘informaticsy’, but I have never been able to find another phrase, like ‘date evaluated’, that is not overloaded with other meaning or complexity.

‘Date asserted’ gets my vote.

2 Likes

Exactly. Still makes my toes curl.

As I have gained more experience, I have increasingly valued the original ontology. I have been starting to socialise a revision of the ontology, based on the CKM archetype patterns in practice. It is very helpful for teaching people about the different types of data. Part of it is moving away from Evaluation as a term. It seems to help. I’m not suggesting we change the RM, but we have to explain and mitigate the confusion arising from the current naming.

I used to hate ‘asserted’ too. But we’re starting to add it to the odd archetype with good effect. There really is no synonym that is quite as fit for purpose. Maybe some might suggest ‘attest’ as another alternative but FWIW it’s an even worse option IMO.

3 Likes

I’d be really interested in having a look at that revised ontology.

Finding a better name for EVALUATION is worthwhile, though I think the category itself is valid as currently represented by the published CKM archetypes.

It also feels like we need to better understand and document the various sub-types of Observation - Statements, Summaries, Screenings - all valid, but perhaps needing more explanation/categorisation.

I also feel that the current OBSERVATION class is a bit over-engineered for most of these categories. Not that I’m pushing for an RM change, but perhaps we can think in terms of sensible profiling of some sub-categories of OBSERVATION that could be applied in tooling to reduce the modelling complexity.

1 Like

I agree the HISTORY structures are in many cases overkill. But they’re also extremely elegant for the cases where you do need to record time series or information about a specific interval of time. I’m seeing these cases more and more as I get more experienced in template building, and for a lot of archetypes where you wouldn’t necessarily expect them, such as screening.

1 Like

Exactly right.

Just for fun: the equivalent of ‘Evaluation’ (which, by the way, is a compromise term we arrived at after long discussions with experts like Prof Alan Rector at U Manchester back in the day) in our Graphite model is ‘Assessment’. Stan’s group originally wanted ‘Assertion’ (which is what they have in the Intermountain CEMs), but we settled on Assessment as better describing the act that generates the information. ‘Evaluation’ is pretty much an exact synonym; it’s just that these words have cultural overload in clinical medicine.

Personally, and not worrying about the name of the class ‘Evaluation’, I would go for the term ‘time_asserted’ (‘date asserted’ would normally mean just a date; in openEHR, we normally use ‘time’ to mean the date/time). This is the correct term as used in logic, ontology and law, to mention a few domains.

Anyway, if there is to be more debate, I suggest we heed @pablo’s words above :wink:

1 Like

That’s exactly right. The current OBSERVATION/HISTORY design describes a very important pattern in a very elegant way, and it will be difficult to simplify it in the RM without generating issues for more complex cases of data recording, where time series and patient state are needed.

1 Like