Equivalent to FHIR Annotation

Hi everybody,

My colleague at vitragroup raised an issue regarding the way clinicians might annotate data. Any idea how we should handle FHIR’s Annotation data type (e.g. as used in Condition.note)?
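For reference, FHIR’s Annotation carries its own author, time and (mandatory) text. A minimal Condition.note example might look like this (all values illustrative):

{
  "resourceType": "Condition",
  "note": [
    {
      "authorReference": {
        "reference": "Practitioner/123",
        "display": "Dr. Example"
      },
      "time": "2023-05-01T10:00:00+02:00",
      "text": "Symptoms have improved since the last visit."
    }
  ]
}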

I think the main issue is assigning additional information, including the provider/author at element level, to a repeating element like “comment” in the problem/diagnosis archetype (which at the moment has occurrences 0…1, though).

We might also work around this using references, but I favor an approach where an entry is really self-contained and does not rely on any outside entries for interpretation.

Hi @birger.haarbrandt it feels like the extension pattern used lately in archetypes might be a good place for an annotation extension in openEHR. I’m sure @ian.mcnicoll can comment on that.

FHIR Annotation is a priori intended to support post hoc comments on data created earlier, hence its inclusion of its own micro-audit information. The real question is what the clinical status of such annotations is. Are they:

  • unrelated to clinical care, and to do with e.g. research projects done much later?
  • modifications to clinical data not in its original location (e.g. EMR system) but on a created copy (in different format) somewhere else? Is the original EMR data still reliable and complete or not?
  • intended to impact on ongoing care of the patient? Could a modification to a Dx or Rx be included in an ‘annotation’?
  • indicators of errors in (much older) patient data (e.g. due to new science)? Ex: until the link between H. pylori and stomach ulcers was discovered in 1982, all previous Dx of causes of stomach ulcers (and Rx) were ‘wrong’.

It is not clear that any of these can be claimed in general - any of them could be the case in different circumstances. If they are used in FHIR-based systems, my guess is they are used in all kinds of undisciplined ways.

Therefore, in a data integration scenario, mechanically pasting in FHIR annotations into current clinical care data for the patient might be a completely erroneous thing to do, and could even be clinically dangerous.

The real problem is that the term ‘annotation’ has no semantics.

4 Likes

I’m inclined to agree with Thomas here. @Birger - can you give some actual examples of how this might be used?

An obvious example is where perhaps a GP comments on a lab test to let the patient know that a result is normal, but potentially the requirement there might be a lot more complex than just a little bit of text.

2 Likes

@ian.mcnicoll, @pablo, @thomas.beale thanks a lot! I will ask my colleague to provide an example. This is from a FHIR resource defined by the German government as part of ISIK. EMR vendors will have to comply with the format and we have to be able to digest it.

3 Likes

I suspect the vast majority of these are simple notes or comments made at the same time as the original entry, where we almost certainly have a comments field that would do the job as a target.

I’d be much more anxious about what to do with these post-recorded annotations. Is this worth raising in the FHIR community? Existing vendors will have exactly the same issues unless they are using a FHIR-native CDR, i.e. these annotations are very unlikely to be a good fit for their existing dataset, or will at least need some per-use-case analysis.

2 Likes

Theoretically, you could program rules to look at the timestamp on the annotation, and if it were a ‘long time’ after the original encounter or whatever, infer that it should not be stored with the original data. But this is very unreliable, dangerous stuff.
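A minimal sketch of such a rule, assuming ISO-8601 timestamps; the 30-day threshold and the function shape are invented here for illustration, not a recommendation:

from datetime import datetime, timedelta

# Hypothetical cut-off: treat an annotation made more than 30 days after
# the original event as post hoc, and route it for human review rather
# than pasting it into the original clinical data.
POST_HOC_THRESHOLD = timedelta(days=30)

def is_post_hoc(annotation_time: str | None, original_time: str) -> bool | None:
    """True if the annotation looks post hoc, False if contemporaneous,
    None if it carries no timestamp (Annotation.time is optional in FHIR)."""
    if annotation_time is None:
        return None  # nothing can be inferred from the data alone
    delta = datetime.fromisoformat(annotation_time) - datetime.fromisoformat(original_time)
    return delta > POST_HOC_THRESHOLD

Which illustrates exactly the unreliability: the threshold is arbitrary, and a missing timestamp proves nothing either way.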

This is what happens when standards are issued without properly considering semantics - it could actually worsen the already poor quality data in many main systems (EMRs etc).

That’s an interesting area to look for patterns and best practices!

One option would be to update the compo and create another version with the comment/annotation, though if the update comes after a certain threshold, it might be better to store it in another compo and create a link. But that opens a bigger philosophical question: in an update, what data should be in the same compo vs. in a different but linked compo? I mean semantically.

Taking an annotation over existing data might be just that: adding notes over existing data that are not part of it, only referencing it, like a forum conversation. The original post is not updated; it just receives responses or comments that reference it, each with a timestamp, so we can follow the conversation chronologically. Maybe this annotation thing is just that, a conversation.
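To illustrate the link option: a rough sketch of what a LINK on the annotation compo pointing back at the original compo could look like in canonical JSON, assuming the RM LINK attributes (meaning, type, target); the URI form and values are illustrative:

"links": [
  {
    "_type": "LINK",
    "meaning": { "_type": "DV_TEXT", "value": "annotation of an earlier entry" },
    "type": { "_type": "DV_TEXT", "value": "annotation" },
    "target": {
      "_type": "DV_EHR_URI",
      "value": "ehr://ehr-id/compositions/original-compo-uid"
    }
  }
]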

Another point to consider: with the current AQL spec it is difficult to get information from LINKs, so it will be hard to answer queries like “give me all the compos where … and all the annotations of each compo”, because the annotations might be LINK sources.

This is actually what I assume. This field could be a conversation, maybe in the context of a shared care record. However, this should really be something like a “journal entry”. As FHIR is mainly intended for data exchange, it can “afford” such aggregation of information at the API level and not care where it is stored. The fun begins, however, if you have to properly store the FHIR resource (sorry for preaching to the choir :smiley: ). In any case, we might want to use a link to bi-directionally reference between such a conversation and the original entry, as it might contain some relevant information.

For our scenario, we might actually create a cluster that we use in the extension slot of the problem/diagnosis archetype. Of all bad ideas, this seems like an acceptable one. It might not provide a general pattern for archetypes, but that might not be needed.
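A sketch of what an instance of such a cluster could contain; the archetype id, node ids and values are all invented here for illustration:

{
  "_type": "CLUSTER",
  "name": { "_type": "DV_TEXT", "value": "Annotation" },
  "archetype_node_id": "openEHR-EHR-CLUSTER.annotation.v0",
  "items": [
    {
      "_type": "ELEMENT",
      "name": { "_type": "DV_TEXT", "value": "Comment" },
      "archetype_node_id": "at0001",
      "value": { "_type": "DV_TEXT", "value": "Symptoms have improved since the last visit." }
    },
    {
      "_type": "ELEMENT",
      "name": { "_type": "DV_TEXT", "value": "Author" },
      "archetype_node_id": "at0002",
      "value": { "_type": "DV_TEXT", "value": "Dr. Example" }
    },
    {
      "_type": "ELEMENT",
      "name": { "_type": "DV_TEXT", "value": "Time" },
      "archetype_node_id": "at0003",
      "value": { "_type": "DV_DATE_TIME", "value": "2023-05-01T10:00:00+02:00" }
    }
  ]
}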

4 Likes

Or between researchers looking at the data, unrelated to the patient care process in their hospital… in which case it should not be added into the target shared EHR.

Yes, this is the main issue. Some application (you don’t know which) is accreting more data on top of the data sucked out of the EMR, and you don’t know the real status of that added data…

How would you know?

Still makes too many assumptions I think…!

Good discussion - nothing in Zulip on this, so I have asked the question in the Zulip UK channel, as it is highly relevant to some upcoming discussions on AllergyIntolerance profiling that I’ve been asked to help with.

Could we also consider using feeder_audit? There is a broader issue we are grappling with around date/time provenance of element-level synchronisation, e.g. via GP systems, which feels related.
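For what it’s worth, a sketch of how the original annotation payload could be carried in feeder_audit, based on the RM FEEDER_AUDIT class; the system id, provider and payload are illustrative. original_content could preserve the source FHIR fragment verbatim:

"feeder_audit": {
  "_type": "FEEDER_AUDIT",
  "originating_system_audit": {
    "_type": "FEEDER_AUDIT_DETAILS",
    "system_id": "source-emr",
    "time": { "_type": "DV_DATE_TIME", "value": "2023-05-01T10:00:00+02:00" },
    "provider": { "_type": "PARTY_IDENTIFIED", "name": "Dr. Example" }
  },
  "original_content": {
    "_type": "DV_PARSABLE",
    "formalism": "application/fhir+json",
    "value": "{ \"time\": \"2023-05-01T10:00:00+02:00\", \"text\": \"Symptoms have improved.\" }"
  }
}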

1 Like

I am always happy to resort to a local cluster in an emergency, but if you go down that road you are pretty well committing yourself to adding it to every single entry archetype in a template where there is a possibility of encountering an Annotation.

1 Like

My colleagues were clearly against using Feeder Audit, as we should not “hide” any clinically relevant information there (yes, we don’t know whether it is relevant, but anyway…). Thanks for asking in Zulip. I actually got one example, which I consider not too helpful:

{
  "resourceType": "Procedure",
  "id": "HCBS",
  "text": {
    "status": "generated",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\">\n\t\t\t<p>\n\t\t\t\t<b> Personal care services provided at person's home</b>\n\t\t\t</p>\n\t\t\t<p>\n\t\t\t\t<b> Based On</b> : Peter's Long Term Services and Supports (LTSS) care plan</p>\n\t\t\t<p>\n\t\t\t\t<b> Status</b> : completed</p>\n\t\t\t<p>\n\t\t\t\t<b> Beneficiary</b> : Peter James</p>\n\t\t\t<p>\n\t\t\t\t<b> Service Name/Code</b> : Personal care services <span> (Details : {HCPCS code 'T1019' = 'Personal care services, per 15 minutes'})</span>\n\t\t\t</p>\n\t\t\t<p>\n\t\t\t\t<b> Service Date</b> : Apr 5, 2018</p>\n\t\t\t<p>\n\t\t\t\t<b> Service Provider</b> : Adam Careful</p>\n\t\t\t<p>\n\t\t\t\t<b> Service Delivery Address</b> : Peter's home</p>\n\t\t\t<p>\n\t\t\t\t<b> Service Comment</b> : Assisted with bathing and dressing, doing laundry, and meal preparation</p>\n\t\t</div>"
  },
  "basedOn": [
    {
      "display": "Peter's Long Term Service and Supports (LTSS) Care Plan"
    }
  ],
  "status": "completed",
  "code": {
    "coding": [
      {
        "system": "https://www.cms.gov/Medicare/Coding/HCPCSReleaseCodeSets/Alpha-Numeric-HCPCS.html",
        "code": "T1019",
        "display": "Personal care services, per 15 minutes, not for an inpatient or resident of a hospital, nursing facility, icf/mr or imd, part of the individualized plan of treatment."
      }
    ],
    "text": "Personal care services"
  },
  "subject": {
    "reference": "Patient/example",
    "display": "Peter James"
  },
  "performedDateTime": "2018-04-05",
  "performer": [
    {
      "actor": {
        "reference": "Practitioner/example",
        "display": "Adam Careful"
      }
    }
  ],
  "location": {
    "reference": "Location/ph",
    "display": "Peter's Home"
  },
  "note": [
    {
      "text": "Assisted with bathing and dressing, doing laundry, and meal preparation"
    }
  ]
}

Yes, this is why I was wondering if this could be something we could add to the Reference Model. However, @thomas.beale had some good arguments why this might not be a good idea.

Hence, I’m not sure how to resolve this.

I think the #1 priority is to be able to find out whether the annotations are from the original source system, i.e. part of the EMR’s patient record, or were attached in some other system or integration step. If that cannot even be known from the data, then this feature of FHIR (and any similar feature, where data are pasted in on top of original system data in a way that cannot be detected) is dangerous.

If it can be determined, either from the data or from some B2B org-level agreements, that information should be used to decide whether to include the data in the target EHR or not.

1 Like

I’m not sure there’s actually a satisfying solution (in the broad sense, within openEHR’s general principles) to this, @birger.haarbrandt. As I said to @ian.mcnicoll in a chat, sometimes we’re going to hit the limits of non-standards-based software and real-life processes, and I think the origin of the FHIR annotations is those cases, which we may not be able to support.

I think a major source of these annotations is non-standards-based clinical systems allowing users to annotate data on the screen just like they could with a paper-based process. They’re forced to allow this because requests from users arrive ad hoc and for different parts of the system, which, at HIS/Cerner/Epic scale, is a massive surface area. I know this because I’ve been there on the non-standard HIS vendor side of the scenario more than once.

So these source systems, with their non-standard internal representations, deal with it any way they want, and then it ends up on the wire as an annotation in a FHIR message. They cannot stop users from misusing the annotations, of course, and that is a choice between data quality, computability and running an actual commercial offering.

We, on the other hand, with model-driven platforms, have to consider how to support this in the generalised computing framework we have, and sometimes, in cases like this, we may just have to live with the fact that there is no ideal solution, as this thread kinda indicates. In our case things are even more difficult because our ideal approach has a clinical engagement step where models for the specific case must be updated for annotation capability, which means a new archetype/template version and all the work that comes with it, all the way to actual system updates.

Ian favours tackling the problem, but I’m not sure it is possible to find a solution that ticks the boxes Tom listed, because we have no control over the source of the annotations or whatever data quality/patient safety principle is potentially violated. Some vendor is making that decision, and semantically speaking I’m not sure we can catch all the balls they can throw. This is pretty much the Z segments of HL7 V2 messages all over again, the way I see it.

The pragmatist in me says there’s no point trying to tame this. I’d suggest revisiting the tagging/metadata-related discussions we’ve had for the RM. Yes, there’s potential for abuse/misuse, but we can work on stopping that on the openEHR side, and FHIR-based data projected to openEHR would not have to deal with this at archetype/modelling level, which is really very difficult once you have clinical systems running in the field.

So if FHIR data with an annotation arrives, we put it wherever feels most sensible in that particular case of openEHR <-> FHIR bi-directional mapping, and accept the fact that some semantic black hole was put into the data by its producers. Maybe we could come up with documented mappings for the openEHR representation and salvage some safety/semantics, but if that requires revisiting the target template itself, rather than most RM types supporting metadata/tagging, it’ll be too long a round trip for clinical systems delivery/management.

Others may be more optimistic (and idealistic) compared to me but I’m willing to accept there’ll be some compromise every now and then.

2 Likes

At a minimum, there is a timestamp and an author for an annotation, which should make it doable to decide whether the annotation was part of the original version. I think there is also some FHIR metadata providing an audit trail of changes, but I would have to take a second look to see whether it covers all needs.

1 Like

Timestamp and author are optional AFAIK, so if unpopulated, we can assume they are part of the original record (or an update of that).

I sympathise with Seref’s view also. We constantly have to juggle this kind of situation in integrated environments, so if there were a generic solution to capturing this info, where deemed low risk, it would make life a lot easier. Then it is up to people like us to argue about specific issues that need to be handled specifically, because of critical use-cases.

And as I said earlier, there are other similar challenges about granular data coming from other systems.

3 Likes

I am also more inclined to look at that approach. I recognise we can’t save the world from itself, but we should not promulgate errors/bad data either. The openEHR system will get blamed for the junk in it, not the sender of the letter bomb or the dodgy postal service that brought it…

Also yes…

It’s worth remembering that this gets real when the first clinical error occurs because of poor integration. In HighMed’s case, it’s currently not point of care; it’s analytics and research. Who knows what analytics will silently go wrong with wrong underlying data? And what happens if anyone decides to start using the openEHR shared EHR for clinical care, e.g. patient monitoring, clinical trial management etc.?

I don’t disagree with these concerns. My point is, we should manage the risk for these scenarios while considering the trade-offs of adoption and delivering actual solutions. I think the options I listed, and which you sound positive about, are one way of achieving and managing that balance. I prefer them to our regular modelling approach, especially when openEHR is the upstream data source, not the downstream receiver.
There is not much I can say beyond this without repeating myself :slight_smile:

1 Like