PATCH for persistent compositions

Hi all,
I was wondering if anyone has ever looked into the capability to PATCH persistent compositions? From a frontend perspective, there are times when pulling the latest composition, editing it and PUTting it back is quite tricky. I imagine there are lots of reasons why PATCH isn’t a good idea, but I thought I would ask whether it has ever been reviewed. Or at least the idea of a PATCH-like function.


Hi Ian,

At vitagroup, we are also coming across some use cases where PATCH would be favorable (including mapping FHIR PATCH onto openEHR composition updates). We have not done a deep analysis of the implications, but we would definitely be very interested to explore this.

Thanks Birger, good to hear that you have come across a need for this as well. The implications probably need a deep analysis as you say. Would this be something that could be progressed via SEC or best discussed outside? Happy to collaborate, especially around defining example use cases.

Hi @Ian_Bennett, at CaboLabs we have that on our radar (actually on our backlog), since we need to optimize the PUT requests that alter just a couple of element values in a composition. At the moment, per the spec, we actually need to commit the whole composition for very small changes.

I won’t say PATCH isn’t a good idea; it serves its purpose and has a very specific use case. Especially in openEHR, where compositions tend to be big and sometimes we just need to update a couple of data points, that’s a perfect fit for PATCH.

Technically it’s easy to implement, though as a spec that the SEC agrees on, I think we need to standardize how the modified data is committed. For instance, the FLAT format (once fully specified) would be a good solution.
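To make the idea of committing only the modified data concrete, here is a minimal sketch assuming a hypothetical flat-format payload. The path keys and the `apply_flat_patch` helper are illustrative only, not part of any openEHR specification or the simplified data formats:

```python
def apply_flat_patch(stored: dict, patch: dict) -> dict:
    """Merge a partial flat-format update into a stored flat composition.

    Only the paths present in `patch` change; everything else is carried
    over from the stored version, which is what a PATCH would avoid sending.
    """
    updated = dict(stored)
    updated.update(patch)
    return updated


# Hypothetical flat composition held by the server (illustrative paths).
stored = {
    "vitals/blood_pressure/systolic|magnitude": 120,
    "vitals/blood_pressure/diastolic|magnitude": 80,
}

# The client commits only the data point that changed.
patch = {"vitals/blood_pressure/systolic|magnitude": 135}

updated = apply_flat_patch(stored, patch)
```

In this sketch the server would then validate `updated` against the template and store it as a new composition version, so versioning semantics are unchanged; only the wire payload shrinks.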


Thanks Pablo, good to know you have it on your backlog. It would make things a lot simpler for the use case you referenced. When updating items with multiplicity, I think you would still have to bring back the previous information if you want to add to it.

I agree this is a useful idea, but I wonder if it might be better to focus on orchestrating the query/GET/PUT process for now. I know a PATCH as a diff is, in theory, technically easy to implement, but I would worry about the risks. Maybe we should stay at patching at ENTRY level, rather than more granularly?

We can discuss case by case. I just mentioned one case that would be updating the value at an existing data path (a path in the composition instance, not in the archetype).

Also using the data path you can add or remove items from a multiple attribute.

What we need to agree on is:

  1. which update types are allowed
  2. which data elements and metadata needs to be committed to allow those changes
  3. which rules, including defaults, should be met by those data elements and metadata to allow those changes
  4. which format is used to actually commit the data

My opinion is to initially allow any change: a single data field, adding/removing items from collections, and whole substructures (for instance a whole cluster or a whole entry, etc.).

For the data elements and metadata, I think the data path and the value would be enough for the change itself, and of course the composition UID. Though we might need the template ID and/or archetype ID for data validation (even the committed subset should comply with the template). Though I would need to test if that’s enough.

For the rules, besides requiring that the composition, template, etc. exist: if the data path exists, it means the user wants to update the value or structure; if it doesn’t exist, it means the user wants to add. But there is no way to say the user wants to remove some part, so maybe the metadata also needs the intent or operation (note I’m just thinking out loud).

Then the format would be whatever fits the serialization of the data and metadata. Of course we might want to make that as similar as possible to something we already have, though IMHO this is something we might not have yet, so we might end up with a new format.
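One possible shape for such a commit, combining the points above (data path and value, composition UID and template ID for validation, and an explicit operation since the absence of a path can’t express removal), could be sketched like this. All field names (`composition_uid`, `template_id`, `operations`, `op`) are assumptions for illustration, not an agreed format:

```python
# Metadata every patch commit would carry, plus the per-change operations.
REQUIRED_METADATA = {"composition_uid", "template_id", "operations"}
ALLOWED_OPS = {"replace", "add", "remove"}


def validate_patch(patch: dict) -> list:
    """Return a list of structural problems; empty means well-formed."""
    errors = []
    missing = REQUIRED_METADATA - patch.keys()
    if missing:
        errors.append(f"missing metadata: {sorted(missing)}")
    for op in patch.get("operations", []):
        if op.get("op") not in ALLOWED_OPS:
            errors.append(f"unknown op: {op.get('op')}")
        if "path" not in op:
            errors.append("operation without a data path")
        # "remove" needs no value; "replace" and "add" do.
        if op.get("op") in {"replace", "add"} and "value" not in op:
            errors.append(f"{op.get('op')} without a value")
    return errors


# Hypothetical commit: one value replaced, one substructure removed.
patch = {
    "composition_uid": "d5a1f9c0-1111-2222-3333-444455556666::node1::2",
    "template_id": "vital_signs.v1",
    "operations": [
        {"op": "replace",
         "path": "/content[0]/data/events[0]/data/items[0]/value",
         "value": 135},
        {"op": "remove", "path": "/content[1]"},
    ],
}
```

Template-level validation (does the committed subset still comply with the template?) is deliberately out of scope here; this only checks that the commit itself is structurally complete.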

I’m not sure of anything here, this is just a brain dump of some ideas I’ve been thinking to improve our implementation in the Atomik Server.


That’s right - the thing that would have to be worked out is the diff representation and related diff and patch operations.

We can imagine that an application has the older copy of some Composition C1, and creates (by whatever means) a modified version C2. If a diff operation diffRm(rmObj1, rmObj2): RmDiff can be defined, then the result of the operation diffRm (C1, C2) (call it diff1) is what is sent as a PATCH.

At the server, a matching operation patchRm (rmObj, rmObjDiff) has to be executed to generate C2 from C1, i.e. C2 = patchRm (C1, diff1).

If these operations can be defined then this scheme can be made to work. The general way to represent a patch is as a list of transactions that progressively change the original object into the target version. The transactions are of the form {path, op, args}, and take the concrete forms:

  • path, delete
  • path, replace, newObj
  • path, add, newObj

When such a list is processed on the server, since we usually want the most recent version of anything rather than prior versions, it makes sense to create C2 from C1 and store that, and maybe even generate the reverse diff and replace C1 with it, if you care a lot about space and are implementing a full differential versioning scheme.
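The patch side of this scheme can be sketched in a few lines, assuming plain nested dicts/lists stand in for RM objects and tuples of keys/indices stand in for openEHR data paths (both are simplifications; `patch_rm` here is illustrative, not a defined operation):

```python
import copy


def patch_rm(obj, transactions):
    """Apply a list of {path, op, value} transactions, producing C2 from C1."""
    result = copy.deepcopy(obj)  # keep C1 intact; reverse-diff schemes need it
    for t in transactions:
        *parents, last = t["path"]
        target = result
        for key in parents:
            target = target[key]
        if t["op"] == "delete":
            del target[last]
        elif t["op"] == "replace":
            target[last] = t["value"]
        elif t["op"] == "add":
            # for list (multiple) attributes, "add" inserts at the path index
            if isinstance(target, list):
                target.insert(last, t["value"])
            else:
                target[last] = t["value"]
        else:
            raise ValueError(f"unknown op: {t['op']}")
    return result


# C1 -> C2 via a two-transaction diff: replace one value, add one item.
c1 = {"content": [{"name": "body weight", "magnitude": 70.0}]}
diff1 = [
    {"path": ("content", 0, "magnitude"), "op": "replace", "value": 72.5},
    {"path": ("content", 1), "op": "add",
     "value": {"name": "height", "magnitude": 178.0}},
]
c2 = patch_rm(c1, diff1)
```

The matching `diffRm` is the harder half: it has to produce such a transaction list from two RM object trees, which means deciding object identity (e.g. by node UID) so that a moved item isn’t reported as a delete plus an add.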

Hi

I would be interested to know in which situations this is needed. Especially from a medicolegal point of view.

In our specific case, we need to update the status of a record, and currently we need to send the complete record just to update the status. Technically we would prefer only to send the updated status and not the whole record.

IMO the medico-legal aspect can’t be just about the API; it’s about the whole client-server interaction (a platform-related aspect), including user and application permissions to make such a transaction to change a record. I think what’s discussed here is about adding a new capability to the API, not about the whole platform. Maybe others have a different point of view.

Our use cases have been:

  1. Updating a central list of information from a care plan or assessment - i.e. automatically creating alerts from information you capture about communication needs, reasonable adjustments, medical diagnoses, etc., not directly in the alerts template. We have managed to meet this requirement, but it is complex.
  2. We are exploring assessments (encounter-based) alongside care plans (persistent). When submitting an assessment where there are shared archetypes, the clinicians are asking for relevant sections of care plan to be updated without having to reauthorise manually.

The second aspect has more medico-legal considerations than the first use case, but current use of persistent compositions raises challenges in the real world. Principally, you have to fill in and submit all template fields, otherwise you null the information, effectively removing it from view when you call the latest composition. My experience is that templates obviously constrain archetypes, but frontend systems will at times also constrain templates, which becomes more challenging to manage with persistent compositions than with non-persistent compositions.

Thanks,
Ian


Hi Ian

Very helpful use cases. And we do have to be very careful about data quality and coherence.

I still think the sweet spot here is patching at ENTRY level. The ENTRY was always seen as the minimum semantic statement that could stand alone.

The prime example, as you say, is a list-style composition… allergies, alerts, meds, where a single entry needs to be updated or added.

If we start there, we might be able to make more rapid progress, especially if the primary ID for an existing entry is a UID.

I’d also leave the question of persisting diffs to CDR implementations for now.

One lesson learned from UCP London is to keep the scope of each persistent composition as tight as possible. E.g. we mixed prognosis with end-of-life care, when in retrospect they really should be split.
