Data can flow just fine between different openEHR implementations already, e.g. via the openEHR REST APIs. Those APIs also allow frontend and backend to come from different implementations.
But for backend products/deployments, the major trend at present seems to be openEHR monoliths.
Combining backend modules from different openEHR implementations would be nice for healthcare providers (if implemented well), and would also ease the pressure system vendors feel to implement everything in the specification themselves.
Today, standardising some Kafka event definitions and agreeing on naming conventions for topics/event-streams would perhaps be an interesting example of an ITS helping multi-vendor backend modularisation.
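To make that concrete, here is a minimal sketch (Java) of what such a topic naming convention could look like. The "openehr.&lt;tenant&gt;.&lt;resource&gt;.&lt;action&gt;" pattern and all names are just assumptions for discussion, not an agreed ITS:

```java
// Hypothetical topic naming convention for openEHR events; the pattern
// "openehr.<tenant>.<resource>.<action>" is an assumption for discussion.
public final class OpenEhrTopics {
    private OpenEhrTopics() {}

    // e.g. topicFor("region-a", "contribution", "committed")
    //      -> "openehr.region-a.contribution.committed"
    public static String topicFor(String tenant, String resource, String action) {
        return String.join(".", "openehr", tenant, resource, action);
    }
}
```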
The "contribution builder" mentioned in the 2013 paper would today likely have a GraphQL interface; see the related sidetrack in the thread named AQL through GraphQL (and it could of course also produce Kafka events upon contribution completion/save…)
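As a purely hypothetical illustration of such an interface, a client could commit via a GraphQL mutation over plain HTTP. The endpoint path and the commitContribution mutation below are invented for this sketch, not taken from any existing implementation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ContributionBuilderClient {
    public static void main(String[] args) throws Exception {
        // Invented GraphQL mutation; a real schema would take the built
        // contribution (compositions etc.) as input and return its uid.
        String body = """
            {"query": "mutation { commitContribution { uid } }"}
            """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://cdr.example.org/graphql")) // assumed endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```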
You were a bit ahead of the curve there! A few things I would alter, but it is very close to the architecture you can see in e.g. the CatSalut RFI docs. It's just a matter of time and effort to get all this into the REST APIs.
Many things have happened since 2013, so I'm not sure "all" things would be relevant to standardise today.
Another possibly interesting part of the paper is the set of things we concluded were not suitable to solve with REST, where we instead suggested a message bus or similar approach. See the heading "triggers" at the end of the "Implementation" part of the paper:
"messages from the 'Contribution Trigger Handler' only identify the patient, the committing user, the changed resources, and the archetypes and templates that were involved. Trigger listeners can then fetch more detailed information by using normal HTTP calls"
Today, Kafka and other event-based systems could be interesting targets of event-centered ITS standardisation for these kinds of events.
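Here is a sketch of what such an event could carry, mirroring the identifier-only fields quoted above, plus its publication to Kafka. The record, field and topic names are assumptions for discussion, not a standardised ITS:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.List;
import java.util.Properties;

public class TriggerPublisher {
    // Only identifiers, as in the quoted paper; listeners fetch details
    // afterwards via normal HTTP calls. All names are assumptions.
    record ContributionCommitted(
            String ehrId,               // the patient (EHR) concerned
            String committerId,         // the committing user
            List<String> versionUids,   // the changed resources
            List<String> archetypeIds,  // archetypes involved
            List<String> templateIds) {}

    // eventJson: a serialised ContributionCommitted instance.
    public static void publish(String eventJson, String ehrId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed by EHR id so all events for one patient stay ordered in a partition.
            producer.send(new ProducerRecord<>("openehr.contribution.committed", ehrId, eventJson));
        }
    }
}
```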
The other possible trigger source described is the "contribution builder", which represents state in the entry (form) GUIs. If only one user at a time is updating the same form, then this is likely a client-side component, but some events could be interesting to also send to server-side listeners. You'll find code with similar functions in today's "form renderer" components in different openEHR implementations.
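A tiny, purely illustrative interface for such server-side listeners; the event granularity and names are guesses about what a contribution builder might usefully emit, not anything taken from an existing form renderer:

```java
// Illustrative only: events a contribution builder might emit to interested
// server-side listeners while a user fills in a form.
public interface FormSessionListener {
    void formOpened(String sessionId, String templateId);
    void fieldChanged(String sessionId, String aqlPath, String newValue);
    void contributionSaved(String sessionId, String contributionUid);
}
```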
Triggers and triggering/selection criteria of different kinds have been implemented and documented by openEHR system providers, but not standardised. I believe Better, for example, has implemented the possibility to even run stored AQL queries on COMPOSITION commit data. Perhaps Better (@matijap, @borut.fabjan…), Cambio (@rong.chen, @stoffe…), Ocean (@Seref, @sebastian.garde…), Code24 (@sebastian.iancu…) or other implementors can provide some examples of trigger handling?
Both pre- and post-commit triggers could be possible, but a practical issue is how much you are willing to delay a commit with pre-commit trigger handling.
Parsing of (e.g. COMPOSITION) content likely adds no big delay, since parsing has to be done anyway to validate content pre-commit. Waiting for potential commit "protests" from trigger listeners in a distributed service before finalizing the commit, however, could turn into a performance nightmare. You would not want too many pre-commit triggers in a live EHR CDR system (OLTP); it might be more acceptable in some data warehouse applications, e.g. when importing data.
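To illustrate why, here is a sketch of a commit path that must wait for every remote listener's verdict, or give up at a deadline. The 200 ms budget and all names are illustrative assumptions:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class PreCommitGate {
    // The commit can only proceed once every distributed listener answers,
    // or the latency budget runs out: exactly the delay worry above.
    public boolean allowCommit(List<CompletableFuture<Boolean>> listenerVerdicts) {
        try {
            CompletableFuture.allOf(listenerVerdicts.toArray(CompletableFuture[]::new))
                    .get(200, TimeUnit.MILLISECONDS); // assumed latency budget
            return listenerVerdicts.stream().allMatch(CompletableFuture::join);
        } catch (Exception timeoutOrFailure) {
            // Policy question: reject, accept anyway, or retry when listeners are slow?
            return false;
        }
    }
}
```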
I'd guess many would prefer any pre-commit triggers to run in the same process/node where the content validation takes place. Standardisation of that would likely concern how to plug algorithms into the node somehow.
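One hypothetical way to do that in-process plugging, using Java's standard java.util.ServiceLoader SPI mechanism for discovery; the trigger interface itself is an assumption, not an existing specification:

```java
import java.util.ServiceLoader;

public interface PreCommitTrigger {
    // Runs in the same node as content validation; return false to veto the commit.
    boolean beforeCommit(String ehrId, String serializedComposition);

    // Discover and run all plugged-in triggers via the standard ServiceLoader SPI.
    static boolean runAll(String ehrId, String composition) {
        for (PreCommitTrigger trigger : ServiceLoader.load(PreCommitTrigger.class)) {
            if (!trigger.beforeCommit(ehrId, composition)) return false;
        }
        return true;
    }
}
```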
Thus, standardisation of post-commit triggers is likely the most interesting place to start for message/event-based standardisation of distributed multi-vendor systems.
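The consumer side of the earlier publishing sketch could then look like this: a separate vendor's module subscribing to the lightweight commit events and fetching detail with normal HTTP calls. Topic name and event shape follow the assumptions above:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PostCommitListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "analytics-module"); // one consumer group per backend module
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("openehr.contribution.committed"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Parse the identifiers out of record.value(), then fetch
                    // detail with normal HTTP calls, e.g. the openEHR REST API's
                    // GET .../ehr/{ehr_id}/composition/{uid}.
                    System.out.println("commit event: " + record.value());
                }
            }
        }
    }
}
```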
I agree with @rong.chen that looking at CDS hooks is interesting. It could cater for several use cases.
Kafka would be a more abstract/generalized way to handle events, independent of the underlying technology or domain. My impression is that CDS hooks are closer to our needs and domain, and perhaps a first step toward specifications for an event-driven system (and perhaps Kafka would be a technology layer to implement that).
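For comparison, the CDS Hooks spec has services respond with "cards" (where summary, indicator and source.label are required card fields). Here is a sketch of building a minimal card response; triggering it from an openEHR commit event would be our own assumption, not something the spec defines:

```java
public class CdsCards {
    // Minimal CDS Hooks card response; "summary", "indicator" and
    // "source.label" are required card fields per the CDS Hooks spec.
    public static String infoCard(String summary, String sourceLabel) {
        return """
            {"cards": [{
              "summary": "%s",
              "indicator": "info",
              "source": {"label": "%s"}
            }]}
            """.formatted(summary, sourceLabel);
    }
}
```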
Well, CDS-hooks is a concept specific to CDS; the general concept is an ECA (event-condition-action) rule engine that implements pre-commit, post-commit and possibly other events, as @erik.sundvall said. CDS is just one possible consumer of a specific type of ECA combination.
But of course we should implement CDS-hooks, which will just be specific types of rules/conditions generating CDS-flavoured notifications.
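A sketch of that general ECA shape, with CDS-hooks as just one kind of action; all names here are illustrative:

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// General ECA (event-condition-action) rule shape; a CDS-hooks integration
// is then just one action kind that emits CDS-flavoured cards.
record EcaRule<E>(String eventType,       // e.g. "post-commit"
                  Predicate<E> condition, // e.g. "template X was involved"
                  Consumer<E> action) {   // e.g. publish a CDS card

    void fire(E event) {
        if (condition.test(event)) {
            action.accept(event);
        }
    }
}
```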