Modularisation vs openEHR monoliths. What would be interesting today?

Data can flow just fine between different openEHR implementations already, e.g. via the openEHR REST APIs. Those APIs also allow frontend and backend to come from different implementations.

But for backend products/deployments the major trend at present seems to be openEHR monoliths.
Combining backend modules from different openEHR implementations would be nice for healthcare providers (if implemented well) and would also ease the pressure on system vendors to implement everything in the specification themselves.

I hinted at ways of modularising openEHR platforms (using REST) in a scientific paper in 2013, see e.g. figure 3. Only parts of that have so far been deemed of interest to standardise via REST.

Today, standardising some Kafka event definitions and agreeing on naming conventions for topics/event streams would perhaps be an interesting example of an ITS that helps multi-vendor backend modularisation.
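
As a purely illustrative sketch (none of these names are part of any openEHR specification), such a naming convention could look something like:

```typescript
// Hypothetical topic naming convention: <namespace>.<resource>.<event>
// (illustrative only, not part of any openEHR specification)
const TOPIC_EHR_CREATED            = "openehr.ehr.created";
const TOPIC_COMPOSITION_COMMITTED  = "openehr.composition.committed";
const TOPIC_CONTRIBUTION_COMMITTED = "openehr.contribution.committed";
// Event payloads could be kept small (identifiers only), so that consumers
// fetch full content via the openEHR REST API when they need it.
```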

The "contribution builder" mentioned in the 2013 paper would today likely have a GraphQL interface, see the related sidetrack in the thread named AQL through GraphQL (and it could of course also produce Kafka events upon contribution completion/save…)

Regarding frontend, a partial standardisation/modularisation is discussed in: Standardised API for custom GUI-widgets for openEHR-based form editors & renderers?

4 Likes

I would love to see this happen! Both using something like Kafka, and making some progress on notifications.

2 Likes

You were a bit ahead of the curve there! A few things I would alter, but it is very close to the architecture you can see in e.g. the CatSalut RFI docs. It's just a matter of time and effort to get all this into the REST APIs.

1 Like

Many things have happened since 2013, so I'm not sure "all" things would be relevant to standardise today.

Another possibly interesting part of the paper is what we concluded was not suitable to solve with REST, for which we instead suggested a message bus or similar approach; see the header "Triggers" at the end of the "Implementation" section of the paper:

"messages from the 'Contribution Trigger Handler' only identify the patient, the committing user, the changed resources, and the archetypes and templates that were involved. Trigger listeners can then fetch more detailed information by using normal HTTP calls"

Today, Kafka and other event-based systems could be interesting targets of event-centered ITS standardisation for this kind of event.
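
To make that concrete, here is a sketch of such a post-commit trigger event, modelled directly on the fields listed in the quote above (the names and shape are my assumptions, not a standardised schema):

```typescript
// Hypothetical post-commit trigger event carrying only identifiers,
// mirroring the fields described in the quote from the 2013 paper.
interface ContributionCommittedEvent {
  ehrId: string;                 // identifies the patient (EHR)
  committerId: string;           // the committing user/system
  contributionUid: string;       // the CONTRIBUTION that was committed
  changedVersionUids: string[];  // the changed resources (e.g. COMPOSITION versions)
  archetypeIds: string[];        // archetypes involved in the commit
  templateIds: string[];         // templates involved in the commit
  timeCommitted: string;         // ISO 8601 timestamp
}
// Listeners needing more detail fetch it with normal HTTP calls, e.g.
// GET {baseUrl}/ehr/{ehrId}/composition/{versionUid} in the openEHR REST API.
```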

The other described possible trigger source is the "contribution builder" that represents state in the entry (form) GUIs. If only one user at a time is updating the same form, then this is likely a client-side component, but some events could be interesting to also send to server-side listeners. You'll find code with similar functions in today's "form renderer" components in different openEHR implementations.

1 Like

Yes indeed; I am very interested in this, and it is essential to Task Planning and related things!

1 Like

And of course decision support systems; we should strive to reuse designs like cds-hooks and make them work in an openEHR environment.

4 Likes

Triggers and triggering/selection criteria of different kinds have been implemented and documented by openEHR system providers, but not standardised. I believe Better, for example, has implemented the possibility to run stored AQL queries even on COMPOSITION commit data. Perhaps Better (@matijap @borut.fabjan…), Cambio (@rong.chen @stoffe…), Ocean (@Seref @sebastian.garde…), Code24 (@sebastian.iancu…) or other implementors can provide some examples of trigger handling?

Both pre- and post-commit triggers could be possible, but a practical issue is how much you are willing to delay a commit for pre-commit trigger handling.

Parsing of (e.g. COMPOSITION) content likely does not add much delay, since parsing has to be done anyway in order to validate content pre-commit. Waiting for potential commit "protests" from trigger listeners in a distributed service before finalising the commit, however, could turn into a performance nightmare. You would not want too many pre-commit triggers in a live EHR CDR system (OLTP); it might be more acceptable in some data warehouse applications, e.g. when importing data.

I'd guess many would prefer any pre-commit triggers to run in the same process/node where the content validation is taking place. Standardisation there would thus likely concern how to plug algorithms into that node, e.g. something like the sketch below.
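
A very rough sketch (a hypothetical interface, not an existing openEHR API) of what such an in-process pre-commit plug-in could look like:

```typescript
// Hypothetical in-process pre-commit trigger: runs next to content validation
// and may veto ("protest") a commit before it is finalised.
interface PreCommitTrigger {
  // Decide whether this trigger cares about the committed template at all.
  appliesTo(templateId: string): boolean;
  // Return null to accept the commit, or a reason string to reject it.
  check(
    composition: object,
    context: { ehrId: string; committerId: string }
  ): Promise<string | null>;
}
```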

Thus post-commit triggers are likely the most interesting place to start for message/event-based standardisation in distributed multi-vendor systems.
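
For example, a post-commit listener in another vendor's backend module could look roughly like this (the topic name, event fields and endpoint are assumptions carried over from my sketch above, here using the kafkajs client):

```typescript
import { Kafka } from "kafkajs";

// Assumed broker address, topic name and CDR base URL -- all illustrative.
const CDR_BASE_URL = "https://cdr.example.org/openehr/v1";
const kafka = new Kafka({ clientId: "trigger-listener", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "analytics-module" });

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "openehr.contribution.committed", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      const event = JSON.parse(message.value.toString());
      // Fetch full content with a normal HTTP call to the openEHR REST API.
      const res = await fetch(
        `${CDR_BASE_URL}/ehr/${event.ehrId}/composition/${event.changedVersionUids[0]}`,
        { headers: { Accept: "application/json" } }
      );
      const composition = await res.json();
      // ... react to the changed content here (CDS, analytics, notifications, ...)
    },
  });
}

run().catch(console.error);
```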

I agree with @rong.chen that looking at CDS hooks is interesting. It could cater for several use cases.
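
For a feeling of what a CDS Hooks-style call adapted to an openEHR trigger might look like (the hook name, context fields and values below are hypothetical; CDS Hooks itself defines hooks such as "patient-view" and "order-sign" against FHIR resources):

```typescript
// Hypothetical CDS Hooks-flavoured request fired from a post-commit trigger.
const hookRequest = {
  hook: "openehr-composition-commit",       // invented hook name, not in the CDS Hooks spec
  hookInstance: "uuid-of-this-invocation",  // placeholder
  context: {
    ehrId: "ehr-id-placeholder",
    compositionUid: "composition-version-uid-placeholder",
    templateId: "template-id-placeholder",
  },
  // A real CDS Hooks response returns "cards" with suggestions/links;
  // the same pattern could carry decision support results back to an openEHR UI.
};
```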

2 Likes

Kafka would be a more abstract/generalised way to handle events, independent of the underlying technology or domain. My impression is that CDS-hooks is closer to our needs and domain, and perhaps a first step towards specifications for an event-driven system (with Kafka perhaps as a technology layer to implement it).

Well, CDS-hooks is a concept specific to CDS; the general concept is an ECA (event-condition-action) rule engine that implements pre-commit, post-commit and possibly other events, as @erik.sundvall said. CDS is just one possible consumer of a specific type of ECA combination.
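
As a tiny sketch of that general ECA idea (the shape below is hypothetical, not a spec proposal):

```typescript
// Event-condition-action rule, evaluated at pre- or post-commit time.
type CommitEvent = { ehrId: string; templateIds: string[] };  // simplified event

interface EcaRule {
  on: "pre-commit" | "post-commit";           // which event triggers evaluation
  condition: (e: CommitEvent) => boolean;     // e.g. "a specific template was used"
  action: (e: CommitEvent) => Promise<void>;  // e.g. call a CDS service, emit a notification
}
```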

But of course we should implement CDS-hooks, which will just be specific types of rules/conditions generating CDS-flavoured notifications.

1 Like