The Value of Integration Archetypes

Hi all,
We’re currently looking at pulling an awful blob of HL7 v2.? into openEHR, and I have read about integration archetypes in the specification. I’m trying to understand whether they have value for our use case, where we have already taken the XML input and created a data transfer object that represents the input as a basic JSON structure. We are looking at the appropriate component to do this basic conversion, while representing the result in a usable, uniform way. This is then mapped to an OPT so that it stays semantically coherent with the input message, but uses ‘designed’ archetypes.

I believe we’ve mimicked the integration archetype approach, but taken that step prior to the feed into the openEHR platform. We would be looking to standardise this with a transformation engine/proxy service, but I want to make sure I understand the integration archetype approach, and to confirm that it probably does not apply to the standard processes/components we use at the moment?
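
To make that concrete, here is a minimal sketch of the kind of DTO-to-template mapping step described above; the field names, the DTO shape and the flat paths are all made-up examples rather than a real OPT:

```js
// Minimal sketch only: a simplified DTO derived from a parsed HL7 v2 OBX
// segment, and a mapping of that DTO onto (hypothetical) flat-format paths
// of a 'designed' template. None of these names come from a real OPT.
const dto = {
  messageId: "MSG-0001",            // e.g. from MSH-10
  observationCode: "8867-4",        // e.g. a code carried in OBX-3
  value: 72,
  unit: "/min",
  observedAt: "2023-05-04T10:15:00Z"
};

function toFlatComposition(dto) {
  // Keys are illustrative flat-format paths into a designed template.
  return {
    "vital_signs/pulse/any_event:0/rate|magnitude": dto.value,
    "vital_signs/pulse/any_event:0/rate|unit": dto.unit,
    "vital_signs/pulse/any_event:0/time": dto.observedAt
  };
}

console.log(toFlatComposition(dto));
```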

1 Like

I know there are other views, but I think we can generally do a whole lot better by matching those kinds of data streams to ‘proper’ archetypes, albeit with some local extensions where needed. A few years ago, we basically modelled a target composition for the IHR (in its HealthCare Gateway incarnation) and pulled data from E-MIS.

The ICDR*.* templates in Apperta CKM will cover a lot of that content - I would start there, even if it meant re-factoring them into a single template to make the integration easier.

3 Likes

I’m wondering whether integration archetypes are widely used for real-world integration efforts.

And what would these integration archetypes look like? Are there any examples available out there?

In a sense, they’re somewhat similar to FHIR Questionnaire resources. See 2.37.5.3 Using Questionnaires versus using Resources.

Hi, when we migrate legacy data into openEHR, we generally find specific archetypes for each data set; so far we haven’t used “integration archetypes” (that is, archetypes based on GENERIC_ENTRY from the Integration Information Model), and they haven’t helped with any of those integrations. Though maybe we have a different understanding of what an “integration archetype” is.

When I do HL7 v2.x integrations with openEHR or other models, all the data mappings are done to normal archetypes (or whatever the destination model is). The implementation is done in Mirth Connect, which uses HAPI HL7 internally, so all the MLLP communication and the message processing and parsing is handled out of the box, and the transformation mappings are written in JS or in predefined mappers, which means you write less code.
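
To give a rough idea of what such a transformer step can look like (Mirth exposes the parsed inbound message to JavaScript transformers as the E4X object `msg`; the segment/field indices and map keys below are purely illustrative):

```js
// Rough sketch of a Mirth Connect JavaScript transformer step: Mirth/HAPI has
// already handled MLLP and parsing, and exposes the inbound HL7 v2 message as
// the E4X object `msg`. Field indices and map keys are illustrative only.
var patientId = msg['PID']['PID.3']['PID.3.1'].toString();
var obsValue  = msg['OBX'][0]['OBX.5']['OBX.5.1'].toString();
var obsUnit   = msg['OBX'][0]['OBX.6']['OBX.6.1'].toString();

// Stash the values in the channel map so a destination step can build the
// target (e.g. openEHR flat) representation and send it on.
channelMap.put('patientId', patientId);
channelMap.put('obsValue', obsValue);
channelMap.put('obsUnit', obsUnit);
```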

Just my 2 cents.

2 Likes

Hi, Pablo. Thank you so much for your valuable input - it has solved a long-standing puzzle for me.

1 Like

This is a timely question. We have been asked to get involved in a project which ‘requires’ the use of Generic archetypes.

Our experience, like Pablo’s, is that we can very largely use ‘proper’ archetypes to handle integrations, sometimes along with some local archetypes, for which we would normally just use one of the CARE_ENTRY classes.

I can see the argument for using GENERIC_ENTRY where you want to explicitly capture the content of the original message or record as-is, as a temporary solution, and ‘mark’ it as such, but so far we have been able to be a bit more proactive in our integrations.

3 Likes

Thank you, Ian, for confirming that the integration archetypes are not needed! I have been asked to get involved in (probably) the same project which ‘requires’ the use of GENERIC_ENTRY archetypes, but my data mapping tools skip them. It is good to hear that this is normal from somebody who has done their share of migrations :wink:

The same goes for Pablo’s post.

3 Likes

I can actually think of one project where, in retrospect, we might have been better off developing a set of archetypes to match the incoming data.

It turned out, for various reasons, not to have any ‘phase 2’, where the generic data was intended to be mapped to CKM archetypes, so we could have saved quite a bit of analysis and mapping. We did actually create some quite local and generic archetypes, but even then it seemed to make more sense to use e.g. an Observation class.

A lot of the challenge of integration lies in mapping value sets, not in the structural aspects.
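
For instance, even a trivial code-mapping step like the sketch below (all local and target codes are invented placeholders) usually needs far more analysis and review than the structural mapping around it:

```js
// Invented example of a local-code to target-terminology mapping table.
// The real effort is deciding and governing these rows, not writing the code.
const localToTarget = {
  "LOCAL-HB":  { terminology: "LOINC", code: "0000-1", text: "Haemoglobin" },
  "LOCAL-WBC": { terminology: "LOINC", code: "0000-2", text: "White cell count" }
};

function mapLocalCode(localCode) {
  const target = localToTarget[localCode];
  if (!target) {
    // Unmapped codes need human review rather than a silent default.
    throw new Error("No mapping for local code " + localCode);
  }
  return target;
}

console.log(mapLocalCode("LOCAL-HB"));
```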

Ultimately though, the client calls the shots, and there may be very good reasons for the approach being suggested. We will await the outcome with interest!

2 Likes

Thanks, Ian. What you’ve shared has been a real moment of enlightenment for me.

Given my experience from HiGHmed (with comprehensive mappings from arbitrary sources), I think integration archetypes would have been preferable on multiple occasions. We took a similar approach to the one Ian describes, mostly because GENERIC_ENTRY was not supported in the tools available to us. Ad-hoc templates in particular would have saved some time when integrating complex documents, compared to squeezing the data onto the openEHR RM and multiple local archetypes. We still find plenty of situations in projects where getting data into a technical openEHR representation provides value (I remember Ian mentioning their struggle to model generic observation data in Finland?!).

For people interested in exploring this: you can use LinkEHR Editor and EHRbase to work with integration archetypes. The only limitation I’m aware of is that the FLAT and STRUCTURED formats are not fully working (but AQL is!) and will need a second look by the team.
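
As a minimal sketch (not tested against a real server) of what an AQL query over GENERIC_ENTRY content might look like via the openEHR REST API on a local EHRbase - the base URL and the archetype id below are placeholders:

```js
// Minimal sketch: POST an AQL query to a locally running EHRbase via the
// openEHR REST API. The base URL and the GENERIC_ENTRY archetype id are
// placeholders, not real identifiers from this thread.
const baseUrl = "http://localhost:8080/ehrbase/rest/openehr/v1";

const aql = `
  SELECT c/uid/value AS composition_uid, g AS entry
  FROM EHR e
    CONTAINS COMPOSITION c
    CONTAINS GENERIC_ENTRY g [openEHR-EHR-GENERIC_ENTRY.lab_feed.v0]
`;

const response = await fetch(baseUrl + "/query/aql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ q: aql })
});

console.log(await response.json());
```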

2 Likes

That is to say, there are often no absolutes in the real world :grinning:
Thanks, Birger.