Looking ahead to openEHRv2

There has been recent discussion touching on the question of what an openEHRv2 would look like, and whether breaking changes to the RM are worth the trouble.

While in the US, my team and I (including @borut.jures, who had very useful tools and code generators) implemented many such ideas at a now-defunct startup to see how they would look.

Here are some of the changes that we implemented:

  • reworked data types, especially Quantities
  • ITEM_XXX / CLUSTER / ELEMENT replaced by a single type that has both value and children, enabling a much better representation of the fractal structures of reality than the current classes allow
  • solved the problem of inlined reference items (such as devices) with a smart reference subtype of Node
  • in archetypes, replaced nearly all slots with direct references, greatly improving the ease of working with templates
  • improved Entry hierarchy, with 7 subtypes of Observation that much better reflect the disparate kinds of data we really have, e.g. real-time data, scores, questionnaires, labs, etc.
  • flatter structures generally, resulting in shorter paths
  • greatly improved Entity model (= demographics + things, including all ‘reference’ data)
  • universal is-about coding of all archetypes - this reduces the brittleness of AQL queries to almost zero with respect to evolving archetypes (a sketch follows this list)
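
To make the last point concrete, here is a minimal TypeScript sketch (my own illustration - the node shape and names are hypothetical, not from any spec) of how querying by an is-about code sidesteps archetype-path brittleness:

// Hypothetical node shape: every node carries an 'isAbout' concept code,
// independent of its archetype id or path.
interface CodedNode {
  isAbout: string;           // e.g. a terminology code (dummy values only)
  value?: unknown;
  items?: CodedNode[];
}

// Find all nodes about a given concept, wherever they sit in the tree.
// Archetypes can be renamed, restructured or specialised without breaking
// this query, because it never mentions paths.
function findByIsAbout(root: CodedNode, code: string): CodedNode[] {
  const hits: CodedNode[] = [];
  const walk = (n: CodedNode) => {
    if (n.isAbout === code) hits.push(n);
    n.items?.forEach(walk);
  };
  walk(root);
  return hits;
}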

The form of models we used in this exercise was experimental, and not developed under optimal circumstances. I am in the process of building out new and better openEHRv2 models from scratch, for consideration by the community in the near future. Once you see the systemic effects of such improvements, it’s like the advent of electric windows in your car - no-one ever wants to go back. Many of the changes were aimed at improving clinical modelling, and we showed that very substantial improvements are available.

And this is not to mention another set of experiments on generating FHIR profiles (and any similar data-standard artifact, e.g. IHE, X12) from models, rather than creating them manually. All that work goes away when such artifacts can be generated from upstream domain models.

8 Likes

This is my kind of brain dump discussion :slight_smile: I even have my own list of stuff that I would like to change:

  1. Simplified ITEM_STRUCTURE (I think we all agree on that one) with just the ITEM, CLUSTER, ELEMENT to represent the same semantics (I don’t want to discuss if one class is enough, just point to the removal of ITEM_XXX). Attachments - openEHR 2.x RM proposals - lower information model - Specifications - Confluence
  2. DV_IDENTIFIER is underdeveloped; we need, for instance, to allow the type and issuer to be coded and constrained in ADL, even against external code systems.
  3. Allow some base types to be used as data types (i.e. inherit from DATA_VALUE). For instance, we needed PARTY_REF inside COMPO instances at ELEMENT.value but had to use DV_IDENTIFIER because PARTY_REF can’t be used there; in fact most classes in base.base_types.identification could simply be data types (see the sketch after this list).
  4. Fix the INSTRUCTION_DETAILS references to the instruction and activity (I raised an issue about that: Jira )
  5. Not making identities 1..* in the demographic model (see DEMOGRAPHIC model: does it make sense to have contacts and identities for ROLE? - #7 by thomas.beale )
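
To illustrate point 3 above, a minimal TypeScript sketch (type names invented here, assuming only the RM concepts mentioned) of today's workaround versus a party-reference data value:

// Today's workaround: ELEMENT.value must be a DATA_VALUE, so a party
// reference gets squeezed into a DV_IDENTIFIER.
interface DvIdentifier { issuer?: string; type?: string; id: string }

// Hypothetical: a data-value wrapper for a party reference, i.e. something
// that inherits from DATA_VALUE and can therefore legally sit at ELEMENT.value.
interface DvPartyRef {
  namespace: string;   // e.g. 'demographic'
  type: string;        // e.g. 'PERSON', 'ORGANISATION'
  id: string;
}

// ELEMENT.value could then accept either, without abusing identifiers.
type DataValue = DvIdentifier | DvPartyRef;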

We discussed some of these some time ago https://openehr.atlassian.net/wiki/spaces/spec/pages/4915242/openEHR+2.x+RM+proposals+-+lower+information+model

I do like the idea of flatter structures! Removing ITEM_XXX takes out one level of the hierarchy; another level that could go is SECTION, which I’m not sure is of much use today. I see its role in displaying data, but not much in storage, querying or data processing.

That’s what’s on the top of my mind; I’ll need to check the JIRA tickets for other stuff I might have forgotten.

4 Likes

Looking forward to the public release of openEHRv2. :heart: :clap:

1 Like

All of this is replaced by a single Node type, with some subtypes for:

  • info node (= described right here)
  • entity ref node (points to some other entity, described elsewhere, like a device or a person)
  • info ref node (= a citation of previously committed info)

The collapsing of CLUSTER and ELEMENT together makes life much easier. I wish I’d seen it 20y ago. In the last couple of years, I had an important realisation: data in a health record consists of essentially 4 kinds:

  • meta-data - audit trails (i.e. commit meta-data)
  • real world context - the usual who, when, where of acts in the real world
  • epistemic data structures - the arrangement of data according to our preferred schemes, e.g. SOAP to take a simple example; Observation time series; any other structuring scheme designed to make data ‘work’ for our minds
  • ontic data structures - the arrangement of data that directly describe things in the real world, e.g. characteristics of a tumour, or a pulse, things seen in the colon during colonoscopy, and so on.

The first 3 kinds require ‘information modelling’, i.e. the creation of models to represent semantics that we invent to understand our own recordings of things. For the last kind we go to ‘fractal representation’, i.e. a free data structure whose specific shape is driven by the agreed description of whatever it is used to describe in a specific situation - pulse, tumour etc. That’s what CLUSTER / ELEMENT tries to do (we copied this originally from CEN 13606), and it can be done more cleanly with a class like:

class Node
    -- optional 'summary' value for this node: coded, text, quantity etc.
    attribute value: Data_value[0..1];
    -- optional child nodes carrying further detail
    attribute items: Node[*];
end

When you use a Node for something you think of as an ‘ELEMENT’ today, you still have the chance to add more children one day. This structure allows something very nice: the value can carry a coded or other (e.g. String) ‘summary’ representation of something, and you can have child nodes with details that some systems care about and others don’t. It no longer matters whether you think any data point is a ‘leaf element’ today - you can never get caught out with the wrong decision. This structure simplifies many archetypes as well: today there are many ‘double data points’ carrying a summary and a detailed form, which can be reduced to one.
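
As a concrete rendering of the idea, here is a TypeScript-flavoured sketch (my own illustration of the classes described above; the tag field and sample values are invented):

type DataValue = { code?: string; text?: string };   // stand-in for the data types

// A node may carry a value and children - it is never forced to choose.
interface NodeBase {
  value?: DataValue;      // optional coded/text 'summary' of this node
  items?: AnyNode[];      // optional detail children
}

// The three subtypes described above, distinguished here by a tag.
interface InfoNode extends NodeBase { kind: 'info' }                             // described right here
interface EntityRefNode extends NodeBase { kind: 'entity_ref'; target: string }  // points to an external entity
interface InfoRefNode extends NodeBase { kind: 'info_ref'; citation: string }    // cites committed info

type AnyNode = InfoNode | EntityRefNode | InfoRefNode;

// A 'leaf' that later grew details - no remodelling needed:
const pulse: InfoNode = {
  kind: 'info',
  value: { code: 'pulse', text: 'Pulse: regular' },   // dummy code; summary usable on its own
  items: [
    { kind: 'info', value: { text: 'rate: 72/min' } },
    { kind: 'info', value: { text: 'rhythm: regular' } },
  ],
};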

The whole PARTY_REF part of openEHR evaporates; we use Entity_ref_node instead for all refs to Parties, devices, and any other external thing.

I didn’t get around to fixing that, but it would be done with the citation class Info_ref_node I think.

I built a much improved demographic model that is more archetyped and more flexible. It needs to be proved out with test use cases, so your breaking cases would be good for that.

Section is an interesting question - it’s one of those ‘epistemic’ elements that are essential to the practice of recording in medicine, and essential for human understanding, but which don’t matter for a lot of querying, if you’re just querying for real things. E.g. imagine a med student put a BP measurement under SOAP-Subjective when it should be under SOAP-Objective. Querying based on is-about codes will still find it, no problem.

Subtyping Compositions into kinds that represent human-created reports, summaries etc. versus machine-created observations, action-notes and so on might clarify where Section could be dropped.

Well, we are a way off that - but we have some elements worked out, and no doubt local innovations in vendor systems (kept local because they are breaking changes) can make their way into openEHRv2. For my part, I’m slowly creating a new expression of the things we learned during the last two years.

1 Like

I don’t want to discuss specifics of the ITEM_XXX thing at this time, since I would like to have time to compare requirements and analyze the pros and cons of each approach. What is clear is that ITEM_XXX should go away.

Another item that came up recently as a requirement in a real-world project, and might relate to what you mentioned about COMPOSITION subtyping, is the ability to extend the RM and have real, flexible composability. For instance, I would like to represent a CLAIM: I can bend the current semantics of COMPOSITION and, say, ADMIN_ENTRY and create archetypes for those, but semantically the CLAIM has a different scope, and it’s not really part of the EHR (as COMPO and ENTRY are). But if we had a higher-level STRUCT (generic structure) that we could use as the base for new concepts at the same level as the COMPOSITION (in ehr) or PARTY (in demographics), and that extension could be represented by an archetype (understood by modeling tools and CDRs alike, e.g. for storing and querying), then we could extend the RM to other domains in a controlled manner (see the sketch below). In fact we could even extend data types, for instance adding extra attributes or substructures.
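
To make that concrete, a small TypeScript sketch of the kind of extension I mean (STRUCT and all field names here are hypothetical):

// Hypothetical generic base: archetypable, storable and queryable by a CDR,
// but not tied to the EHR the way COMPOSITION is.
interface Struct {
  archetypeId: string;               // so modeling tools and CDRs understand it
  name: string;
  content: Record<string, unknown>;
}

// A CLAIM defined as an extension of STRUCT, rather than a bent COMPOSITION:
interface Claim extends Struct {
  patientId: string;
  providerId: string;
  payorId: string;
  serviceEventRefs: string[];        // links back to the clinical record
}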

An item that always bothered me, and this is a personal rant, is that we could be using a single formalism to represent the RM and archetypes/templates, which would be way easier to implement and manage. For instance, we could have the class metamodel as the base, then over that create all the RM classes as archetypes, and on top of that specialize and/or extend (what I mentioned above) to create concepts as we have in the CKM. With something like that we could track all changes to the RM and archetypes using the same formalism, instead of having UML for one thing, BMM for another, ADL for another, expression languages for another, etc. So we could use, for instance, ADL to define classes, attributes and relationships, and not only to define constraints over existing ones. It’s kind of a declare-constrain-declare-constrain approach.

That’s the thing: SECTION might only be a visual thing that we are modeling in the structures used for storing, managing and querying data. It’s like the ITEM_TABLE case - something for visual representation that is modeled in the data structures used for managing the data. I think those classes mix requirements: organization and visualization might be better separated from data management, as a different semantic level or set of constraints on top of the data structures for managing data.

This is an example of another domain which we need to cover with the same modelling approach as the openEHR EHR, but it’s not the EHR. Claims belong to a record of transactions between providers and payors, scoped by an ‘account’ that relates those transactions to a given patient.

The ‘EHR’ class is not the only top-level class in the ecosystem. There is nothing to stop claims-related transactions being represented as Compositions, but they would be added to an account. There is also the concept of the ‘billing encounter’, which is effectively ‘episode of care’: a record of all service events and resources that occurred / were used to complete a care episode for such a patient.

Everyone seems to get seduced by this idea at some point. But it can only work if you want to throw away the division between the base information model (which provides data interoperability) and the archetypes and templates (which provide domain models).

Part of the reason here is that the base information model is an object-oriented model, with OO inheritance, which is additive down the hierarchy, i.e. subclasses add new things. The archetype layer is a constraint formalism, and is subtractive down the inheritance hierarchy (even the ‘additions’ are constraints). More on this here.

We only need two formalisms for this approach to modelling: a class model formalism (originally UML, replaced by BMM), and a constraint formalism (currently ADL).

Depends on what you mean by ‘visual’! Headings are essential for organising certain types of medical information, and docs will only look for certain information under certain headings, e.g. in a discharge summary. So they can’t be discarded or ignored in certain structures. They should not, however, be required to interpret the semantics of anything. This has subtle effects. Imagine the following:

History (section)
    systolic BP > 160 x3 in last month

Goals (section)
    systolic BP <= 140

In the above, the ‘systolic BP’ is not the same under the two headings: under the first it is an actual measurement, under the second it is a goal. This is managed correctly in openEHR by using different archetypes; it can be improved by using coding to distinguish a goal measurement from an actual one. If that is done correctly, the headings are not needed to computably interpret the systolic BP values.
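
To make that computable behaviour concrete, a toy TypeScript sketch (illustrative shapes and dummy codes only): if each entry carries what it is about plus its status, the heading becomes redundant for querying:

// Each entry says what it is about and what kind of statement it is.
interface Entry {
  isAbout: string;               // dummy concept code, e.g. 'systolic-bp'
  status: 'actual' | 'goal';     // independent of any section heading
  value: number;
  units: string;
}

const entries: Entry[] = [
  { isAbout: 'systolic-bp', status: 'actual', value: 165, units: 'mm[Hg]' },  // under 'History'
  { isAbout: 'systolic-bp', status: 'goal',   value: 140, units: 'mm[Hg]' },  // under 'Goals'
];

// Query for actual measurements - the section headings never enter into it.
const actuals = entries.filter(e => e.isAbout === 'systolic-bp' && e.status === 'actual');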

3 Likes

Yes. I’ve been working with US claims for the last 10 years. Just recently someone got interested in the openEHR methodology and needed to have some representation of the claim in the openEHR formalisms.

Sure, though if it’s a COMPO, as of today it needs to be in the EHR; to allow anything else we need to go out of spec. But if we were able to extend the model without adding to the RM (I mentioned that before), we could create that kind of ACCOUNT container for COMPOs, or some other type of record.

I think there is a mix between representation and interpretation, which doesn’t need to be there; in fact you don’t need to throw away anything, just reuse what you already have for a different purpose.

The base OO model should be defined in some way, which amounts to adding constraints over the UML base metamodel (Element, Relationship, Classifier, …), or over another base metamodel for OO design (anything that represents the elements of a class model).

Those constraints are not subtractive but additive, because they define new higher-level elements based on the base ones. So what I’m saying is that by adding that capability to ADL, we can use ADL to do both definition and constraining. In this context, definition includes extension, and inheritance is one method of extension.

Consider that we could have archetypes that define or extend, archetypes that constrain, or even both - for instance, if I need to constrain a DV_IDENTIFIER but also need to add a new attribute to it in a formal and controlled way that both tools and CDRs understand.

That way we could reuse all the infrastructure we already created in modeling tools, utilities, CDRs and apps, without worrying about supporting 5 different formalisms to get the full picture of some piece of information.

Visual means there is a person looking at the information through a user interface - so it’s not related to internal data management, querying, analytics or anything else done inside a system without a user looking at the data.

The example is valid, and I get your point, though there are ways of modeling such a case without using SECTIONs. Goals seem to be represented by EVALUATION in current archetypes, so it’s not possible to use the BP archetype inside one, only some CLUSTER structure. History might be some kind of EVALUATION too.

Not quite - Compositions are also part of Task Plans. A Composition is just a commit bucket for a new version of some content created during some health system event related to the patient.

It is easy to create a new kind of ‘record’ object in openEHR RM, e.g. PSR (= Provider Service Record), whose contents are encounters and resource consumption events. Or a PCR (= Payor Claim Record), whose contents are the Compositions containing data pertinent to the claim being made and resolved.
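
For illustration, a TypeScript sketch of what such record classes might look like (the PSR/PCR names are from the paragraph above; the shapes are invented):

// New kinds of top-level record - siblings of the EHR, not parts of it.
interface ProviderServiceRecord {        // 'PSR'
  providerId: string;
  encounterRefs: string[];               // refs to encounter Compositions
  resourceEventRefs: string[];           // refs to resource-consumption Compositions
}

interface PayorClaimRecord {             // 'PCR'
  payorId: string;
  accountId: string;                     // scopes transactions to a given patient
  claimCompositionRefs: string[];        // Compositions with claim data
}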

ADL only has constraint semantics. When you ‘add’ elements (possible in archetypes, contra-indicated in templates) you are actually just constraining currently open container attributes. It can’t add any new attributes, e.g. it can’t add something like Observation.guideline_ref or whatever - you need a UML-like formalism to do that.

(Mathematically speaking, UML and OO formalisms like BMM are ‘frame logics’ - see Michael Kifer’s papers; ADL is a constraint logic; even standard SQL is one - it can generate projections, not new objects, if you ignore the schema-creation commands.)

Well, we need two (or 3 if you count EL as a 3rd formalism, used by the other two). And we can generate flattened views, including of the RM - that’s what the ADL Workbench does, as does LinkEHR (check with @yampeku), and Archetype Designer as well.

You can model without Sections, but if the clinical note structure requires Sections, they will be needed in the model. SECTION archetypes just put Entry-level archetypes in the right places; there can be multiple SECTION archetypes containing the same Entry data - e.g. imagine an army field hospital ER’s set of headings, different from a civil hospital ER’s. The reported data are the same; to allow both, you just do it in COMPOSITION.content. It might be that we need to make this more obvious in tools though, and possibly add a no-headings alternative, or separate each SECTION archetype from a contents archetype. I can certainly see possibilities for improvement here.

Maybe I wasn’t clear: what I’m proposing is actually changing that. I’m not talking about today, I’m talking about something for openEHR v2 :slight_smile:

As said: have a single formalism to cover both things - additive (definition/extension) and subtractive (constraint) semantics.

The result could be a simplified infrastructure and better management of the underlying models and other artifacts, adding the possibility to extend the RM via ADL (I’m talking about ADL 3+ here :slight_smile: )

Aha…

I would recommend looking at XML Schema, and particularly its inheritance (specialisation) rules, such as they are. They try to achieve both subtractive and additive semantics, and worse, the rules are different for attributes (i.e. data items inside the tags) and elements. It’s nearly impossible for humans to understand in practice (everyone had to hire experts to make it make sense) or to compute with within the one framework.

What I have tried to do is make BMM and ADL a fully mutually compatible ecosystem, which is about the best that can be achieved in my view. If you can figure out a framework that cleanly does both things in one go, you’ll probably get a Nobel prize (well, the Turing award)!

I’m all for that, and have plans for ADL3 that will simplify certain things, but not in this particular way.

1 Like

@thomas.beale for context, here I mentioned an idea for an “extension” keyword for ADL

That’s for defining an extra attribute on an existing class. The same or a similar keyword could be used to define a class (which could inherit from an existing one). All that is the additive aspect, which could be built from scratch on top of a base metamodel, because you need, at least, to know what a class, attribute and relationship is.

Then the corresponding classes should be added to the AOM to support those new ADL constructs.
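
A rough TypeScript sketch of what such AOM additions might look like (entirely hypothetical, just to make the idea concrete):

// Hypothetical AOM additions carrying 'definition' as well as 'constraint'.
// An archetype could then declare new attributes/classes, not just constrain them.
interface CAttributeDefinition {
  name: string;                          // e.g. 'guideline_ref'
  typeName: string;                      // e.g. 'DV_URI'
  existence: [number, number];           // e.g. [0, 1]
}

interface CClassDefinition {
  name: string;                          // e.g. 'CLAIM'
  parent?: string;                       // inheritance as one method of extension
  attributes: CAttributeDefinition[];
}

// An archetype would then carry both parts:
interface ExtendedArchetype {
  archetypeId: string;
  defines?: CClassDefinition[];          // additive part (the 'extension' keyword)
  constrains?: object;                   // the usual subtractive definition section
}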

If tools, utilities, CDRs and apps can interpret those, then you can “compile” (flatten) any set of different levels of definition into one final “OPT” to be used in software, with full tracking/traceability of all the elements used to generate that final definition; and using a single formalism can result in a single/simpler/integrated workflow and simpler/less expensive software. Today we have different flows to manage different formalisms at different levels, with different pieces of software, and there is no capability of extending the RM in a formal/controlled way (which IMO is a must today, especially when interacting with other standards like FHIR, which allows extensions to the base resources).

Besides the claims case I mentioned, we had another project where they wanted to analyze appointment scheduling data following the openEHR processes, for which we also (ab)used the composition, and I would have preferred to formally extend the model.

I think it’s great to see forward-looking ideas like this being explored.

At the same time, it’s worth clarifying here that, at present, the Specification Editorial Committee (SEC) does not formally have a “v2” on its strategic agenda. That doesn’t mean such discussions aren’t valuable - in fact, we welcome initiatives, explorations, and community-driven proposals. However, any substantial change to the specifications needs to be driven by demonstrated community demand and supported by a clear implementation pathway for vendors and open-source projects alike.

Breaking changes, while sometimes necessary to enable innovation, must be approached with great care. The openEHR ecosystem already has significant active deployments, and one of our key responsibilities is to minimise disruption for those implementers, including through tooling, migration strategies and transitional support where possible.

It’s also worth noting that many of the requirements being discussed here might not require a full rework of the models. Some could potentially be introduced incrementally through upcoming minor releases or compatible extensions, where that makes sense technically and strategically.

So while a complete “RM 2.0” is not currently on the SEC roadmap, these kinds of discussions are valuable for identifying future needs and helping us prioritise what matters most to implementers and users.

7 Likes

@sebastian.iancu it would be useful to create a formal context within the SEC to discuss future ideas, whether involving breaking or non-breaking changes.

I have a personal concern about how we handle breaking changes: if we try to avoid them at all costs, people can be tempted to fork away from openEHR and do their own thing just to implement those changes, generating a new divergent spec. This is not new in software, especially in open source. Recently we had the discussion about openEHR Labs, and though that was proposed within openEHR, I feel more people would like to try new things in order to innovate faster, and I think we focus more on maintaining the current status than on exploring future options.

Of course I totally agree about taking great care when analyzing breaking changes, following all formal processes, etc., though I think we are not giving these enough space. For instance, the ITEM_STRUCTURE simplification was proposed in 2011 (even before we had a SEC, industry partners, the CIC, etc.); it was discussed extensively but informally (not as part of the SEC agenda), and the whole community knows nobody uses anything but ITEM_TREE (though I know there are some ITEM_LISTs in some archetypes) - yet the ticket is still there: Issue navigator - openEHR JIRA

1 Like

These will all be consolidated into the next generation of BMM + ADL tooling I am working on.

Cool - I’m working on the other idea: having just ADL/AOM with no BMM/UML. In fact I won’t even have ADL; it will be JSON/AOM (after years of working with openEHR I realized inventing new serialization formats doesn’t add much value, while aligning with existing ones enables faster development with fewer errors, and reuse of tens of existing libraries/packages in tens of programming languages).
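
For instance, a fragment of an archetype expressed directly as JSON-serialisable AOM data might look like this (TypeScript; the shape is invented for illustration and is not the official AOM JSON):

// An AOM-ish constraint as plain data - serialisable to JSON/YAML/XML and
// loadable with any off-the-shelf JSON library, no custom parser needed.
const systolicConstraint = {
  rmTypeName: 'ELEMENT',
  nodeId: 'id4',
  attributes: [{
    rmAttributeName: 'value',
    children: [{
      rmTypeName: 'DV_QUANTITY',
      units: 'mm[Hg]',
      magnitude: { lower: 0, upper: 1000 },   // dummy range
    }],
  }],
};

console.log(JSON.stringify(systolicConstraint, null, 2));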

We should compare our results!

When I say ‘ADL’ I don’t really mean the syntax specifically, but the wider idea of ‘Archetype Definition Language’, which is indeed captured in its meta-model. Many people have asked why there is a language at all. I often say: why is there the language Java / Python / xyz? Why not just write Java meta-model instances in JSON (which BTW is not well defined or understood)? Once you think about it, it’s easy to understand why - it’s almost impossible for humans to author directly in such a way; you need a dedicated tool. With a symbolic language, a text editor with syntax colouring / auto-complete is all you need. Plus, learning and teaching archetypes with no symbolic language is a lot harder.

If you get rid of BMM completely, then you will have the challenge of demarcating which parts of the archetype models you build are ‘built-in’ and which are not. Then you will find you need to apply specific rules to those parts. Pretty soon (I predict), you will end up with BMM-inside-AOM…

OK, I’m specifically talking about ADL as a syntax; I’m not familiar with the wider idea of ADL. All I know is ADL as the syntax and AOM as the underlying in-memory model of that syntax (the one usable in software - there is also the Abstract Syntax Tree representation, which is just a 1-to-1 mapping from the syntax and isn’t practical to use in software that handles archetypes).

I don’t think we can compare a programming language that is mapped to CPU instructions and memory addresses to something that is purely declarative static definitions, which is the ADL/AOM case. In my experience with openEHR and other things I did that deal with declarative content, like rule engines, development/testing/running was way easier when using off-the-shelf formats instead of creating my own DSL, which is what ADL basically is. In practical terms there is not a single advantage to having such a DSL.

Declarative content, in all the things I did (I’m talking from my own experience), was better handled with a specific editor, though sometimes one wasn’t available and manual editing was needed, which is error-prone and hard to test/verify. So in what I’m doing there are two requirements I don’t impose on myself: readability and manual editing. I want this to always be edited by a tool that deals with the format(s).

That’s actually the plan: having everything in AOM :slight_smile: because we can’t lose any semantics from there (BMMs/UML/OCL), and doing that by adding the ‘additive’ part for definition and extension (what we talked about before), and including the invariants (the OCL part of the RM specs). Basically I want a single model and a single syntax to represent the base for the RM and even the AOM itself, then extend on top of that to define both models, with the declarative representation being in JSON with canonical transformations to XML/YAML/…

I imagine tools like code generators just merging/flattening a set of elements of the same type (same syntax and model) in order to produce code for the RM, the AOM, instances of archetypes, templates, and instances of the RM compliant with those templates. I actually dreamed about this situation, so let’s see where this ends.

1 Like

A potentially relevant blog from FHIR community:

Profiliferation: A major interoperability challenge

A key insight from the study was the widespread occurrence of “profiliferation”, a term describing the rapid proliferation of redundant and overlapping FHIR profiles.

This phenomenon typically arises when teams repeatedly create new profiles for similar purposes instead of reusing or adapting existing ones. Profiliferation makes it harder to share data, raises maintenance costs, and takes away from the benefits FHIR is meant to bring.
…

Some keywords: Consistency, Diversity, Reusability, Extensibility, Flexibility and Interoperability

1 Like

It’s fun to explore these ideas, and we need a communication channel for them.

But I’m concerned about calling this openEHR v2 and discussing it mixed in with openEHR-focused discussions.

I mostly fear we will take away some key strengths of openEHR: a very stable reference model, allowing for stable and universal implementations and a lifelong patient record, and most of all the quality of the domain models. The domain models imho are so good for two key reasons: a very well thought out reference model that sets you up to create domain models with a solid informatics basis (this is where many other models - zibs, FHIR etc. - go wrong), and technology to support clinicians in becoming modellers (a language that’s somewhat understandable, and tooling that lets you work mostly from a UI without losing the power of the underlying language).

Now the key challenge of openEHR imho is adoption. This has been increasing nicely, but my gut feeling is there are too many challenges with the technology at the moment to progress to what we all strive for: openEHR as the universal technology for health IT. The specific challenges are in developer adoption (laid out by Sidharth in the archetypes-as-SQL post), clinical adoption (mostly how to scale the CKM governance model and how to get more doctors to take ownership of health IT), and then there’s the political side (hospital administrators, government etc., vendors’ commercial interests), which will improve a lot if the first two are ‘fixed’.

The improvements Thomas and Pablo list are important, but to me they are very minor gains against these two key challenges compared to the huge cost to the key strengths. E.g. breaking backwards compatibility of openEHR data might well kill openEHR adoption. And lessening the quality of the tech for modellers will do the same by (slowly) stopping modellers and degrading model quality.

So in order to take the next step in health IT technology in alignment with the openEHR design requirements (which are still excellent - see the Architecture Overview), we need either something that’s backward compatible with a gradual migration path, or something very different with at least a 10x improvement, since the openEHR design requirements, RM and DSLs are still sound informatics-wise. My gut feeling is we need to create an abstraction over current openEHR technology aimed at openEHR client app developers. That gives maybe 90% of the power with maybe 10% of the learning curve.

I think there have been good efforts in this respect already: @borut.jures’s code gen, openFHIR, OpenAPI, webtemplate etc. But it needs a lot more work to come close to what we need. Personally I’m still hoping for a new version of (something like) OpenAPI to become good enough to represent the RM and do code generation of RM/AM and archetypes/templates in developers’ favourite languages. And then a standard query language working on these OpenAPI models.

Btw, given the move to microservices and API-first, I’m not convinced SQL is the way to go for openEHR. A focus on API-level abstractions seems like a much better fit, especially if it’s combinable with SQL for analytics on a secondary data store.

If we can’t do it while keeping backwards compatibility of data, we at least need to unify with the HL7 standards, probably by using an (as yet to be invented?) new technology that can also be used outside of healthcare.

5 Likes

Any next generation (aka breaking changes) version of anything necessarily implies technical incompatibilities in data, APIs and so on. This is not a problem, as long as those incompatibilities are essentially at a detail level, while remaining within an overall single conceptual architecture and design methodology. It is easy to build a data bridge from the v1 of a technology to a coherently designed v2, within the same paradigms.

The semantic gap and incoherence that we already accept today, to transform to/from openEHR and FHIR; openEHR / OMOP; openEHR to anything else is much greater than any openEHR v1/v2 approach will ever be. Objectively speaking, openEHR and FHIR are not directly commensurable modelling ecosystems at all; so mappings are crossing conceptual gaps and differences, not just representational details. Doing what we are doing today entails far greater costs, not to mention threats to patient safety, than any openEHR v1/v2 data transformation will ever create. (Many of us have 20-30y experience with this problem - openEHR to HL7v3 RIM, CDA, and now FHIR).

Well the improvements include covering major domains that are just not covered properly or at all today, including:

  • Demographics, places, orgs, etc
  • Entities - all ‘reference’ data, including drugs, prostheses, devices
  • Inventory - instances of the above, i.e. actual machines, devices etc
  • Service model, i.e. encounters, resource consumption, suitable to enable cost tracking and billing
  • Bookings / appointments
  • Process automation (i.e. Task Planning area)
  • Comprehensive CDS
  • Clinical trial management
  • Research studies
  • etc

All of those areas can benefit from the RM + archetypes + templates approach.

In addition, there is a great deal of innovation available to get from templates to re-usable models of UI/UX, and on to automated app generation. I believe the way we currently do separated ‘applications’ will disappear. Instead, applications of the future will be forms / other interaction modalities attached to nodes within workflows, and the care pathway workflow will be the organising principle.

I have not even mentioned AI, ML, process mining or another dozen areas of computing. Or persistence, as per other interesting discussions.

Some of the incremental improvements to e.g. RM, archetypes and tools are precisely intended (and able) to reduce the friction and difficulties experienced today by implementers and clinical modellers.

Exactly. There are prerequisites for being able to do this without too much pain.

As a semantic representation, it almost certainly is not. But newer, more powerful forms of SQL are clearly nicer targets for semi-automated model transformation into partial implementations than the SQL of 30y ago.

This is the fallacy. We can introduce breaking changes into an openEHRv2 that will entail mild (generically representable and verifiable) data transformations from openEHRv1 data (where we have mappable archetype paths and so on); but we cannot achieve any such generic or safe data conversion for transformations in and out of FHIR (no generic model-model mappings available at all) that we so casually do today.
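
To illustrate why such a bridge is tractable, a toy TypeScript sketch (hypothetical shapes and dummy paths) of a generic, verifiable path-mapping transform between v1 and v2 data:

// A v1->v2 bridge can be driven entirely by a declarative path map,
// because both sides share one conceptual architecture.
type PathMap = Record<string, string>;   // v1 archetype path -> v2 path

const bpMap: PathMap = {
  '/data/events/data/items[systolic]': '/items[systolic]',     // dummy paths
  '/data/events/data/items[diastolic]': '/items[diastolic]',
};

function transform(v1Data: Record<string, unknown>, map: PathMap): Record<string, unknown> {
  const v2Data: Record<string, unknown> = {};
  for (const [v1Path, v2Path] of Object.entries(map)) {
    if (v1Path in v1Data) v2Data[v2Path] = v1Data[v1Path];     // mechanical, checkable move
  }
  return v2Data;
}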

The level of risk we are running right now in all current data transformation in and out of semantically incompatible ecosystems is far greater than any such risk between an openEHRv1 and v2.

2 Likes