Federation of persistent/episodic compositions

The community, led by @evermeulennl from EY, is working on specifying federation of openEHR data across CDRs (from different vendors). Now this concept is mostly straightforward for event category compositions, e.g. a unified graph of blood pressure observations where the data comes from different CDRs (using AQL).
Now for episodic/persistent compositions, this is a bit different, because there's only a single truth for a data point at a given point in time ('persistent') and in a given context ('episodic'). E.g. advance care planning, problem list, allergies, medication list etc.
The current RM has both a class for versions and version containers. And there’s a specification for distributed versioning.

In a distributed federation there will be one latest VERSION at the federation level, that contains the truth at that point in time that can be persisted only in one CDR (for episodic there may be multiple concurrent active versions).
This is a case not covered by the current RM, because the RM assumes copying and moving, but not federating versions.
The (ORIGINAL_)VERSION is probably fine, but the version container (VERSIONED_OBJECT) has a `latest_version(): VERSION` function - how can a version container in one CDR know that the latest version is in another CDR?
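To make the problem concrete, here is a toy sketch in Python (all names hypothetical - this is not the actual RM API): a version container can only answer `latest_version()` from the versions it physically holds, so a newer version persisted in another CDR is invisible to it.

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    system_id: str   # CDR that created this version
    object_id: str   # id of the logical versioned object
    tree_id: int     # position in the version tree (simplified to an int)

@dataclass
class VersionedObject:
    object_id: str
    versions: list = field(default_factory=list)

    def latest_version(self):
        # Only sees versions persisted in THIS CDR -- the crux of the
        # question: a newer version may live in another CDR entirely.
        return max(self.versions, key=lambda v: v.tree_id)

# CDR A holds version 1 of the problem list...
cdr_a = VersionedObject("problem-list-123",
                        [Version("cdr.a", "problem-list-123", 1)])
# ...while another CDR has meanwhile created version 2 of the same logical
# object. cdr_a.latest_version() still answers version 1: locally correct,
# globally stale.
```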
What are the community's thoughts about solving this?

This is a really important area to think about @joostholslag .

The problem is, I think, different for longitudinal vs. episodic compositions.

Longitudinal compositions tend to represent information that is true about the person, regardless of context, but also tends to be universally of some importance in every context, whether that is an allergy list or an end of life decision.

So I would argue that it must be maintained as a pure single source of truth across the federation space.

Approaches

  1. An actual single 'person-centric' federation-wide CDR for this kind of global data.

  2. One of the federated CDRs acts as the source of truth for global data, with some kind of record locator (XDS or whatever). There are some discussions in England about how this might work for federating shared care records (though much simpler). I think this lives outside the RM VERSION mechanism, i.e. we should not be trying to mix VERSIONs across CDRs - agree from the outset that all of 'this kind' of composition lives 'over there', which does make AQL across CDRs kind of interesting, but that is probably not going to be common.

  3. Very tight synchronisation across CDRs (essentially what you are suggesting above) - a change in one CDR triggers a change in all of the CDRs so that they are continually synched. This makes me very anxious, especially as we scale up.


Well, I think the problem is similar; it's just that episodic has the additional problem that there may be multiple truths at the same time. E.g. both the GP and the home care nurse each have their own problem list.

This doesn’t hold for our health care system, since the ‘responsible clinician’ will be the one responsible for managing the ACP composition, so the active version should be in a CDR under the control of the ‘responsible clinician’. And the responsible clinician will shift occasionally, e.g. change of GP, hospital admission etc.

Agreed, I don't think this scales, so my first idea would be a 'federation service' (that also distributes AQLs and collects and merges result sets for event compositions) that keeps track of where the latest VERSION for a given VERSIONED_OBJECT is. Does that make sense?
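As a sketch of what I mean - a minimal and entirely hypothetical federation index in Python, which only records, per VERSIONED_OBJECT, which CDR currently holds the latest VERSION (no clinical content stored at the federation level):

```python
class FederationIndex:
    """Hypothetical federation-level lookup: which CDR holds the latest
    VERSION of a given VERSIONED_OBJECT. Holds metadata only."""

    def __init__(self):
        self._index = {}  # object_id -> (cdr_id, latest_version_id)

    def register(self, object_id, cdr_id, version_id):
        # Called whenever a CDR commits a new version of a
        # persistent/episodic composition.
        self._index[object_id] = (cdr_id, version_id)

    def locate(self, object_id):
        # Returns (cdr_id, version_id), or None if the object is unknown.
        return self._index.get(object_id)

idx = FederationIndex()
idx.register("acp-uid-1", "cdr.gp.example", "acp-uid-1::cdr.gp.example::3")
# A querying CDR first asks the index, then fetches from the CDR it names.
```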

Whether there will be a single federation service in a federation or whether there can be many, and how to sync those, is strongly dependent on the context of the federation (local law, culture, healthcare system, cross-national collaboration etc.) and out of scope for this question (and to me hopefully out of scope for the federation spec).

Interesting question, and the suggestion for how to solve it is even more interesting.

Personally, I expect that in reality this won't be as easy to solve as any of the above - I think there won't be consensus, or cross-domain and cross-setting acceptance, that such data persisted in one CDR in one organisation will actually (and suddenly) be modified directly (or indirectly) by another organisation. The federations that I see at this time might accept a read-only approach, or perhaps contributing new data (via imports/syncs), but I can hardly see them accepting concurrent edits (cross-CDR).

Also, I have the impression that what is recorded as persistent in one organisation, might not be always relevant/necessary/‘in-scope’ for another organisation (I guess except some allergies, etc).

I think for now it's safer to go for a 'union' of federated query results, properly indicating the source of the data, perhaps with some smart filtering of duplicates - thus 'show them all' to the user. The next step would be to try out (occasionally) what Ian suggests as 3), and aim on the long term for 1), where that global data actually resides (somehow magically) with the patient.
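A sketch of that union approach (Python, everything hypothetical - the callables stand in for real AQL endpoints): fan the same query out to every CDR, tag each row with its source, and drop rows whose version id has already been seen.

```python
def federated_query(cdrs, aql):
    """Fan the same AQL out to every CDR, tag each row with its source,
    and drop exact duplicates (rows carrying the same version id)."""
    seen, results = set(), []
    for cdr_id, run_aql in cdrs.items():
        for row in run_aql(aql):
            if row["version_id"] in seen:
                continue  # same version replicated in two CDRs
            seen.add(row["version_id"])
            results.append({**row, "source_cdr": cdr_id})
    return results

# Fake CDRs: cdr.b happens to hold a replica of one of cdr.a's rows.
data = {
    "cdr.a": [{"version_id": "bp-1::cdr.a::1", "systolic": 120}],
    "cdr.b": [{"version_id": "bp-1::cdr.a::1", "systolic": 120},
              {"version_id": "bp-2::cdr.b::1", "systolic": 135}],
}
rows = federated_query({cid: (lambda aql, r=r: r) for cid, r in data.items()},
                       "SELECT ... FROM OBSERVATION")
```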

That might break a lot of things… it sounds like the version container becomes 'virtual'. You might be able to do this in some closely coupled, cluster-like databases, with some low-level native functionality, but I am not sure it is doable for openEHR data, especially as these systems connect to each other through APIs.

Just as a question of pure design: you will see from the spec's virtual version tree concept that distributed versioning relies on merging versions from some other location, for the same logical versioned object (e.g. representing a medications list or problem list). So if such a merge has been done, then functions like latest_version will correctly return the latest version from anywhere.

The easiest way to think about this, in terms of modern information technology, is that federated openEHR versioning is like Git versioning - seeing all the changes in some file relies on everyone making changes on copies ('clones') on their own systems and pushing those changes to the 'origin', i.e. the source-of-truth repo (usually on GitHub, but could be anywhere).

The only difference is that, generally speaking, there is not a recognised master (= origin) repository that is the single source of truth for a given patient. Things are changing, but it's still the case that there are often patient records in multiple institutions for the same patient, each with their own idea of the medications list. In places like parts of the NHS, Catalonia, Norway, parts of Sweden etc. there is movement toward a single EHR per patient.

In this situation, 'distributed versioning' works more on a peer-to-peer principle - that is, changes from any location (for the same content - meds list or whatever) can be merged across to the same versioned content object at any other institution, even backwards and forwards. This relies on the 'system id' being part of the id of every version of content, no matter where it was created.
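The mechanics can be sketched like this (Python, illustrative only - not the spec's merge algorithm): because the creating system is embedded in every version id, following the openEHR `object_id::creating_system_id::version_tree_id` pattern, versions made anywhere can be merged into any copy of the versioned object without id collisions and without losing provenance.

```python
def version_id(object_id, creating_system_id, tree_id):
    # openEHR-style version id: object_id::creating_system_id::version_tree_id
    return f"{object_id}::{creating_system_id}::{tree_id}"

# Versions of the same logical meds list, created in two different systems:
local = {version_id("rx-list-9", "cdr.hospital", "1"),
         version_id("rx-list-9", "cdr.hospital", "2")}
remote = {version_id("rx-list-9", "cdr.gp", "2.1.1")}  # branch made at the GP

# Because the creating system is part of each id, merging is collision-free:
# versions from anywhere land in the same container with full provenance.
merged = sorted(local | remote)
```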

There would be other technical solutions to this whole problem (e.g. based on distributed querying), but the current model is pretty solid, and it has built in the concept of merging, which often will require human intervention to determine if certain changes make sense to be added to an existing source version (again, Rx list is easy content to imagine this).

It is - everything I said above relates to multiple copies of the same logical content items (Rx list, Dx list etc). For event content, e.g. blood glucose recorded at a) home, b) a general practice checkup, c) hospital, 'merging' such content from elsewhere into my EHR system here doesn't merge those glucose Obs onto the same versioned object; it's more like merging new 'files' (in Git thinking) into the target EHR. The same thing could also be achieved by distributed querying.

So the federation concept is different:

  • for persistent content (Rx list, Dx list, allergies list, vacc history, main care plan, etc) it is like merging changes to the same file from different places
  • for event content (labs, most observations, orders etc), merging means bringing in more 'files' (data objects) from elsewhere, each with different time stamps.

The first kind of merging is harder to do, but it’s the one that everyone really wants (one day). Without it, we are stuck with endless ‘medications reconciliation’, repeated interrogation of the patient on allergies, and things like the WHO yellow vaccination card.

The second kind is technically easy.

That's the ideal, but it's not strictly necessary, as per above - merging persistent content back and forth among peer EHR systems is possible. I'm certainly not recommending it, but it will work, and it's probably necessary as a stepping stone to the future. Note that distributed querying has to be engineered in a certain way if we know that there are duplicates of event data in multiple places.

Ian’s option 3. is this anxiety-inducing way of doing things :wink:

Anyway, Ian’s 3 options are a good description of the possibilities.

The US is also far from an SSOT EHR per patient - EMRs are held inside Kaiser, Intermountain, Mayo - it's all 'health system'-centric, or worse, institution-centric. So at least some geographies need to live with this truth right now.

I agree - for now, mostly there will be no such agreement. That is why peer-peer version-merging of content has to be considered.

Also true - even establishing what is 'persistent' content will require some whole-of-community agreements. But for a few things - medications, master problem list, allergies, vaccinations - I would think everyone will agree on these being persistent content, with the same information design etc.

When I first designed the distributed version approach of openEHR (stealing liberally from past expertise and standards) I was assuming that there would be no single-source-of-truth EHR for a given patient for quite some time. It’s not the complete solution, but I think it’s probably the most realistic technical approach for today.


I'm not sure that's a federation problem, or that it's actually a case of having multiple truths.

The patient is always one, so the information should be centered on the patient independently of when, how, from where and who created the information, and where that information is stored (one or multiple locations).

If the federation means storage in multiple locations, some synchronization process should be in place to be able to have a copy of the latest data about the patient on every location as soon as possible.

In that scenario, there will always be the possibility of conflicts, which is part of the sync process - like two doctors from different hospitals updating the medication list for the same patient before the synchronization occurs. So for some (previously agreed) time both lists are out of sync; then, during the sync, some reconciliation might be needed, because what happens if both doctors add medications that share the same component, or components that could interact in an unsafe way? In that context an automated reconciliation might not be possible and might need human interaction for clinical safety reasons.

So for me, event, episodic, persistent, etc. is all the same. The key is the sync and the reconciliation.

NOTE: the “reconciliation” is like a git merge which has conflicts, and the merge can’t be done automatically.
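To make the git-merge analogy concrete, here is a toy three-way reconciliation of a medication list (Python, all names hypothetical): independent additions from both sides are flagged for human review rather than merged blindly, standing in for a real clinical interaction check.

```python
def reconcile(base, ours, theirs):
    """Toy three-way merge of a medication list. Returns the auto-merged
    list plus a flag saying whether human review is needed."""
    base, ours, theirs = set(base), set(ours), set(theirs)
    added_ours, added_theirs = ours - base, theirs - base
    # Keep items untouched by both sides, drop items removed by either,
    # and take in additions from each side.
    merged = (base & ours & theirs) | added_ours | added_theirs
    # If both sides added medications independently, they might share a
    # component or interact unsafely -- a stand-in for a clinical check.
    needs_review = bool(added_ours and added_theirs)
    return sorted(merged), needs_review

# Two doctors edit concurrently before the sync runs:
merged, review = reconcile(base=["aspirin"],
                           ours=["aspirin", "warfarin"],
                           theirs=["aspirin", "ibuprofen"])
```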

Then, after the information is fully synchronized, if everyone in the federation has a copy, I would say the query part is pretty simple: just query your local repo. IMHO the problem is getting the data synchronized between the different systems so that everyone has a copy of the EHR.

The other way of doing this is by sharing parts of the EHR of the same person between different organizations, which IMO doesn't make much sense, but it's technically possible. Then, when querying, you might need to go to each repo in the federation to extract part of the info to get a full picture of the patient. Why doesn't this make much sense? I would say each hospital wants as much data as possible about a patient, and a federation in general agrees to share the same data set between the different stakeholders, so everyone has a copy of the agreed data set.

You can think of this as IHE XDS, in which each organization in the federation has its own copies of the clinical documents, and they share an "index" which can be queried in order to retrieve the documents from the source organization. I think it makes sense for documents, but not for smaller pieces of information like individual entries. And of course it doesn't make sense for persistent lists like the medication list, allergy list, etc. - you want those to be stored at every organization! But it might work for event data.

Just my 2 cents.


Clinically it has multiple truths. A comp.problem_list contains inherently subjective information. Putting something on the problem list implies some degree of relevance; it's not just a list of all problems - in that case, just use an AQL for eval.problem/diagnosis.
So what the truth is is highly context-dependent. E.g. a home care nurse has a very different opinion on what should be on the problem list from what a radiologist wants on there, let alone if you take (curated) ordering preferences etc. into account.

This, to me, is actually what we’re trying to solve with federation.

There are many reasons why a hospital wouldn't want to, and can't, do this: technical, legal, clinical, privacy etc.

Thanks for this nice example; it actually puts nicely into words what I vaguely had in mind. So I think for persistent/episodic data you have to keep an index at the federation level of where the current active version for that data is (potentially including the version container). For event data you can just send an AQL to each CDR in the federation.

If you want to have a federation without a central index (as I would want for my country), you have to find a way to distribute the index so that each federation service has a (reasonably) current copy (or part) of the index. That feels hard but solvable - but, as said before, it's outside the scope of this discussion.

So in my previous thinking, there will be no merge, but that might change, as detailed below.

Agreed. But this is still assuming there should be a single place, a 'master branch', that has the truth, and that data can be branched out and merged back in. Which is a great feature. But in my project there is no single place that holds the truth. It depends on the current context, and it may shift, as explained in the post below. Now this concept is still very useful, but a little beside the point I'm making. But it is helpful in making me think that for persistent/episodic data, when the location of the truth is shifted, this is a great moment to copy over the version container and (some of) its versions, potentially merge the data inside those versions into a new version, and then tell the federation the new location of the truth for that version container.

Yes, you're probably right, and I don't want to go to clustered databases - I want to solve it at the openEHR (federation/RM) level. So probably this is an extra reason to conclude we need a single version container at a given time (and potentially multiple concurrent ones, depending on context variables, for episodic). Would the proposal in this post around indexing version containers at the federation level solve this issue?


Agreed, but: it's more acceptable if the other organisation only edits a new version of that data, and then tells the federation service about this new version. That way the organisation only needs to trust the particular federation service that tells it there is a new version of its data, for a specific composition, coming from a specific organisation. That should scale reasonably well.

And in the end we really need to start trusting other organisations to edit data that is highly relevant for my organisation.
So the use case that gets mentioned most in NL is ACP (resuscitation policy). What you want is:

  1. Prevention of undesired treatment. E.g. the GP records 'no more hospital admissions', the patient collapses, neighbours call an ambulance, the ambulance doesn't know about the ACP info from the GP and brings them to hospital anyway. The patient dies in hospital after getting invasive procedures and racking up cost. Everybody unhappy. What you need for this is to have the ACP information available in the care network of the patient. In our region of the Netherlands, the involved parties are the ambulance, GP, hospital and home care nurse (potentially more). To prevent this scenario you need:
    A) the ACP information available to all parties in the network.
    B) be able to trust that information

To achieve A and B, you have the following requirements:

  • all parties need to have an application that can process the ACP data
  • a common data format for ACP data
  • a common understanding of the semantics of that data
  • verification of the source and authenticity of data
  • guarantees about the (clinical) trustworthiness of the data
  • know the information is current
  • a shared access policy to control operations on that data
  • a trustable networking protocol
  • to know the source of truth (depending on variables for episodic data)
  • easily shift this source in case of a change in patient situation (e.g. responsibility for ACP decisions shift from GP to specialist in case of hospital admission)
  • (probably more?)

And you want it in a data centric and distributed way, for reasons not specific to ACP data (briefly mentioned above in my reply to Pablo saying each hospital wants a copy of the data).

  1. You want synergies in recording the data to save cost and labour, e.g. no longer each party having to manually update its own version of the data.
    This will require most of the above requirements to a more stringent degree

  2. Increased clinical alignment/agreement on the content and its meaning. This may require additional functionalities like video calls, scheduling meetings etc.

As an approach for this project, given there will be many similar ones:
I think you need to

  1. Determine the parties in the clinical network - pragmatically, a region for now (free text)
  2. Agree on scope and goals (free text)
  3. Agree on a data set and its semantics for ACP data (archetype/template)
  4. Agree on ways of working that guarantees production of data according to those semantics
  5. Define and enforce an access policy that guarantees those ways of working. E.g. a regional code that specifies user X can write composition.acp for patient Y if that user is a doctor, is employed by an organisation that is a party in the federation (for this use case), and is at this moment responsible for managing the ACP data for patient Y. More info: access control for openEHR 'resources'

In this way you can scale by use case, and different use cases can have different results for these five steps.

And the proposal above also downsizes the community Thomas describes, without creating issues on the other requirements - but of course creating cross-community issues.
But I feel it's the only way to make timely progress, because I already disagree with this list: a problem list is definitely episodic to me, as explained in the post above. And I can see the same reasoning going for the other compositions, although less convincingly.

For event data, this union of a query is definitely where we should focus the federation spec. I'm even OK with keeping persistent/episodic compositions out of scope for the current version of the federation spec. But anyway, to me it's useful to think about the challenges and solutions around federating persistent/episodic data.
I wouldn't like to do what Ian suggests to sync data across CDRs; I think the risk (technical and clinical) of issues is not worth the benefits, maybe outside of niche use cases or highly controlled scenarios. I also feel there shouldn't be a global universal truth of data location. At the very least it should be a location per patient (otherwise you end up with something like the country, EU or UN managing that - the horror…). And even a truth per patient is highly problematic, because, at least in NL law, there's a responsibility on individual doctors and care organisations for that data. And you can't make good on that responsibility if you can't control the data, and you can't control it if it's not in a CDR under your control. Solving this by trying to trust a central authority creates enormous governance issues, making it extremely slow, and it doesn't scale. History has proven it hasn't been possible, and there is little reason to expect this to change.

That is exactly what we have done for London UCP (all 5 points).

And ACP (Anticipatory Care Planning) is a really good use case because it is very largely not about clinical opinion or "contextual problem lists" (I agree). It is, or should be, a set of largely factual information that is decided by a shared decision-making process with the patient.

All of our current UCP dataset uses persistent compositions; in theory End of Life care is an episodic type of persistence, but most of us do not have multiple end-of-life situations!!

I think it is really important to separate legitimate questions of privacy/security around single/multiple datastores, from the informatics, as the appetite for more or less convergence of information varies wildly from country to country, region to region and over time as cultural appetites/politics change.

Even if a single national datastore is culturally unpalatable, it may be ok at regional basis, say 5-7 million population, but exactly the same challenges of data quality and currency arise at that level.

What is always true is that there are some kinds of persistent information, like most of ACP, that really do need to be handled as a virtual single truth, i.e. conflicting versions of that truth are a really bad idea from a patient PoV.

So you might not want, or be able, to use a single CDR (or even openEHR), but one way or another there is a set of information that must have shared governance (who exactly, to be decided) and absolutely watertight synchronisation of that data, with no loss of meaning/quality/granularity.

Our experience, which we are happy to share, is that this is almost impossible to achieve across asymmetric datastores (openEHR included), even when decent interoperability via FHIR is available. And even if it were possible, you land back at the same joint governance model, i.e. "my system accepts your sync update without question" - otherwise you run the risk of mismatches.

I'm not minimising the cultural/technical and legal shifts that have to take place to achieve this, but in the UK there is now a growing move to handle at least this kind of ACP / care coordination information outside traditional EPRs as a primary record, and not just in openEHR-based systems. Done regionally, at least in the UK, it is happening, with moves in place to allow this information to be accessed 'out-of-region' via record locator XDS-type services.

Definitely agree that event data is easier to handle as a ‘union’ @sebastian.iancu

@joostholslag

I wouldn’t like to do what Ian suggests to sync data across CDRs,
I was definitely not suggesting that!! I was saying that if you want to manage ACP data read-write in a federated way, across datastores, you are going to have to sync that data, one way or another.

@thomas.beale Your suggestion about updating a local version stack from a remote CDR is interesting, but from a governance PoV it is really no different from working against a remote CDR. You are accepting as-is an update from that remote CDR, and I would expect that rule has to be reciprocated, so you might as well just extend the access/governance rights across the CDRs.

@Joost - I agree there needs to be multiple contextual problem lists, although we would regard the GP problem list in England as the best 'master problem list', tightly governed by GPs for now. Meds are complex but ultimately 'factual' and should be a global record, as they have in Denmark.
Allergies should be a master list but are clinically fraught so for now we would keep that as a read-only feed from the GP record.

Here is our list of 'Global Background' information that is actually much easier to manage as a single (if virtual/federated) record.
Much easier because it is mostly information directly about the person's life and not about specific conditions or contexts, is often handled pretty badly in existing systems, and is of very high value to people themselves. Collecting and sharing it between practitioners is a huge burden.

  • Communication / behavioural differences
  • Living arrangements
  • About me (what I like / don't like) - just narrative
  • Advance statements (living wills)
  • Advance decisions / recommendations (CPR etc)
  • Carer contingency planning - my responsibilities if I get sick (child, wife, cat)
  • Current Functional assessment - Needs help with …
  • Emergency professional contacts
  • Emergency personal contacts
  • Legal status information - power of attorney etc
  • End of life care (Primary diagnosis, prognosis, place of death preferences)

In One London, these are mostly modelled as separate templates.

Condition-specific information, e.g. sickle cell disease, is modelled as 2 other templates. It has its own problem list and treatment escalation planning models. So far this is very much an 'emergency treatment' plan, but we would envisage it having very tight write/update governance held by the local SCD team, and I suspect that will remain.

Where seamless shared governance across datastores is not possible, we played with the idea of something like pull requests, where a remote organisation, working on the same data models, could construct a composition which was provided to the local datastore but had to be accepted formally.

Imagine a London patient having a sickle cell crisis in Scotland. The Scottish team may review and update their plan locally, but the London team would have to accept this back wholly or partially into their CDR - so a bit like @thomas.beale's idea, but 'gated'.


To come back to this: for true 'single-truth' type data I don't think this can ever be achieved without shared governance, and then, technically, either there is a completely separate CDR for this kind of data, or one of the CDRs acts as the custodian and there is a pointer system to know which is the current source for 'ACP'.

Having a master branch in a repo (Git or openEHR-like) isn’t the same as having a master repository in a certain place (like GitHub). There is nothing to stop there being multiple (even partial) copies of the same repo (patient 123’s EHR), and in each the master branch is an attempt by all previous modifiers to correctly state the total truth for the patient, e.g. the full current list of medications.

It is however true that without some de facto location (maybe the GP) that acts like a central ‘moderator’, it’s going to be harder to get this right.

A logically single EHR per patient is still (by far) the preferred target architecture. But current reality isn't yet like that (sorry @pablo - even though your logic is pretty much correct), so we have to look for possible/workable solutions, not just the ideal ones.

I think this is pretty much the equivalent of declaring a master 'moderator'/owner of patient 123's data, which means that distributed versioning logic à la Git could work. It's just that I think you want to allow for the fact that not all patients have the same master location / SSOT repository, and it might even be that different parts of the same patient's record have different master locations. This sounds like a baaad idea, but still, one can imagine the Rx list being moderated by the GP (due to own prescriptions + what is in discharge summaries), while a cancer care plan would be managed by the patient's oncologist.

Your ACP example is based on an SSOT model… End of life is the perfect use case to look at for achieving distributed SSOT, and I think a good one to try to solve.

Interested to see how this develops.


This is correct, because after all this is what is going to happen in reality - health IT either helps it or gets in the way. Our job is to make it reflect what is happening in reality as much as possible. That means replicating 'singleton' entities that are digital twins of real-world singletons - at least some of:

  • the patient;
  • authoritative decision makers, e.g. the family member of a late-stage dementia sufferer
  • some kinds of care plan, e.g. the ACP / EOL kind

That is true; I was assuming that direct access to the other CDR (other than for asynchronous data sync) is not available. If it is, then you can work on a service-model basis instead.

These are two good examples of 'digital twin singletons', i.e. single-source-of-truth replicants of real-world singletons.

Indeed - it needs data sharing agreements, as well as the pure technical capability to share update/version information.


Logically, the patient problem list is one, you can’t say in this context the patient X has problems A and B and in other context the patient X has problem C.

What information is displayed, for which use case, and in which context (e.g. a specialist only wants to see part of the problem list) is not something related to federated data; it is an application-level requirement. I think the curation, order, priority, etc. come on top of (in a different layer than) the federated data management.

That will depend on the agreed architecture for the federation. If you want the federated data management to be based on how repositories are queried, you have already chosen one approach. There are other approaches, at different levels of the architecture, that are worth exploring before coming to early conclusions that might be based on biased assumptions.

There are two big approaches: delta + indexes (like XDS) and full sync. My previous message was based on full sync; that is why querying is not so important - you can query anywhere and get the right data, and there are no silos.

If there is a hard requirement/reason why a hospital wouldn't want to have all the data about their patients - and I can't think of any good reason - then we have a problem.

Note if there are no technical, legal, clinical, privacy, etc. agreements done, there is no federation at all. The solution for a federation comes with those types of agreements (contracts) done beforehand.

Having as much data as possible means having the data that is agreed to be shared by the organizations, approved by the patient (consent), complying with regional, national and local regulations, in a common technical framework (architecture) and in an agreed clinical framework (who will use which information, for what, in which contexts, etc.).

What I tried to say is that, in my opinion, that's not a good solution for persistent data. It might be for episodic, since in episodic you have a lot of event data in the context of the episode, and for event data the index works fine. For persistent data I think the sync process + reconciliation is better, since every hospital could have a copy of the list of medications, problems, etc., and then do the curation, order, priority, etc. per organization, per specialty or even per clinician.

Remember that having an index means you need to manage it, and the index is a centralized shared piece of software that everyone and nobody owns. Of course, if the federation is public, the MoH can own it. With the sync approach there are no centralized components, just a bus where all organizations that participate in the federation sync with each other (again, only the agreed data, with all the legal and privacy stuff on top).

In XDS, when querying, you first go to the index to find information about patient X in every hospital in the federation; you can filter by patient, author, type of document and some other metadata, but you can't query by contents - you get full documents. The index responds with which hospitals have that data for that patient, and then a pull is done to each CDR (really, document repository) of each hospital - it is the client system that pulls data from each hospital; it's not that you send one pull to a central component and that central component does the federated retrieval. You could try to extend that approach and, instead of pulling documents, send a query - but for this to make sense, the query should be constrained by the parameters you used when asking the index in the first place, which might be technically possible. Though for persistent data this will be chaos for the client, trying to make sense of all the different lists of the same type retrieved from different hospitals (what I mentioned above about persistent lists being logically/conceptually one per patient).
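The retrieval pattern described here can be sketched like so (Python, all names hypothetical): the index holds metadata only, and it is the client that pulls each document from its source repository.

```python
def xds_style_retrieve(index, repositories, patient_id):
    """Toy XDS-style retrieval: ask the index which repositories hold
    documents for the patient (metadata only), then pull each document
    from its source -- the index never stores document contents."""
    docs = []
    for entry in index.get(patient_id, []):
        repo = repositories[entry["repo_id"]]
        docs.append(repo[entry["doc_id"]])  # client-side pull per source
    return docs

# Fake index + per-hospital document repositories:
index = {"patient-1": [{"repo_id": "hospital.a", "doc_id": "doc-1"},
                       {"repo_id": "hospital.b", "doc_id": "doc-7"}]}
repositories = {"hospital.a": {"doc-1": "<discharge summary>"},
                "hospital.b": {"doc-7": "<lab report>"}}
```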

That’s the sync + reconciliation approach explained before. There is no index, not even locally, just a set of data that is shared and copied between all members of the federation. So it’s a git merge + conflict resolution kind of problem. The rest is, IMO, on a different layer (security, privacy, legal, visualization, permissions, etc.).
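A minimal sketch of what that git-merge-like reconciliation could look like for a persistent list, assuming entries are keyed by a stable id and a conflict arises when both copies changed the same entry. The function and data names are illustrative, and the resolution policy shown is deliberately trivial; in practice reconciliation would be a clinical step, not an automatic rule.

```python
def merge_lists(local, remote, resolve):
    """Merge two copies of a persistent list keyed by entry id.
    `resolve` is called when both sides hold a different version of
    the same entry (the 'conflict resolving' part of the problem)."""
    merged = dict(local)
    for entry_id, remote_entry in remote.items():
        if entry_id not in merged:
            merged[entry_id] = remote_entry            # new entry: take it
        elif merged[entry_id] != remote_entry:
            merged[entry_id] = resolve(merged[entry_id], remote_entry)
    return merged

# Two organizations' copies of the same patient's medication list.
org_a = {"med-1": "aspirin 100mg", "med-2": "metformin 500mg"}
org_b = {"med-1": "aspirin 75mg",  "med-3": "lisinopril 10mg"}

# Trivial policy: prefer the remote version on conflict.
merged = merge_lists(org_a, org_b, resolve=lambda mine, theirs: theirs)
print(sorted(merged))  # -> ['med-1', 'med-2', 'med-3']
```

The point of the sketch is that sync is cheap (the set union) while the real cost is in `resolve`, which is where the per-organization curation mentioned earlier lives.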

In general, this is an iceberg and we are just looking at part of it (not even the tip hehe). We need a full list of requirements to be able to find the best solution for each case, and test what falls into openEHR’s scope so it can be specified.

You would think that, but it doesn’t work that way at all. That’s why @ian.mcnicoll talks about a ‘master problem list’ - but there can be other specialty-centric problem lists, a nursing problem list and so on. This is why we engineers need to listen to the MDs. Things are rarely as simple or logical as we would like!

It’s the problem we already have today… if you ask docs whether they want ‘all the data’, they’ll say yes. But there are layers of applications, systems, procurement, data partnership (or not) agreements, and the general inertia of HIT, money and much else that stands between the dreams of MDs and RNs and the reality.

Not that we shouldn’t try to break it - that’s our mission - but it’s a long term thing, so we need working solutions in the interim phases.


I’m not using “logical” in the everyday sense of “sensible”; I’m using it in the computer science sense.

We need to make the formal distinction between the logical/conceptual view and the concrete/materialized view. I’m not talking about how things work or how they are actually implemented, I’m just saying that when looking at one patient, that patient has a unique logical/conceptual problem list (or medication list, allergy list, etc.) because the patient is one physically unique entity.

Then if we say that there are problem lists that are specialty-centric, we are talking about another level, closer to the application and the end user, which could differ from the information management level (these levels should be defined in order to understand at which level we are discussing, since I’m sure we are discussing different levels here).

The underlying discussion/challenge is: can we represent that unique logical/conceptual problem list for a single patient at the information management level, when that information is shared between different organizations in a federated context? (Being federated means there are agreements in place: legal, technical, clinical and so on.) The second discussion is: can we derive the different concrete/materialized specialty-centric problem lists from that unique problem list for the patient? (Based on what is defined at the information management level.)
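The two-level distinction above can be illustrated with a small sketch: one logical problem list per patient at the information-management level, and specialty-centric views materialized from it closer to the application level. The field names and the `specialties` tagging are invented for the example and are not an openEHR model.

```python
# The single logical/conceptual problem list for one patient
# (information management level).
logical_problem_list = [
    {"problem": "type 2 diabetes", "specialties": ["endocrinology", "gp"]},
    {"problem": "pressure ulcer",  "specialties": ["nursing"]},
    {"problem": "hypertension",    "specialties": ["cardiology", "gp"]},
]

def materialize(specialty):
    """Derive a specialty-centric view (application level) from the
    one unique logical list; the view holds no data of its own."""
    return [entry["problem"] for entry in logical_problem_list
            if specialty in entry["specialties"]]

print(materialize("nursing"))  # -> ['pressure ulcer']
print(materialize("gp"))       # -> ['type 2 diabetes', 'hypertension']
```

The design point is that the specialty lists are projections, so there is exactly one thing to reconcile across the federation: the logical list.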

I don’t have a solution for both of those problems, but it’s necessary to define them in order to understand the whole picture and to discuss at the right level.

For me “it doesn’t work that way” is not a good enough argument.

I understand how this works, though I’m not talking about the end user, but about the organization as a whole having all the “agreed to be shared in the federation” information available. Then how they manage that internally and build the paths in their internal systems so clinicians can access that info is out of scope, at least when discussing at the level of the federation.

So we need to focus either on the federation or on the internals of each hospital. We can’t do both at the same time because they have different sets of requirements. I’m trying to define a scope here :slight_smile:


This is a very interesting thread, but at the same time very complex, with a lot of different perspectives.
Perhaps @joostholslag does not mind too much the discussion drifting onto other aspects, but in the interest of his original question and of readability, please try to reply to the point as much as possible.


In case it is not obvious from all the posts above, I guess Joost’s questions were around solving these issues with openEHR, where the organisations are of different types (hospitals, GPs, home/elderly care, etc.), perhaps even from different countries or languages, and they all use different CDR instances, from various vendors, but are all part of a federative network.


Hi @sebastian.iancu, for my part, I was thinking of a context similar to the one you described when giving my opinion. My take is that this problem can’t be solved with openEHR alone, and that there should be a framework of agreements at many levels to make this work. If those are not in place, IMHO the federation itself doesn’t exist. So to start, we need to define what it means for an organization to be part of a federation that shares data with other organizations in the same situation.