We need to provide information about the composer and the corresponding organization that was responsible for the treatment of the patient (which can be different from the healthcare facility where the treatment took place). To my understanding, this would require a party_identified for the composer that points to a (demographics or provider) service that stores this information. However, when importing data, this information cannot always be properly put into such a dedicated service, and we would be better off if we could state such information directly as part of the party_identified.
I would say, use (implement) openEHR Demographics - then problem solved
Perhaps a slightly tricky suggestion: you could also use FHIR Practitioner refs, assuming that your external system is a FHIR server which can handle Party/Practitioner-Organisation relations?
The question is if I would always like to have object representations of any participant or composer in such a service. For our use-case, this is just unnecessary overhead. How do you feel about allowing to add some extra properties like role and organization directly in a party or even introducing a new party type for such purposes?
See this post, and the previous one in that discussion, for a) a solution and b) reasons why we really don't want to keep adding details into the PROXY_PARTY types.
Yup - in a perfect world, we would reference back to the formal record, whether in openEHR demographics, or FHIR, or … but in many circumstances this is just not possible, or not doable within budget/governance arrangements. So we end up having a need to add snippets of demographic information to the EHR, commonly Role but also variably organisation, sometimes contact details.
If we added Role(s) and an other_details archetypeable structure to PARTY_IDENTIFIED, that would very largely solve this problem, while still allowing references to be used 'correctly' where this is possible/desirable.
This could then work inside composer and participations.
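To make the proposal concrete, here is a minimal sketch of what an extended PARTY_IDENTIFIED could look like. The `roles` and `other_details` fields are the hypothetical additions being discussed; they are not part of the published openEHR RM, and the class shapes here are simplified stand-ins, not the real RM types.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DvIdentifier:
    """Simplified stand-in for the RM's DV_IDENTIFIER."""
    issuer: str
    id_value: str

@dataclass
class PartyIdentified:
    """Simplified stand-in for PARTY_IDENTIFIED, with the proposed extras."""
    name: Optional[str] = None
    identifiers: list[DvIdentifier] = field(default_factory=list)
    # --- hypothetical additions discussed in this thread ---
    roles: list[str] = field(default_factory=list)
    other_details: Optional[dict] = None  # stand-in for an archetyped ITEM_STRUCTURE

# A composer carrying its own role and organisation snippet inline,
# with no external demographics service involved.
composer = PartyIdentified(
    name="Dr Jane Smith",
    identifiers=[DvIdentifier(issuer="NHS", id_value="1234567")],
    roles=["consultant oncologist"],
    other_details={"organisation": "St Elsewhere Hospital"},
)
```

The same shape would work wherever a PARTY_IDENTIFIED appears, i.e. in composer and in participations.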
I am still unclear how this (the main) problem will be avoided:
Let's say an area health service has 1,000 clinicians, and they have on average 10 contacts / day, x 300 days in the year. That's 3 million contacts. With no common repository for their details, that's 3 million replicated data entry events. Normally we want to avoid that.
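The arithmetic behind that estimate, spelled out:

```python
# Back-of-envelope check of the replication estimate in the post above.
clinicians = 1_000
contacts_per_day = 10
working_days_per_year = 300

contacts_per_year = clinicians * contacts_per_day * working_days_per_year
print(contacts_per_year)  # 3000000 - one replicated data-entry event per contact
```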
Perhaps we don't have to take that decision, we just have to facilitate. Usually such a large organisation will have a demographics or FHIR service and they'll go for the optimal way, whereas others may have a simplified infrastructure, with less data, and don't mind the duplicates.
The problem is that we also might receive data from outside the organization/EHR. Maintaining a correct and up-to-date external demographics/provider repo might not be feasible or desirable in this case, as we might not get proper identifiers for organizations or there is no shared list of value sets for organizations and roles.
It's just a dirty real world out there, and openEHR needs to deal with that without punishing users.
Well, the point isn't to punish users, it's just to avoid truly basic design errors, like uncontrolled copying.
But I think the key thing is to work out the 'real' real-world requirements for 'simplified demographics'. The requirements you describe do not sound like those @ian.mcnicoll or @heather.leslie describe in this thread.
In that thread, Heather is talking about proper archetyped demographic structures, but using EHR-based archetypes. If anyone is going to that trouble, they don't want uncontrolled replication in their system (which really is disastrous, and might even be a patient safety issue). But on the other hand, if the project / product doesn't support openEHR demographics, then the hack I proposed - using those EHR archetypes - will solve it, and the main need for referencing rather than copying is still achieved.
I think we need to be pretty clear on whether it is intended that every time Dr Jane Smith is referenced from the (say) 50k EHRs in some health service, that the EHR recorder has to re-enter all the same data every time, or whether they could search previous entries and just select the Jane Smith already there.
If the need is to represent demographic snapshots in imported data, and this data is regarded as unreliable, partial etc, then that may be a different problem - I would say this belongs with the original_content within the FEEDER_AUDIT. But if the situation is that the demographic data needs to be relied upon to be in any way correct, then creating the circumstances for uncontrolled replication isn't going to help.
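One way to read that suggestion: unreliable imported demographics travel verbatim inside FEEDER_AUDIT.original_content, rather than being promoted to structured fields that queries would rely on. A rough sketch, using simplified stand-ins for the RM's FEEDER_AUDIT and DV_PARSABLE classes (the real ones carry more attributes):

```python
from dataclasses import dataclass

@dataclass
class DvParsable:
    """Simplified stand-in for DV_PARSABLE: an opaque payload plus its formalism."""
    value: str
    formalism: str

@dataclass
class FeederAudit:
    """Simplified stand-in for FEEDER_AUDIT."""
    originating_system_id: str
    original_content: DvParsable

# A partial, unverified demographic snippet from an external feed is kept
# as-is for provenance, instead of being entered into a demographics service.
audit = FeederAudit(
    originating_system_id="external-lab-system",
    original_content=DvParsable(
        value='{"practitioner": "J. Smith", "role": "GP?"}',
        formalism="application/json",
    ),
)
```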
Secondly, if there is the need to replicate the sophistication of the openEHR demographic model using EHR archetypes, that is doable, but it implies that there is control over the creation of that data; if so, it should be in a managed cache that supports references rather than copying, and which is very easy to implement.
As I see it, the main problem is how to define the federated architecture and common services in the federation.
For instance, if you have one organization like a health ministry managing many individual organizations, a common demographic service can be implemented, so one organization controls the federation. But in the real world we can have different organizations sharing clinical records that are at the same level (there is no central organization managing/coordinating all of them), and it's more difficult to create shared services; they might only share a subset of the data. So a centralized demographic service is very difficult to create and maintain. In that context, the easiest thing is to copy data everywhere as context of clinical data, so that when sharing a subset of the data the different organizations manage, the extra bit of context is there.

I totally understand the issue, which is very common when integrating systems between big organizations. As Ian mentioned, creating those centralized/shared services takes a lot of time and money, since agreements must be made, shared departments must be built, policy is required, etc., and that requires some level of maturity.
But it's gonna take money
A whole lot of spending money
It's gonna take plenty of money
To do it right, child
Just to be clear, I am not advocating that … I'm advocating a CDR-local demographic cache - shared by all EHRs in the CDR. Each CDR would have to have its own, and if Dr Sarah Jones works at two places served by separate CDR instances, she'll have two separate records, probably not being correctly maintained. But if each CDR has say 500k EHRs, even this simple system will drastically reduce the amount of copying, repeated data re-entry, local errors etc at each of those sites.
That's the problem: a shared service is difficult to come up with, even if it is local to one single organization with hundreds of departments and systems in place. Though my comment was about the enterprise level, the same principle applies within a single organization of a certain size: a shared service is a centralized local service, and it is difficult to get all parties to agree on migrating to the new approach. Think of having just two CDRs in the same hospital, where the CDRs come from different vendors - who can actually push them to coordinate, and how much will they charge for that? I have seen this happening; luckily it wasn't my battle to fight.
Thanks @pablo, this matches my experience. I think we need to address this. I think providing both options is helpful, even if this means that some people might shoot themselves in the foot.
I'm not even thinking that big. I'm talking about a cache completely internal to the CDR, with no exposed service API at all - it's really a cache. It just achieves one thing: enabling re-use of (some) demographic entities across the EHRs maintained inside that CDR. And I'm also assuming that data items like 'patient's lawyer' etc are represented 'inline' as they are today, i.e. as PARTY_RELATED. I'm also assuming we have added role to PARTICIPATION as discussed in the past. This cache enables the use of ADMIN_ENTRYs containing archetyped demographic entities as Heather wants, and the sharing of those that represent HCPs across that population of EHRs.
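Such a CDR-internal cache could be little more than a keyed map that hands back a stable reference, so that the second and every later EHR reuses one demographic entity instead of copying it. A minimal sketch with an invented API (nothing here is from any spec):

```python
import uuid

class DemographicCache:
    """CDR-internal demographic cache: no exposed service API, just entity reuse."""

    def __init__(self):
        self._by_key = {}    # (issuer, id_value) -> internal reference
        self._entities = {}  # internal reference -> demographic entity payload

    def get_or_create(self, issuer, id_value, make_entity):
        """Return a stable internal reference, creating the entity only once."""
        key = (issuer, id_value)
        if key not in self._by_key:
            ref = str(uuid.uuid4())
            self._by_key[key] = ref
            self._entities[ref] = make_entity()  # single data-entry event
        return self._by_key[key]

    def resolve(self, ref):
        return self._entities[ref]

cache = DemographicCache()
# Two different EHRs referencing the same HCP get the same entity back.
ref1 = cache.get_or_create("NHS", "1234567", lambda: {"name": "Dr Sarah Jones"})
ref2 = cache.get_or_create("NHS", "1234567", lambda: {"name": "Dr Sarah Jones"})
```

Here `ref1 == ref2`: one data-entry event instead of one per contact, which is the whole point of the cache.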
Anyway, my main point is: I think we should always look carefully at the requirements, and make coherent changes to the specs and downstream artefacts that address the requirements properly. If we need to make non-ideal changes, that's fine as well - but we need to understand properly what changes will address what problems, as well as what new problems they will create that people will hate us for in the future…
Gotcha. In that context I think there are two issues:

1. You still need to get that data from a source (in most cases multiple sources) to be able to cache it. That component has to be built, since it's not part of the CDR, which still requires some money and time from our friend George Harrison.
2. There is some management needed there: for instance, if there are COMPOSITIONS referencing those records, what happens if the cache is deleted or outdated? Some caching policy is required here.
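For the second issue, one simple policy could be reference counting: refuse to evict a cache entry while any COMPOSITION still points at it. A sketch with an invented API, just to illustrate the idea:

```python
class GuardedCache:
    """Cache whose entries cannot be evicted while still referenced."""

    def __init__(self):
        self._entities = {}
        self._refcount = {}

    def put(self, ref, entity):
        self._entities[ref] = entity
        self._refcount[ref] = 0

    def link(self, ref):
        """Called when a COMPOSITION starts referencing this entry."""
        self._refcount[ref] += 1

    def evict(self, ref):
        """Evict only entries no COMPOSITION references any more."""
        if self._refcount.get(ref, 0) > 0:
            raise ValueError(f"entity {ref} is still referenced; cannot evict")
        self._entities.pop(ref, None)
        self._refcount.pop(ref, None)

guarded = GuardedCache()
guarded.put("hcp-1", {"name": "Dr Jones"})
guarded.link("hcp-1")  # some COMPOSITION now references this entry
try:
    guarded.evict("hcp-1")
except ValueError:
    print("eviction blocked while referenced")
```

Outdated entries would need a similar rule, e.g. superseding a record with a new version while keeping the old one resolvable for existing references.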
Though, those are things any system should go through either way, since storing non-EHR data in the EHR is a quick and dirty solution IMO, and I would go that way only if I don't have much time or money or people in my project. In a better-resourced situation, I would construct something to manage that extra data and build some management policy around it.
For instance, when I was building the EHRServer I knew we needed a demographic server to work side by side. I even have a design for that somewhere in my diagram collection. I guess I need to look back at that when I have more time to do some programming
On second thought, if the demographic model were a first-class citizen in the RM (we didn't give it much love in the last years, mainly in tool support), and we had some demographic templates, we could have openEHR VNAs capable of storing any RM component (EHR, DEMOGRAPHIC, etc.) in it - so technically a generic openEHR data storage, which at deployment time is configured to work as a demographic or clinical repo. I can see a couple of simple changes in the EHRServer to make that possible… With this approach there is no need to build something else to deal with demographics.
Well, that's indeed a problem: tooling support. But there are a few more that require our (SEC) attention: the AQL spec, the API spec, some RM 1.2 specs. There is also an issue on the end-consumer side, as sometimes there is not enough will/knowledge/budget/time to use it.
In any case, I would not say that the Demographic RM was abandoned, as (formally) it is still in the same place in the RM.
But I'm curious now whether @birger.haarbrandt got his answer to his initial question, as it feels like this topic is starting to deviate a bit (IMO).
I hate to be controversial, but I'm afraid the reason that the Demographic RM is less well supported by the community is that there simply is little market demand for it, or prioritisation from industry partners. I don't see that changing any time soon. In any case it does not solve @birger.haarbrandt's problem, which is exactly what I have been arguing for some time.
There is a market/project reality that we often have to handle small and variable snippets of 'demographic' information inside an EHR context, where we have little or no control over the way that the local environment operates. If we extended PARTY_IDENTIFIED to add Role(s) and an archetypeable other_details, the problem would largely be resolved. There will of course be some design decisions and perhaps compromises to be made, but these can only be made by those involved in individual projects.
The Demographic RM does not solve this issue per-se.