I’m using the Procedure archetype and want to record structured details of the procedure under the “procedure details” slot based on what the procedure is. For example: the CPR procedure cluster if the procedure is CPR, the Laryngoscopy procedure cluster if the procedure is laryngoscopy, etc.
Is there a way to have multiple archetypes under the same slot in a template and choose between them during runtime?
In principle it is no different from having a choice in any given place, but the slot would need to have a maximum occurrence of more than one in the first place. In our implementation, we first clone the slot and give it a new virtual at-code, and then resolve it with the archetype we need.
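The “clone the slot, assign a virtual at-code, then resolve it” approach above can be sketched roughly as follows. This is a minimal illustration only: the class and field names are invented for this example, not taken from any real openEHR library, and a real implementation would operate on the archetype object model rather than on toy dataclasses.

```python
# Illustrative sketch of resolving one slot against several archetypes by
# cloning it under fresh "virtual" at-codes. All names here are invented.
from dataclasses import dataclass
from itertools import count
from typing import Optional

_virtual_atcodes = count(9000)  # virtual at-code counter: at9000, at9001, ...

@dataclass
class SlotNode:
    node_id: str                         # e.g. "at0071"
    occurrences: tuple = (0, None)       # (min, max); max=None means unbounded
    filler_archetype: Optional[str] = None

def resolve_slot(slot: SlotNode, archetype_id: str) -> SlotNode:
    """Clone the slot under a fresh virtual at-code, bound to one archetype."""
    if slot.occurrences[1] == 1:
        # the slot must allow more than one occurrence to be cloned repeatedly
        raise ValueError("slot max occurrences must be greater than one")
    return SlotNode(
        node_id=f"at{next(_virtual_atcodes)}",
        occurrences=slot.occurrences,
        filler_archetype=archetype_id,
    )

slot = SlotNode("at0071", occurrences=(0, None))
cpr = resolve_slot(slot, "openEHR-EHR-CLUSTER.cpr_procedure.v1")
laryngoscopy = resolve_slot(slot, "openEHR-EHR-CLUSTER.laryngoscopy.v1")
assert cpr.node_id != laryngoscopy.node_id  # each resolution gets a unique id
```

Each resolution yields a distinct node id, so several fillers can coexist under what was originally one slot.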
This is a different issue, and one that everyone hits sooner or later, i.e. the variability of the granular data. I know that various implementers have toyed with some kind of dynamic templating/validation, but I have not heard of any real success.
At the end of the day, you do have to validate against a set of constraints, and since that also acts to provide the schema for querying the templated data, you need to persist something that equates to the particular instance of data saved. So you might save on the size of a template that carries every optional ‘procedure’, but then you will have to create a per-instance template, or do something clever like using a lot of referenced embedded templates. AFAIK none of this is supported by any CDR right now.
So, for good or for bad you are probably stuck with the mega-template for now. That’s certainly our current practice but we would try to break things up a little where possible e.g. you might think of splitting out the Procedure record as a separate template / composition from the rest of the record. We did something like that for a GP system - split out things like meds, allergies, referrals from the main content.
It is frustrating but it is hard to see how it can effectively work differently, at least for now. This is the kind of detailed granularity that makes building any healthcare system in any tech stack, really hard.
The mega-template is not in itself a problem - it is just the design-time artefact that is huge (and I guess must slow validation a bit), but if you are doing things correctly, the composition should not be bloated.
Neither, really; it’s at the mapping-specification step (but it should be closer to the template-creation step, where you should be able to resolve a slot twice).
Just-in-time incremental archetype → template generation at runtime, much of which can be precomputed… but nesting of slots/archetypes requires incremental instantiation.
I’m coming back to this on the basis of this topic
which was really raised because of an attempt to do something very similar to the idea of a ‘dynamic template’.
By that we mean that a template can in some way be extended at run-time to allow dynamic validation of a composition.
A classic use-case is the GP consultation where an almost infinite number of Observation archetypes might need to be used to cover all the clinical possibilities.
The normal solution is the ‘mega-template’, but this does become unmanageable at some point.
I wondered about another possibility, possibly enabled by ADL2, which handles Entry-level templates (embedded templates) more elegantly, along with the ability to leave slots open and fillable at run-time.
The core of a GP Encounter is a traditional Composition template, but it is possible for the app, at run-time, to fill any valid open slots with Entry-level templated data, as long as these Entry-level templates are registered on the CDR (essentially as .opts).
When the composition is committed, the Entry-level TemplateId is carried in ENTRY.LOCATABLE.archetype_details.
At commit the CDR would assemble a virtual .opt against which to validate the Composition.
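The commit-time assembly described above could be sketched roughly like this. Note this is purely hypothetical: the registry structure, field names, and the stand-in “merged OPT” are all invented for illustration, since (as noted earlier in the thread) no CDR actually supports this today.

```python
# Hypothetical sketch: at commit, walk the composition, collect the
# template ids carried in each entry's archetype_details, and assemble a
# "virtual OPT" from the Entry-level OPTs registered on the CDR.
# All identifiers below are invented for illustration.
REGISTERED_OPTS = {
    "gp_encounter.v1": {"root": "COMPOSITION"},
    "bp_widget.v2": {"root": "OBSERVATION"},
    "smoking_summary.v1": {"root": "EVALUATION"},
}

def collect_template_ids(node: dict) -> set:
    """Recursively gather template_id values from archetype_details."""
    ids = set()
    details = node.get("archetype_details", {})
    if "template_id" in details:
        ids.add(details["template_id"])
    for child in node.get("content", []):
        ids |= collect_template_ids(child)
    return ids

def assemble_virtual_opt(composition: dict) -> dict:
    ids = collect_template_ids(composition)
    missing = ids - REGISTERED_OPTS.keys()
    if missing:
        # validation is impossible against unregistered templates
        raise ValueError(f"unregistered templates: {sorted(missing)}")
    return {"constituents": sorted(ids)}  # stand-in for a real merged OPT

composition = {
    "archetype_details": {"template_id": "gp_encounter.v1"},
    "content": [
        {"archetype_details": {"template_id": "bp_widget.v2"}, "content": []},
        {"archetype_details": {"template_id": "smoking_summary.v1"}, "content": []},
    ],
}
virtual_opt = assemble_virtual_opt(composition)
```

The key precondition is the one Ian states: every Entry-level template id found in the data must already be registered on the CDR, otherwise the virtual OPT cannot be assembled and the commit fails.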
Could that work? It might even work for Clusters, but are there issues with nesting (or not)?
Well, I think ADL2 helps a lot. But I would suggest an open slot in the template, and run-time generation of the operational template where slots are closed. Because otherwise you’ll have incompatible node ids.
You and Tom are referencing different variants of the same idea at some level. I did not have time to think about the impact of this ‘model from dynamically composed submodels’ idea, as I’d call it, but one thing that struck me is that this would introduce a headache for knowledge governance and analytics.
We put a great effort into making sure that clinician/stakeholder input is explicit when building models, and so do other initiatives (FHIR etc.). When clinicians start implicitly composing models by mixing and matching model components at runtime, that is some untraceable semantics we’re looking at, isn’t it?
In theory, this could already be handled at the composition root model (which may already be the case for some) and we can say go see the individual semantics of allowed archetypes/templates that can be composed into this one. If knowledge governance people are happy with that, fine, but if this ‘convenience at runtime’ creeps up to the models as every convenience eventually does (a god composition with whatever you can put into it), it may be an issue. Or not, you and other modellers can tell me.
I see the real problem on the querying and population health/analytics side of things. Those combinations of models that come to be at runtime will be a nightmare for population-level work. You don’t know what clinician X thought at the time, and yet there are compositions based on models which are subsets of the god model, in a way, and you’re supposed to report consistently (from a semantics point of view) across them. Same goes for the poor devs who need to handle retrieval and display/sharing of any subset of those in an app (think about the behaviour at the limit as ‘convenience’ goes out of control). If you start emitting events based on those, the downstream developers will start polishing up their CVs.
Not sure I get this bit. Clinicians aren’t going to invent models on the fly, they’re still just using archetypes from the existing library (mainly for Observation and Instruction), it’s just that the choice of those archetypes is in real time. But when some particular archetype is chosen, it will almost always have a well-defined shape as used locally (e.g. by a GP or whatever). So those could be pre-templated as Ian says, and built as OPTs that can be connected to the parent encounter template.
So I’m not seeing the problem of ad hoc use of well-known models. The resulting data will still be well-formed Observations etc. And querying will still work normally.
This is what I was talking about. Maybe I misunderstood, but it looks like you’re talking about new models being generated at runtime based on the decisions made by the users. Then Ian says:
which, to me sounds along similar lines to your suggestion, and I was referring to difficulty of capturing the semantics of the runtime generated models, if that is indeed the suggestion.
There is also a partially overlapping potential anti-pattern, which I may have failed to describe with sufficient clarity. I was suggesting that templates (or container archetypes in ADL 2.x, if I can call them that) still have particular semantics even though their main task is to bring together other archetypes. This is what Ian referred to as a mega-template above. I was suggesting that we could end up in cases where the mega-template’s clinical scope becomes pretty much all the archetypes in the app, because it is convenient to be able to add whatever you want, and someone will hack their way into such god archetypes/templates. I think we’d lose the semantics of the container archetype/template (the mega one) in that case, and those semantics still matter.
If a root archetype does not limit what is allowed to be contained in it, then that relaxes the precision of the context so to speak, does it not?
Ignore this if it is still not making sense, maybe I’m concerned about a problem that does not exist (in the second case).
Well, a slot in template Comp.dynamicX.v1.2.0 has a node id, e.g. at0078. If you specialise that node at run time to refer to a specific archetype, e.g. blood pressure, that node becomes at0078.1 use_archetype EVAL.bp.
Now if you do that again for the same template, but this time for a pulse, suddenly the path for at0078.1 will refer to either a BP or a pulse. That’s trouble, right?
So I’d suggest instead creating an operational template for each specific scenario: comp.dynamicwithbp.v1.2.0 and comp.dynamicwithpulse.v1.2.0. Then paths are again unique within an operational template, and at the template (and archetype) level you only know that at0078 will be any Entry and that at0078.1 doesn’t exist.
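Joost’s collision, and the per-scenario fix, can be shown with a toy model. The identifiers below are invented for illustration; a real system would specialise AOM nodes, not dictionaries.

```python
# Toy illustration: specialising the same slot node twice inside one
# template makes the specialised node id ambiguous, whereas generating
# one operational template per scenario keeps every path unique.
def specialise(template: dict, slot_id: str, archetype_id: str) -> None:
    """Record a specialisation of slot_id (e.g. at0078 -> at0078.1)."""
    spec_id = f"{slot_id}.1"
    template.setdefault(spec_id, []).append(archetype_id)

# One template, specialised twice at run time:
shared = {}
specialise(shared, "at0078", "openEHR-EHR-OBSERVATION.blood_pressure.v2")
specialise(shared, "at0078", "openEHR-EHR-OBSERVATION.pulse.v2")
# at0078.1 now refers to *two* archetypes -- the ambiguity Joost describes:
assert len(shared["at0078.1"]) == 2

# One operational template per scenario instead:
per_scenario = {}
for name, archetype in [
    ("dynamic_with_bp", "openEHR-EHR-OBSERVATION.blood_pressure.v2"),
    ("dynamic_with_pulse", "openEHR-EHR-OBSERVATION.pulse.v2"),
]:
    opt = {}
    specialise(opt, "at0078", archetype)
    per_scenario[name] = opt
# Within each OPT the specialised node id maps to exactly one archetype:
assert all(len(opt["at0078.1"]) == 1 for opt in per_scenario.values())
```

The per-scenario OPTs restore the invariant that one node id resolves to one archetype within any single operational template.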
No, you’re totally correct, Seref. Another issue is that archetype selection is quite a hard process; it’s not realistic for an individual clinician to do that anywhere near as well as a clinical modeller.
So having no constraints, with a single unconstrained composition.X.v1.0.0 template (‘oet’), and adding all semantics at run time, will result in low-quality data that’s hard to reuse, in the way you described.
So it’s not a great strategy.
But currently all the semantics are defined at design time. That results in mega-templates, in data being pushed into the wrong archetype because that’s what’s there, and in a lot of data ending up as free text in EVAL.story because the right archetype wasn’t there.
So we need something in between.
How many constraints are set at design time vs run time will depend on the use-case, and should be the job of the clinical modeller. Currently the modeller can only decide to allow adding archetypes at design time, while the real world requires adding archetypes at run time.
Now one thing to specify is how to differentiate data entered based on design-time use_archetypes vs run-time inclusions. Because the semantics differ, in the sense of how likely it is that the data strictly conforms to the archetype, and of what expected non-conformance looks like. Probably data ‘overfitting’ for an archetype included at design time, and ‘underfitting’ for archetypes included at run time. (Overfitting and underfitting are probably badly used words, but I could not think of better ones.)
Apologies I have caused confusion - the runtime ‘virtual .opt extensions’ would have to be based on existing archetypes or embedded templates, the latter allowing more constrained components, perhaps aligned to local integrations or UI widgets.
I was definitely not suggesting any kind of ad-hoc extension based solely on RM
If a root archetype does not limit what is allowed to be contained in it, then that relaxes the precision of the context so to speak, does it not?
Well, yes, but in some cases that is exactly the situation you might find yourself having to face, e.g. a hospital discharge summary where, for a very particular patient in a very specific clinical context, a Gugging Swallowing score needs to be recorded.
I agree in principle that you want to reduce the scope of what is allowed but sometimes that simply is not possible.
A GP consultation is another case where almost anything could be recorded.
So, if not completely mad idea, my hypothesis is that if we tagged the embedded templateIds in ENTRY.archetype_details, the virtual .opt could be reconstructed if required, and querying would not be affected. I might be wrong - Joost had some concerns there.
Yes. It’s similar. But this slot is usually closed in the template. We’d want to fill it at run time. It’s not specified how to do that. Systems don’t support it afaik. And the implications are unclear.
Yes, but right now that only applies at design-time, i.e. to make use of it, you have to explicitly fill the slot with a cluster at design-time in the context of the parent composition template. ADL2 allows for ‘open slots’, but I would expect most of these to be closed again in most templates, for the reasons that Seref highlighted. However, there are scenarios where we may want to extend this to run-time, possibly exerting more control on the slot-fill or using embedded templates, not just leaving the underlying {0..*} matches - that makes perfect sense at archetype level, but I would expect much more constraining to be applied in a template.
Templates only ‘remix’ existing well-known semantics.
The mega-template is an anti-pattern for sure, but it’s been in use for lack of this magical JIT template capability (which I remember discussing with Sam and others at CHIME about 15 years ago). So some concerns about ‘god templates’ are well-founded, but they are more to do with maintainability of templates than with bad resulting data.
And don’t forget, a GP encounter really could report almost anything. You don’t know who is going to walk in the door.
That doesn’t cause a problem. It’s the same as if you had created the two templates at design time, one filling slot at0078 with Obs.blood_pressure and the other with Obs.heart_rate.
The paths through an OPT include the archetype ID at the root points. And the created data already contains the archetype IDs at the root point. So a query for BP or pulse will always find the right data.
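Tom’s point about archetype ids in paths can be illustrated with a small sketch. The path syntax below is only a simplified stand-in for real openEHR/AQL paths, and the matching function is a deliberately naive substring check, not a real AQL engine.

```python
# Simplified sketch: because the data path embeds the archetype id at each
# root point, a query for blood pressure can never match pulse data, even
# when both archetypes filled the same slot node. Paths are simplified.
def data_path(slot_node: str, archetype_id: str, leaf: str) -> str:
    """Build a simplified openEHR-style path with the archetype id embedded."""
    return f"/content[{slot_node}, {archetype_id}]/{leaf}"

bp_path = data_path("at0078", "openEHR-EHR-OBSERVATION.blood_pressure.v2",
                    "data/events/systolic")
pulse_path = data_path("at0078", "openEHR-EHR-OBSERVATION.pulse.v2",
                       "data/events/rate")

def matches(path: str, archetype_id: str) -> bool:
    """Naive stand-in for an AQL archetype predicate inside a path."""
    return archetype_id in path

# The BP query hits only BP data, despite the shared slot node id at0078:
assert matches(bp_path, "openEHR-EHR-OBSERVATION.blood_pressure.v2")
assert not matches(pulse_path, "openEHR-EHR-OBSERVATION.blood_pressure.v2")
```

Since the committed data carries the archetype id at each root point, the query’s archetype predicate disambiguates the two fillers regardless of which template produced them.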
That might be true if we did everything like that. But a few very general situations like GP encounter, and to some extent discharge summary and referral are ‘known unknowns’. I can’t see any problem with run-time selection of specific Entry types pertinent to the encounter in these situations. Specialist medicine won’t work like this, generally.
If that’s happening, it’s a demonstration of the anti-pattern, and why we need dynamic template creation.