# Dynamic archetype in slot based on preconditions

**Category:** [Clinical](https://discourse.openehr.org/c/clinical/5) **Created:** 2021-02-28 10:41 UTC **Views:** 838 **Replies:** 40 **URL:** https://discourse.openehr.org/t/dynamic-archetype-in-slot-based-on-preconditions/1329

---

## Post #1 by @Sidharth_Ramesh

I'm using the Procedure archetype and want to record structured details of the procedure under the "procedure details" slot, based on what the procedure is. For example: a CPR procedure cluster if the procedure is CPR, a laryngoscopy procedure cluster if the procedure is laryngoscopy, etc. Is there a way to have multiple archetypes under the same slot in a template and choose between them at runtime? The question is similar to https://discourse.openehr.org/t/choice-between-data-element-and-slot/995 but not quite the same.

---

## Post #2 by @yampeku

In principle it's no different from having a choice in any other place, but the slot would need a maximum occurrences of more than one in the first place. In our implementation, we first clone the slot and give it a new virtual at-code, and then resolve it with the archetype we need.

---

## Post #3 by @ian.mcnicoll

This is a different issue, and one that everyone hits sooner or later, i.e. the variability of the granular data. I know that various implementers have toyed with some kind of dynamic templating/validation, but I have not heard of any real success. At the end of the day, you do have to validate against a set of constraints, and since that also acts to provide the schema for querying the templated data, you need to persist something that equates to the particular instance of data saved. So you might save on the size of a template that carries every optional 'procedure', but then you will have to create a per-instance template, or do something clever like using a lot of references / embedded templates. AFAIK none of this is supported by any CDR right now.
So, for good or for bad, you are probably stuck with the mega-template for now. That's certainly our current practice, but we would try to break things up a little where possible, e.g. you might think of splitting out the Procedure record as a separate template/composition from the rest of the record. We did something like that for a GP system: we split out things like meds, allergies and referrals from the main content. It is frustrating, but it is hard to see how it can effectively work differently, at least for now. This is the kind of detailed granularity that makes building any healthcare system, in any tech stack, really hard. The mega-template is not in itself a problem; it is just the design-time artefact that is huge (and I guess must slow validation a bit), but if you are doing things correctly the composition should not be bloated.

---

## Post #4 by @Sidharth_Ramesh

Then I think having a mega-template with all possible clusters is the current solution.

---

## Post #5 by @Sidharth_Ramesh

[quote="yampeku, post:2, topic:1329"]
In our implementation, we first clone the slot and give it a new virtual atcode, and then resolve it with the archetype we need
[/quote]

How do you do this? Is this done at the time of creation of the template? Or at runtime?

---

## Post #6 by @yampeku

Neither, really; it's done at the mapping specification step (but it should be closer to the template creation step, where you should be able to resolve a slot twice).

---

## Post #7 by @thomas.beale

[quote="ian.mcnicoll, post:3, topic:1329"]
do something clever like using a lot of references / embedded templates. AFAIK none of this is supported by any CDR right now.
[/quote]

Just-in-time incremental archetype -> template generation at runtime, much of which can be precomputed... but nesting of slots/archetypes requires incremental instantiation.
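To make the slot-filling mechanics concrete: in ADL2, a template fills a slot with a `use_archetype` constraint, and a JIT approach would generate one such fragment per runtime choice rather than authoring each at design time. A rough sketch of what a per-procedure fill could look like (the node ids and the CLUSTER archetype id below are hypothetical, for illustration only):

```
-- Sketch only: an ADL2-style template definition fragment filling a
-- 'Procedure detail' slot with a specific CLUSTER chosen for this use case.
-- Node ids and the cluster archetype id are illustrative, not real.
ACTION[at0000.1] matches {
    /description[at0001]/items matches {
        use_archetype CLUSTER[at0053.1, openEHR-EHR-CLUSTER.cpr_details.v0]
    }
}
```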
---

## Post #8 by @ian.mcnicoll

I'm coming back to this on the basis of this topic: https://discourse.openehr.org/t/linking-multiple-compositions-with-a-compound-composition/6549, which was really raised because of an attempt to do something very similar to the idea of a 'dynamic template'. By that we mean that a template can in some way be extended at run-time to allow dynamic validation of a composition. A classic use-case is the GP consultation, where an almost infinite number of Observation archetypes might need to be used to cover all the clinical possibilities. The normal solution is the 'mega-template', but this does become unmanageable at some point.

I wondered about another possibility, possibly enabled by ADL2, which handles Entry-level templates (embedded templates) more elegantly, along with the ability to leave slots open and fillable at run-time. The core of a GP Encounter is a traditional Composition template, but it is possible for the app, at run-time, to fill any valid open slots with Entry-level templated data, as long as these Entry-level templates are registered on the CDR (essentially as .opts). When the composition is committed, the Entry-level template id is carried in ENTRY.LOCATABLE.archetype_details. At commit, the CDR would assemble a virtual .opt against which to validate the Composition. Could that work? It might even work for Clusters, though there may be issues with nesting (or not).

---

## Post #9 by @joostholslag

Well, I think ADL2 helps a lot. But I would suggest an open slot in the template, and run-time generation of the operational template in which slots are closed. Because otherwise you'll have incompatible node ids.

---

## Post #10 by @Seref

Nice can of worms you have there, Ian. You and Tom are referencing different variants of the same idea at some level.
I have not had time to think about the impact of this 'model from dynamically composed submodels' idea, as I'd call it, but one thing that struck me is that this would introduce a headache for knowledge governance and analytics. We put great effort into making sure that clinician/stakeholder input is explicit when building models, and so do other initiatives (FHIR etc.). When clinicians start implicitly composing models by mixing and matching model components at runtime, that is some untraceable semantics we're looking at, isn't it?

In theory, this could already be handled at the composition root model (which may already be the case for some), and we can say: go see the individual semantics of the allowed archetypes/templates that can be composed into this one. If knowledge governance people are happy with that, fine; but if this 'convenience at runtime' creeps up into the models, as every convenience eventually does (a god composition with whatever you can put into it), it may be an issue. Or not; you and other modellers can tell me.

I see the real problem on the querying and population-health/analytics side of things. Those combinations of models that come into being at runtime will be a nightmare for population-level work. You don't know what clinician X thought at the time, and yet there are compositions based on models which are, in a way, subsets of the god model, and you're supposed to report consistently (from a semantics point of view) across them. The same goes for the poor devs who need to handle retrieval and display/sharing of any subset of those in an app (think about the behaviour at the limit, as 'convenience' goes out of control). If you start emitting events based on those, the downstream developers will start polishing up their CVs :slight_smile:

Let me know if I'm not making sense.

---

## Post #11 by @ian.mcnicoll

Can you explain further re incompatible node ids?
---

## Post #12 by @thomas.beale

[quote="Seref, post:10, topic:1329"]
We put a great effort in making sure that clinician/stakeholder input is explicit when building models, and so does other initiatives (FHIR etc). When clinicians start implicitly composing models by means of mixing and matching model components at runtime, that is some untraceable semantics we’re looking at, isn’t it?
[/quote]

Not sure I get this bit. Clinicians aren't going to invent models on the fly; they're still just using archetypes from the existing library (mainly Observation and Instruction). It's just that the choice of those archetypes happens in real time. But when some particular archetype is chosen, it will almost always have a well-defined shape as used locally (e.g. by a GP or whatever). So those could be pre-templated, as Ian says, and built as OPTs that can be connected to the parent encounter template. So I'm not seeing the problem with ad hoc use of well-known models. The resulting data will still be well-formed Observations etc., and querying will still work normally.

---

## Post #13 by @Seref

[quote="thomas.beale, post:7, topic:1329"]
Just-in-time incremental archetype → template generation at runtime
[/quote]

This is what I was talking about. Maybe I misunderstood, but it looks like you're talking about new models being generated at runtime based on decisions made by the users. Then Ian says:

[quote="ian.mcnicoll, post:8, topic:1329"]
At commit the CDR would assemble a virtual .opt against which to validate the Composition.
[/quote]

which, to me, sounds along similar lines to your suggestion, and I was referring to the difficulty of capturing the semantics of runtime-generated models, if that is indeed the suggestion. There is also a partially overlapping potential anti-pattern, which I may have failed to describe with sufficient clarity.
I was suggesting that templates (or container archetypes in ADL 2.x, if I can call them that) still have particular semantics even though their main task is to bring together other archetypes. This is what Ian referred to as a mega-template above. I was suggesting that we could end up with cases where the mega-template's clinical scope becomes pretty much all the archetypes in the app, because it is convenient to be able to add whatever you want, and someone will hack their way into such god archetypes/templates. I think we'd lose the semantics of the container archetype/template (the mega one) in that case, which still matter. If a root archetype does not limit what is allowed to be contained in it, then that relaxes the precision of the context, so to speak, does it not? Ignore this if it is still not making sense; maybe I'm concerned about a problem that does not exist (in the second case).

---

## Post #14 by @joostholslag

Well, a slot in template Comp.dynamicX.v1.2.0 has a node id, e.g. at0078. If you specialise that node at run time to refer to a specific archetype, e.g. blood pressure, that node becomes at0078.1 with use_archetype eval.bp. Now suppose you do that again for the same template, but this time for a pulse. Suddenly the path for at0078.1 will refer to either a BP or a pulse. That's trouble, right? So I'd suggest instead creating an operational template for each scenario, comp.dynamicwithbp.v1.2.0 and comp.dynamicwithpulse.v1.2.0, so that paths are again unique within an operational template, and at the template (and archetype) level you only know that at0078 will be some Entry, and at0078.1 doesn't exist.

---

## Post #15 by @joostholslag

No, you're totally correct, Seref. Another issue is that archetype selection is quite a hard process. It is not realistic to expect an individual clinician to do it anywhere near as well as a clinical modeller.
So having no constraints, i.e. a single unconstrained composition.X.v1.0.0 template ('oet') with all semantics added at run time, will result in low-quality data that's hard to reuse, in the way you described. So it's not a great strategy. But currently all the semantics are defined at design time, which results in mega-templates, in data pushed into the wrong archetype because that's what's there, and in a lot of data in free text (eval.story) because the right archetype wasn't there. So we need something in between. How many constraints are set at design time vs run time will depend on the use-case, and should be the job of the clinical modeller. Currently the modeller can only decide to allow adding archetypes at design time, while the real world requires adding archetypes at run time.

Now, one thing to specify is how to differentiate data entered based on design-time use_archetype constraints vs run-time inclusions. Because the semantics differ, in the sense of how likely it is that the data strictly conforms to the archetype, and of what expected non-conformance looks like: probably data 'overfitting' relative to an archetype included at design time, and 'underfitting' for archetypes included at run time. (Overfitting and underfitting are probably badly used words, but I could not think of better ones.)

---

## Post #16 by @ian.mcnicoll

Apologies, I have caused confusion: the runtime 'virtual .opt extensions' would have to be based on existing archetypes or embedded templates, the latter allowing more constrained components, perhaps aligned to local integrations or UI widgets. I was definitely not suggesting any kind of ad hoc extension based solely on the RM.

> If a root archetype does not limit what is allowed to be contained in it, then that relaxes the precision of the context so to speak, does it not?
Well, yes, but in some cases that is exactly the situation you might find yourself facing, e.g. a hospital discharge summary where, for a very particular patient in a very specific clinical context, a [Gugging Swallowing score](https://ckm.openehr.org/ckm/archetypes/1013.1.7751) needs to be recorded. I agree in principle that you want to reduce the scope of what is allowed, but sometimes that simply is not possible. A GP consultation is another case where almost anything could be recorded. So, if it is not a completely mad idea, my hypothesis is that if we tagged the embedded template ids in ENTRY.archetype_details, the virtual .opt could be reconstructed if required, and querying would not be affected. I might be wrong; Joost had some concerns there.

---

## Post #17 by @borut.jures

[quote="ian.mcnicoll, post:16, topic:1329"]
almost anything could be recorded
[/quote]

In [openEHR-EHR-OBSERVATION.body_temperature.v2.1.5](https://ckm.openehr.org/ckm/archetypes/1013.1.2796) the `protocol` has

```
allow_archetype CLUSTER[at0062] occurrences matches {0..*} matches {
    -- Extension
}
```

This allows any cluster archetype to be used (a bit too unconstrained, from my perspective). Isn't this similar to what you are discussing?

---

## Post #18 by @joostholslag

Yes, it's similar. But this slot is usually closed in the template; we'd want to fill it at run time. It's not specified how to do that, systems don't support it AFAIK, and the implications are unclear.

---

## Post #19 by @ian.mcnicoll

Yes, but right now that only applies at design time, i.e. to make use of it you have to explicitly fill the slot with a cluster at design time, in the context of the parent composition template. ADL2 allows for 'open slots', but I would expect most of these to be closed again in most templates, for the reasons that Seref highlighted.
However, there are scenarios where we may want to extend this to run-time, possibly exerting more control on the slot-fill or using embedded templates, rather than just leaving the underlying `{0..*} matches` open. That makes perfect sense at archetype level, but I would expect much more constraining to be applied in a template.

---

## Post #20 by @thomas.beale

[quote="Seref, post:13, topic:1329"]
This is what I was talking about. Maybe I misunderstood, but it looks like you’re talking about new models being generated at runtime based on the decisions made by the users. Then Ian says:
[/quote]

Not new models, only new templates. Templates only 'remix' existing well-known semantics.

[quote="Seref, post:13, topic:1329"]
There is also a partially overlapping potential anti-pattern, which I may have failed to describe with sufficient clarity. I was suggesting that templates (or container archetypes in adl2.x if I can call them so) still have particular semantics even though their main task is to bring together other archetypes. This is what Ian referred to as a mega template above. I was suggesting that we could end up in cases where the mega template’s clinical scope becomes pretty much all the archetypes in the app
[/quote]

The mega-template is an anti-pattern for sure, but it has been in use for lack of this magical JIT template capability (which I remember discussing with Sam and others at CHIME about 15 years ago). So some concerns about 'god templates' are well-founded, but they have more to do with the maintainability of templates than with bad resulting data. And don't forget, a GP encounter really could report almost anything. You don't know who is going to walk in the door.

[quote="joostholslag, post:14, topic:1329"]
Well a slot in template Comp.dynamicX.v1.2.0 has a node id eg at0078. If you at run time specialise that node to refer to a specific archetype eg blood pressure. That node will be at0078.1 use archetype eval.bp. Now if you do that again for the same template. But now for a pulse.
Suddenly the path for at0078.1 will refer to either a BP or a pulse. That’s trouble, right?
[/quote]

That doesn't cause a problem. It's the same as if you had created the two templates at design time, one filling slot at0078 with Obs.blood_pressure and the other with Obs.heart_rate. The paths through an OPT include the archetype id at the root points, and the created data already contains the archetype ids at the root points. So a query for BP or pulse will always find the right data.

[quote="joostholslag, post:15, topic:1329"]
So no constraints with a single unconstrained composition.X.v1.0.0 template (‘oet’) and adding all semantics at run time will result in low quality data that’s hard to reuse in the way you described.
[/quote]

That might be true if we did everything like that. But a few very general situations, like the GP encounter and, to some extent, discharge summary and referral, are 'known unknowns'. I can't see any problem with run-time selection of specific Entry types pertinent to the encounter in these situations. Specialist medicine generally won't work like this.

[quote="joostholslag, post:15, topic:1329"]
But currently all the semantics are defined at design time. Which results in both mega templates, pushing data into the wrong archetype because that’s what’s there, and a lot of data in free text, eval.story because the right archetype wasn’t there.
[/quote]

If that is happening, it is a demonstration of the anti-pattern, and of why we need dynamic template creation.

[quote="ian.mcnicoll, post:16, topic:1329"]
Apologies I have caused confusion - the runtime ‘virtual .opt extensions’ would have to be based on existing archetypes or embedded templates, the latter allowing more constrained components, perhaps aligned to local integrations or UI widgets.
[/quote]

Exactly.
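The querying point above can be illustrated with a pair of AQL sketches. The root archetype id in the CONTAINS predicate, not the slot's at-code, is what selects the right Entries; the paths follow the published blood_pressure and pulse archetypes, but treat the exact node ids as illustrative:

```
-- Sketch: find blood pressure readings regardless of which slot they
-- were committed under; the archetype id predicate disambiguates.
SELECT obs/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value AS systolic
FROM EHR e
  CONTAINS COMPOSITION c
  CONTAINS OBSERVATION obs [openEHR-EHR-OBSERVATION.blood_pressure.v2]

-- The same query shape with a different root archetype id finds pulse data.
SELECT obs/data[at0002]/events[at0003]/data[at0001]/items[at0004]/value AS rate
FROM EHR e
  CONTAINS COMPOSITION c
  CONTAINS OBSERVATION obs [openEHR-EHR-OBSERVATION.pulse.v2]
```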
---

## Post #21 by @linforest

An alternative (and maybe complementary/adjunctive) strategy might be to delegate some of the dynamic flexibility/composability to the terminology standards of the archetype adopters.

---

## Post #22 by @Seref

[quote="thomas.beale, post:20, topic:1329"]
Not new models, only new templates.
[/quote]

I thought we were not happy with the distinction between templates and archetypes. Wasn't losing that distinction one of the primary aims of ADL 2? I may be diverging from the common thinking, but I've always used the term 'models' to indicate both templates and archetypes. This also explains why I am uncomfortable about these spawning ad hoc while you are not that concerned. I think templates do and should carry semantics beyond being an aggregator of archetypes, and if that line of thought makes sense, then so should my concern about their dynamic creation. I admit it may be subjective, but this is a point I'd stick to if I took part in building clinical software based on openEHR.

---

## Post #23 by @joostholslag

[quote="thomas.beale, post:20, topic:1329"]
That doesn’t cause a problem. It’s the same as if you had created the two templates at design time, one filling slot at0078 with Obs.blood_pressure and the other with Obs.heart_rate.
[/quote]

Well, if you do it at design time, you can assign different ids to the redefined slot, e.g. at0078.1 for blood_pressure and at0078.2 for heart_rate. That's much harder to do at run time, because the template may be in different systems that are unaware of the existence of at0078.1. And I think you want the operational template to have all slots filled/closed, because you'd want a definitive schema for the persisted LOCATABLE.

[quote="thomas.beale, post:20, topic:1329"]
So a query for BP or pulse will always find the right data.
[/quote]

Agreed, but that's not the issue for me. It's that you'd want a reliable path, for querying and 'hardcoding'.
I can't think of a real-world need, but my intuition says we shouldn't allow operational templates to have incompletely defined paths.

[quote="thomas.beale, post:20, topic:1329"]
I can’t see any problem with run-time selection of specific Entry types pertinent to the encounter in these situations.
[/quote]

Well, the problem is that selecting the right archetype for the data you want to record requires niche clinical-modelling expertise to do reliably. Example mistakes are recording an eval.diagnosis where a screening archetype is a better fit, or persisting an ankle BP for an ankle-arm index in an obs.blood_pressure. I'm not saying it shouldn't be allowed, just that there is an increased risk of using the wrong archetype, and we should have a way of managing that at design time, at both the template and the query level.

[quote="thomas.beale, post:20, topic:1329"]
The paths through an OPT include the archetype ID at the root points.
[/quote]

What do you mean by this? Does it relate to https://openehr.atlassian.net/browse/SPECPR-467 ?

---

## Post #24 by @joostholslag

[quote="Seref, post:22, topic:1329"]
I thought we were not happy with the distinction between templates and archetypes. Wasn’t one of the primary aims of ADL 2 ? I.e. losing that distinction ? I may be diverging from the common thinking but I’ve always used the term models to indicate both templates and archetypes. This also explains why I am uncomfortable about these spawning ad-hoc and you not being that concerned. I think templates do and should carry semantics beyond being an aggregator of archetypes, and if that line of thought makes sense, then so should my concern about their dynamic creation. I admit it may be subjective, but this is a point I’d stick to if I took part in building a clinical software based on openEHR.
[/quote]

ADL2 is not losing the distinction between archetypes and templates. What it does is make template inherit from archetype and add a template-overlay attribute.
So technically a template is an archetype with potential template overlays, and semantically this should also hold. So the semantics of what a template is are a bit stricter in ADL2 vs ADL1, though I think that has no practical impact. So also from an ADL2 perspective, templates indeed carry semantics beyond aggregating archetypes: they also specialise those archetypes, and the template is for/about a specific use-case. I also agree that the template is a model, as is the RM (and the AM itself). But I also mostly agree with Tom's implication that most of the semantics in 'the model' (RM + archetype + template) are defined at archetype level, and that it is thus fine to allow changing the model at run time.

Where I don't agree, as stated in my message above, is that it's fine for the operational template to have loose semantics. I think operational templates should be conclusive regarding model and semantics. So a redefinition of a template ('oet') by adding a constraint (including a new archetype) should create a new operational template. Ian is suggesting something similar, but wants to do it virtually; I prefer to do it 'physically'. But I do see the issue that this is a significant shift in how OPTs are created: currently that happens in the design-time tooling, whereas in the future it would have to be the job of the CDR (or client) at run time. Given the other changes in ADL2, I think you want to go that way anyway, as Nedap has done. But it requires that the CDR has access to (versions of) all archetypes in the hierarchy at run time, so I can also see that the added complexity may not be worth it for a CDR implementer.

---

## Post #25 by @Seref

[quote="joostholslag, post:24, topic:1329"]
Adl2 is not losing the distinction between archetypes and templates.
[/quote]

I'm genuinely not trying to be annoying or stubborn, but maybe we're talking about different distinctions here :) The distinction that is hard for my old brain to ignore is between the original templates of ADL 1.4, an XML-based formalism put together by Ocean that does not use ADL at all, and the ADL-based templates we have in ADL 2.0. Once again, subjectivity strikes!

[quote="joostholslag, post:24, topic:1329"]
Where I don’t agree, as stated in my message above, is that it’s fine for the operational template to have loose semantics. I think operational templates should be conclusive re model and semantics.
[/quote]

I agree.

[quote="joostholslag, post:24, topic:1329"]
But I do see the issue that this is a significant shift from how opts are created, currently in the design time tooling, in the future that would have to be the job of the CDR (or client) at run-time.
[/quote]

That is a natural outcome of the suggestion, yes. Personally I don't think it is a good idea, but I have only so much time to discuss why, and I cannot do much beyond making a point, which is how the specs are developed anyway :)

[quote="joostholslag, post:24, topic:1329"]
Given the other changes in adl2 I think you want to go that way anyway.
[/quote]

If the "you" above is me, then no, I don't.

[quote="joostholslag, post:24, topic:1329"]
So I can also see the added complexity is not worth it for a CDR implementer.
[/quote]

I think this is one of those things that will introduce fractal complexity going beyond what we can encapsulate in a technical feature of a CDR. It is a leaky abstraction, in the sense that downstream use cases will have to consider this mechanism if it is used, but that's just my gut feeling talking. There's a lot of helpful input in this discussion though, so thanks to all for their time and thoughts.
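As background to the 'virtual .opt' mechanism being debated: Ian's suggestion upthread of carrying the Entry-level template id in archetype_details would surface in committed data roughly as below. This is a sketch in canonical-JSON style; the `template_id` value is hypothetical, and only the fields relevant here are shown:

```
{
  "_type": "OBSERVATION",
  "name": { "_type": "DV_TEXT", "value": "Blood pressure" },
  "archetype_node_id": "openEHR-EHR-OBSERVATION.blood_pressure.v2",
  "archetype_details": {
    "_type": "ARCHETYPED",
    "archetype_id": { "value": "openEHR-EHR-OBSERVATION.blood_pressure.v2" },
    "template_id": { "value": "entry.bp_clinic_measurement.v1" },
    "rm_version": "1.0.4"
  }
}
```

At commit time, the CDR could read each Entry's template_id to assemble the virtual .opt for validation, while queries continue to match on the archetype id alone.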
---

## Post #26 by @thomas.beale

[quote="Seref, post:22, topic:1329"]
I thought we were not happy with the distinction between templates and archetypes. Wasn’t one of the primary aims of ADL 2 ? I.e. losing that distinction ?
[/quote]

Technically there is (almost) no distinction (that templates can contain overlays is about it). However, in terms of domain semantics, we have:

* international / national archetypes: reliable statements of info semantics for some topic
* (potentially) site- / product- / something-specific specialised archetypes (e.g. something Nedap or Better or Ocean makes)
* templates: combiners and localisers of the above in use-case-specific ways. New templates can be made with impunity; they are mostly locally created for UI forms, documents or other reasons.

Since templates are specialised archetypes, they can't break any archetype semantics, whether pre-built or just-in-time (JIT).

[quote="Seref, post:22, topic:1329"]
I think templates do and should carry semantics beyond being an aggregator of archetypes, and if that line of thought makes sense, then so should my concern about their dynamic creation. I admit it may be subjective, but this is a point I’d stick to if I took part in building a clinical software based on openEHR.
[/quote]

Exercise: try to think of any new semantics templates add beyond:

* archetype choice
* archetype element removal
* narrowing (which might just be adding) of constraints, including terminology value-sets
* extra annotations
* minor UI tricks like hide-on-form

We're just talking about templates being created JIT based on user choice of some particular Observations or similar. Many archetypes already have slots that allow more or less any archetype of a certain RM class in them.

[quote="joostholslag, post:23, topic:1329"]
Well, if you do it at design time, you can assign different ids for the redefined slot eg at0078.1 for blood_pressure and at0078.2 for heart_rate. That’s much harder to do at run time.
Because the template may be in different systems that are unaware of the existence of at0078.1. And I think you want the operational template to have all slots filled/closed. Because you’d want a definitive schema for the persisted LOCATABLE.
[/quote]

You can do that if you want, but it's not necessary, and it is not done in numerous archetypes that contain slots that are essentially open.

[quote="joostholslag, post:23, topic:1329"]
Well the problem is, selecting the right archetype for the data you want to record requires niche clinical modelling expertise to do it reliably. Example mistakes are recording eval.diagnosis where a screening archetype is a better fit.
[/quote]

OK, but here we are talking about clinical knowledge. If the GP or whoever doesn't know what they're doing, or can't drive the EMR system, that's another matter. But choosing at runtime to create an encounter for patient A that happens to include a) auscultation of the chest, because the patient is coughing, b) inspection of the ears, because the patient gets infections from swimming, and c) palpation of the breast, due to concern about a new mass, is not going to be controversial. And for patient B, a different bunch of things. Similar things happen at discharge, referral writing and so on.

Don't get me wrong: the importance of clinical modelling cannot be overstated, and all the things I mentioned above need to be modelled properly. But there is nothing to prevent the design of a good EHR system and application that allows the natural runtime choice of particular information without pre-design. Most patient interactions will be with specialists, where the forms and templates can be pre-designed; it's just that not all of them can be.

[quote="joostholslag, post:23, topic:1329"]
Or persisting an ankle bp for an ankle arm index in a obs.blood_pressure. I’m not saying it shouldn’t be allowed. Just that there’s an increased risk of using the wrong archetype.
And we should have a way of managing that at design time, both at the template and query level
[/quote]

You're talking about a different level of semantics here. If there were multiple systemic arterial BP archetypes (not the case today), then there would absolutely need to be a way of choosing the correct one via some UI interaction.

---

## Post #27 by @joostholslag

[quote="thomas.beale, post:26, topic:1329"]
You’re talking about a different level of semantics here. If there are multiple systemic arterial BP archetypes (not the case today), then there absolutely needs to be a way of choosing which one correctly via some UI interaction.
[/quote]

Well, the issue is that an ankle BP taken for an ankle-arm index, which is a test for suspected vascular problems, is not a good surrogate for a systemic arterial blood pressure. Currently there is one archetype for systemic arterial blood pressure and one draft for intravascular pressure. My point is that it will be (too) hard to build a system that always helps the user pick the right one, and 'normal' doctors (without a clinical-modelling quirk) will not realise they shouldn't use obs.blood_pressure. This specific example can be accounted for, but there are many other non-trivial archetype selections. This is not a reason to disallow archetype inclusions at run time, but it does change the semantics a little, which is, I think, partly what @Seref was pointing out. Some semantics are defined in the template, so we should be able to query for the difference.

[quote="thomas.beale, post:26, topic:1329"]
[quote="joostholslag, post:23, topic:1329"]
Well, if you do it at design time, you can assign different ids for the redefined slot eg at0078.1 for blood_pressure and at0078.2 for heart_rate. That’s much harder to do at run time. Because the template may be in different systems that are unaware of the existence of at0078.1. And I think you want the operational template to have all slots filled/closed.
Because you’d want a definitive schema for the persisted LOCATABLE. [/quote] You can do that if you want, but it’s not necessary, and not done in numerous archetypes that contain slots that are essentially open. [/quote]

Why is it not necessary? How else do I construct a path that reliably identifies the node in an operational template that includes (only) an obs.blood_pressure? And if I can’t do that, how do I build an app that takes a VERSION with template_id = comp.GP_ENCOUNTER.v1.0.0, with a UI that renders a BP graph from a BP archetype but can’t render e.g. an ankle-arm index?

---

## Post #28 by @joostholslag

[quote="Seref, post:25, topic:1329"] I’m genuinely not trying to be annoying or stubborn, but maybe we’re talking about different distinctions here :slight_smile: The distinction that is hard to ignore for my old brain is between the original templates in ADL 1.4 - an XML-based formalism put together by Ocean in a way that does not use ADL at all - and the ADL-based templates we have in ADL 2.0. Once again, subjectivity strikes! [/quote]

I think you are asking valuable questions and making valuable points. I think we have a somewhat different understanding of what a template is. I have some level of understanding of what an ADL1 template is. What complicates this is that the only ADL2 implementation, with which I’m very familiar, has - compared to all the ADL1 implementations - shifted the operational template into run-time territory: in the ADL1 tool chain the operational template (.opt XML) is generated by the design tool (right?) and this is the artefact that gets uploaded to the CDR. There exists a (non-operational) template, ‘defined’ in the .oet XML schema, but this is rarely used - mostly as an export format between template designer tools, right? So in ADL1, ‘template’ generally means operational template. Now in ADL2, templates are serialised in ADL2 (and defined in AOM2 UML) and are technically specialised archetypes.
The Nedap tooling handles this as follows: you upload the whole specialisation hierarchy into the CDR, and at run time (currently upload time, not data entry) an operational template is generated. So the operational template is generally not a design-time artefact, and in (Nedap) ADL2, ‘template’ generally means the ADL (not operational) template. (Further confusing this: a template is often called an archetype, since technically a template is an archetype and the semantic difference isn’t very relevant for the CDR builders and client app builders.)

[quote="Seref, post:25, topic:1329"] [quote="joostholslag, post:24, topic:1329"] Given the other changes in adl2 I think you want to go that way anyway. [/quote] If the “you” above is me, then no, I don’t. [/quote]

It was meant to be you, as in the organisation/developer responsible for implementing a CDR and migrating to ADL2. Sorry for expressing myself badly; I didn’t want to imply you have a desire for building it this way. I was trying to say: there are other compelling reasons for building it this way, which I expect will also apply to your implementation once you migrate to ADL2. But no need to argue. If you want to handle it in the current (or some other) way, that’s fine. I just want to state that the current way of generating operational templates at design time isn’t the only valid way to do it. So if supporting run-time slot filling requires run-time generation of operational templates, that shouldn’t be a reason not to accept dynamic templates in openEHR.

---

## Post #29 by @thomas.beale

[quote="joostholslag, post:27, topic:1329"] Well, the issue is that an ankle BP taken in an ankle-arm index, which is a test for suspected vascular problems, is not a good surrogate for a systemic arterial blood pressure. Currently there is one archetype for systemic arterial blood pressure and one draft for intravascular pressure.
My point is that it will be (too) hard to build a system that always facilitates the user in picking the right one [/quote]

I don't doubt the challenges here, but you are talking about a pretty specific choice that a) needs careful modelling and b) the doc doing BP on the ankle needs to have a way of reporting correctly in the application. This is a different topic from the original one in this thread.

[quote="joostholslag, post:27, topic:1329"] Why is it not necessary? How else do I construct a path that reliably identifies the node in an operational template that includes (only) an obs.blood_pressure? And if I can’t do that, how do I build an app that takes a VERSION with template_id = comp.GP_ENCOUNTER.v1.0.0, with a UI that renders a BP graph from a BP archetype but can’t render e.g. an ankle-arm index? [/quote]

Because the path through an archetype root point looks like `/content[openEHR-EHR-OBSERVATION.whatever.v1.2.3]`. There have been discussions on the exact format of the paths, but the archetype id is always available at the root point. Paths inside OPTs are not very important, however. What is important is that the data committed will always have its archetype id at its root point. Now, I said earlier that having distinct slot codes for different purposes is not necessary, i.e. in the strict technical sense. But it is useful. If you wanted to distinguish e.g. principal diagnoses from associated problems in a Composition archetype, and use the EVALUATION.problem_diagnosis archetype for both, it would obviously be better to have two slots, not one, because in this situation the archetype is more general than the slot meaning.

---

## Post #30 by @joostholslag

[quote="thomas.beale, post:29, topic:1329"] I don’t doubt the challenges here, but you are talking about a pretty specific choice that a) needs careful modelling and b) the doc doing BP on the ankle needs to have a way of reporting correctly in the application.
This is a different topic from the original one in this thread. [/quote]

To me the topic is: how do we support unpredictable archetype inclusion requirements at run time. Right? I think we all agree on the following:

- In current implementations the only way to do that is to include all potential archetypes in a god template.
- The god template is a bad way to do it.

A little more controversial (@Seref might disagree):

- We need to be able to include archetypes at run time.
- We should explicitly support allowing slot fills at run time.
- We should support node additions at run time.

Templates have semantics in themselves. No, they can’t break the semantics, and they usually don’t change the semantics of the archetype very much at all: narrowing the meaning is allowed, but practically this doesn’t change the meaning much. E.g. not allowing a ‘mean arterial pressure’ to be recorded in a blood pressure archetype in a GP encounter template doesn’t much change the meaning of what a blood pressure is. But they do add a lot of context information *to the data*. I would go as far as saying the template can’t change the semantics of the clinical concept ‘blood pressure’ at all - only the semantics of the data conforming to that archetype. But it’s a bit arbitrary, since a specialised archetype, e.g. a Dutch blood pressure, can definitely change the semantics of the parent archetype significantly (but not break it). And since templates are technically specialised archetypes, I’d say it’s more of a consensus rule than a specifications rule. (We might want to try to specify that rule / semantic difference, but it’s not clear to me we understand the difference well enough to do so now.) @thomas.beale do you agree with the above paragraph?

My example shows:

- Letting users pick archetypes at run time will lead to usage of an archetype that is a bad fit.
- This creates data: e.g.
an ankle pressure of 80/50 mmHg, incorrectly understood to be defined by the Blood_pressure archetype, so incorrectly assuming 80/50 is a [systemic arterial] blood pressure.
- This is a clinical safety issue, for example if a low-blood-pressure algorithm triggers administration of blood-pressure-increasing drugs.
- It’s not trivial to select the right archetype for data recording. It can’t easily be done computationally. LLMs can help a lot, but they are probabilistic, so inherently faulty (as are humans, btw). And users (at least normal doctors) are bad at considering data quality when recording data.

Further remarks:

- Now the reverse is also true: if you include a limited set of archetypes in the template and don’t allow inclusion at run time, the user will be pushed to fit all data into one of the preselected archetypes.
- So it’s not simple, but a trade-off. Since it’s arbitrary and a clinical safety issue, this should be a clinical modelling issue.

I agree my example is very specific. I think less than 1% of blood pressures recorded are ankle pressures in vascular patients. But I disagree that this makes it off-topic. It’s just an example to show the risks of unconstrained archetype selection by users. My point is that the semantics are different, so we need a way to control both the inclusions at run time and the data interpretation. Now the question/debate is: what is currently supported in the specs? How do we apply that safely? And do the specs need to change?

My position on what we should aim for functionally: clinical modellers need to be able to create a template that explicitly specifies the constraints on inclusion of archetypes at run time. E.g. for a GP encounter, add a few standard ones like eval.story and reason for encounter, plus a section for observations with a slot for any observation, but no other entries, etc. (This is already supported with a regex on the slot.) Do we agree on this aim?
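As a concrete illustration of that last point, a slot constrained by an archetype-id regex might look roughly like this in cADL (the node id, occurrences and pattern are illustrative, not taken from any published archetype):

```
-- a COMPOSITION content attribute with a slot that admits any OBSERVATION,
-- but excludes other entry types by construction
content matches {
    allow_archetype OBSERVATION[at0010] occurrences matches {0..*} matches {
        include
            archetype_id/value matches {/openEHR-EHR-OBSERVATION\..*\.v1.*/}
    }
}
```

Tightening the regex (e.g. enumerating specific archetype ids instead of `.*`) is how a modeller would narrow the runtime choice without resorting to a mega-template.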
Now re the specs: I think the above is implicitly supported by the (ADL2) spec, but it is not understood/supported that way by implementers, because the consequences are not understood. @Seref points out a serious safety concern, with high impact on how he views the semantics of the models. So this brings us to the question of how to apply this safely. My suggestion is to generate an operational template, with a unique id, every time a slot is filled, so also at run time. Ian suggested a ‘virtual template’ concept, which is a different implementation of the same idea imho. It’s probably more aligned with how operational templates are currently handled in ADL1 (generated at design time), so easier for ADL1 implementers. The downside to me is that the operational template itself is no longer sufficient to interpret the data. Currently the OPT2 spec states:

> * all slot-fillers and direct external references (`use_archetype` nodes) have been resolved and substituted;
> * all closed slots are removed;
> * all attribute (`C_ATTRIBUTE`) nodes that have `existence matches {0}` (i.e. are logically removed) are removed;

So it doesn’t say all slots must be filled, only that if they are filled, it must be done conclusively. And if closed, they are removed. Other nodes that can’t be changed (zero existence can’t be further constrained) are removed as well. This to me means you can’t add slots or nodes at run time. Otherwise you may add a node that has already been removed in the template/archetype, breaking conformity with that template. I think stating that references should be resolved conclusively, and thus can’t be changed at run time, conflicts with allowing slot fills at run time. So this to me should be expanded to state that in an operational template all slots should be conclusively filled or removed, and no nodes may be added. This would (again) give a conclusive schema for an operational template.
So if some data refers to a schema that defines it, we need only that schema to understand the data. Now, if we agree on the above paragraph, we can only support run-time inclusion of archetypes by generating (new versions of) an operational template at run time.

[quote="thomas.beale, post:29, topic:1329"] Because the path through an archetype root point looks like `/content[openEHR-EHR-OBSERVATION.whatever.v1.2.3]`. [/quote]

I disagree that a slot is an archetype root point. The slot is a node itself (in ADL2). The slot has a child object which is the root of the archetype. So I would say the path is `/content/at0078.1[openEHR-EHR-OBSERVATION.whatever.v1.2.3]`. This is also (close to) how Nedap implemented it; see the example in the linked Jira ticket. Ian told me that in ADL1 the slot is not considered a node in itself, so it’s an (unintended?) change. But I think it’s valuable, especially since you’d want to be able to name the slot, which requires a node id - especially if the slot included multiple archetypes. E.g. the comp.gp-encounter template has a slot for ‘gp observations’; it’s useful to be able to name (and query!) that. In your example, if there were multiple slots in the archetype/template, how would you distinguish archetypes included under slot1 vs slot2?

[quote="thomas.beale, post:29, topic:1329"] Now, I said earlier that having distinct slot codes for different purposes is not necessary, i.e. in the strict technical sense. But it is useful. If you wanted to distinguish e.g. principal diagnoses from associated problems in a Composition archetype, and use the EVALUATION.problem_diagnosis archetype for both, it would obviously be better to have two slots, not one, because in this situation the archetype is more general than the slot meaning. [/quote]

Exactly, your example is better than mine. So how would you distinguish the problem_diagnosis in the ‘principal_diagnosis’ slot from the problem_diagnosis in the ‘associated_problems’ slot?
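To make the two-slot scenario under discussion concrete, it might be sketched in ADL2 roughly as follows (the node ids, comments and exact fill syntax are illustrative, modelled on the slot-filling section of the ADL2 specification):

```
-- parent COMPOSITION archetype: two slots with distinct id-codes and meanings
content matches {
    allow_archetype EVALUATION[at0002] occurrences matches {0..1}   -- 'Principal diagnosis'
    allow_archetype EVALUATION[at0003] occurrences matches {0..*}   -- 'Associated problems'
}

-- template: both slots filled with the same archetype, distinguished by id-code
content matches {
    use_archetype EVALUATION[at0002.1, openEHR-EHR-EVALUATION.problem_diagnosis.v1]
    use_archetype EVALUATION[at0003.1, openEHR-EHR-EVALUATION.problem_diagnosis.v1]
}
```

The question that follows in the thread is precisely how these two fills are kept apart in paths and queries.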
---

## Post #31 by @thomas.beale

[quote="joostholslag, post:30, topic:1329"] Templates have semantics in themselves. No, they can’t break the semantics and usually don’t change the semantics of the archetype very much... I would go as far as saying the template can’t change the semantics of the clinical concept ‘blood pressure’ at all. Only the semantics of the data conforming to that archetype. But it’s a bit arbitrary, since a specialised archetype, e.g. a Dutch blood pressure, can definitely change the semantics of the parent archetype significantly (but not break it). And since templates are technically specialised archetypes, I’d say it’s more of a consensus rule than a specifications rule. (We might want to try to specify that rule / semantic difference, but it’s not clear to me we understand the difference well enough to do so now.) @thomas.beale do you agree with the above paragraph? [/quote]

Pretty much.

[quote="joostholslag, post:30, topic:1329"] letting users pick archetypes at run time will lead to usage of an archetype that is a bad fit. [/quote]

Well, it shows that such a thing is possible. My counter-argument to that is: a) it's not that common a need; b) where it does occur, it can be properly designed for.

[quote="joostholslag, post:30, topic:1329"] It’s not trivial to select the right archetype for data recording. It can’t easily be done computationally. LLMs can help a lot, but they are probabilistic, so inherently faulty (as are humans, btw). And users (at least normal doctors) are bad at considering data quality when recording data. [/quote]

I'm not convinced of this. Archetypes are topic-based; we can show on screen the topics of the archetypes that could possibly fill a certain slot at runtime, e.g. under the 'Objective' heading in a SOAP note. And we can greatly reduce that to the subset that could realistically apply in the situation.
If it's a GP encounter, then beyond a relatively small set of observations they can do themselves, anything else will be done by a lab, imaging, or a specialist. So maybe 20-30 possibilities for GP observations in an encounter?

[quote="joostholslag, post:30, topic:1329"] So clinical modellers need to be able to create a template that explicitly specifies the constraints on inclusion of archetypes at run time. E.g. for a GP encounter, add a few standard ones like eval.story and reason for encounter, plus a section for observations with a slot for any observation, but no other entries, etc. (This is already supported with a regex on the slot.) Do we agree on this aim? [/quote]

Well, that's the current capability. Clinical modellers should probably define more constrained limits on slots in many cases (it's often 'any' today).

[quote="joostholslag, post:30, topic:1329"] Now re the specs: .... The downside to me is that the operational template itself is no longer sufficient to interpret the data. [/quote]

Well, you don't generally do data interpretation (i.e. once data is committed) with OPTs, but with the archetypes. Or am I missing something here?

[quote="joostholslag, post:30, topic:1329"] So it doesn’t say all slots must be filled, only that if they are filled, it must be done conclusively. And if closed, they are removed. Other nodes that can’t be changed (zero existence can’t be further constrained) are removed as well. This to me means you can’t add slots or nodes at run time. Otherwise you may add a node that has already been removed in the template/archetype, breaking conformity with that template. I think stating that references should be resolved conclusively, and thus can’t be changed at run time, conflicts with allowing slot fills at run time. [/quote]

Some of this might need to be changed. However, one other thing to keep in mind is that there is another way to do archetype inclusion at runtime. If you use a use_archetype[...]
node instead of a slot in most places (we got rid of slots that only point to one archetype, because that's just reuse), the system should allow any specialisation of that archetype. E.g. if you specified an 'exam' Cluster archetype, then any examination archetype would be allowed. Making this work means using specialisation properly, which is not done in the ADL 1.4 archetypes, because specialisation doesn't really work in ADL 1.4. I am investigating whether use_node refs can be expanded to be of the form use A OR B OR C. Then they do what slots do, but without the (really not very good) regex constraint approach. Anyway, this approach is something to keep in mind for the overall mix. If we go this way (it makes for much easier templating), then the number of archetype slots really needed is quite small - I think only in the places where runtime choice is needed.

[quote="joostholslag, post:30, topic:1329"] This to me means you can’t add slots or nodes at run time. [/quote]

You can't add slots, but you can still fill an open slot with something, so you can 'add' something.

[quote="joostholslag, post:30, topic:1329"] So this to me should be expanded to state that in an operational template all slots should be conclusively filled or removed, and no nodes may be added. This would (again) give a conclusive schema for an operational template. So if some data refers to a schema that defines it, we need only that schema to understand the data. [/quote]

If you accept that archetype inclusion at runtime can be done from the specialisation subsumption of a statically declared archetype in a use_ref node, there can still be a lot of choice available at runtime, but it's appropriate choice. Right now, this possibility is not available for most archetypes, because there is no specialisation. But there could be (and IMO should be).

[quote="joostholslag, post:30, topic:1329"] I disagree that a slot is an archetype root point. The slot is a node itself (in ADL2).
The slot has a child object which is the root of the archetype. So I would say the path is `/content/at0078.1[openEHR-EHR-OBSERVATION.whatever.v1.2.3]`. [/quote]

Technically, a use_archetype[...] statement _specialises_ an archetype node; it's not a child node, it logically replaces that node. See [here in the spec](https://specifications.openehr.org/releases/AM/Release-2.3.0/ADL2.html#_slot_filling_and_redefinition). So the path in the archetype is `/content[at0078.1]` and in the template, where this is filled, it will be `/content[openEHR-EHR-OBSERVATION.whatever.v1.2.3]` (there are better ways to do OPT paths, as I have proposed in the past: `/content[at0078.1:openEHR-EHR-OBSERVATION.whatever.v1.2.3]`). So filling a slot doesn't generate a new object level with respect to the slot.

[quote="joostholslag, post:30, topic:1329"] Exactly, your example is better than mine. So how would you distinguish the problem_diagnosis in the ‘principal_diagnosis’ slot from the problem_diagnosis in the ‘associated_problems’ slot? [/quote]

That will just be done in the usual way - two different id-codes (i.e. in ADL 2.4, at-codes being used to identify nodes. I get tired of having to say all that... hence 'id-codes').

---

## Post #32 by @pablo

[quote="joostholslag, post:30, topic:1329"] But it’s a bit arbitrary, since a specialised archetype, e.g. a Dutch blood pressure, can definitely change the semantics of the parent archetype significantly [/quote]

For the sake of discussion, the term "change semantics" should be defined here. A "Dutch BP" archetype can't change the semantics of a "human BP" archetype. If that happens, someone is doing something wrong on the modelling side. "Changing semantics" would mean something different is measured, not just that what's measured is in, or has, a different context. IMHO "meaning" can't change just because context changes. Please define what "change semantics" means in this discussion.
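For reference, the path forms debated above can be set side by side (all using the illustrative slot code at0078.1 and archetype id from the posts):

```
-- in the archetype, slot unfilled (ADL2 spec view):
/content[at0078.1]

-- in the template after filling: the fill replaces the slot node (Thomas):
/content[openEHR-EHR-OBSERVATION.whatever.v1.2.3]

-- combined OPT-path form proposed earlier by Thomas:
/content[at0078.1:openEHR-EHR-OBSERVATION.whatever.v1.2.3]

-- Nedap-style form: the slot kept as a node, with the filler beneath it (Joost):
/content/at0078.1[openEHR-EHR-OBSERVATION.whatever.v1.2.3]
```

Which of these a CDR emits determines how queries must address data recorded through a filled slot.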
---

## Post #33 by @borut.jures

[quote="joostholslag, post:30, topic:1329"] Dutch blood pressure [/quote]

@joostholslag Are you allowed to attach it here? I still remember hearing about "100s of blood pressure profiles in FHIR" :blush: I thought that the international BP should cover all humans. What is different in the Dutch version?

---

## Post #34 by @joostholslag

[quote="pablo, post:32, topic:1329"] A “Dutch BP” archetype can’t change the semantics of a “human BP” archetype. If that happens, someone is doing something wrong on the modelling side. “Changing semantics” would mean something different is measured, not just that what’s measured is in, or has, a different context. IMHO “meaning” can’t change just because context changes. Please define what “change semantics” means in this discussion. [/quote]

I think there are multiple ‘directions’ of change. There’s a widening of meaning, e.g. intravascular pressure is wider in meaning than (systemic arterial) blood pressure: a systemic arterial blood pressure is an intravascular pressure. A venous pressure is also an intravascular pressure, but not a (systemic arterial) blood pressure. A widening change in meaning is not allowed in specialisation. The reverse, a narrowing in meaning, is allowed in specialisation. So it would be acceptable to make blood pressure a child archetype of intravascular pressure. (Whether that’s a good modelling choice is a very different question.) But it’s definitely a change in meaning, right? I feel there also exist ‘lateral’ changes in meaning; this one is a little harder to define. But archetypes define their meaning in all parts of the archetype. The major parts are of course the ‘concept’, ‘description’ and ‘definition’. The concept name and description is a bit of a definition of the concept, but definitely not conclusive. This is why there’s also ‘purpose’, ‘use’ and ‘misuse’, which help to further clarify the concept and define the meaning of the (concept the) archetype (models).
Now, when modelling a template, the job is to properly interpret and apply this meaning to a dataset. This is usually arbitrary. One example is the screening family of archetypes, e.g. eval.problem_diagnosis vs OBSERVATION.problem_screening. They are about a very similar concept: whether the patient ‘has’ a diagnosis. But the meaning of the information is different: one says that the person entering the information based on eval.problem_diagnosis is making the assessment that the patient now has a diagnosis - a de novo (new) ‘fact’. The obs.problem_screening has much looser semantics: the patient says some family member told her she has a diagnosis. The former may well trigger a medication prescription for that diagnosis, while the latter shouldn’t. Now, these meanings are a bit of a continuum, not distinct. So it’s arbitrary which one is the best fit for a specific template describing a specific clinical ‘form’. So I would say it’s a ‘horizontal’ change in meaning. In this case the horizontal difference in meaning is big enough to warrant its own archetype. But a smaller change in horizontal meaning could imho be a specialised archetype, like Dutch-blood_pressure.

[quote="borut.jures, post:33, topic:1329"] I thought that the international BP should cover all humans. What is different in the Dutch version? [/quote]

The example Dutch-blood_pressure was a little careless of me, because it doesn’t really exist. There is a Dutch model for blood pressure, and there’s a draft ‘localisation’ of the CKM archetype, currently in the form of a mapping to, and a template on, the CKM archetype. https://openehr-nl.github.io/ZIBs-on-openEHR/zibs/bloeddruk.html In the ‘commentaar’ (comments) column you can read a bit about the differences in meaning between the two. I haven’t checked lately, but I expect you will find all three types of changes: narrowing, widening and ‘horizontal’ change. Now, I think one lesson (at least for me) is that meaning is a human brain thing. So it’s associative, not deterministic.
And different humans interpret the same definition in different ways. This is also an answer to @thomas.beale as to why you fundamentally can’t solve archetype selection computationally. The definition of an archetype is language, not logic statements. Practically you can get close enough to be very useful, as Thomas described. And there are other helpers, like ontologies. (Study inter-observer variance and neuroscience for learning how the brain makes decisions, e.g. on ‘meaning’, if you want to understand more of what I’m saying here.) (I will think a bit more on Thomas's other remarks before I can respond in a useful way.)

---

## Post #35 by @borut.jures

[quote="joostholslag, post:34, topic:1329"] three types of changes: narrowing, widening and ‘horizontal’ change [/quote]

Two different blood pressure archetypes mean that "out of the box" interoperability is lost. However, I'm glad to see that the Dutch Bloeddruk is fully LOINC and SNOMED CT coded. This means that interoperability is possible (in an automated way) :+1: I hope that CKM archetypes will one day be coded in a similar way. This would support all kinds of automated mapping.

---

## Post #36 by @joostholslag

[quote="borut.jures, post:35, topic:1329"] Two different blood pressure archetypes mean that “out of the box” interoperability is lost. [/quote]

Yes, tell me about it. This is why @openehr-netherlands is working hard to make Nictiz.nl adopt CKM archetypes - with slow but steady progress - instead of creating competing (and usually worse) models in a custom format. https://nationalebibliotheek.nictiz.nl/assets/uploads/2024/10/Architectuur-advies-zib-transitie_verder-met-zibs-de-toekomst-van-zibs-in-databeschikbaarheid_versie-1.0-ter-consultatie_okt-2024.pdf (Dutch).

---

## Post #37 by @pablo

What I understand from what you say is that, for me, when you say "meaning" and wider/narrower, it's all about the scope, not really the meaning of the concept being modeled.
The concept should be well described in the archetype's metadata (this is the main semantic definition, since it explains purpose, use and misuse) and node descriptions. If the concept modeled today needs extra information, it can be because:

- current requirements (scope) were incomplete and the archetype needs to be updated and its version increased (expanding the scope);
- the scope needs specialization (narrowing the scope).

For instance, if the BP archetype is missing some specific type of BP, but it's still a BP, it might need to be added to the existing archetype, if the scope of the current BP archetype is to represent all possible BP readings. In this case, specializing might create the same interoperability issues as creating a new archetype. On the other hand, if the new requirement requires, for instance, recording information about a different thing, then even though it might be similar to an existing archetype, it might need a different archetype. Another point is that openEHR doesn't model clinical concepts per se; it models the information recorded about a certain concept, with its context. I still don't see that all of these are actually changes in "meaning". The archetype defines the meaning/semantics for recording information about a concept, which is based on some requirements and has some scope, and we all know requirements change and scope changes (all the time), but in general, strictly speaking, there is no change in "meaning", unless the initial requirements or modeling were incorrect in some way (which can happen). One comment about the original question, and the many comments about it: I think it's OK to have a pretty generic model/template at design time, and have those open/generic constraints defined at runtime. But I don't think archetypes or templates can ever define, at design time, the containers for dynamic content at runtime.
Some also commented that having open containers at design time might make querying more difficult, but if new constraints are added at runtime, then queries could also be built over those runtime constraints, because at some point the system knows about them. Sorry, I don't think I can add more value to this discussion, just sharing an opinion. <3

---

## Post #38 by @thomas.beale

[quote="joostholslag, post:34, topic:1329"] I think there are multiple ‘directions’ of change. There’s a widening of meaning, e.g. intravascular pressure is wider in meaning than (systemic arterial) blood pressure: a systemic arterial blood pressure is an intravascular pressure. A venous pressure is also an intravascular pressure, but not a (systemic arterial) blood pressure. A widening change in meaning is not allowed in specialisation [/quote]

Well, those kinds of differences are understood as different topics in models anyway. The blood_pressure archetype is documented as being used only for systemic arterial pressure, i.e. the usual surrogate for general health. To be ontological, the thing we are interested in measuring is the continuant 'systemic arterial pressure' - not CVP, not pulmonary circuit pressure, not some other continuant. We usually do that by taking measurements at a brachial artery, sometimes in other places where we know how to adjust the values to be comparable with other values. Comparability is the test for whether we are measuring the same thing. The standard brachial-site way of measuring BP is a specific method: shutting off flow, then listening for sounds when we re-open it, while watching a pressure gauge. Or using a modern machine that does that for us. That's one way of measuring an IVP at a particular site that gives us a good proxy value for the pressure in the systemic circuit. So that's one archetype. IVP is a technical concept - but which continuant you measure depends on where you measure it.
[quote="joostholslag, post:34, topic:1329"] The reverse, a narrowing in meaning, is allowed in specialisation. So it would be acceptable to make blood pressure a child archetype of intravascular pressure. (Whether that’s a good modelling choice is a very different question.) But it’s definitely a change in meaning, right? [/quote]

What is technically allowed in a specialisation of a constraint model is a narrowing of the constraints. But when you talk about IVP and systemic arterial pressure, you are talking about ontological categories, and asserting something like: systemic arterial pressure (which we think is a measurement of a continuant defined by the class intra-vascular pressure AT-SITE {brachial artery OR }) IS-A intra-vascular pressure. All I am saying here is: don't confuse the specialisation relation among archetypes (info models) with the IS-A relation among ontological categories. Archetypes are models of epistemic entities (what info can we obtain from the world, which gives us knowledge of individuals) and ontological entities (universals, aka categories of the real world). The interesting question is whether a specialised archetype can have a parent that not only has wider constraints, but is actually an ontologically more general category. It's not that easy to answer, because the ontological category/ies in most archetypes are currently implied; we don't yet have ontology markers that connect the archetype elements to the ontological classes or categories to which they stand in the IS-ABOUT relation.

[quote="joostholslag, post:34, topic:1329"] The concept name and description is a bit of a definition of the concept, but definitely not conclusive [/quote]

That's the problem.

[quote="joostholslag, post:34, topic:1329"] This is why there’s also ‘purpose’, ‘use’ and ‘misuse’, which help to further clarify the concept and define the meaning of the (concept the) archetype (models).
[/quote] These relate to when this information model can be used - it's essentially trying to make sure users don't use it to document the wrong ontological entities. [quote="joostholslag, post:34, topic:1329"] One example is the screening family of archetypes. e.g. eval.problem_diagnosis vs OBSERVATION.problem_screening [/quote] Questions like this can potentially be clarified by trying to distinguish the related ontological categories first, and then sorting out what the archetypes should be. Archetype construction to date has mainly proceeded from the practical knowledge of clinical modellers - they know their information, but probably don't know the formal ontology behind it (I'm fairly sure this is not being taught in med school even today ;) [quote="joostholslag, post:34, topic:1329"] The definition of an archetype is language, not logic statements [/quote] Today, that's sort of true, due to the lack of linking of archetypes to ontologies, and in many cases the lack of an ontology for the specific entities in question anyway. Archetypes are mostly 'ok' to use though, because they are built by people from the domain who, despite not being formal ontologists, understand their information pretty well and have a good intuitive knowledge of some areas of ontology, e.g. anatomy (FMA area), physiology (OGMS) and so on. --- ## Post #39 by @MattijsK Hi all, We recently found this thread because we are currently looking into a similar problem where some kind of "dynamic composition" could be the solution. We would like to explain what is currently possible in Archie (and ADL2), then explain our problem, and ask whether our intended solution would be OK according to the spec. 
## Currently possible in Archie It is already possible in Archie to validate open archetype slots in the [RMObjectValidator using the configured OperationalTemplateProvider](https://github.com/openEHR/archie/blob/9cadbc533fcc864ce0f477289a0b12f179d8a667/tools/src/main/java/com/nedap/archie/rmobjectvalidator/RMObjectValidator.java#L68-L73): when the validator encounters an open archetype slot in the base operational template (or no constraint at all) together with a Locatable carrying archetype details, it will try to get the operational template for that archetype id from the OperationalTemplateProvider. If that is available, it will continue validation with that operational template. The operational templates in the OperationalTemplateProvider can either be stored somewhere or dynamically generated when needed. In our CDR implementation we can dynamically generate them from an archetype. This makes it possible to fill an archetype slot at runtime. ## Our question Similar to the following use case Ian mentioned: [quote="ian.mcnicoll, post:8, topic:1329"] A classic use-case is the GP consultation where an almost infinite number of Observation archetypes might need to be used to cover all the clinical possibilities. [/quote] We also have a situation where we want to include Observations and/or Instructions without defining them in a mega-template beforehand. We are considering open archetype slots as a solution, plus some "UI magic" to have the user choose the right archetype. Our question is whether open archetype slots would indeed be an acceptable solution according to the spec. We are curious about your opinions. --- ## Post #40 by @ian.mcnicoll One relatively simple solution might be to allow 'embedded templates', i.e. ENTRY- or CLUSTER-level templates that can be included dynamically. LOCATABLE.template_id does allow us to record, at any level, which template was used to validate the COMPOSITION. 
e.g. a set of agreed local embedded templates that match some UI widgets. That would require embedded templates to be registered with the CDR, in the same way as COMPOSITION templates. AFAIK, right now embedded templates are essentially design-time-only artefacts. At run-time the system would have to inject the embedded template_ids into the LOCATABLE prior to validation. Hmm… feels too simple - what am I missing?! It is not a perfect solution or a free-for-all, but it would go some way toward getting around the mega-template issue. --- ## Post #41 by @siljelb [quote="borut.jures, post:35, topic:1329"] However I’m glad to see that the Dutch Bloeddruk is fully LOINC and SNOMED CT coded. This means that the interoperability is possible (in an automated way) :+1: [/quote] Yeah, if there’s consensus about which SNOMED CT terms are the correct ones for (systemic arterial) blood pressure. The current [“blood pressure” subtree](https://browser.ihtsdotools.org/?perspective=full&conceptId1=75367002&edition=MAIN&release=&languages=en) is (IMO) a bit of a mess, and curiously missing the specific terms for (systemic arterial) systolic and diastolic pressures. These have been added to the Norwegian edition ([4471000202106 |Systemic systolic arterial blood pressure (observable entity)|](https://browser.ihtsdotools.org/?perspective=full&conceptId1=4471000202106&edition=MAIN/SNOMEDCT-NO&release=&languages=no,en) [4481000202108 |Systemic diastolic arterial blood pressure (observable entity)|](https://browser.ihtsdotools.org/?perspective=full&conceptId1=4481000202108&edition=MAIN/SNOMEDCT-NO&release=&languages=no,en)) along with (systemic arterial) pulse pressure ([4481000202108 |Systemic diastolic arterial blood pressure (observable entity)|](https://browser.ihtsdotools.org/?perspective=full&conceptId1=4481000202108&edition=MAIN/SNOMEDCT-NO&release=&languages=no,en)), but they have not yet been suggested for inclusion in the international edition due to lack of time. 
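---

*[Editor's note]* The provider-based slot resolution described in post #39 — a validator that, on encountering an open archetype slot plus data carrying an archetype id, asks a provider for the matching operational template and continues validation against it — can be sketched in a self-contained way. This is a minimal sketch of the pattern only: the names (`OperationalTemplateProvider`, `RegistryProvider`, `validateSlot`) echo the idea but are **not** Archie's actual API, and the archetype ids are made-up examples.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class SlotResolutionSketch {

    // Stand-in for an operational template, keyed by its archetype id.
    record OperationalTemplate(String archetypeId) {}

    // Consulted by the validator whenever it meets an open slot;
    // implementations may look templates up in a store or generate them on demand.
    interface OperationalTemplateProvider {
        Optional<OperationalTemplate> getTemplate(String archetypeId);
    }

    // Registry-backed provider: embedded/per-archetype templates are registered
    // with the CDR ahead of time and resolved at validation time.
    static class RegistryProvider implements OperationalTemplateProvider {
        private final Map<String, OperationalTemplate> registry = new HashMap<>();

        void register(OperationalTemplate t) {
            registry.put(t.archetypeId(), t);
        }

        @Override
        public Optional<OperationalTemplate> getTemplate(String archetypeId) {
            return Optional.ofNullable(registry.get(archetypeId));
        }
    }

    // Validation step for data found in an open slot: the archetype id carried
    // in the data's archetype_details drives which template validation continues with.
    static String validateSlot(String archetypeIdInData, OperationalTemplateProvider provider) {
        return provider.getTemplate(archetypeIdInData)
                .map(t -> "validated against " + t.archetypeId())
                .orElse("no template for " + archetypeIdInData);
    }

    public static void main(String[] args) {
        RegistryProvider provider = new RegistryProvider();
        // Hypothetical cluster archetype ids, for illustration only.
        provider.register(new OperationalTemplate("openEHR-EHR-CLUSTER.exam_cpr.v0"));

        // prints "validated against openEHR-EHR-CLUSTER.exam_cpr.v0"
        System.out.println(validateSlot("openEHR-EHR-CLUSTER.exam_cpr.v0", provider));
        // prints "no template for openEHR-EHR-CLUSTER.laryngoscopy.v0"
        System.out.println(validateSlot("openEHR-EHR-CLUSTER.laryngoscopy.v0", provider));
    }
}
```

The design choice this illustrates is the one debated above: validation stays strict (data in a slot must still match *some* registered template), but the binding of slot to template moves from template design time to run time, via the provider lookup.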
--- **Canonical:** https://discourse.openehr.org/t/dynamic-archetype-in-slot-based-on-preconditions/1329