CLUSTER.exam specialisations expect a previous version of the CLUSTER.exam archetype

I’m not sure I argued against specialisations :slight_smile: I just stated that the modelling pattern is now not to be too specific about what should go into any SLOT, because we a) don’t know what the future archetype to be put into that slot will be (it may not have been made yet, or we don’t know what it will be named) and b) don’t want to go back and correct the slots in all archetypes whenever an existing archetype changes.

1 Like

Fair enough, sorry for overstating on your behalf!! The work we did in the past, using a lot of inheritance specialisation, turned out to be a bit of a blind alley, and maximising ‘generality’ is much more manageable using aggregated clusters. But we are really still learning as we go in areas of very fine granularity like physical examination, imaging results and detailed anatomical pathology.

3 Likes

It’s really a question of balance. If every exam-like thing (to use a current example) has 6 common attributes and some number of varying attributes, then it’s a strong candidate for specialisation from a common ‘core’ exam archetype, with the extra attributes modelled locally in each archetype and/or via slots defined either locally or in the parent.

I agree that we should use composition (slots) in a significant number of cases, but not using specialisation where it helps makes things harder for a) model definition and b) querying. Also, specialisation is usually essential to make composition work properly.

Formal substitutability at run-time (which is the consequence of specialisation) means that a lot of querying power comes for free - querying for any of those core exam attributes, for example - but that relies on the query engine knowing that all kinds of exams are specialisations of the core one. If those exams have no specialisation relationship, writing the query becomes nearly impossible. The same argument applies to any kind of clinical data that has a lot of variations on a common base.
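To make that concrete, here is a minimal AQL sketch of the idea. The archetype ids and the at-code are illustrative placeholders rather than the actual CKM ids, and it assumes a query engine that resolves archetype specialisation relationships (i.e. a predicate on the parent also matches its specialisations):

```
-- Sketch only: assumes CLUSTER.exam_chest, CLUSTER.exam_heart etc. are
-- specialisations of CLUSTER.exam, and that the engine knows this.
-- One query then covers every kind of exam; at0003 stands in for a
-- shared 'core' element such as 'Clinical description'.
SELECT cl/items[at0003]/value AS clinical_description
FROM EHR e
    CONTAINS COMPOSITION c
        CONTAINS CLUSTER cl[openEHR-EHR-CLUSTER.exam.v1]
```

Without the specialisation relationship, the only alternative is to enumerate every exam-like CLUSTER archetype in the containment clause (e.g. `CONTAINS (CLUSTER cl1[...] OR CLUSTER cl2[...] OR ...)`) and to keep that list up to date by hand as new ones are published.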

All I’d really advocate for is that with ADL2/AOM2 (which implements specialisation properly), it’s worth figuring out what the right balance is. ADL1.4 archetype modelling is not a good guide to that.

Just a bunch of opinions from a different angle, to give you some food for thought Tom :slight_smile: :

Maybe you could consider distinguishing the use of specialisation for describing clinical concepts (archetype authoring via differentials à la ADL2) from composing the descriptions of clinical concepts by using slots and describing what’s allowed in a slot. These are not necessarily the same use cases for inheritance.

“…you are stuck with enumerating…” → Some may call this being specific, explicit and preferring a closed set.
“…and that list keeps changing…” → Not every day - more slowly, actually, as the applications mature. Some may prefer the tests to break if that list changes, btw :slight_smile:

Well then, don’t fake inheritance with archetype names; instead provide a closed set, and you’re back to knowing which data points are guaranteed to appear in the slot for each member of the closed set.

I wish you could see this as a trade-off between having a guaranteed set of data items in a slot and not having to discover the common data items between two different sets of data, which is what I suspect is pushing @bna towards his approach. Consider the case in which the authors of the archetypes have no clue about the requirements of the composers of those archetypes (templates in Bjorn’s case, slots). At best they’ll have to work together to follow the maximal data set principle for the next version of an archetype; at worst they’ll fail to find the common subset and will have delayed actual solution delivery, which again is what I suspect is hitting @bna.

Admittedly, preferring explicit-one-of semantics (for the curious, google Sum types please) over is-a semantics will introduce challenges in other areas, say AQL :slight_smile: My point is that implementation concerns and opinions may point at a different way of approaching modelling, and it may be good to consider these. Maybe even support them with ADL constructs (better than regexes, that is).

Just some opinions as I said in the beginning. I may be wildly off the mark of course, given where I take my archetype advice these days: Archetypes with Meghan: A Spotify Original Podcast | Archewell

2 Likes

I’ll just note that there’s a reason no-one does this in software development :wink:

Oh it’s certainly a trade-off - that’s why specialisation isn’t used alone to achieve re-use & substitutability - it would make for terrible archetypes and terrible software.

What I would say here is that in a design sense, it’s never just a question of whether there may be some common data points, but whether the overall concept of the archetypes is essentially the same (e.g. they are all some kind of ‘exam’). There are certainly situations in which there are superficially similar sets of data points (or maybe even not that superficial) that still don’t point to specialisation as the solution.

All I am really saying is that:

  • ADL2 does inheritance properly (such that specialising an archetype doesn’t create a fork), plus fixes 20+ problems identified in the past, including … everything to do with templates
  • what is thought to be the optimum balance of specialisation and composition is influenced by the inability of ADL1.4 archetypes to do specialisation properly, so I recommend that when we get to using ADL2, that optimum should be reviewed
  • quite possibly, we still have not realised the benefits of type-substitutability at an archetype level, for runtime querying, which is the main deal in any system…

These are indeed my main concerns: a) being able to (possibly machine-) create queries that are efficient, powerful and succinct, and b) being able to build more resilient applications that can deal with e.g. all specialisations of a common parent archetype, rather than just an ad hoc collection of named archetypes, or archetypes whose names happen to match some regex.

Agree - the thing I want to add is a constrainer type for the ARCHETYPE_HRID class, i.e. C_ARCHETYPE_HRID. It would support operators like SNOMED’s ‘<<’ and ‘<’, meaning ‘any specialisation child of this archetype’, and so on. See this wiki page for an 8-year-old proposal to do this.
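Purely to illustrate the intent (the notation below is hypothetical - it is not current AQL, and not necessarily the syntax in that wiki proposal), an archetype predicate using such an operator might read something like:

```
-- Hypothetical notation: '<<' = this archetype or any specialisation descendant of it
SELECT cl
FROM EHR e
    CONTAINS COMPOSITION c
        CONTAINS CLUSTER cl[<< openEHR-EHR-CLUSTER.exam.v1]
```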

I’m not trying to tell anyone to model archetypes differently, I’m just suggesting that when we move the ecosystem to ADL2, it would be worth reviewing the balance of specialisation v composition, since specialisation will now work.

1 Like

and I’ll just note that I hope as many people as possible think this way :wink:

1 Like

haha touché!

1 Like

Hi @bna ,

The original pattern that I advocated, built & tested for the Physical exam archetypes was nesting CLUSTERs that shared common (manually copied) core data elements (but not necessarily all of them), to keep the patterns aligned while trying to avoid the issues that come from ADL specialisation/tooling. Not ideal in many ways, but modelling each CLUSTER explicitly meant we modelled exactly and precisely the relevant content for that use case; no data elements were included that were not directly relevant to the clinical concept.

Specialisations work most effectively where the patterns are universal, and in reality even strong patterns in clinical medicine very often break the rules unless the core pattern is so generic it is nigh on useless in an implementation. In some (many?) use cases, by overlaying rigid patterns on child archetypes, whether using ADL1.4 or 2.0, specialisations can potentially increase templating complexity and ambiguity in some areas as much as they might add benefits in others.

It may be contentious to some, but my philosophy has always been that the clinical content in archetypes should primarily be designed to be accurate, unambiguous and understandable to the clinicians (first priority) and to support modellers with minimal training to represent the data appropriately (second), not for technical elegance or purity. Implementation requirements should be considered as a third priority.

In any case, creating coherent families of patterns is not simple and whether manually managed or specialised, there are arguments for and against. So Physical examination was our first test bed of this approach - with a fractal mix-and-match design requirement for the detailed examination of specific body regions/parts etc. By early 2019 we had developed many archetypes using this pattern and it worked well from a clinical modeller’s point of view.

In February 2019 @Silje and I were requested to attend a meeting where you & @anca.heyd pushed very strongly for a specialisation pattern for the Physical examination CLUSTERs. You argued that DIPS needed to be able to query exam findings based on specialisations, and especially that an index ‘System or structure examined’ data element needed to be added, carrying a SNOMED code for the system or structure already identified in the archetype name. This made the focus of the examination explicit and unambiguous in the model, as well as supporting your implementation requirement to be able to query for all related examination findings based on the SNOMED hierarchy, e.g. all chest exam findings - chest wall, heart, lungs etc. This is my recollection of the reason and subsequent justification for the development of a parent (generic) CLUSTER.exam and the transformation of all of the existing Physical exam CLUSTER archetypes available at the time into specialisations.

I remember the meeting well because the consequences of changing the modelling pattern at that point were already huge. As you were one of the most experienced implementers at the time, we listened and (reluctantly - me, at least) agreed to your proposal. I then remodelled the CLUSTERs as specialisations - from CKM records it seems that work started in July 2019 and has been gradual and ongoing. This was NOT a trivial task - many hours, attention to detail, and considerable cost.

Since then more Physical exam specialisation archetypes have been added to the library, and breaking changes have been made to CLUSTER.exam.v1, resulting in a v2. None of the CLUSTER specialisations have been upgraded (a manual process, due to ADL1.4 & current tooling), and one new examination CLUSTER for placenta was added recently. When we started modelling the imaging exam domain, which has similar fractal, mix-and-match requirements, we naturally used the same pattern.

This is the history as I recall my involvement…

3 years later, it is incredibly frustrating to find this thread playing out. The consequences of reverting back to the previous patterns are even greater now - larger numbers of archetypes across multiple domains.

@bna, most of all, I would appreciate you clearly explaining to us all the reason for changing your mind - in the thread above I can only see you’ve stated ‘over the years I’ve moved more in the direction of slots with specific clusters… this is more scalable than specialisation….’ OK, but precisely what has shifted between your very strongly held view in 2019 and your current view?

I have always favoured the nesting-of-CLUSTERs approach, so while I would welcome the reversion back to the original patterns in principle, it would be very helpful if we could understand more about what you have learned in those intervening years as we weigh up our choices, especially the cost of reverting in terms of skilled people, resources and $$. Understanding how best to solve this has implications for us all: so that the new Clinical modelling leadership group can document the philosophy/logic/rationale as a critical consideration for future modelling decisions, and so we can develop an agreed approach to creating aligned families of archetypes - essentially the pros & cons of specialisations vs non-specialised ‘pseudo copies’.

(Please don’t @ me about ADL 2.0 - that’s a related but somewhat separate issue from a pattern design POV. Assume for this discussion that perfect tooling for ADL specialisation design and implementation exists.)

Cheers

Heather

2 Likes

Dear Heather,

I am sorry if my writing caused some confusion. I should have been more precise when talking about the specialisation and composition patterns. I didn’t make it explicit that I was talking about generic modelling patterns, both for information models and for more traditional models like classes in e.g. C#, Java or TypeScript. When I said I have moved from specialisation to composition, I meant both for programming patterns and for my thinking about clinical modelling. This is not only about specialisation and/or CLUSTER-in-CLUSTER.

It is also related to the maximal data set pattern. Maybe we can create smaller and more sustainable archetypes, combined with SLOTs for the specific details? Maybe this can make long-term governance more maintainable? There are question marks here, since I really don’t know the best solution for all requirements in the different clinical domains, the technical impact on vendors, or the impact on global governance of archetypes.

For archetypes: I think I have never modelled a specialised archetype for production usage. I explored the capability but never found a good way to make it scalable. This could be due to tooling or ADL 1.4. I think it is mostly because it didn’t fit the modelling domain or the human brain :slight_smile:

I don’t remember the details of the meeting you mention back in 2019. I do remember looking at the patterns for physical examinations over some time. The way it was modelled, with a shared set of attributes to define the system or structure examined, surely had some features pointing in the direction of specialisation.

I also think it would be easier to develop AQL to get all physical examinations when some terminology is used to define the system or structure examined.
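As a rough illustration of what I mean (a sketch only - the at-code is a placeholder and the real paths depend on the archetype), such an AQL query might look something like this; hierarchy-based matching on the SNOMED code would additionally need terminology support in the query engine:

```
-- Sketch: find exam CLUSTERs for a given system/structure, using the
-- 'System or structure examined' element (at0004 is a placeholder at-code)
SELECT cl
FROM EHR e
    CONTAINS COMPOSITION c
        CONTAINS CLUSTER cl[openEHR-EHR-CLUSTER.exam.v1]
WHERE cl/items[at0004]/value/defining_code/terminology_id/value = 'SNOMED-CT'
    AND cl/items[at0004]/value/defining_code/code_string = '<snomed-ct code for the structure>'
```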

I don’t think I exercised some kind of veto on this. I just shared my thinking.

There could be several ways of achieving this, e.g. using a CLUSTER with an attribute/element defining the system or structure examined, and then expanding via slots for the specific examinations and systems examined.

What I also wanted was to have a review or discussion with the technical group to make sure we had the right consensus in this important modelling domain. It could even be that some patterns were candidates for an upgrade of the RM. We in openEHR should be an agile community, able to work horizontally and vertically on such changes.

I don’t have the definitive answer on how this should be done. There might be different modelling problems which require different patterns. When introducing a pattern, e.g. CLUSTER-in-CLUSTER or specialisation, it would be nice to have some background information about the decision and the functional and non-functional requirements behind it.