Is using EVENT in archetypes an inconsistent way of modeling "any event"?

If you check the OBSERVATION archetypes in the CKM, some have the EVENT type in the HISTORY.events attribute.

That means that at that point in an OBSERVATION object instance complying with the archetype, there could be either a POINT_EVENT or an INTERVAL_EVENT.
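For illustration, a minimal ADL 1.4 sketch of that pattern (node ids and internal structure are invented for the example, not taken from any real CKM archetype):

```adl
definition
    OBSERVATION[at0000] matches {
        data matches {
            HISTORY[at0001] matches {
                events cardinality matches {0..*; unordered} matches {
                    -- abstract type: any concrete EVENT subtype is allowed here
                    EVENT[at0002] occurrences matches {0..*} matches {
                        data matches {
                            ITEM_TREE[at0003] matches {*}
                        }
                    }
                }
            }
        }
    }
```

Since EVENT is abstract, the runtime instance at that path can only ever be one of its concrete subtypes.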

Now consider EVENT is an abstract class.

In other cases of having abstract classes and allowing “any type”, the pattern is not to put the abstract class in the archetype but to add alternatives to C_ATTRIBUTE.children.

So, shouldn’t EVENT work in the same way? I mean, instead of having the abstract EVENT class in the archetype, there could be two alternatives, and if the internal structure of both is the same, it could be defined once and then referenced from the other event with a CONSTRAINT_REF.
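A hypothetical sketch of that “alternative types” form in ADL 1.4 (invented node ids; note that in ADL syntax the internal reference is written with `use_node`, the AOM’s ARCHETYPE_INTERNAL_REF):

```adl
events cardinality matches {0..*; unordered} matches {
    POINT_EVENT[at0002] occurrences matches {0..*} matches {
        data matches {
            ITEM_TREE[at0003] matches {*}    -- internal structure defined once here
        }
    }
    INTERVAL_EVENT[at0004] occurrences matches {0..*} matches {
        data matches {
            -- reuse the structure defined under the POINT_EVENT alternative
            use_node ITEM_TREE /data[at0001]/events[at0002]/data[at0003]
        }
    }
}
```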

It’s just strange that we make an exception for EVENT while treating all other types differently. It means that in code you need to write a special rule to process EVENT following the current pattern instead of the “alternative types” pattern. For instance, for data validation based on AOM/TOM, if EVENT is present, it has to be transformed into its concrete subclasses to be able to check whether the type in the RM object complies with any of the subtypes of EVENT, and that rule exists only for EVENT.

We have been struggling with the use of the EVENT abstract class in LinkEHR since 2005. Since the aim of LinkEHR has always been to facilitate the generation of data instances, having an abstract class in the archetype or the template has always been problematic. That’s the reason why in LinkEHR we prefer to “force” the user to choose between a POINT_EVENT or an INTERVAL_EVENT when editing an archetype (although loading the abstract EVENT is also supported).

I don’t have a strong opinion on the modeling approaches you mention; I usually think more at the RM level. Given that POINT_EVENT does not add any attribute to EVENT, why does it exist at all? I mean, wouldn’t it be better to have an instantiable EVENT class (with the same meaning as the current POINT_EVENT), and then an INTERVAL_EVENT that inherits from it? Then you always have at least one class that you can instantiate, and just check for additional properties in some cases. It would be the same design and behavior as DV_TEXT and DV_CODED_TEXT.

Not sure I understand why it is a special rule…both approaches seem ok to me.
Generally speaking, it has advantages and disadvantages of course - but with only two concrete subtypes, both of which are used, the difference is a bit moot. To me, the abstract type seems the easier and possibly more elegant way to model this case, but the “alternative types” pattern is of course not wrong. (Although I’m not sure how tooling, including CKM, would currently present this at the EVENT level.)

Given only POINT and INTERVAL events, I think what David says is a very reasonable alternative pattern, possibly the better or at least more consistent one to what we have - except maybe that it does not enable you to restrict to POINT events explicitly (other than setting the width to 0).

That said, and as a side note while we are at this, the specs at Data Structures Information Model seem to be a bit inconsistent:

EVENT.time: “If the width is non-zero, it is the time point of the trailing edge of the event.”
→ This seems to imply that there is an Event.width, which however is only introduced for INTERVAL_EVENT.

In INTERVAL_EVENT, width is introduced as 1…1, but it says that it is “Void if an instantaneous event”. While an interval’s width might be zero, and effectively be a point event, can it be “void” here? (Also note that in EVENT.time this is “non-zero”.)

In Fig 8, “width” is actually named “duration”.

Actually, there is [SPEC-255] - openEHR JIRA from 2008(!) which seems to conclude that this should be changed.


I don’t think that is an issue; in fact, I think it could sometimes be useful for representing more generic patterns, and I’m surprised we don’t use it more, like having ITEM_STRUCTURE or ITEM in generic archetypes.

Though for instance generation, which is also a use case I have, abstract types don’t give much info, in fact having final archetypes (or finalizing them in templates) would be better in that case. On the other hand, a “smart” generator would take advantage of the abstract type and generate one of the many available possibilities at that path.

My concern is about consistency: everywhere else that an abstract type appears, the pattern in the archetype is to add alternatives in the attribute.children, but for the EVENT hierarchy the abstract class is used instead of the alternatives. As far as I know this is the only case where that happens, and we have many abstract classes in the COMPOSITION structure.

In general these exceptions to the common rules generate friction for implementations, since each exception is an extra case to handle. For example, using C_DATE to constrain DV_DATE.value: String instead of C_STRING is an exception that has to be handled (I mentioned that in another conversation here).

All these exceptions or inconsistencies also add complexity to the spec conformance verification process, which is my current focus.

I can see that in the latest spec, though that comment was not in the 1.0.2 data_structures.pdf spec; it seems it was fixed there and then reintroduced?

You would only put in specific constraints if you need them. If either kind of EVENT is allowed, and there are no concrete forms of such events, the modeller should just put EVENT.

If only one kind of EVENT is allowed, then that should be used, even if there are no further constraints, i.e. matches POINT_EVENT.
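In ADL that restriction is just a type constraint on the alternative, e.g. (a sketch with illustrative node ids):

```adl
events cardinality matches {1..*; unordered} matches {
    -- only point-in-time events allowed at this path
    POINT_EVENT[at0002] occurrences matches {0..*} matches {*}
}
```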

There shouldn’t be any exception; there are other abstract types in the RM (DATA_VALUE, DV_TEMPORAL, DV_QUANTIFIED, ITEM_STRUCTURE etc) and ADL is designed to handle abstract and concrete types properly (and has been shown to do so).

Alternatives is only used when there are concrete constraints to express. Otherwise, using the abstract type is correct. The runtime validator of course should have accessible either a hard-wired or generic model representation of the relevant RM in order to do the checking.

I find this strange because there is no theoretical or practical problem that I know of. The only requirement is that any tool or validator imports the relevant RM definitions, i.e. BMM files or similar (e.g. JSON schema). All the archetype modelling tools other than the Ocean ones do this, to my knowledge.

Aha - the current modelling is good practice. Doing what you suggest would be an error, because it would consider INTERVAL_EVENT as a kind of POINT_EVENT, but it is not. (NB: unfortunately, I caved in a long time ago to pressure to commit this error with DV_TEXT / DV_CODED_TEXT, and we have had endless problems from that since…)

But in general, you want to be able to support abstract types in ADL. Every realistic information model contains them.

Yep - that’s an error, but doesn’t change anything here.

Looks like it - maybe this is another conversion problem.

If(!) then it should be the other way round in my opinion: POINT_EVENT is a special kind of INTERVAL_EVENT with width fixed to 0. This would solve the documentation problems described above - but agree re being very careful with this considering the DV_CODED_TEXT experience (although not all the problems are due to the inheritance choice made).
Either way, I don’t see that we would want to introduce a breaking change for this here.

Fully agree. (But also keep them to a fair amount to avoid undue or unnecessary complexity)

That is a specific modelling approach. But it forces POINT_EVENT to have a width which has no meaning (by definition) for a point event. This is called ‘modelling by constraint’ which is subtractive down the inheritance hierarchy, and contrary to object modelling, which is additive. The constraint approach to object modelling was what killed HL7v3…

A couple of possibly useful posts on this:

Note however, that even if the community wanted to use a specific style of modelling for some part of an information model, it doesn’t change the general argument about abstract and concrete types - there will always be somewhere else with abstract parent types and concrete subtypes.

I’d say it’s only complex for downstream formalisms that can’t handle inheritance (by some means or other).

@thomas.beale I know how it currently works, that is not the question. The question is why the EVENT case is different from all other abstract type cases in archetypes?

Why do we have the abstract type for EVENT in an archetype, while for other cases we have type alternatives inside C_ATTRIBUTE.children? Using type alternatives for an attribute works perfectly with EVENT and its subclasses.

As I said in the first message: if any kind of EVENT is allowed, and they share the same internal structure for data, then we could have C_ATTRIBUTE.children containing two C_OBJECTs, one for POINT_EVENT and another for INTERVAL_EVENT, and if the data structure is defined in the POINT_EVENT, the INTERVAL_EVENT can have a CONSTRAINT_REF to the structure defined in the POINT_EVENT.

That would be a consistent way of modeling, like all the rest of the classes in the RM. The current approach makes EVENT a special case when it shouldn’t be.

The only thing that is special about the EVENT hierarchy in comparison with other abstract/concrete hierarchies in the RM is that the archetypable data field is in the abstract class, so both concrete subclasses can share the same internal structure for data. Something similar could happen with ITEM_TREE and ITEM_LIST, since a list is a special case of tree; but those are not all the possible subclasses of ITEM_STRUCTURE, and that is the difference from the EVENT hierarchy.

Well, if you do that, and one day a new EVENT descendant is added, all those archetypes will probably be wrong / misleading, since they won’t have the new subtype.

In any case, I’m not clear why we would put any C_ATTRIBUTE children, if there are no special constraints to state. If there are then you have to have those children. But why bother if there is nothing to state in them?

Well, only if you are not constraining any of the other fields of INTERVAL_EVENT.

Ah, I hoped we didn’t need to go there. I would never intentionally suggest this. Constraint modelling in its excessive form has very rightly killed HL7v3 and I don’t even know where to start. But I sure remember the most memorable MIE session with you and Günther Schadow (was it in Pisa?) around this.

So, since I cannot let that stand :slight_smile:

  1. Note I only said that IF we wanted to change the current approach…I don’t want to.
  2. When you say that "a point event by definition does not have a width because it would be meaningless", I say: “It is the definition of a point event that its [interval] width is 0”. If you want to have a separate class for it to make it explicit and have a nice name, fine, but otherwise we can just set width to 0 to mean a point event. You may still disagree with this because it is constraining the width further (but not constraining it out aka removing it!). That’s fair enough from my point of view, but then I think you should likewise reconsider if for example “redefining” the DV_QUANTIFIED.accuracy from ANY to REAL in DV_AMOUNT is the right approach.

I knew you wouldn’t. I visualised some sort of bandits standing behind you with heavy sticks… :slight_smile:

Well, ok, not meaningless but useless to developers. It’s a junk field for them, and that was my primary consideration. But the main point is that this is a specific modelling debate but it doesn’t change the general argument, I don’t think - we can always find a piece of model that ‘correctly’ uses abstract types.


Archetypes will be compatible with the RM version they were created for, so I don’t think that is a valid argument, since the new type didn’t exist at the moment the archetype was created.

Why is there no special constraint to state? The first constraint is the type constraint, so the alternatives themselves are POINT_EVENT and INTERVAL_EVENT instead of EVENT. If there are constraints for the internal structure, they can follow the pattern I described before (with the CONSTRAINT_REF).

Even if you constrain the extra fields, the difference I mention is the shared archetypable field in the abstract class. Now I realize that this also happens with the protocol of CARE_ENTRY (also an abstract class with an archetypable field).

As a side note, if you look at modeling tools, when selecting the type for ACTION.description, etc., you can only pick one type like ITEM_TREE; they don’t allow picking multiple alternatives. That is what I see as different ways of modeling archetypes for similar RM patterns. I guess the issue with ITEM_STRUCTURE is that all structures are a special case of ITEM_TREE (a list is a tree, a table is a tree, a single is a single-node tree).

But that comes for free with the RM - the only instances that could possibly be created are in fact POINT_EVENT and INTERVAL_EVENT.

I guess what you are saying is that since the archetypable data attribute is in the abstract type, then you can have that constrained without specifying the type for the EVENT. That is correct, not arguing that.

That is what I mentioned above when saying EVENT is one of the few abstract classes to have an archetypable field, the other one being CARE_ENTRY with its protocol attribute, though I never saw an archetype with a CARE_ENTRY. The same would happen if in the future other abstract classes get archetypable fields, like CONTENT_ITEM or ITEM.

Even though the use of alternatives in archetypes, as I described for POINT_EVENT and INTERVAL_EVENT, is totally valid in terms of the specs and would be more consistent with current modeling of other inheritance hierarchies, not a single modeling tool supports that; and in the other direction, no modeling tool allows using other abstract classes in archetypes.

A related topic: IMO the specialization mechanism of archetypes would work better if modeling tools allowed using other abstract classes in them, so one archetype could have a CARE_ENTRY with an archetyped protocol, then a specialization could have an INSTRUCTION or ACTION with the same protocol, maybe as part of the same workflow.

A truly generic modeling tool would allow those patterns. In fact if those are allowed, we could use archetypes as a metamodel for everything, including as a computable representation of the RM itself then derive all the specific concepts just using the specialization mechanism. So allowing abstract types everywhere would also be more consistent than only allowing one (EVENT).


Agree with pretty much everything you said, except one detail…

A few people have thought of that in the past, but the problem is that AOM/ADL doesn’t support everything that you find in an object modelling formalism (like UML, BMM, most programming languages) - including abstract typing, generic types, and additive (rather than subtractive) semantics down the inheritance hierarchy.

Doing this would mean making ADL work like both an OO (additive) and constraint (subtractive) formalism at the same time - like XSD, which is a total mess semantically and impossible to use as a modelling language - it works only as a data description language.

Yes you are right - I had forgotten about the RM version marker.

In that case, the tools have some internal limitations that are not inherent in the RM, but engineered into the tool. The default choice of ITEM_TREE is a kind of agreement by the modelling community to only use that type, and not ITEM_LIST, ITEM_TABLE, etc, which were types the original clinical participants wanted very early in openEHR history (2000-ish!). Which is fine - but it’s worth being aware that this is purely a tool thing, not some rule coming from the RM.

There are probably some other such restrictions in the tool designed to limit self-harm to modellers :wink:

Perhaps inflicting more pain, I am pretty sure that none of the modelling tools insist that an abstract EVENT is resolved to one of its sub-classes, nor do any of the CDRs that I have worked with reject an abstract EVENT as invalid. I suspect it is handled as if it were point-in-time, which would account for 99.9% of usage. Perhaps we should just make EVENT concrete?

No - that would be 100% incorrect in the model. There is no need to start removing abstract classes in an object model because the/some archetype tools are doing strange things. We literally could not build any information model of any utility without abstract classes.

Referencing abstract RM classes in archetypes is 100% OK and correct (assuming it represents the modelling intent).


My point is that I am pretty sure that it is perfectly possible to commit an EVENT to current CDRs, as if it was a concrete class. i.e. no validation to ensure a sub-class is chosen.