Do all modeling tools remove C_OBJECTs with occurrences 0..0?

Hi all, I’m working on the validation part of the openEHR Conformance Verification Framework, and I’m wondering if it’s possible to have OPTs with C_OBJECTs that have occurrences 0..0.

Similar question for C_ATTRIBUTEs with existence 0..0.

Thanks!

We might think that, since the OPT is the all-inclusive reference for building a final system/interface, we would not need to include those 0..0 constraints. If a C_OBJECT is 0..0 then it will not be rendered, which is exactly the same as if it were not included at all.

But what happens when those 0..0 constraints affect something that is not part of the visible rendering? For example, if for any reason we constrain to 0..0 an object or attribute of the RM which contains context information. If this is not kept in the OPT, then the systems will probably fill it with data. And they will probably fill it anyway even if the constraint is kept in the OPT, but that’s another story :smile:

Hi @damoca, I think visualization is a different problem.

The semantic issue I see is: some implementers use OPTs as final definitions and some as open definitions.

final would be a structure that defines exactly what is allowed in an RM instance, and anything extra is not allowed.

open basically allows anything that is not currently defined in the OPT.

So in the context of final, adding a C_OBJECT with occurrences 0..0 is superfluous, since having that vs. not having the node at all would have the same effect: what is not defined is not allowed, and 0..0 is saying “this is not allowed here”.

But in the context of an open definition, not having the node means the node is allowed. So if a modeling tool removes C_OBJECTs with occurrences 0..0 (like Ocean Template Designer) but the implementation interprets OPTs as open, this creates a semantic mismatch (it’s an error for implementers to use a modeling tool that doesn’t generate what they are trying to model). So an open implementation requires all C_OBJECTs with occurrences 0..0 to be included in the OPT.
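
To make the mismatch concrete, here is a minimal sketch (plain Python, not any real openEHR library; the paths and set names are invented for illustration) of how the two interpretations treat a node that a tool has stripped from the OPT:

```python
# Illustrative only: paths and names are made up, not a real OPT model.
prohibited = {"/content[openEHR-EHR-SECTION.adhoc.v1]"}   # C_OBJECTs kept with occurrences 0..0
defined = {"/content[openEHR-EHR-OBSERVATION.bp.v2]"}     # everything else the OPT defines

def is_allowed(path: str, mode: str) -> bool:
    """Decide whether a data node at `path` is acceptable under `mode`."""
    if path in prohibited:        # 0..0 means "not allowed" in both modes
        return False
    if mode == "final":           # final: only what the OPT defines is allowed
        return path in defined
    return True                   # open: anything not explicitly prohibited is allowed

# Under "final", dropping the 0..0 node changes nothing; under "open",
# dropping it silently makes the node legal again:
prohibited.clear()  # what a tool that strips 0..0 nodes effectively does
print(is_allowed("/content[openEHR-EHR-SECTION.adhoc.v1]", "open"))   # True: mismatch
print(is_allowed("/content[openEHR-EHR-SECTION.adhoc.v1]", "final"))  # False: still rejected
```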

On top of that basic framework we can discuss visualization and systems filling in information they shouldn’t. But it’s fundamentally a problem of interpreting what the OPT describes: final vs. open.

My question is really about how different modeling tools represent this, so that when we run openEHR Conformance Verification over a tool, we know what to expect from different test cases. If that is not defined, the result can’t be fully verified.

I mentioned visualization because it’s an easy way to understand what we are talking about (i.e. can a completely new Section appear on my screen and be filled in if it is not defined in my template?). In the end, this discussion affects every place where templates are used (visualization, validation, etc.)

But using your own terms, interpreting a template as open or final is tricky.

For archetype specializations it has long been accepted that it is an open world. That’s why we can add new at-codes everywhere in the specialized archetype. The argument here is that it is the only way to stay flexible for new information requirements that are not known when the parent archetypes are first created.

But… a template is not exactly the same as an archetype specialization. It is a configuration of the archetypes for a specific use case. My first thought is that a template is final: we should interpret it as “the information structure for the use case, no more, no less”. It would be unmanageable if we had to explicitly “close” every node we don’t want with a 0..0. And it would be risky to interpret extra data not originally supported by the template.

But… a template is never a complete definition. For example, the OPT does not include all context attributes from the reference model. We all assume that we can add context information to the instances, even if it is not present in the template.

So we are mixing two different interpretations. Once you have a template, you should not add data for RM clinical classes, but you can add additional context information supported by the RM.
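
If we wanted to automate that mixed reading, a hypothetical validator might look something like this (the path convention and all names are mine, simplified for illustration):

```python
# Hypothetical mixed rule: "final" for clinical content, "open" for RM context.
defined = {"/content[openEHR-EHR-OBSERVATION.bp.v2]"}   # clinical nodes in the OPT
prohibited: set[str] = set()                            # explicit 0..0 nodes, if kept

def is_allowed(path: str) -> bool:
    if path in prohibited:
        return False
    if path.startswith("/content"):   # clinical content: no more, no less
        return path in defined
    return True                       # RM context attributes: open by default

print(is_allowed("/context/health_care_facility"))       # True: not in OPT, still fine
print(is_allowed("/content[openEHR-EHR-SECTION.x.v1]"))  # False: undeclared clinical node
```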

I know this is not a good answer for the automation of conformance tests, just my first thoughts.

Thanks David, I think you have clarified this for me - the difference between low-level RM attributes being ‘open’ by default but clinical content being assumed to be closed, though as Pablo says, not really defined as such.

I think where ‘open’ vs. ‘final’ come in is around slots and the ‘Composition/content’ attribute, which functions like a slot. Right now we have no real way of expressing whether these are open or final, and as you say, the interpretation of the default may differ.

I would say that unless an attribute is explicitly closed in an OPT, it can be filled at run time. There are a lot of attributes that would never be constrained but must be filled at runtime: date/time fields, id fields, and so on.
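
As a sketch of that rule (the attribute paths and the `closed_attributes` set are invented for illustration, not taken from the specs):

```python
# Invented example: an attribute may be filled at run time unless the OPT
# closes it explicitly with existence 0..0.
closed_attributes = {"/protocol"}   # C_ATTRIBUTEs with existence 0..0 in the OPT

def may_fill(attribute_path: str) -> bool:
    return attribute_path not in closed_attributes

# Never constrained in practice, but always filled at run time:
for path in ("/time", "/uid", "/language"):
    print(path, may_fill(path))     # all True
```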

We probably need to improve the specs on this point.

Thanks @damoca for this great explanation. This is the way we have implemented openEHR in our (DIPS) software.

If you keep the object with 0..0 in the definition you can validate that the specific node doesn’t exist, which I would assume is something worth validating. I’ll have to check, but I’m pretty sure we don’t remove them, even in the export to OPT.
Also, I would think specializing templates is something we could do (e.g. an IPS template adapted for my organization), so whatever you have prohibited should be explicitly stated so that the specialized template is really a subset of the parent template.
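
A rough sketch of the validation this enables (invented structures, not a real OPT parser): with the 0..0 paths retained, a validator can assert their absence in a data instance, and a specialized template can only shrink, never grow, the allowed set.

```python
# Illustrative only: with 0..0 nodes kept in the OPT, "must be absent"
# becomes directly checkable against a data instance.
prohibited = {"/content[openEHR-EHR-SECTION.medication.v1]"}   # kept 0..0 nodes

def violations(instance_paths: set[str]) -> set[str]:
    """Data nodes present despite an occurrences 0..0 constraint."""
    return instance_paths & prohibited

composition = {"/context/start_time", "/content[openEHR-EHR-OBSERVATION.bp.v2]"}
print(violations(composition))   # set(): nothing prohibited is present

# Specialized template as a real subset: it may only prohibit more, never less.
parent_allowed = {"/content[a]", "/content[b]"}
child_allowed = parent_allowed - {"/content[b]"}   # child prohibits [b]
print(child_allowed <= parent_allowed)             # True
```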