I don’t know, but it is not our role to decide what people can or cannot do, or to oblige them to use specific tools. What I find missing here is a bit of coherence. If we assume that archetypes are always edited with specific tools, why do we care whether ADL is readable or not? Why do we use structures like domain types in ADL to ease reading? The tool could hide all these things…
well I don’t think we are obliging anyone to use specific tools. In my view, the reason for ADL being human readable is to enable a relatively small number of people - generally not end users - to understand the semantics of the formalism, in the same way that certain people understand OWL by using/studying OWL-abstract resources. So I think ADL source is useful for:
- learning / teaching / self-education - typically by some software developers, educators, standards people
- understanding archetypes in debugging / testing - knowing what the archetype really says, in case a tool seems to be lying to you.
These people are usually doing something specialised, and understand the formalism properly (or are learning it). I don’t think many end users are going to read ADL, other than educators/software developers/theorists. I don’t think this requirement means that the archetype has to stand alone as a modelling construct. I say this mainly because the very definition of an archetype is about constraining an information model, so I am not sure what it means to have one with no access to the information model.
For example, the node
ELEMENT [at0004] – systolic pressure
means… what? If you don’t know what an ELEMENT is? So while ADL is designed to be readable, that alone doesn’t tell you what it means if you don’t know what the RM references (class names & attribute names) mean…
Having the RM available (as you have in your tools as well) is not that hard, and it seems to me non-RM enabled tools are a thing of the past. So I guess I am still struggling to see the context in which it makes sense to work without an RM.
We are absolutely in favour of having the RM available for advanced tooling, but we cannot forget the case when it is not possible. Not every system can be aware of every possible model, but they should at least be able to work at a pure archetype level.
Hm… this is something I find a bit strange - what use can an archetype be with no access at all to its reference model? Well, it’s true that in a design context like CKM, it could be useful to view the archetypes to look at their clinical design, but I find it hard to believe that the kind of people who would do that - clinical people, surely - would want to look at raw ADL, especially differential ADL. I think they are more likely to look at the mindmap or HTML views, all of which are generated anyway.
Most of that can be generated without knowing anything about the RM. So why require it at the earlier step (during parsing)?
some of those views can be generated, but only if the archetype has already been validated, including against the RM. You can’t generate any view that includes RM elements in it though, and clinicians have been screaming to have that in the modelling tools and CKM. It is now (sort of) in CKM. The reason they need it is because they want a ‘total view’ of the data. If they can’t see RM elements, they start to think those items have not been included.
In the end, archetypes and templates define an object structure + variants; but you can only see the full object structure if you have the RM there as well. Some archetypes are very small, and define very few constraints; on their own, I find it hard to understand what use they are.
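To make the "full object structure" point concrete, here is a minimal sketch of merging archetype constraints over an RM class to produce the total view clinicians ask for. All class, attribute, and function names are invented for illustration; none of this is real openEHR machinery:

```python
# Illustrative sketch only: toy RM and archetype structures, not real openEHR definitions.

# A toy RM: each class maps attribute names to their RM types.
RM = {
    "OBSERVATION": {"data": "HISTORY", "protocol": "ITEM_STRUCTURE", "subject": "PARTY_PROXY"},
}

# A toy archetype: constraints on a subset of OBSERVATION's attributes.
archetype_constraints = {"data": "constrained in archetype"}

def total_view(rm_class: str) -> dict:
    """Merge archetype constraints over the RM class so every attribute is
    visible, not just the constrained ones - impossible without the RM."""
    return {
        attr: archetype_constraints.get(attr, f"unconstrained (RM type {typ})")
        for attr, typ in RM[rm_class].items()
    }

for attr, status in total_view("OBSERVATION").items():
    print(f"{attr}: {status}")
```

Without the `RM` table, `protocol` and `subject` simply would not appear in any view, which is exactly why clinicians start to think those items "have not been included".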
In other words, I’m in favour of having the most basic tools (ADL parser, ADL viewer) that are capable of working with “standalone” archetypes. ADL parsers should work solely at the syntax level and not depend on other semantics.
I have to admit that was my view some years ago. Then I got sick of a) not being able to properly validate anything - e.g. does OBSERVATION have a ‘data’ attribute or not? and b) having to include spurious constraints that didn’t say anything, except to signal attribute multiplicity.
Again, here you are mixing two different processes. One thing is archetype parsing and validation, following the ADL syntax and AOM rules: the archetype is parsed and syntactically validated to instantiate the AOM. A different thing is to validate it against an RM in a second phase. In fact, this is no different from the typical steps of any compiler.
sure - that’s how the reference compiler works as well. But if we want to do early compiler stages with no RM available, the cost is adding spurious non-constraints - extra syntax. So let’s say we do that, and now we get to phase 3 validation (or whatever it is in each tool): what happens if there is a mismatch between that syntax marker for multiple-valued attributes and what the RM says? Now we have a source of errors we did not have before.
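The new error class being described can be sketched as follows. This is a hypothetical toy model of the phased validation, not the reference compiler; the class names, the `marked_container` field, and the error label are all invented:

```python
# Hypothetical sketch of phased validation; all names are invented for illustration.

# Toy RM fact table: which (class, attribute) pairs are multiply-valued (containers).
RM_IS_CONTAINER = {("CLUSTER", "items"): True, ("ELEMENT", "value"): False}

def phase1_parse(node: dict) -> dict:
    """Phase 1: purely syntactic - trust whatever marker the source carries."""
    return {"class": node["class"], "attr": node["attr"],
            "marked_container": node.get("marked_container", False)}

def phase3_rm_validate(parsed: dict) -> str:
    """Phase 3: cross-check the syntax marker against the RM. A mismatch is a
    class of error that only exists because the marker duplicates an RM fact."""
    rm_says = RM_IS_CONTAINER[(parsed["class"], parsed["attr"])]
    if parsed["marked_container"] != rm_says:
        return f"error: marker says container={parsed['marked_container']}, RM says {rm_says}"
    return "ok"

# A source node carrying a marker that contradicts the RM:
bad_node = {"class": "ELEMENT", "attr": "value", "marked_container": True}
print(phase3_rm_validate(phase1_parse(bad_node)))
```

The point of the sketch: the `marked_container` field adds nothing the RM does not already know, but it can now disagree with the RM, so phase 3 needs an extra check and an extra error category.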
So the question is (in my mind at least): is it worth having that, so we can perform some basic parsing and partial validation of an archetype? Consider also:
- the question of how the syntax marker got there in the first place: presumably with an editor. All editors of the future will obviously be RM-aware, because every user of the current tools that are not RM-aware complains about it (and rightly so). So if we assume editors are RM-aware, why would we assume compilers are not RM-aware?
- I think it is also reasonable to assume that there will only be a small number of good quality compilers in the end, so here again, it seems hard to see why they would not all be RM-enabled.
- If you agree that the RM is likely to be present in a compiler, even for say stage 3 or 4 validation, then it is available… and it can be used at any point. Why not use it earlier on to detect some basic errors as well?
A marker for an attribute being multiple or single is not the only thing you have to have in archetypes to correctly express attributes. You also have to have [at-codes] on child objects of container attributes. But if there is only one child under a particular attribute in a particular archetype, then it still has to be marked with an at-code, whereas under a single-valued attribute it doesn’t. So what happens if the attribute is marked ‘container’ but the child object(s) do not have at-codes? The only way to deal with that properly is with the RM, because you have to determine what kind of attribute it really is before you can say whether the at-codes have to be there or not. So yes, you could ignore it at that stage in parsing and generate AOM objects, but they are likely to be wrong AOM objects.
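The at-code rule above can be sketched as a check that is only decidable with the RM in hand. Again, the lookup table and function are invented toy structures, not real tooling:

```python
# Illustrative sketch only: a toy RM lookup deciding whether child nodes need at-codes.

# Toy RM fact table: which (class, attribute) pairs are container attributes.
RM_IS_CONTAINER = {("ITEM_TREE", "items"): True, ("ELEMENT", "value"): False}

def check_at_codes(rm_class: str, attr: str, children: list) -> str:
    """Children under a container attribute must each carry an at-code, even
    when there is only one of them; under a single-valued attribute they need
    not. Only the RM can say which case applies, so this check cannot be done
    at the purely syntactic stage."""
    if RM_IS_CONTAINER[(rm_class, attr)]:
        missing = [c for c in children if c.get("at_code") is None]
        if missing:
            return "error: child object(s) under container attribute lack at-codes"
    return "ok"

print(check_at_codes("ITEM_TREE", "items", [{"at_code": None}]))  # container: error
print(check_at_codes("ELEMENT", "value", [{"at_code": None}]))    # single-valued: ok
```

A parser that guessed from the syntax marker alone would accept the first case or reject the second, producing exactly the "wrong AOM objects" described above.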
These are some of the reasons I find it hard to see much value in a specific marker for container attributes, or doing much parsing of archetypes with no RM present.
I would be interested to know what some others think (I know what I think, and what the UPV group thinks;-)