An alternative (and maybe complementary/adjunctive) strategy might be to delegate some of the dynamic flexibility/composability to the terminology standards of the archetype adopters.
I thought we were not happy with the distinction between templates and archetypes. Wasn’t losing that distinction one of the primary aims of ADL 2?
I may be diverging from the common thinking, but I’ve always used the term ‘models’ to indicate both templates and archetypes. This also explains why I am uncomfortable about these being spawned ad hoc while you are not that concerned.
I think templates do and should carry semantics beyond being an aggregator of archetypes, and if that line of thought makes sense, then so should my concern about their dynamic creation. I admit it may be subjective, but this is a point I’d stick to if I took part in building clinical software based on openEHR.
Well, if you do it at design time, you can assign different ids to the redefined slot, e.g. at0078.1 for blood_pressure and at0078.2 for heart_rate. That’s much harder to do at run time, because the template may be in different systems that are unaware of the existence of at0078.1. And I think you want the operational template to have all slots filled/closed, because you’d want a definitive schema for the persisted LOCATABLE.
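To make this concrete, a minimal ADL2-style sketch of such a design-time slot redefinition might look like the following (the section, node ids and archetype ids are illustrative, not from a real template):

    -- hypothetical specialised template fragment: the original at0078 slot is
    -- redefined into two narrower slots, each with its own id-code
    SECTION[at0077] matches {                                -- 'Vital signs'
        items matches {
            allow_archetype OBSERVATION[at0078.1] occurrences matches {0..1} matches {  -- 'Blood pressure'
                include
                    archetype_id/value matches {/openEHR-EHR-OBSERVATION\.blood_pressure\.v1/}
            }
            allow_archetype OBSERVATION[at0078.2] occurrences matches {0..1} matches {  -- 'Heart rate'
                include
                    archetype_id/value matches {/openEHR-EHR-OBSERVATION\.pulse\.v1/}
            }
        }
    }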
Agreed, but that’s not the issue for me. It’s that you’d want a reliable path, for querying and ‘hardcoding’. I can’t think of a concrete real-world need, but my intuition says we shouldn’t allow operational templates to have incompletely defined paths.
Well the problem is, selecting the right archetype for the data you want to record requires niche clinical modelling expertise to do reliably. Example mistakes are recording an eval.diagnosis where a screening archetype is a better fit, or persisting an ankle BP for an ankle-arm index in an obs.blood_pressure. I’m not saying it shouldn’t be allowed, just that there’s an increased risk of using the wrong archetype, and we should have a way of managing that at design time, both at the template and the query level.
What do you mean by this one? Does it relate to the Jira issue?
ADL2 is not losing the distinction between archetypes and templates. What it does is make template inherit from archetype and add a template overlay attribute. So technically a template is an archetype with potential template overlays, and semantically this should also be true. So the semantics of what a template is are a bit stricter in ADL2 vs ADL1, but I think it doesn’t have a practical impact.
So also from an ADL2 perspective, templates indeed carry semantics beyond aggregating archetypes. They also specialise those archetypes. And the template is for/about a specific use-case.
I also agree the template is a model, as is the RM (and the AM itself). But I also mostly agree with Tom’s implication that most of the semantics in ‘the model’ (RM + archetype + template) are defined at archetype level, and thus it’s fine to allow changing the model at run time. Where I don’t agree, as stated in my message above, is that it’s fine for the operational template to have loose semantics. I think operational templates should be conclusive regarding model and semantics. So a redefinition of a template (‘oet’) by adding a constraint (including a new archetype) should create a new operational template. Ian is suggesting something similar, but wants to do it virtually; I prefer to do it ‘physically’. But I do see the issue that this is a significant shift from how OPTs are created: currently that happens in the design-time tooling, while in the future it would have to be the job of the CDR (or client) at run-time. Given the other changes in ADL2 I think you want to go that way anyway, as Nedap has done. But it requires that the CDR has access to (versions of) all archetypes in the hierarchy at run-time. So I can also see that the added complexity is not worth it for a CDR implementer.
I’m genuinely not trying to be annoying or stubborn, but maybe we’re talking about different distinctions here. The distinction that is hard for my old brain to ignore is between the original templates of ADL 1.4, an XML-based formalism put together by Ocean in a way that does not use ADL at all, and the ADL-based templates we have in ADL 2.0. Once again, subjectivity strikes!
I agree.
That is a natural outcome of the suggestion, yes, and personally I don’t think it is a good idea. But I have only so much time to discuss why, and I cannot do much beyond making a point, which is how the specs are developed anyway.
If the “you” above is me, then no, I don’t.
I think this is one of those things that will introduce fractal complexity that goes beyond what we can encapsulate into some technical feature in a CDR. It is a leaky abstraction in the sense that downstream use cases will have to consider this mechanism if it is used, but that’s just my gut feeling talking.
There’s a lot of helpful input in this discussion though, so thanks to all for their time and thoughts.
Technically there is (almost) no distinction (the fact that templates can contain overlays is about it). However, in terms of domain semantics, we have:
international / national archetypes - reliable statements of info semantics for some topic
(potentially) site / product / something -specific specialised archetypes (e.g. something Nedap or Better or Ocean makes)
templates - combiners and localisers of the above in use-case specific ways. New templates can be made with impunity - mostly locally created for UI forms, documents or other reasons.
Since Templates are specialised archetypes, they can’t break any archetype semantics - whether pre-built or just-in-time (JIT).
Exercise: try to think of any new semantics templates add beyond:
archetype choice
archetype element removal
narrowing (which might just be adding) of constraints, including terminology value-sets
extra annotations
minor UI tricks like hide-on-form
We’re just talking about templates being created JIT based on user choice of some particular Observations or similar. Many archetypes already have slots that allow more or less any archetype of a certain RM class in them.
You can do that if you want, but it’s not necessary, and not done in numerous archetypes that contain slots that are essentially open.
Ok but here we are talking about clinical knowledge. If the GP or whoever doesn’t know what they’re doing, or can’t drive the EMR system, that’s another thing. But choosing at runtime to create an encounter for patient A that happens to include a) auscultation of the chest, because the patient is coughing, b) inspection of the ears, because the patient gets infections from swimming, and c) palpation of the breast, due to concern about a new mass, is not going to be controversial. And for patient B, a different bunch of things. Similar things happen at discharge, referral writing and so on.
Don’t get me wrong - the importance of clinical modelling cannot be overstated, and all the things I mentioned above need to be modelled properly. But there’s nothing to prevent the design of a good EHR system and application that allows the natural runtime choice of particular information without pre-design. Most patient interactions will be with specialists, and most of the forms and templates can be pre-designed - just not all of them.
You’re talking about a different level of semantics here. If there are multiple systemic arterial BP archetypes (not the case today), then there absolutely needs to be a way of choosing the right one via some UI interaction.
Well the issue is that an ankle BP taken for an ankle-arm index, which is a test for suspected vascular problems, is not a good surrogate for a systemic arterial blood pressure. Currently there is one archetype for systemic arterial blood pressure and one draft for intravascular pressure. My point is it will be (too) hard to build a system that always facilitates the user in picking the right one, and ‘normal’ doctors (without a clinical modelling quirk) will not realise they shouldn’t use the obs.blood_pressure. This specific example can be accounted for, but there are many other non-trivial archetype selections. This is not a reason not to allow archetype inclusions at run time, but it does change the semantics a little bit, which is, I think, partly what @Seref was pointing out. Some semantics are defined in the template, so we should be able to query for the difference.
Why is it not necessary? How else do I construct a path that reliably identifies the node in an operational template that includes (only) an obs.blood_pressure? And if I can’t do that, how do I build an app that takes a VERSION with template_id = comp.GP_ENCOUNTER.v1.0.0, with a UI that renders a BP graph from a BP archetype but can’t render e.g. an ankle-arm index?
I think you are asking valuable questions and making valuable points. I think we have a somewhat different understanding of what a template is. I have some level of understanding of what an ADL1 template is. What complicates this is that the only ADL2 implementation, with which I’m very familiar, has, compared to all ADL1 implementations, shifted the operational template into run-time territory: in the ADL1 tool chain the operational template (.opt XML) is generated by the design tool (right?) and this is the artefact that gets uploaded to the CDR. There exists a (non-operational) template, ‘defined’ in the .oet XML schema, but this is rarely used - mostly as an export format between template designer tools, right? So in ADL1, ‘template’ generally means operational template.
Now in ADL2, templates are serialised in ADL2 (and defined in AOM2 UML) and are technically specialised archetypes. The way Nedap tooling handles this is that you upload the whole specialisation hierarchy into the CDR, and at run-time (currently upload time, not data entry) an operational template is generated. So the operational template is generally not a design-time artefact. So in (Nedap) ADL2, ‘template’ generally means the ADL (not operational) template. (Further confusing this: a template is often called an archetype, since technically a template is an archetype and the semantic difference isn’t very relevant for CDR builders and client app builders.)
It was meant to be you, as in the organisation/developer responsible for implementing a CDR, migrating to ADL2. Sorry for expressing myself badly, I didn’t want to imply you have a desire for building it this way. I was trying to say: there are other compelling reasons for building it this way, which I expect will also apply to your implementation once you migrate to ADL2. But no need to argue. If you want to handle it in the current (or some other) way, that’s fine. I just want to state that the current way of generating operational templates at design time isn’t the only valid way to do it. So if supporting run-time slot filling requires run-time generation of operational templates, that shouldn’t be a reason not to accept dynamic templates in openEHR.
I don’t doubt the challenges here, but you are talking about a pretty specific choice that a) needs careful modelling and b) requires that the doc taking a BP on the ankle has a way in the application of reporting that correctly. This is a different topic from the original one in this thread.
Because the path through an archetype root point looks like /content[openEHR-EHR-OBSERVATION.whatever.v1.2.3].
There have been discussions on the exact format of the paths, but the archetype id is always available at the root point.
Paths inside OPTs are not very important however. What is important is that the data committed will always have its archetype id at its root point.
Now, I said earlier that having distinct slot codes for different purposes is not necessary, i.e. in the strict technical sense. But it is useful. If you wanted to distinguish e.g. principal diagnoses from associated problems in a Composition archetype, and use the EVALUATION.problem_diagnosis archetype for both, it would obviously be better to have 2 slots, not one, because in this situation the archetype is more general than the slot meaning.
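A rough sketch of that two-slot arrangement, assuming illustrative node ids and slot names (they are not taken from any real archetype):

    -- two differently-named slots in a COMPOSITION archetype, both constrained
    -- to the same EVALUATION.problem_diagnosis archetype
    content matches {
        allow_archetype EVALUATION[at0002] matches {    -- 'Principal diagnosis'
            include
                archetype_id/value matches {/openEHR-EHR-EVALUATION\.problem_diagnosis\.v1/}
        }
        allow_archetype EVALUATION[at0003] matches {    -- 'Associated problems'
            include
                archetype_id/value matches {/openEHR-EHR-EVALUATION\.problem_diagnosis\.v1/}
        }
    }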
To me the topic is: how do we support unpredictable archetype inclusion requirements at run time. Right?
I think we all agree on the following:
In current implementations the only way to do that is to include all potential archetypes in a god template.
The god template is a bad way to do it
A little more controversial: (@Seref might disagree)
We need to be able to include archetypes at run time
We should explicitly support allowing slot fills at run time
We should support node additions at run time
Templates have semantics in themselves. No, they can’t break the semantics, and usually don’t change the semantics of the archetype very much at all: narrowing the meaning is allowed, but practically this doesn’t change the meaning much. E.g. not allowing a ‘mean arterial pressure’ to be recorded in a blood pressure archetype in a GP encounter template doesn’t much change the meaning of what a blood pressure is. But they do add a lot of context information to the data. I would go as far as saying the template can’t change the semantics of the clinical concept ‘blood pressure’ at all, only the semantics of the data conforming to that archetype. But it’s a bit arbitrary, since a specialised archetype, e.g. a Dutch blood pressure, can definitely change the semantics of the parent archetype significantly (but not break it). And since templates are technically specialised archetypes, I’d say it’s more of a consensus rule than a specifications rule. (We might want to try to specify that rule / semantic difference, but it’s not clear to me we understand the difference well enough to do so now.) @thomas.beale do you agree with the above paragraph?
My example shows:
letting users pick archetypes at run time will lead to usage of an archetype that is a bad fit.
This creates data: e.g. ankle pressure of 80/50mmHg, incorrectly understood to be defined by the Blood_pressure archetype, so incorrectly assuming 80/50 is a [systemic arterial] blood pressure.
This is a clinical safety issue, for example if a low blood pressure algorithm triggers administration of blood pressure increasing drugs.
It’s not trivial to select the right archetype for data recording. It can’t easily be done computationally. LLMs can help a lot, but they are probabilistic, so inherently fallible (as are humans, btw). And users (at least normal doctors) are bad at considering data quality when recording data.
Further remarks:
Now the reverse is also true: if you include a limited set of archetypes in the template and don’t allow inclusion at run-time, the user will be pushed to fit all data into one of the preselected archetypes.
So it’s not simple, but a trade-off. Since it’s arbitrary and a clinical safety issue, this should be a clinical modelling issue.
I agree my example is very specific. I think less than 1% of blood pressures recorded are ankle pressures in vascular patients. But I disagree this makes it off-topic. It’s just an example to show the risks of unconstrained archetype selection by users. But my point is the semantics is different, so we need a way to control both the inclusions at run time and the data interpretation.
Now the question/debate is what is currently supported in the specs, how to apply that safely, and whether the specs need to change.
My position on what we should aim for functionally:
So clinical modellers need to be able to create a template that explicitly specifies the constraints on inclusion of archetypes at run-time. E.g. for a GP encounter, add a few standard ones like eval.story and reason for encounter, plus a section for observations with a slot for any observation, but no other entry types, etc. (This is already supported with a regex on the slot; see the sketch below.)
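A minimal sketch of such a slot constraint, assuming illustrative node ids and a hypothetical section name; the regex include is the existing slot mechanism:

    -- GP encounter section with a slot that accepts any OBSERVATION archetype
    -- at run time, but no other ENTRY types
    SECTION[at0010] matches {    -- 'Observations'
        items matches {
            allow_archetype OBSERVATION[at0011] occurrences matches {0..*} matches {
                include
                    archetype_id/value matches {/openEHR-EHR-OBSERVATION\..*/}
            }
        }
    }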
Do we agree on this aim?
Now re the specs:
I think the above is implicitly supported by the (ADL2) spec, but is not understood/supported that way by implementers, because the consequences are not understood. @Seref points out a serious safety concern, with high impact on how he views the semantics of the models. So this brings us to the question of how to apply this safely. My suggestion is to generate an operational template, with a unique id, every time a slot is filled, so also at run time. Ian suggested a ‘virtual template’ concept, which is a different implementation of the same idea imho. It’s probably more aligned with how operational templates are currently handled in ADL1 (generated at design time), so easier for ADL1 implementers. The downside to me is that the operational template itself is no longer sufficient to interpret the data.
Currently the OPT2 spec states:
all slot-fillers and direct external references (use_archetype nodes) have been resolved and substituted;
all closed slots are removed;
all attribute (C_ATTRIBUTE) nodes that have existence matches {0} (i.e. are logically removed) are removed;
so it doesn’t say all slots must be filled, only that if they are filled, they need to be filled conclusively. And if closed, they are removed. Other nodes that can’t be changed (zero existence can’t be further constrained) are removed as well. This to me means you can’t add slots or nodes at run time - otherwise you may add a node that has already been removed in the template/archetype, breaking conformity with that template. I think stating that references should be resolved conclusively, and thus can’t be changed at run-time, conflicts with allowing slot fills at run time.
So this to me should be expanded to state that in an operational template all slots should be conclusively filled or removed, and no nodes may be added.
This would (again) give a conclusive schema for an operational template. So if some data refers to a schema that defines it, we know we need only that schema to understand the data.
Now, if we agree on the above paragraph, we can only support run-time inclusion of archetypes by generating (new versions of) an operational template at run-time.
I disagree that a slot is an archetype root point. The slot is a node itself (in ADL2). The slot has a child object which is the root of the archetype. So I would say the path is /content/at0078.1[openEHR-EHR-OBSERVATION.whatever.v1.2.3]. This is also (close to) how Nedap implemented it, see the example in the linked Jira ticket. Ian told me that in ADL1 the slot is not considered a node in itself, so it’s an (unintended?) change. But I think it’s valuable, especially since you’d want to be able to name the slot, which requires a node id - especially if the slot included multiple archetypes. E.g. the comp.gp-encounter template has a slot for ‘GP observations’; it’s useful to be able to name (and query!) that. In your example, if there were multiple slots in the archetype/template, how would you distinguish archetypes included under slot1 vs slot2?
Exactly, your example is better than mine. So how would you distinguish the problem_diagnosis in the ‘principle_diagnosis’ slot from the problem_diagnosis in the ‘associated_problems’ slot?
Well, it shows that such a thing is possible. My counter-argument to that is: a) it’s not that common a need; b) where it does occur, it can be properly designed for.
I’m not convinced of this. Archetypes are topic-based; we can show on screen the topics of the archetypes that could possibly fill a certain slot at runtime, e.g. under the ‘Objective’ heading in a SOAP note. But we can greatly reduce that to the subset that could realistically apply in the situation. If it’s a GP encounter, beyond a relatively small set of observations they can do themselves, anything else will be done by a lab, imaging, or a specialist. So maybe 20-30 possibilities for GP observations in an encounter?
Well that’s the current capability. Clinical modellers should probably define more constrained limits on slots in many cases (it’s often ‘any’ today).
Well you don’t generally do data interpretation (i.e. once data is committed) with OPTs, but with the archetypes. Or am I missing something here?
Some of this might need to be changed. However, one other thing to keep in mind is that there is another way to do archetype inclusion at runtime. If you use a use_archetype[…] node instead of a slot in most places (we got rid of slots that only point to one archetype, because that’s just reuse), the system should allow any specialisation of that archetype. E.g. if you specified an ‘exam’ Cluster archetype, then any examination archetype would be allowed.
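A rough sketch of what such a direct reference could look like in ADL2 (the CLUSTER node id and exam archetype id here are illustrative assumptions):

    -- a direct reference to a named archetype in place of a regex-constrained slot;
    -- per the suggestion above, any specialisation of the referenced archetype
    -- would also be acceptable at run time
    items matches {
        use_archetype CLUSTER[at0003, openEHR-EHR-CLUSTER.exam.v1] occurrences matches {0..*}
    }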
Making this work means using specialisation properly, which is not done in the ADL 1.4 archetypes, because specialisation doesn’t really work in ADL 1.4.
I am investigating whether use_node refs can be expanded to be of the form use A OR B OR C. Then they do what slots do, but without the (really not very good) regex constraint approach.
Anyway, this approach is something to keep in mind for the overall mix. If we go this way (it makes for much easier templating), then the number of archetype slots really needed is quite small - I think only in the places where runtime choice is needed.
You can’t add slots, but you can still fill an open slot with something, so you can ‘add’ something.
If you accept that archetype inclusion at runtime can be done from the specialisation subsumption of a statically declared archetype in a use_ref node, there can still be a lot of choice available at runtime, but it’s appropriate choice. Right now, this possibility is not available for most archetypes, because there is no specialisation. But there could be (and IMO should be).
Technically, a use_archetype[…] statement specialises an archetype node; it’s not a child node, it logically replaces that node. See here in the spec.
So the path in the archetype is /content[at0078.1] and in the template, where this is filled, it will be /content[openEHR-EHR-OBSERVATION.whatever.v1.2.3] (there are better ways to do OPT paths, as I have proposed in the past: /content[at0078.1:openEHR-EHR-OBSERVATION.whatever.v1.2.3]).
So filling a slot doesn’t generate a new object level with respect to the slot.
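To line up the two views, the path shapes being discussed in this thread look roughly like this (at-codes and the archetype id are the same illustrative ones used above):

    -- slot kept as a node in its own right (the Nedap-style path described earlier):
    /content/at0078.1[openEHR-EHR-OBSERVATION.whatever.v1.2.3]

    -- slot logically replaced by its filler (the form described just above):
    /content[openEHR-EHR-OBSERVATION.whatever.v1.2.3]

    -- the combined form proposed in the past for OPT paths:
    /content[at0078.1:openEHR-EHR-OBSERVATION.whatever.v1.2.3]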
That will just be done in the usual way - two different id-codes (i.e. in ADL 1.4, at-codes being used to identify nodes; I get tired of having to say all that… hence ‘id-codes’).
For the sake of discussion, the term “change semantics” should be defined here.
A “Dutch BP” archetype can’t change the semantics of a “human BP” archetype. If that happens, someone is doing something wrong on the modeling side.
“Changing semantics” would mean something different is measured, not just that what’s measured is in, or has, a different context. IMHO “meaning” can’t change just because context changes.
Please define what “change semantics” means in this discussion.
I still remember hearing about “100s of blood pressure profiles in FHIR”
I thought that the international BP should cover all humans. What is different in the Dutch version?
I think there are multiple ‘directions’ of change. There’s a widening of meaning, e.g. intravascular pressure is wider in meaning than (systemic arterial) blood pressure: a systemic arterial blood pressure is an intravascular pressure; a venous pressure is also an intravascular pressure but not a (systemic arterial) blood pressure. A widening change in meaning is not allowed in specialisation.
The reverse, a narrowing in meaning, is allowed in specialisation. So it would be acceptable to make blood pressure a child archetype of intravascular pressure. (Now whether that’s a good modelling choice is a very different question.) But it’s definitely a change in meaning, right?
I feel there also exist ‘lateral’ changes in meaning; this one is a little harder to define. But archetypes define their meaning in all parts of the archetype. The major parts are of course the ‘concept’, ‘description’ and ‘definition’. The concept name and description are a bit of a definition of the concept, but definitely not conclusive. This is why there’s also ‘purpose’, ‘use’ and ‘misuse’, which help to further clarify the concept and define the meaning of the (concept the) archetype (models).
Now, when modelling a template, the job is to properly interpret and apply this meaning to a dataset. This is usually arbitrary. One example is the screening family of archetypes, e.g. eval.problem_diagnosis vs OBSERVATION.problem_screening. They are about a very similar concept: whether the patient ‘has’ a diagnosis. But the meaning of the information is different: one is saying that whoever enters the information based on the eval.problem_diagnosis is making the assessment that the patient now has a diagnosis, a de novo (new) ‘fact’. The obs.problem_screening has much looser semantics: the patient says some family member told her she has a diagnosis. The former may well trigger a medication prescription for that diagnosis, while the latter shouldn’t. Now these meanings are a bit of a continuum, not distinct. So it’s arbitrary which one is the best fit for a specific template describing a specific clinical ‘form’. So I would say it’s a ‘horizontal’ change in meaning. In this case the horizontal difference in meaning is big enough to warrant its own archetype, but a smaller horizontal change in meaning could imho be a specialised archetype, like Dutch-blood_pressure.
The example Dutch-blood_pressure is a little careless of me, because it doesn’t really exist. There is a Dutch model for blood pressure, and there’s a draft ‘localisation’ of the CKM archetype, currently in the form of a mapping to, and template on, the CKM archetype: Bloeddruk | openEHR Nederland. In the ‘commentaar’ column you can read a bit on the differences in meaning between the two. I haven’t checked lately, but I expect you will find all three types of changes: narrowing, widening and ‘horizontal’ change.
Now I think one lesson (at least for me) is that meaning is a human brain thing, so it’s associative, not deterministic, and different humans interpret the same definition in different ways. This is also an answer to @thomas.beale as to why you fundamentally can’t solve archetype selection computationally. The definition of an archetype is language, not logic statements. Practically you can get close enough to be very useful, as Thomas described. And there are other helpers, like ontologies.
(Study inter-observer variance and neuroscience for learning how the brain makes decisions, e.g. on ‘meaning’, if you want to understand more of what I’m saying here.)
(I will think a bit more on Thomas’s other remarks before I can respond in a useful way.)
Two different blood pressure archetypes mean that “out of the box” interoperability is lost.
However I’m glad to see that the Dutch Bloeddruk is fully LOINC and SNOMED CT coded. This means that the interoperability is possible (in an automated way)
I hope that CKM archetypes will one day be coded in a similar way. This would support all kinds of automated mapping.
What I understand from what you say is that, for me, when you say “meaning” and wider/narrower, it’s all about the scope, not really the meaning of the concept being modeled.
The concept should be well described in the archetype’s metadata (this is the main semantic definition, since it explains purpose, use and misuse) and node descriptions. If the concept modeled today needs extra information, it can be because:
current requirements (scope) were incomplete and the archetype needs to be updated and its version increased (expanding the scope)
scope needs specialization (narrowing the scope)
For instance, if the BP archetype is missing some specific type of BP, but it’s still a BP, it might need to be added to the existing archetype, if the scope of the current BP archetype is to represent all possible BP readings. In this case, specializing might create the same interoperability issues as creating a new archetype.
On the other hand if the new requirement requires, for instance, recording information about a different thing, though it might be similar to an existing archetype, it might need a different archetype.
Another point is that openEHR doesn’t model clinical concepts per se, it models the information recorded about a certain concept with its context.
I still don’t see that all of these are actually changes in “meaning”. The archetype defines the meaning/semantics for recording information about a concept, which is based on some requirements and has some scope, and we all know requirements and scope change (all the time). But in general, strictly speaking, there is no change in “meaning”, unless the initial requirements or modeling were incorrect in any way (which can happen).
One comment about the original question, and the many comments about it: I think it’s OK to have a pretty generic model/template at design time, and have those open/generic constraints defined at runtime. But I don’t think archetypes or templates can define at design time all the containers needed for dynamic content at runtime. Some also commented that having open containers at design time might make querying more difficult, but if new constraints are added at runtime, then queries could also be built over those runtime constraints, because at some point the system knows about them.
Sorry, I don’t think I can add more value to this discussion, just sharing an opinion. <3
Well those kinds of differences are understood as different topics in models anyway. The blood_pressure archetype is documented as being only used for systemic arterial pressure, i.e. the usual surrogate for general health. To be ontological, the thing we are interested in measuring is the continuant ‘systemic arterial pressure’, not CVP, not pulmonary circuit pressure, not some other continuant. We usually do that by taking measurements at a brachial artery, sometimes in other places where we know how to adjust the values to be comparable with other values. Comparability is the test for whether we are measuring the same thing. The standard brachial-site way of measuring BP is a specific method: shutting off flow, then listening for sounds when we re-open it and watching a pressure gauge. Or using a modern machine that does that for us. That’s one way of measuring an IVP at a particular site, one that gives us a good proxy value for the pressure in the systemic circuit. So that’s one archetype.
IVP is a technical concept - but what continuant you measure depends on where you measure it.
What is technically allowed in a specialisation of a constraint model is a narrowing of the constraints. But when you talk about IVP and systemic arterial pressure, you are talking about ontological categories, and asserting something like: systemic arterial pressure (which we think is a measurement of a continuant defined by the class intra-vascular pressure AT-SITE {brachial artery OR }) IS-A intra-vascular pressure.
All I am saying here is: don’t confuse the specialisation relation among archetypes (info models) with the IS-A relation among ontological categories. Archetypes are models of epistemic entities (what info can we obtain from the world, which gives us knowledge of individuals) and ontological entities (universals aka categories of the real word).
The interesting question is whether a specialised archetype can have a parent that not only has wider constraints, but is actually an ontologically more general category. It’s not that easy to answer, because the ontological category/ies in most archetypes is currently implied; we don’t yet have ontology markers that connect the archetype elements to the ontological classes or categories to which they stand in the IS-ABOUT relation.
That’s the problem.
These relate to when this information model can be used - it’s essentially trying to make sure users don’t use it to document the wrong ontological entities.
Questions like this can potentially be clarified by trying to distinguish the related ontological categories first, and then sorting out what the archetypes should be. Archetype construction to date has mainly commenced from the practical knowledge of clinical modellers - they know their information, but probably don’t know the formal ontology behind it (I’m fairly sure this is not being taught in med school even today).
Today, that’s sort of true, due to the lack of linking of archetypes to ontologies, and in many cases the lack of ontology for the specific entities in question anyway.
Archetypes are mostly ‘ok’ to use though, because they are built by people from the domain who, despite not being formal ontologists, understand their information pretty well, and have a good intuitive knowledge of some areas of ontology, e.g. anatomy (FMA area), physiology (OGMS) and so on.