# ADL formalisms

**Category:** [ADL](https://discourse.openehr.org/c/adl/40)
**Created:** 2024-06-07 08:59 UTC
**Views:** 808
**Replies:** 35
**URL:** https://discourse.openehr.org/t/adl-formalisms/5333

---

## Post #1 by @joostholslag

Following a discussion in the SEC yesterday, where there was agreement on all the outstanding issues for the migration of the community to ADL2.4 (a huge success for the community, proving a willingness to work together on difficult topics and to make significant compromises/investments for a shared future), a discussion came up about what the migration to ADL2.4 will mean for all the different ADL artefacts and their formats/serialisations. This will become important during the migration of the different software components in the ecosystem (ADL editing, CKM, CDR). It's quite a niche topic, so mostly relevant to the SEC and implementers.

Currently there are many different data formats and serialisations:

- formats: ADL, JSON, XML, ODIN
- file extensions: .opt .opt2 .adl .adls .adlt .json .oet
- artefacts 1.4: archetype (adl, json, xml), adl1.4 template (oet, Better's native json), adl1.4 operational template (.opt), Better's web template (json?), SDT, TDD, and probably others
- artefacts 2.4 (unchanged from [2.0-2.4]): adls (default differential archetype), adl (flattened archetype), adlt (differential template, technically identical to adls, a Nedap-invented file extension), opt2 (unspecified)

In my mind there will be two main artefact formalisms/serialisations in the future:

1. Design artefacts: archetypes and templates will be in adl2.4 format and serialisation. File extensions will be .adls for differential archetypes and .adlt for differential templates (and let's try to deprecate the flattened archetype .adl formalism). From adl3 on I'd like to change the serialisation of these artefacts to YAML (file extensions to be decided; we can consider JSON, but I'm not in favour of including it in the specs).
2. Operational artefacts: a flattened format as a use-case-specific dataset format; this will be used mainly at the API level IMHO. So probably the OpenAPI format makes sense (with JSON and/or YAML serialisations, I don't care). This should 'replace' the Better-specific web template, and the openEHR TDD, SDT, etc. IMHO. And if needed (probably useful for validation and less implementation/language dependency), we can retain the .opt2 in OpenAPI YAML, which still contains at-codes instead of (English) language keys, so it is not recommended for use by client app developers.

There are probably a lot of details to fill in, and probably some major controversies, so let's expect a bit of chaos in this topic. Very curious for your thoughts @SEC @SEC-experts

Edit: the current formats are described in https://specifications.openehr.org/releases/ITS-REST/latest/simplified_data_template.html#_json_formats

Please keep discussions on how to achieve/generate/validate the formalisms on other topics, like this one: https://discourse.openehr.org/t/json-schema-and-openapi-current-state-and-how-to-progress/1385. I'd like to focus this topic on discussing a desired scenario and how to simplify and standardise the currently available openEHR formalisms, since their number and complexity are a barrier to entry for openEHR.
---

## Post #2 by @siljelb

3 posts were split to a new topic: [What's new in ADL2.4?](/t/whats-new-in-adl2-4/5334)

---

## Post #3 by @siljelb

2 posts were merged into an existing topic: [What's new in ADL2.4?](/t/whats-new-in-adl2-4/5334/4)

---

## Post #4 by @joostholslag

@pablo let's do [the discussion](https://discourse.openehr.org/t/archetypes-in-yaml/5357/17?u=joostholslag) here:

[quote="pablo, post:17, topic:5357"]
+1 on the comment by @sebastian.iancu I don't want to start a discussion on which format is "better", just wanted to point out those formats were mentioned in the NL SEC meeting.
[/quote]

[quote="pablo, post:17, topic:5357"]
is that for JSON there is a schema, which is more convenient for validation, though there are some adaptations to use JSON Schema to validate YAML, and there is certainly a simpler JSON ↔ YAML conversion than XML or other standard formats to JSON.
[/quote]

[quote="pablo, post:17, topic:5357"]
Though my personal preference would be not to have one single "preferred" format, but instead having a standard serialization and deserialization process to/from many formats to the AOM 1.4 and 2.x, that allows then to have bidirectional format transformations like format1 → AOM → format2 (change format1 and format2 to whatever you want). That way we can support multiple use cases. For instance, if I need to display an archetype on a web app, I would prefer JSON because of the browser's native support for JS. For storing ADL I would prefer YAML because it's smaller. So using the best format for the job.
[/quote]

Ideally I'd have ADL artefacts in YAML primarily, because of its (subjectively) better legibility than JSON; do validation using JSON (or YAML) schema; and work together on an export/serializer to JSON (in this case of the archetype, not the AOM/ADL meta schema). Of course I'm not against serialising archetypes to JSON as well. But the risk I see if we don't pick a single preferred format for hand editing is that we end up with a lowest common denominator.

[quote="sebastian.iancu, post:13, topic:5357"]
yaml comes besides readability also with some advantages over json, like comments, typing, tagging
[/quote]

These are some important advantages I think we wouldn't want to lose. Also tool support can become confusing. A 'single' conversion algorithm from YAML to JSON would be much more scalable than each tool having to support both YAML and JSON editing and conversion.

[quote="damoca, post:12, topic:5357"]
An archetype is not a simple configuration file, as the examples provided show. It is a very nested structure where the use of brackets would be very welcome, instead of having to control the indentation levels by spaces. Just imagine having to add a sibling node here in a plain text editor. It is not about the number of keystrokes, but about the possibilities of introducing errors.
[/quote]

This is indeed a major disadvantage of YAML.

---

## Post #5 by @pablo

[quote="joostholslag, post:4, topic:5333"]
Ideally I'd have ADL artefacts in YAML primarily, because of its (subjectively) better legibility than JSON; do validation using JSON (or YAML) schema; and work together on an export/serializer to JSON (in this case of the archetype, not the AOM/ADL meta schema). Of course I'm not against serialising archetypes to JSON as well. But the risk I see if we don't pick a single preferred format for hand editing is that we end up with a lowest common denominator.
[/quote]

Even if in practice it happens, archetypes are not meant to be edited manually; that's why we have editors, and why IMHO the format is not important, not even for legibility. There is high risk in manually editing things that have an underlying schema, since any change can break the whole thing and then make it incompatible with editors - which is why we should rely on editors.

Technically we need formats for storage and exchange, even for displaying. For storage and exchange you would use the smallest format, that's YAML in most cases, while for displaying you would use the most native format. For instance, for web visualization that's JSON (since JSON is JavaScript and JS is supported by all browsers, so you don't even need to parse it). On the other hand, if some tool uses an XML database and wants to store archetypes, maybe the XML representation is better.

I prefer to discuss based on use cases rather than personal preferences, because personal preference can't be discussed. If we don't prefer one format over the others, but just have standard serializers and parsers for each format, we can convert from/to any format (that's "model based transformation", for instance YAML ==(parse)=> AOM instance ==(serialize)=> JSON, or JSON ==(parse)=> AOM instance ==(serialize)=> YAML). I would discourage direct format transformation (YAML ==> JSON) since it's costly and more difficult to maintain (a single change on one format affects the whole thing).

---

## Post #6 by @borut.jures

[quote="pablo, post:5, topic:5333"]
that's "model based transformation", for instance YAML ==(parse)=> AOM instance ==(serialize)=> JSON, or JSON ==(parse)=> AOM instance ==(serialize)=> YAML
[/quote]

If we only discuss YAML and JSON as possible formats, the conversion should be direct, using standard libraries for converting between them, and thus removing a dependency on "less" used and tested AOM serializers.

[quote="pablo, post:5, topic:5333"]
I would discourage direct format transformation (YAML ==> JSON) since it's costly and more difficult to maintain (a single change on one format affects the whole thing).
[/quote]

This doesn't sound right. A well supported and maintained free & open source library is surely not more expensive than asking openEHR vendors to do the same conversion with custom serializers? Such conversions are widely used by non-openEHR projects, so we can expect that the libraries will be maintained and tested.

[quote="pablo, post:5, topic:5333"]
For storage and exchange you would use the smallest format, that's YAML in most cases, while for displaying you would use the most native format.
[/quote]

Both YAML and JSON are text formats and "small" compared to other data we store (e.g. images). I would expect that each system stores archetypes/OPTs only once. The difference in % might be large, but in KB it should be insignificant.

---

If only editors are used to edit archetypes, they could save them in YAML and JSON. Or even only JSON.

p.s. As an engineer it seems logical to only use JSON for archetypes and OPT2; however, Silje and Thomas are saying there are "many" editors who edit archetypes by hand. It would be great to hear what "many" means in this case and if they only make "small" changes by hand (which wouldn't be that painful in JSON).

---

## Post #7 by @joostholslag

[quote="borut.jures, post:6, topic:5333"]
however Silje and Thomas are saying there are "many" editors who edit archetypes by hand.
It would be great to hear what "many" means in this case and if they only make "small" changes by hand (which wouldn't be that painful in JSON).
[/quote]

Very arbitrary, but: my best guess would be that it's maybe a hundred to a thousand people worldwide. Most of those will be developers who (arguably) would be OK with editing JSON as well. The main audience should be clinical modellers, but most (>90-99%) of those will only edit using a GUI.

[quote="joostholslag, post:10, topic:5357"]
I don't think writing adl by hand to produce an archetype is really done by anyone. Mostly it's done 99% from a gui editor. But making specific edits for edge cases or tool failure by hand in adl is a key requirement imho.
[/quote]

So yes, it will mostly be small changes. But already the jump from GUI to editing ADL by hand is huge. I think the jump to JSON will be too discouraging for people like me. FWIW I've edited quite a few (>10) YAML files, but never JSON; it feels just too hostile/machine-like.

---

## Post #8 by @ian.mcnicoll

It is certainly not 'common' to have to edit ADL manually, but it is definitely needed occasionally.

I've edited the YAML samples to use 2 spaces, and I think the nesting is probably visually not an issue; I presume a decent editor will help validate incorrect spacing, e.g. if new content is added.

One disadvantage of JSON is that comments are not supported, but TBH these are only ever used, AFAIK, to orientate node identifiers in ADL, e.g.

```
ELEMENT[at0003] occurrences matches {0..1} matches {    -- Document type
    value matches {
        DV_TEXT matches {*}
    }
}
```

Perhaps the use of SNOMED-like piped rubrics on the nodeId would be a better option. This was discussed in connection with ADL paths. Always optional and always ignored.

```
children:
  - rm_type_name: "DV_TEXT"
    node_id: "at0003 | Document type |"
```

I'm pretty agnostic re JSON and YAML. I think both would be editable to the level we require, as long as some of the 'long-hand' expressions from raw AOM are compressed, e.g. multiplicity and slot constraints.

---

## Post #9 by @borut.jures

Unsupported comments in JSON are a "feature". I use all the "comments" and documentation in my generators to include as much helpful info as possible in the generated artifacts. The node identifiers are taken from the terminology section and added to the nodes. We will not lose these in YAML/JSON.

p.s. I have edited around 100 archetypes myself (mostly fixing small inconsistencies that were caught by strict validation in my tools).

---

## Post #10 by @ian.mcnicoll

Can you give an example of your 'commented' JSON archetype?

---

## Post #11 by @damoca

In these discussions about ADL we tend to forget that ADL is just a format for serialization and exchange; the real modeling formalism is AOM. I mention this because we probably should not waste time discussing choosing one format or the other, but assume that systems should support several formats. For example, we all assume that a CDR will accept a JSON, an XML or an SDT as data instances. We should also assume that a system could accept an ADL, a YAML, or a JSON (or even XML), and the users/consumers of the models will choose the most appropriate format depending on their use case or technological stack.

Just as FHIR does:

![image|690x241](upload://kXamlIy44kRoRV8erHG0Dm9uzsB.png)

The work for the SEC should be to provide the correct schemas for the accepted formats, and not impose a single format for all systems.
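To make the two conversion strategies being debated above concrete, here is a minimal Python sketch. The `Archetype` class and the parse/serialize helpers are hypothetical stand-ins for an AOM implementation, not a real openEHR library API; only the `json` module and PyYAML are real.

```python
import json
import yaml  # PyYAML

def yaml_to_json_direct(text: str) -> str:
    """Direct transformation: stock libraries only, no openEHR model involved."""
    return json.dumps(yaml.safe_load(text), indent=2)

class Archetype:
    """Hypothetical stand-in for an in-memory AOM 2.x object model."""
    def __init__(self, data: dict):
        self.data = data  # a real implementation would validate against the AOM here

def parse_yaml(text: str) -> Archetype:           # format -> model
    return Archetype(yaml.safe_load(text))

def serialize_json(archetype: Archetype) -> str:  # model -> format
    return json.dumps(archetype.data, indent=2)

doc = "archetype_id: openEHR-EHR-OBSERVATION.body_weight.v2\n"
# Both routes yield the same JSON here; they differ in whether an AOM instance
# (with its validation and any format-specific tweaks) sits in the middle.
assert yaml_to_json_direct(doc) == serialize_json(parse_yaml(doc))
```

The direct route is trivially available wherever two formats really are canonical mirrors of each other; the model-based route is what is described above for the cases where they are not.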
---

## Post #12 by @borut.jures

[quote="ian.mcnicoll, post:10, topic:5333, full:true"]
Can you give an example of your 'commented' JSON archetype?
[/quote]

Comments in JSON could use a similar approach as types:

```JSON
"_comment": "some text"
```

I'm not suggesting to add `_comment` to the JSON – I prefer the current `documentation` properties/attributes. I'm not using comments in JSON. I only wanted to point out that `-- Document type` in your example would not require a "comment" in JSON, since the text used is found in the terminology for `at0003`.

---

## Post #13 by @thomas.beale

[quote="damoca, post:11, topic:5333"]
In these discussions about ADL we tend to forget that ADL is just a format for serialization and exchange; the real modeling formalism is AOM.
[/quote]

If we talk about the cADL part of ADL (the `definition` part), ADL is a human-level format like a programming language. All the other serial formats are direct serialisations of in-memory AOM structures. So if we think about hand-editing JSON, YAML etc, you have to know the AOM. There is no 'syntax' to help you remember what to do, and the structures for things that are very simple in ADL, like `occurrences matches {0..1}`, are quite voluminous.

For the rest of an archetype, other than expressions / rules, the AOM structures are mostly maps & lists, and both JSON and YAML will seem fairly natural formalisms, because they are natively based on maps and lists.

EDIT: for validated archetypes & templates, any serial object dump format (XML, JSON, YAML, ...) can be conveniently used to persist and read the artefact, bypassing the original validation - as long as no-one has touched the artefact in the meantime. This means these formats (I'm talking about 100% JSON or YAML, i.e. no cADL or EL left) are good for use in operational systems, for persisting validated archetypes in libraries (like CKM) and so on. It is very useful to be able to deserialise an archetype straight from (say) JSON into memory and not have to use any AOM-level validation.

[quote="damoca, post:11, topic:5333"]
I mention this because we probably should not waste time discussing choosing one format or the other, but assume that systems should support several formats.
[/quote]

This would normally be the case.

[quote="damoca, post:11, topic:5333"]
The work for the SEC should be to provide the correct schemas for the accepted formats, and not impose a single format for all systems.
[/quote]

I would agree with that.

---

## Post #14 by @pablo

[quote="borut.jures, post:6, topic:5333"]
If we only discuss YAML and JSON as possible formats, the conversion should be direct, using standard libraries for converting between them, and thus removing a dependency on "less" used and tested AOM serializers.
[/quote]

I don't think we are settled on just those two. We might also need to consider XML, which has been the alternative to ADL as an AOM serialization format for a while.

One element to consider is that our formats tend not to be 100% what the model is, but are slightly optimized, which makes generic library-based transformations a little difficult, since there is no canonical transformation between the formats we use. Consider the XML and JSON schemas for the RM, for instance: it's not straightforward to do a direct transformation between the formats.

I'm not sure I understand the argument about "less used and tested AOM serializers"; we are actually talking about creating those from scratch, which requires proper testing like any piece of software.
If there are no canonical transformations (just using a library, without introducing custom mappings) between the formats, then direct format transformation has no advantages over model based transformation, and it generates coupling between different formats meant for different purposes. What we know is that all of them should always be transformable to/from a valid AOM instance. On the other hand, model based transformations allow supporting any future formats without touching the current ones, and even allow supporting different flavors of the same format, like a simplified JSON or XML if needed.

[quote="borut.jures, post:6, topic:5333"]
[quote="pablo, post:5, topic:5333"]
I would discourage direct format transformation (YAML ==> JSON) since it's costly and more difficult to maintain (a single change on one format affects the whole thing).
[/quote]

This doesn't sound right. A well supported and maintained free & open source library is surely not more expensive than asking openEHR vendors to do the same conversion with custom serializers? Such conversions are widely used by non-openEHR projects, so we can expect that the libraries will be maintained and tested.
[/quote]

You are assuming such a conversion is possible; we don't know that yet since the schemas are not defined. Based on my experience with RM transformations: I did implement direct transformations between XML and JSON, and that was a pain to maintain for years. When I put the model in the middle and separated out the logic to deal just with XML and JSON parsers and serializers, it was easier to maintain and use. The issue with the RM is that the XML and JSON schemas are not 1-to-1 compatible, so generic library-based transformations need some tweaks to make the transformation work.

Also consider that not all vendors will be able to use those open source libraries you mention (though you didn't mention them specifically), since there are vendors in Java, PHP, .Net, etc. IMO we can't choose the libraries vendors should use either. We should focus on the spec side and let vendors choose which technology to use (openEHR reference implementations, external libraries or their own). Then we can focus on the formats, the schemas, etc.

I understand this generates great debate because of personal preferences and different experiences; I think the focus should be on the specs and use cases more than on specific technologies and our own preferences. Either way, the goal of my proposal in the last SEC meeting was to try to get away from custom openEHR serialization formats and try to adopt more common options for the new AOM.

![Screenshot_2024-06-17_19-30-45|690x252](upload://jElvtqNJDFHclJyGCXld9jVX2dv.png)

REF: https://openehr.atlassian.net/wiki/spaces/spec/pages/2201812993/2023-11-15+16+Arnhem+SEC+Meeting

[quote="borut.jures, post:6, topic:5333"]
however Silje and Thomas are saying there are "many" editors who edit archetypes by hand
[/quote]

I would wonder why modelers are doing such things; maybe modeling tools are not good enough for their needs?

![design vs reality|505x500](upload://njNwPUkRWGxis0WuquxFLhWRPse.jpeg)

---

## Post #15 by @thomas.beale

[quote="joostholslag, post:7, topic:5333"]
Very arbitrary, but: my best guess would be that it's maybe a hundred to a thousand people worldwide. Most of those will be developers who (arguably) would be OK with editing JSON as well.
[/quote]

It's nothing to do with JSON v ADL.
The definition part of a JSON archetype looks like this:

```
"definition": {
  "_type": "C_COMPLEX_OBJECT",
  "rm_type_name": "Direct_observation",
  "node_id": "id1",
  "attributes": [
    {
      "_type": "C_ATTRIBUTE",
      "rm_attribute_name": "data",
      "children": [
        {
          "_type": "C_COMPLEX_OBJECT",
          "rm_type_name": "Node",
          "node_id": "id38",
          "attributes": [
            {
              "_type": "C_ATTRIBUTE",
              "rm_attribute_name": "value",
              "children": [
                {
                  "_type": "C_COMPLEX_OBJECT",
                  "rm_type_name": "Text",
                  "node_id": "id174"
                }
              ]
            }
          ]
        },
        {
          "_type": "C_COMPLEX_OBJECT",
          "rm_type_name": "Node",
          "node_id": "id7",
          "occurrences": {
            "lower": 0,
            "upper": 3
          },
          "attributes": [
            {
              "_type": "C_ATTRIBUTE",
              "rm_attribute_name": "items",
              "children": [
                {
                  "_type": "C_COMPLEX_OBJECT",
                  "rm_type_name": "Node",
                  "node_id": "id8",
                  "attributes": [
                    {
                      "_type": "C_ATTRIBUTE",
                      "rm_attribute_name": "value",
                      "children": [
                        {
                          "_type": "C_COMPLEX_OBJECT",
                          "rm_type_name": "Coded_text",
                          "node_id": "id175",
                          "attributes": [
                            {
                              "_type": "C_ATTRIBUTE",
                              "rm_attribute_name": "term",
                              "children": [
                                {
                                  "_type": "C_COMPLEX_OBJECT",
                                  "rm_type_name": "Terminology_term",
                                  "node_id": "id223",
                                  "attributes": [
                                    {
                                      "_type": "C_ATTRIBUTE",
                                      "rm_attribute_name": "concept",
                                      "children": [
                                        {
                                          "_type": "C_TERMINOLOGY_CODE",
                                          "rm_type_name": "Terminology_code",
                                          "node_id": "id9999",
                                          "constraint": "ac1"
                                        }
                                      ]
                                    }
                                  ]
                                }
                              ]
                            }
                          ]
                        }
                      ]
                    }
                  ]
                },
```

That's a direct dump of an AOM (object meta-model) instance of an archetype. It's not that devs don't know JSON; the problem is that you have to do mental somersaults to create the AOM structure in your mind in order to understand what to do. The same archetype content in ADL is:

```
definition
    Direct_observation[id1] matches {    -- Audiogram test result
        data cardinality matches {1..*; unordered} matches {
            Node[id38] matches {    -- Test result name
                value matches {
                    Text[id174]
                }
            }
            Node[id7] occurrences matches {0..3} matches {    -- Result details
                items cardinality matches {2..*; unordered} matches {
                    Node[id8] matches {    -- Test ear
                        value matches {
                            Coded_text[id175] matches {
                                term matches {
                                    Terminology_term[id223] matches {
                                        concept matches {[ac1]}    -- Test ear (synthesised)
                                    }
                                }
                            }
                        }
                    }
```

Any developer can learn this - it's a block-structured language like all the other ones they use, and there's not much mental work to understand what is going on. It's the same reason we program in source-code languages like Java and C# (Go, Python, whatever) rather than in bytecode, MSIL, or other post-parse machine formats. Humans learn formalisms through such languages; post-parse meta-model representations are hard to deal with mentally.

JSON and YAML are pretty readable for the description and terminology parts of an archetype, but no-one's going to read either for the definition part. That's why a human- (and machine-)readable archetype should be in YAML + cADL (+ EL) or JSON + cADL (+ EL), whereas a pure machine-readable form can be 100% JSON (I don't see the point of YAML for this form, but it would work).

---

## Post #16 by @siljelb

[quote="pablo, post:14, topic:5333"]
[quote="borut.jures, post:6, topic:5333"]
however Silje and Thomas are saying there are "many" editors who edit archetypes by hand
[/quote]

I would wonder why modelers are doing such things; maybe modeling tools are not good enough for their needs?
[/quote]

There are several different reasons:

* modelling tools not doing everything we expect them to do (like switching the original language with one of the translations, or adding terminology-based (as opposed to UCUM) units to a DV_QUANTITY)
* modelling tools not doing things we don't necessarily expect them to do (like being able to do a search-and-replace for a specific word or phrase throughout the archetype)
* modelling tools lagging a bit behind the specs (such as new units added to the units file)
* modelling tools and CKM or CDRs not agreeing on what the correct syntax is

---

## Post #17 by @pablo

[quote="thomas.beale, post:13, topic:5333"]
For the rest of an archetype, other than expressions / rules
[/quote]

Expressions and rules could be represented in a declarative way instead of embedding an imperative expression in the syntax. Declarative would be to represent what looks like a programming language expression, like "if x then y", as data (JSON, XML, etc.). Just as an example: https://jsonlogic.com/

I think the advantages of that strategy are:

1. There is no need for a custom syntax
2. So there is no need for a custom grammar
3. Then there is no need to generate custom parsers
4. And then no need to integrate custom ASTs in our code in order to run that logic

A declarative expression will just parse as JSON, XML, YAML, etc., and engines could be created in any language to evaluate the expressions. From my experience of building a rule engine that way, it doesn't require a lot of work and would be similar to the last step of using the ASTs of parsed expressions of the embedded language, without the custom parsing part.

Just an idea to think about!

---

## Post #18 by @pablo

Thanks @siljelb, that's what I thought. With a correct modeling tool there wouldn't be a need for manual adjustments. Maybe more modelers should be involved in the design of the modeling tools.

---

## Post #19 by @ian.mcnicoll

The tools are actually very good IMO, but there will always be gaps, bugs and omissions. That's why a few of us will always like to be able to visualise and occasionally edit the raw files.

I agree with Thomas that we need two broad types of representation:

1. A pure machine-processable form of the AOM (essentially what we have right now with the archetype XML and .opt), for use internally by openEHR CDR and tool builders
2. Something more like ADL that avoids this, and applies equally to third-party devs and to clinical modellers.

> It's not that devs don't know JSON; the problem is that you have to do mental somersaults to create the AOM structure in your mind in order to understand what to do.

I understand the argument for using cADL + e.g. YAML, but I think we really have to get away from needing custom low-level parsing to make use of this type of file. It remains a considerable barrier to use by third-party devs.

@Joost - I'm not sure the jump to YAML vs JSON from ADL is really all that different. In both cases you need to understand the way that objects, arrays and nesting work, and TBH in most circumstances we would only be editing a small fragment (I hope!). It is the way that we represent the cADL sections that is the challenge, and that would be broadly similar in both YAML and JSON.

There definitely needs to be a variety of outputs, but I would (just) favour JSON. Here is a snippet of web template JSON.
```
"children" : [ {
  "id" : "weight",
  "name" : "Weight",
  "localizedName" : "Weight",
  "rmType" : "DV_QUANTITY",
  "nodeId" : "at0004",
  "min" : 1,
  "max" : 1,
  "localizedNames" : {
    "en" : "Weight"
  },
  "localizedDescriptions" : {
    "en" : "The weight of the individual."
  },
  "aqlPath" : "/content[openEHR-EHR-SECTION.adhoc.v1,'Body mass metrics']/items[openEHR-EHR-OBSERVATION.body_weight.v2,'Weight']/data[at0002]/events[at0003]/data[at0001]/items[at0004]/value",
  "inputs" : [ {
    "suffix" : "magnitude",
    "type" : "DECIMAL"
  }, {
    "suffix" : "unit",
    "type" : "CODED_TEXT",
    "list" : [ {
      "value" : "kg",
      "label" : "kg",
      "validation" : {
        "range" : {
          "minOp" : ">=",
          "min" : 0.0,
          "maxOp" : "<=",
          "max" : 1000.0
        }
      }
    }
```

---

## Post #20 by @thomas.beale

[quote="pablo, post:17, topic:5333"]
Expressions and rules could be represented in a declarative way instead of embedding an imperative expression in the syntax.
[/quote]

This is already the case, with the BEL (Basic Expression Language) embedded in ADL, in commercial use by at least Nedap. [GDL2](https://specifications.openehr.org/releases/CDS/latest/GDL2.html#_an_example) also does 'when condition then action' rules. Doing expressions really properly might look more like EL/DL - [see examples here](https://specifications.openehr.org/releases/PROC/latest/process_examples.html#_breast_cancer_decision_protocol).

Everyone coming to this problem always says the same thing: why don't we use (JS, Java, Python, ...)? There is a 40-year history of languages in decision support, from Arden 2 onward, to address the limitations of existing languages. Some of the things general purpose languages don't have:

* any notion of terminology
* any notion of 'binding' data items (e.g. 'is_diabetic', 'date_of_birth') to data sources
* Allen operators, i.e. time operators like 'before', 'during'
* any notion of 'currency', i.e. how stale a variable (e.g. SpO2) is with respect to reality
* tabular decision logic
* a close-to-domain conceptual model

Trying to use any general purpose programming language gets painful very quickly because of the lack of support for the above. For this reason, there are still contemporary attempts to create decision / process languages to solve some of these needs:

* HL7 CQL - has Allen operators, terminology, and an approach to binding
* OMG Decision Modelling Language (DMN) - related to BPMN and CMMN - developed for the insurance industry - supports tabular decision programming
* OMG FEEL - a simple expression language underpinning DMN
* Gello - an older HL7 attempt
* openEHR GDL, BEL, EL, DL, Task Planning
* various languages in use in CDS products like SaVia Health, Lumeon and others

Attempts to cross-compile expression and decision languages to existing languages can work. It's a bit of work, but it's a workable approach, depending on the choice of actual language. A few years ago, when we were working on Task Planning, Better implemented this approach with (from memory) TypeScript as the target language. It worked, sort of, although it wasn't very performant.

[quote="pablo, post:17, topic:5333"]
I think the advantages of that strategy are:

1. There is no need for a custom syntax
2. So there is no need for a custom grammar
3. Then there is no need to generate custom parsers
[/quote]

Writing a new language is easy these days. With tools like Antlr4, you can create and debug a powerful grammar in days / weeks, and the tools will generate much of the code of the parser.
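As a rough illustration of that point - here using Python's lark parsing library rather than Antlr4, and a toy grammar that covers only a cADL-like occurrences constraint, not real cADL - a small grammar really is only a few lines:

```python
from lark import Lark

# Toy grammar for a cADL-like occurrences constraint; illustrative only.
grammar = r"""
    occurrences: "occurrences" "matches" "{" interval "}"
    interval: INT ".." upper
    upper: INT | "*"

    %import common.INT
    %import common.WS
    %ignore WS
"""

parser = Lark(grammar, start="occurrences")
print(parser.parse("occurrences matches {0..1}").pretty())
```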
In my experience, fighting against a commodity language to implement domain-specific concepts is always worse than having a proper language to do the job. If you think about it, a custom language is just a way of formalising how to do the job, and (usually) vastly reduces the amount of code.

[quote="pablo, post:17, topic:5333"]
A declarative expression will just parse as JSON, XML, YAML, etc., and engines could be created in any language to evaluate the expressions.
[/quote]

I think approaches like jsonlogic etc might be reasonable targets for cross-compilation.

[quote="ian.mcnicoll, post:19, topic:5333"]
I understand the argument for using cADL + e.g. YAML, but I think we really have to get away from needing custom low-level parsing to make use of this type of file. It remains a considerable barrier to use by third-party devs.
[/quote]

Do you mean a parser for cADL? It's just a parser like any other, and there are open source parsers such as Archie available to parse them.

[quote="ian.mcnicoll, post:19, topic:5333"]
There definitely needs to be a variety of outputs, but I would (just) favour JSON.
[/quote]

That's probably what devs working with templates would want to use. Clinical modellers editing description meta-data or doing translations might not love JSON as much ;) If JSON can be supported, YAML can be supported.

---

## Post #21 by @pablo

[quote="thomas.beale, post:20, topic:5333"]
[quote="pablo, post:17, topic:5333"]
Expressions and rules could be represented in a declarative way instead of embedding an imperative expression in the syntax.
[/quote]

This is already the case, with the BEL (Basic Expression Language) embedded in ADL, in commercial use by at least Nedap. [GDL2](https://specifications.openehr.org/releases/CDS/latest/GDL2.html#_an_example) also does 'when condition then action' rules. Doing expressions really properly might look more like EL/DL - [see examples here](https://specifications.openehr.org/releases/PROC/latest/process_examples.html#_breast_cancer_decision_protocol).
[/quote]

My point is to use the same format that is used as a serialization for the AOM to represent the logic too. That is (from the examples you shared, https://specifications.openehr.org/releases/PROC/latest/process_examples.html#_decision_logic_module), to represent `(tnm_t > '1a' or tnm_n > '0')` as, for instance in JSON, something like:

```
{
  "or": [
    { ">": [ "tnm_t", "1a" ] },
    { ">": [ "tnm_n", "0" ] }
  ]
}
```

Analogous with YAML, XML or whatever.

[quote="thomas.beale, post:20, topic:5333"]
Everyone coming to this problem always says the same thing: why don't we use (JS, Java, Python, ...)? There is a 40-year history of languages in decision support, from Arden 2 onward, to address the limitations of existing languages.
[/quote]

I wouldn't get into that; coupling a specific language into our metadata artifacts is bad for anyone not using that specific language. In order to be ecumenical, something generic enough is better. That is why I mentioned using declarative expressions in the same serialization format, to simplify parsing the logic part.

---

## Post #22 by @thomas.beale

[quote="pablo, post:21, topic:5333"]
My point is to use the same format that is used as a serialization for the AOM to represent the logic too.
[/quote]

Right - but that's an object dump of in-memory meta-model objects, not a syntax. So it's fine for machine read/write, but no use for humans to understand or write.
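For concreteness, a minimal Python sketch of the machine side - a hypothetical evaluator for expressions shaped like the JSON example above. The operator table and the "bound name = variable, anything else = literal" rule are assumptions of the sketch, not part of any openEHR spec.

```python
# Hypothetical evaluator for declarative expressions of the form {"op": [operands]}.
OPS = {
    "or":  lambda args: any(args),            # no short-circuiting in this sketch
    "and": lambda args: all(args),
    ">":   lambda args: args[0] > args[1],
    "<":   lambda args: args[0] < args[1],
}

def evaluate(node, bindings):
    if isinstance(node, dict):                      # operator node: {"op": [operands]}
        (op, operands), = node.items()
        return OPS[op]([evaluate(a, bindings) for a in operands])
    if isinstance(node, str) and node in bindings:  # bound name -> variable reference
        return bindings[node]
    return node                                     # anything else is a literal

expr = {"or": [{">": ["tnm_t", "1a"]}, {">": ["tnm_n", "0"]}]}
print(evaluate(expr, {"tnm_t": "2", "tnm_n": "0"}))  # True ('2' > '1a' lexically)
```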
I'm generically assuming that an object dump (of meta-model objects) representation in JSON or anything else is always technically available. It's just not a format that humans would ever use.

[quote="pablo, post:21, topic:5333"]
In order to be ecumenical, something generic enough is better
[/quote]

There isn't anything, only general purpose languages mal-adapted to the problem. Users have to figure out how to make these work for the HIT / CDS space, and it's not trivial.

---

## Post #23 by @pablo

[quote="thomas.beale, post:22, topic:5333"]
[quote="pablo, post:21, topic:5333"]
My point is to use the same format that is used as a serialization for the AOM to represent the logic too.
[/quote]

Right - but that's an object dump of in-memory meta-model objects, not a syntax. So it's fine for machine read/write, but no use for humans to understand or write.
[/quote]

It's not exactly an object dump from memory, and it's not meant for humans to understand, read or write manually - as it shouldn't be for the cADL part either, though as said before, modeling tools are not there yet, so modelers have to do manual tweaks.

The problem with logic expressions is that for them to be written manually you need more than the syntax: like in any programming language, you need, for instance, a compiler/interpreter/transpiler and a debugger. You can't just define the syntax and say "implement this", and there is no way to check the expressions if the other elements are not defined.

What the declarative notation allows is to avoid manual writing of expressions in favor of higher level tools, like visual programming with blocks. So humans would really drag and drop building blocks like in Scratch (https://scratch.mit.edu/projects/238763/editor/), so the expression will be syntactically correct at design time.

[quote="thomas.beale, post:22, topic:5333"]
I'm generically assuming that an object dump (of meta-model objects) representation in JSON or anything else is always technically available. It's just not a format that humans would ever use.
[/quote]

I would prefer to adopt "things" that favor simplifying implementation and avoid manual intervention as much as possible in formats representing openEHR artifacts. I see this as a pro, not as a con. The manual intervention today is mostly because of shortcomings of modeling tools, not because it's a hard requirement from modelers. We could even have a binary format for archetypes, and that would be OK iff modeling tools are expressive enough that no modeler needs that manual intervention.

[quote="thomas.beale, post:22, topic:5333"]
[quote="pablo, post:21, topic:5333"]
In order to be ecumenical, something generic enough is better
[/quote]

There isn't anything, only general purpose languages mal-adapted to the problem. Users have to figure out how to make these work for the HIT / CDS space, and it's not trivial.
[/quote]

I'm not sure. If you choose Java to be your expression language, then .Net devs will need to transpile the expressions to C#, making their lives a little more miserable in the process. The problem is not technical, since modern programming languages don't have many shortcomings in terms of representing logic. The argument is more political than anything else: instead of favoring one group over all others, we can make an abstraction that is easily mappable to any technology, and we could even provide reference implementations in different languages.
Behind all this is the issue of having different languages for things that are very similar, since we also need to manage their versions. We have archetype assertions, two expression languages, I think there are also two GDL expression languages, etc., each with their own versioning and their own syntax, which IMHO is a mess to maintain going forward. That is why I would prefer a declarative approach (and because it's something I have already tested and it worked OK; I can provide details of my implementation if anyone is interested).

---

## Post #24 by @thomas.beale

[quote="pablo, post:23, topic:5333"]
I'm not sure. If you choose Java to be your expression language
[/quote]

The core expression language is the easiest part of the whole equation. Nearly every host language does expressions in the same way. There might be differences to do with e.g. `'&&'` versus `'and'`, or snake_case v CamelCase v kebab-case, but that's about it. Basic logic expressions aren't the problem.

It's when you want to do more than write a simple expression, e.g. use temporal logic operators, use coded terms as first order elements, write a decision table, convert numeric ranges to symbolic ones (a basic necessity in lab systems), that you have to do a lot more work. You can do these kinds of things using more basic capabilities of each native language - but they will be a) a lot of code and b) not interoperable, since the solution will be different in each language. So there is no sharing of guideline logic modules (for example) possible. This is what companies do right now - and one reason why we have no shared guidelines language.

[quote="pablo, post:23, topic:5333"]
We have archetype assertions, two expression languages, I think there are also two GDL expression languages, etc., each with their own versioning and their own syntax, which IMHO is a mess to maintain going forward.
[/quote]

That is true. That is why we started work on [Expression Language (EL)](https://specifications.openehr.org/releases/LANG/latest/EL.html), to unite all this, but also to support needs not supported in general purpose programming languages. EL isn't finished, but there is a pretty complete set of [grammars and test cases here](https://github.com/openEHR/openEHR-antlr4). We can start that effort again, or try to make all this work in a general purpose language, but the challenges will not disappear - we will just be moving deck-chairs around on the Titanic.

[quote="pablo, post:23, topic:5333"]
That is why I would prefer a declarative approach
[/quote]

I think you are solving a different problem here - the question of a standardised machine representation. But the jsonlogic solution won't address most of the list above, e.g. temporal operators, coded terms and so on. So it doesn't get us that far. It still might be useful for sharing simple expressions though.

---

## Post #25 by @pablo

[quote="thomas.beale, post:24, topic:5333"]
The core expression language is the easiest part of the whole equation. Nearly every host language does expressions in the same way. There might be differences to do with e.g. `'&&'` versus `'and'`, or snake_case v CamelCase v kebab-case, but that's about it. Basic logic expressions aren't the problem.
[/quote]

My point is: you always need a mapping between the expression and the language that implements the evaluation of the expression. And if for some that mapping is 1-to-1 while for others it requires extra work, that's unfair (what I mentioned before about the political argument).
Though technically you can do a syntactic mapping (expression syntax to programming language syntax, a.k.a. a transpiler), the expression will require some extra context to be able to run, which is not included in the original expression. You could also do a mapping into an expression interpreter. Besides all the tech part, which I think is clear, what I would like to be at least considered is the possibility of exploring syntax-less expressions with a declarative approach.

[quote="thomas.beale, post:24, topic:5333"]
It's when you want to do more than write a simple expression, e.g. use temporal logic operators, use coded terms as first order elements, write a decision table, convert numeric ranges to symbolic ones (a basic necessity in lab systems), that you have to do a lot more work.
[/quote]

I think I understand what you mean from the use case perspective; what you mention are higher level constructs. I think there we have two big alternatives:

a) To build our basic expression model and make all the higher level constructs build on that basic expression model, so all the temporal logic, coded terms, conversions, decision tables/trees, etc. are just different arrangements of the same operations/clauses/constructs. So an engine that can process the basic expression model can process any other higher level construct. This approach makes it more difficult to add extensions initially, since there is a more constrained building block set, but it is easier to manage, run and standardize.

b) To build our basic expression model and a set of extensions with defined APIs, so that those can be used by the expression model. That would be easier to extend, since there are not many constraints on adding functionality, though it will be more difficult to manage and standardize.

[quote="thomas.beale, post:24, topic:5333"]
You can do these kinds of things using more basic capabilities of each native language - but they will be a) a lot of code and b) not interoperable, since the solution will be different in each language.
[/quote]

I agree, and that might be similar to approach b) mentioned above, which is more difficult to standardize, though with a good set of rules and tools everything is possible. That is partly (though not the main reason) why I would like "the approach of having a syntax-less expression model represented natively in a common serialization format" to be considered.

[quote="thomas.beale, post:24, topic:5333"]
That is why we started work on [Expression Language (EL)](https://specifications.openehr.org/releases/LANG/latest/EL.html), to unite all this, but also to support needs not supported in general purpose programming languages. EL isn't finished, but there is a pretty complete set of [grammars and test cases here](https://github.com/openEHR/openEHR-antlr4).
[/quote]

I see a lot of value in the model behind the expression language, but I don't much like having a "language"; I would prefer to have an expression model and manage artifacts in a syntax-less way. Now I'm starting to repeat myself, I'm getting old...

[quote="thomas.beale, post:24, topic:5333"]
[quote="pablo, post:23, topic:5333"]
That is why I would prefer a declarative approach
[/quote]

I think you are solving a different problem here - the question of a standardised machine representation. But the jsonlogic solution won't address most of the list above, e.g. temporal operators, coded terms and so on. So it doesn't get us that far. It still might be useful for sharing simple expressions though.
[/quote]

I'm not pushing for jsonlogic; I just provided an example so people not familiar with the declarative approach can take a look. I think the expression model should be defined so as to comply with all the requirements you mentioned, and then the evaluation flow should also be defined, so that a runtime actually does what it's supposed to do in order to comply with your requirements. Note I didn't mention syntax at all, since the syntax itself is not so important IMHO, because all the requirements could be met without a specific language. If we take a step back and think about that, and about what you mentioned about the little syntactic differences between languages, I think most languages will look almost the same in their AST form, which is the model behind the syntax. That is why I insist on focusing on the model and not on the syntax :)

This is what I tried to do 12 years ago:

1. Design a model that can: a) declare and resolve variables (for instance executing a query in a CDR and extracting single or aggregated values in one operation), b) have functional modules (functions like time logic, calculations, assignment), c) have flow control, d) have actions (send an HTTP request, print, log, ...).
2. Have an engine that can manage and evaluate rules on demand, providing input data if needed. REF: https://github.com/ppazos/cabolabs-xre-core/tree/master/src/com/cabolabs/xre/core
3. Have a simple representation of rules for storing and sharing. REF: https://github.com/ppazos/cabolabs-xre-engine/blob/master/rules/rule9_logic_functions.xrl.xml (I know this rule is stupid, bear with me)
4. And have a platform that allows you to see which rules are loaded, see their execution log, see errors, test them, etc. REF: https://github.com/ppazos/cabolabs-xre-iu/tree/master

I also had a test app that was a client of the rule engine, to display alerts about women who hadn't had a PAP test in the last two years. Here is a presentation with some details about the rule evaluation and execution: https://www.slideshare.net/slideshow/xre-demo-presentation/14594694

The missing part of that work was a visual rule editor, which today would be easy to build. I know I would do a lot of things differently today, but I learned a lot from the process of designing and building that platform.

---

## Post #26 by @thomas.beale

[quote="pablo, post:25, topic:5333"]
My point is: you always need a mapping between the expression and the language that implements the evaluation of the expression.
[/quote]

Or... you build the evaluator. It's easy. I've done many, including a financial rules engine that evaluated expressions with vector and scalar variables. If you don't do this, you have to write a transpiler. Both ways work, but I'd rather maintain a native execution engine than a transpiler.

[quote="pablo, post:25, topic:5333"]
make all the higher level constructs build on that basic expression model, so all the temporal logic, coded terms, conversions, decision tables/trees, etc. are just different arrangements of the same operations/clauses/constructs
[/quote]

This is pretty much what a transpiler has to do. Every higher level operation is compiled into 50 lines of code that awkwardly represent one line of source code. It's not out of the question by any means - it's what any compiler does, in fact. My main interest is in the source language. Whether it is transpiled into TypeScript or whatever, or else executed natively, is just a technical choice.
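A matching sketch of the transpiler alternative: the same declarative tree as in the earlier example, emitted as a host-language (here Python) expression string rather than interpreted. The identifier heuristic for telling variables from literals is an assumption of the sketch; a real transpiler would also have to supply the runtime context mentioned above.

```python
# Hypothetical transpiler for the same {"op": [operands]} expression shape.
INFIX = {"or": "or", "and": "and", ">": ">", "<": "<", "=": "=="}

def transpile(node) -> str:
    if isinstance(node, dict):                         # operator node: {"op": [operands]}
        (op, operands), = node.items()
        return "(" + f" {INFIX[op]} ".join(transpile(a) for a in operands) + ")"
    if isinstance(node, str) and node.isidentifier():  # heuristic: names are variables
        return node
    return repr(node)                                  # everything else is a literal

expr = {"or": [{">": ["tnm_t", "1a"]}, {">": ["tnm_n", "0"]}]}
print(transpile(expr))  # ((tnm_t > '1a') or (tnm_n > '0'))
```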
But just to test the language and make sure the intended semantics really work, writing a native execution engine (i.e. an interpreter) is usually what is needed. Not hard at all.

[quote="pablo, post:25, topic:5333"]
I see a lot of value in the model behind the expression language, but I don't much like having a "language"; I would prefer to have an expression model and manage artifacts in a syntax-less way. Now I'm starting to repeat myself, I'm getting old...
[/quote]

So you might not want the language per se, but we will need a meta-model - [see here](https://specifications.openehr.org/releases/LANG/latest/bmm.html#_expressions).

[quote="pablo, post:25, topic:5333"]
I think most languages will look almost the same in their AST form, which is the model behind the syntax. That is why I insist on focusing on the model and not on the syntax
[/quote]

They don't (if they did, we wouldn't have JVM updates every 6 months, or 21 versions of Java), but you are right, that is the correct general argument. That's why the meta-model is the thing of primary importance. A/the syntax just makes it easy to understand and write. But you are correct, it is not necessary for those who want to author logic expressions etc purely through GUI tools. The meta-model of a language like Haskell is very different to that of Java and similar languages...

[quote="pablo, post:25, topic:5333"]
This is what I tried to do 12 years ago:
[/quote]

This sounds like a description of what we were aiming for with Task Planning and Decision Logic, so I think we are nearly on the same page ;)

---

## Post #27 by @pablo

[quote="thomas.beale, post:26, topic:5333"]
[quote="pablo, post:25, topic:5333"]
make all the higher level constructs build on that basic expression model, so all the temporal logic, coded terms, conversions, decision tables/trees, etc. are just different arrangements of the same operations/clauses/constructs
[/quote]

This is pretty much what a transpiler has to do. Every higher level operation is compiled into 50 lines of code that awkwardly represent one line of source code.
[/quote]

If "code" is the specific language that serves to implement or evaluate the rule/expression, what I tried to say is about what happens before going to the specific implementation. I meant that the higher level constructs you mentioned could be based on a basic set of expressions, part of the same abstract expression or rule ~~language~~ model. If that is an option we want to consider, it can make everything we create on top of it compatible and manageable. Like every archetype being based on the same RM.

[quote="thomas.beale, post:26, topic:5333"]
So you might not want the language per se, but we will need a meta-model - [see here](https://specifications.openehr.org/releases/LANG/latest/bmm.html#_expressions).
[/quote]

Maybe it's a dumb question, but why is that a "meta" model and not just a "model"? Checking the UML, it appears to be the AST model for the syntax. When I talk about the model, I mean the representation of the rule itself, not the syntax. The model represents the constructs that can be used in the rules. Something like this (sorry, I don't have a diagram): https://github.com/ppazos/cabolabs-xre-core/tree/master/src/com/cabolabs/xre/core/logic

[quote="thomas.beale, post:26, topic:5333"]
They don't (if they did, we wouldn't have JVM updates every 6 months, or 21 versions of Java), but you are right, that is the correct general argument.
[/quote]

Note I'm not referring to specific language features, but to common parts like variable declarations, assignment, comparison, flow control, loops, code blocks and functions.

[quote="thomas.beale, post:26, topic:5333"]
those who want to author logic expressions etc purely through GUI tools.
[/quote]

I think that is worth exploring at least. Another thing could be to create expressions programmatically; I can imagine that could be of great value for testing the model and the evaluation without the syntax part, and maybe for conformance verification. Note I never said "ditch the syntax", I'm just saying it's not required for rules to work.

[quote="thomas.beale, post:26, topic:5333"]
This sounds like a description of what we were aiming for with Task Planning and Decision Logic, so I think we are nearly on the same page :wink:
[/quote]

It didn't have the task planning execution and status management; it was pretty stateless and focused on rules triggered by events, with each rule totally isolated and autonomous (if all the data needed was available, it was able to execute and trigger other actions and/or return a result to the caller). It was also REST, so an app could connect, fire a set of rules and get some feedback, for instance for CDS to show recommendations or alerts to clinicians. So it was more like a very simplified GDL with decision logic. The action part, I think, was very powerful, since more actions could be defined by devs, so the rule model was pretty extensible for specific needs, while still being syntactically similar to rules using the basic set of actions (it was just XML). As always, lack of time and the need to feed the family cut my research time short. I hope I can get back to that some day. It was a nice piece of engineering, and sadly I couldn't share it with the openEHR folks.

---

## Post #28 by @joostholslag

I've had a thought. The goal of this topic is to find a formalism, incl. serialisation and file extension, for expressing design-time information model artefacts of clinical concepts: archetypes and templates (and additionally a use-case-specific technical model that's easy to implement for client apps, conforming to those templates).

I agree with Thomas that cADL/ODIN is easier to read and edit than JSON or even YAML, a key requirement for a design-time formalism. The target audience is clinical modellers: clinicians with some IT affinity. So the primary interaction with this formalism should be (and is) through a GUI. So it requires a schema and serialisation in order for multiple tools to edit the same artefact. And in order to work in the openEHR tool chain, there should be a common formalism, incl. serialisation and file extension, between the tools for editing, reviewing, publishing, importing to a CDR etc.

And as clearly stated by many, and summarised by Borut, all our DSLs, like ODIN, hamper adoption of openEHR. And they require specialised software that has to be maintained by a tiny group of overly busy people (mainly those in this topic).

Now both YAML and JSON are good candidates to meet all those requirements. The only point is that we concluded that being able to hand edit the formalism is a key requirement, as analysed by many, especially Silje. Now if hand editing is done by clinical modellers, it's important that the formalism is legible and 'easy' to understand for non-programmers. ADL is in this regard better than JSON, but still hard.
The assumption seems to be that hand editing the ADL (json/odin, whatever) should be possible in any text editor (that supports schema validation). But I think we don't need this for the cases that require hand editing by clinical modellers. I think it makes much more sense to have the current GUI editing tools support hand editing of this formalism. And suddenly many downsides of a standard serialisation as described in this topic (accidentally breaking the indentation hierarchy, micro-syntaxes, invalid JSON etc.) can be fixed by the tool.

At the same time, ADL is still hard to understand for clinicians due to e.g. meaningless codes for nodes. ADL would be so much easier to edit if instead of "at0009" the node were displayed as "systolic blood pressure". Another one would be a "live preview" of manual changes to a model: 'easy' in a GUI tool, very hard to do in a way that any editor supports out of the box.

So I propose to specify one required formalism for design-time artefacts that fully conforms to a standard serialisation and the AOM schema, and let GUI tools support hand editing of that formalism. Both the Nedap archetype editor and Better's Archetype Designer have a rudimentary implementation of this idea: https://archetype-editor.nedap.healthcare/advanced/joost/archetypes/openEHR-EHR-OBSERVATION.karnofsky_performance_status_scale.v0.0.1/edit_adl Or the "adl" tab in Archetype Designer.

I'm very curious what the people actually involved in building these tools think, like @borut.fabjan @VeraPrinsen @sebastian.garde

---

## Post #29 by @ian.mcnicoll

> At the same time, ADL is still hard to understand for clinicians due to e.g. meaningless codes for nodes. ADL would be so much easier to edit if instead of "at0009" the node were displayed as "systolic blood pressure".

I agree in principle, but there are all sorts of issues around the stability of that generated key (if the human name changes), and in any case that would need to be an ADL3 issue. My interim suggestion would be either to

1. Allow the use of "at0009 | Systolic |" as a node identifier. Everything that is piped is ignored.
2. Just add another pseudo-node like '@node_name' alongside the node identifier, generated from the archetypeNodeID ontology, and just to be used as a navigational aid.

Either would solve the problem of navigability of the tree based solely on the atCodes, without needing comments, as are currently in ADL. There is a separate argument for human-readable keys, but also a lot of issues to think through.

---

## Post #30 by @borut.jures

[quote="joostholslag, post:28, topic:5333"]
"at0009" the node were displayed as "systolic blood pressure"
[/quote]

I understood this as a GUI tool looking up "at0009" and displaying it as "systolic blood pressure".

[quote="ian.mcnicoll, post:29, topic:5333"]
if the human name changes
[/quote]

The GUI tool would always use the current human name from the terminology section. It wouldn't serialize it to the JSON.

[quote="ian.mcnicoll, post:29, topic:5333"]
My interim suggestion would be either to

1. Allow the use of "at0009 | Systolic |" as a node identifier. Everything that is piped is ignored.
2. Just add another pseudo-node like '@node_name' alongside the node identifier, generated from the archetypeNodeID ontology, and just to be used as a navigational aid.
[/quote]

These two approaches are still possible (and nice to have in the future), but not necessary if the GUI tool displays the "systolic blood pressure" found in the terminology section of an archetype.
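A minimal Python sketch of how cheap the "everything piped is ignored" rule is for implementers; the regex is illustrative only, not a normative definition of node identifiers.

```python
import re

def strip_piped_rubric(node_id: str) -> str:
    """'at0009 | Systolic |' -> 'at0009'; a plain 'at0009' passes through unchanged."""
    # Take the first run of non-pipe, non-space characters; the piped rubric,
    # if present, is a navigational aid only and is simply discarded.
    return re.match(r"\s*([^|\s]+)", node_id).group(1)

assert strip_piped_rubric("at0009 | Systolic |") == "at0009"
assert strip_piped_rubric("at0009") == "at0009"
```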
---

## Post #31 by @ian.mcnicoll

[quote="borut.jures, post:30, topic:5333"]
The GUI tool would always use the current human name from the terminology section. It wouldn't serialize it to the JSON.
[/quote]

Of course, that is assumed. This is about where someone has to look at the json/yaml directly. What I think @joost was saying is that a major problem is that it is difficult to navigate the tree when the nodes are identified by atCodes alone - so you have to keep doing lookups to orientate. ADL solves this by adding comments against the nodeIds.

*(screenshot: an ADL fragment in which each node id carries a human-readable comment, e.g. "-- Overall description")*

So I can easily find 'Overall description' via a simple text search, or eyeballing.

---

## Post #32 by @joostholslag

I'm not saying replace the at-code with a human-readable identifier in the adl. I'm saying: let the tool *show* the human-readable identifier in the place of the at-code. So exactly this:

[quote="borut.jures, post:30, topic:5333"]
I understood this as a GUI tool looking up "at0009" and displaying it as "systolic blood pressure".
[/quote]

[quote="borut.jures, post:30, topic:5333"]
The GUI tool would always use the current human name from the terminology section. It wouldn't serialize it to the JSON.
[/quote]

So I'm saying: if a clinical modeller has to look at the json/yaml/adl directly, let them do it from a tool that visualises the 'code'/constraints/model according to the suggestions above. If you open the formalism in a regular IDE, it will show the 'normal'/raw yaml/json.

[quote="ian.mcnicoll, post:31, topic:5333"]
ADL solves this by adding comments against the nodeIds.
[/quote]

This is where the idea came from, but it's still not ideal. Visualising the human name where an at-code is would help a lot. We would also need a way to actually see the code, e.g. with a hover, or a toggle between codes and human names.

---

## Post #33 by @ian.mcnicoll

[quote="joostholslag, post:32, topic:5333"]
So I'm saying: if a clinical modeller has to look at the json/yaml/adl directly, let them do it from a tool
[/quote]

Of course I agree, but I'd still like the fall-back position of being able to open a plain old text editor and a syntax that allows me to easily identify the nodes - this needs to work for developers and other users, not just those of us who use GUI tools.

---

## Post #34 by @borut.jures

[quote="ian.mcnicoll, post:33, topic:5333"]
the fall-back position of being able to open a plain old text editor and a syntax that allows me to easily identify the nodes
[/quote]

I agree. This can be solved by your suggestion:

[quote="ian.mcnicoll, post:29, topic:5333"]
Allow the use of "at0009 | Systolic |" as the node identifier. Everything that is piped is ignored.
[/quote]

GUI tools can implement a lookup for the codes and, when serializing, add the "piped" version to them. It shouldn't cause an issue for implementers since we already have the code to skip the "piped" part.

---

## Post #35 by @joostholslag

[quote="ian.mcnicoll, post:33, topic:5333"]
Of course I agree, but I'd still like the fall-back position of being able to open a plain old text editor and a syntax that allows me to easily identify the nodes - this needs to work for developers and other users, not just those of us who use GUI tools.
[/quote]

Sure, but for these edge-of-edge cases for clinical modellers, and more frequently for developers, probably the native yaml/json as created by Sebastian and the ADL Workbench is fine?

[quote="borut.jures, post:34, topic:5333"]
I agree.
This can be solved by your suggestion:

[quote="ian.mcnicoll, post:29, topic:5333"]
Allow the use of "at0009 | Systolic |" as the node identifier. Everything that is piped is ignored.
[/quote]

GUI tools can implement a lookup for the codes and, when serializing, add the "piped" version to them. It shouldn't cause an issue for implementers since we already have the code to skip the "piped" part.
[/quote]

I'm fine with this, but it's a little beside my point of accepting micro-syntaxes etc. only in openEHR-specific tools, and keeping the required formalism for archetypes and templates vanilla json/yaml.

---

## Post #36 by @thomas.beale

[quote="pablo, post:27, topic:5333"]
I meant that the higher level constructs you mentioned could be based on a basic set of expressions, part of the same abstract expression or rule
[/quote]

Some things can be made to work this way, e.g. probably making terminology codes work as a built-in type. But some semantics are just not supported by a basic language meta-model, e.g. lambdas, monads etc.

[quote="pablo, post:27, topic:5333"]
Maybe it's a dumb question, but why is that a "meta" model and not just a "model"?
[/quote]

It just means 'model of a language / formalism', as distinguished from 'model', which is an instance of the language but is also a model of something from a domain. So in this scheme of things, the AOM is a meta-model, and the heart rate archetype is a model. These terms are relative of course. There's a good [paper on levels of meta](https://www.sciencedirect.com/science/article/pii/S1532046422002568?via%3Dihub) - see particularly figure 7.

[quote="pablo, post:27, topic:5333"]
[quote="thomas.beale, post:26, topic:5333"]
those who want to author logic expressions etc purely through GUI tools.
[/quote]
I think that is worth exploring at least.
[/quote]

This is currently how GDL2 is created - only via GUI tools. It certainly works, but it also limits what you can do by other means.

[quote="joostholslag, post:28, topic:5333"]
DSLs, like Odin, hamper adoption of openEHR
[/quote]

ODIN is annoying now, because you can do the same thing in JSON, YAML etc., i.e. there is an alternative (and that's why we should replace ODIN with one or both of those where it is currently in use). However, not all DSLs have obvious industry-standard equivalents - in our case, cADL and GDL.

[quote="joostholslag, post:28, topic:5333"]
At the same time, ADL is still hard to understand for clinicians due to e.g. meaningless codes for nodes. ADL would be so much easier to edit if instead of "at0009" the node were displayed as "systolic blood pressure".
[/quote]

That's true, hence the ADL3 proposal to move to symbolic keys plus asserted IS-A relationships (since once you leave the dotted code form, you lose the implicit knowledge that at0.4.20 is a specialisation of at0.4 etc.).
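To illustrate that last point, here is a minimal sketch in Python (hypothetical code, assuming nothing beyond the dotted-code convention itself) of how the IS-A relationship is implicit in dotted at-codes, and why it has to be asserted explicitly once symbolic keys are used:

```python
def is_specialisation_of(child: str, parent: str) -> bool:
    """With dotted at-codes, specialisation is implicit in the code itself:
    at0.4.20 specialises at0.4 because its segments extend the parent's."""
    child_parts = child.removeprefix("at").split(".")
    parent_parts = parent.removeprefix("at").split(".")
    return (len(child_parts) > len(parent_parts)
            and child_parts[:len(parent_parts)] == parent_parts)

assert is_specialisation_of("at0.4.20", "at0.4")    # implicit IS-A
assert not is_specialisation_of("at0.40", "at0.4")  # segment 40 is not 4

# With symbolic keys, the relationship is no longer derivable from the key,
# so it would have to be asserted separately, e.g. (hypothetical names):
is_a = {"systolic_supine": "systolic"}
assert is_a["systolic_supine"] == "systolic"
```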