openEHR Conformance / Conformance Levels / Conformance Scopes

Hi all,

I’ve been working for a while on the CONFORMANCE framework for HiGHmed, and now we are seeing interest in this area from Solit-Clouds and having some extended conformance discussions internally. So there are some areas we need to discuss with the community, since this will be a CONFORMANCE framework anyone can use to validate their implementations.

Current conformance framework

The current framework is focused on testing conformance with the Service Model via its only standard ITS spec, the REST API.

This framework is composed of a test suite design (ehrbase/doc/conformance_testing at develop · ehrbase/ehrbase · GitHub) and an implementation in Robot Framework (ehrbase/tests/robot at develop · ehrbase/ehrbase · GitHub).

The rationale behind those tests is:

  1. The design should be technology agnostic (it doesn’t mention any specific implementation or even a REST API).
  2. It is independent of the interchange format (it only states that the system under test should support at least one openEHR exchange/serialization format).
  3. It is based on the Service Model (openEHR Platform Service Model), so it defines test cases over each interface and function there.
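
To illustrate point 3, here is a minimal sketch (in Python, just for this post) of how one abstract test case from the design, “creating an EHR yields a new EHR with its own ehr_id”, could be bound to the REST API by a concrete runner. The base URL, the headers and the assumption that the ehr_id is a UUID are illustrative, not part of the test suite design:

```python
import uuid

import requests

# Base URL of the system under test; purely an assumption for this sketch.
BASE_URL = "https://cdr.example.com/openehr/v1"

def test_create_ehr_returns_unique_id():
    """Abstract test case: creating an EHR yields a new EHR with its own ehr_id.
    Concrete binding used here: POST /ehr from the openEHR REST API."""
    response = requests.post(
        f"{BASE_URL}/ehr",
        headers={"Accept": "application/json", "Prefer": "return=representation"},
    )
    assert response.status_code == 201            # success case: EHR created
    ehr_id = response.json()["ehr_id"]["value"]
    uuid.UUID(ehr_id)                             # assumption: ehr_id is a UUID
```

The same abstract case could equally be bound to another implementation of the Service Model; only the binding changes, not the test case.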

For Solit-Clouds it is important to test the specific formats an openEHR system supports.

This opens the door for different discussion topics:

a. Should testing for formats be part of a CONFORMANCE test suite?

b. Should we differentiate levels of CONFORMANCE? For instance, CONFORMANCE with functionality vs. support for exchange formats.

c. Should we differentiate scopes of CONFORMANCE? The format specifications are still part of the standard, but they are a different level of spec (ITS).

Another thing mentioned today was the requirements for openEHR data validation, that is, which data errors are checked by systems receiving openEHR data and how errors are reported back to the source.

Conceptually, I like to divide this into two categories: syntactic (validation against schemas) and semantic (validation against templates). In this context we are thinking mostly of COMPOSITIONs, but this should also include FOLDERs, EHR_STATUS, EHR, CONTRIBUTION, PARTY, etc.
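
As a rough sketch of that split (Python; the schema path is an assumption, and the semantic part is only a placeholder, not a real API):

```python
from lxml import etree

# Syntactic validation: check the serialized COMPOSITION against the published
# openEHR XML schema. The local schema path is an assumption for this sketch.
composition_schema = etree.XMLSchema(etree.parse("openehr_schemas/Composition.xsd"))

def validate_syntactic(composition_xml: bytes) -> bool:
    """Schema-level check: structure, datatypes, mandatory elements."""
    return composition_schema.validate(etree.fromstring(composition_xml))

def validate_semantic(composition, opt) -> list[str]:
    """Template-level check: occurrences, cardinalities, terminology bindings
    defined by the OPT. Placeholder only; would return the error reports that
    are sent back to the source."""
    raise NotImplementedError
```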

There is of course the challenge of having templates for classes that are not COMPOSITION, which is an area we have shelved for a while.

So there could be another area of CONFORMANCE about how data is validated and error reports are sent back to the source.

Internally we are trying to figure out what is important for different stakeholders.

For instance, developers would want to know if a system is compliant with a certain format, but is that required for checking CONFORMANCE for a procurement?

So is “technical conformance” at the same level as “functional conformance”? Or do we need to separate CONF specs in terms of the needs of the person trying to evaluate an openEHR implementation?

Besides those questions, I have an opinion on some of the topics.

For instance, about procurements, I think functionality CONFORMANCE is key here: the system should provide certain functionalities via one implementation (any) of the Service Model, because that is the spec that defines what an openEHR system should provide.

IMO it’s better to have an abstract / platform-independent conformance test suite specification, so the implementation can vary. Today we have Robot but, if needed, tomorrow this could be implemented in anything, even as an Insomnia or Postman script.
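
To make that concrete, the abstract test cases could be kept as plain data that any runner interprets; the field names below are illustrative and not taken from any openEHR spec:

```python
# A platform-independent test case expressed as plain data. A Robot, Postman /
# Insomnia or plain Python runner could each interpret the same definition.
CREATE_EHR_SUCCESS = {
    "id": "EHR_SERVICE-create_ehr-success",
    "interface": "EHR_SERVICE",            # Service Model interface under test
    "operation": "create_ehr",
    "preconditions": ["no EHR exists for the given subject"],
    "steps": ["invoke create_ehr without providing an ehr_id"],
    "expected": {
        "outcome": "success",
        "postconditions": ["a new EHR exists", "the returned ehr_id is unique"],
    },
}
```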

So that abstract specification alone could work for procurements, to express requirements and to check what is actually provided by a solution presented for that procurement.

Though, we need to improve the current SM to use it as a formal definition of what an openEHR system could/should provide (one single system might not implement all the services).

Then, another test suite is required for technical conformance checks. That would be a set of tests to verify the ITS specs. This includes: formats, data validation, validating schemas against BMM, …

From that, another interesting idea: for each openEHR spec component we might need to define a test suite! (to verify that systems implementing those are really implementing things right).

I know this is long, but we would like to hear from the community to get more input. I think we have enough experience in this area to move it further and be closer to saying “how compliant” a system is with openEHR. This is also a step closer to formalizing procurements, having an official openEHR system certification in place, and improving the required specs to get this right (especially SM and CONF).

What do you think?


Hi, Pablo

We have some comments about the CONFORMANCE tests:
Could we separate the tests into CORE, STANDARD, and OPTIONS by tags? We think it would allow us to concentrate on the CORE tests first, for example, and expand the test suite in a next step.
We also think it would be great to add some test suites that haven’t been included earlier, which would require some changes to the test documentation.
What do you think about it?


Good post - I agree with nearly everything you said. I’d suggest that, since actually performing a conformance test implicates at least one (or some particular mixture of) concrete serial formats, the serial formats are one dimension of a ‘technology profile’.

For example, vendor A’s system might run on Oracle only, and support JSON and XML.
Vendor B might run on any mainstream RDBMS, but only support XML.
Vendor C might only run over Linux/MongoDB/etc, and support JSON.
Vendor D - .Net, …
and so on.

From a procurement perspective, the particular technology infrastructure usually includes things like OS, DB, virtualisation / container management, and arguably, serial formats.

I’m sure this is not the only way to analyse this…


Hi Nataliya,

Using tags is a good way of separating what to test in a Robot implementation. In terms of CONFORMANCE, I think we still need to work on defining what is CORE, STANDARD and OPTIONS, or some other kind of tiered checks.
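
In Robot this maps directly to test tags. As a language-neutral sketch of the same idea (using pytest markers only as an analogy), a tiered run could look like the following; which tests belong to which tier is exactly what still needs to be defined:

```python
import pytest

# Analogy to Robot Framework tags: each test carries a tier marker, so a run
# can be restricted to one tier (e.g. `pytest -m core`). The markers would be
# registered in pytest.ini; the assignment of tests to tiers is illustrative.
@pytest.mark.core
def test_create_ehr():
    ...

@pytest.mark.standard
def test_create_directory_folder():
    ...

@pytest.mark.optional
def test_admin_delete_ehr():
    ...
```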

One consideration mentioned in the initial post is: different kinds of systems might implement different parts of the openEHR specification, so defining a CORE set of functionality that ANY system should provide is a little risky. If we want to be 100% formal about this, we might need to define a classification for those kinds of systems and then define what is CORE for each class. For instance, this could be a classification:

  • CDRs
  • Communication brokers
  • Analytics / BI
  • Information recording applications (e.g. an EMR)
  • Information visualization applications (e.g. reporting)

Then for a CDR, what is core might be:

  • openEHR VNA
  • Manage EHRs and Compositions
  • Versioning
  • Service Model (any implementation, and maybe only a subset)
  • Query Formalism (e.g. AQL or any other formalism)

But for an Analytics / BI system:

  • Service Model Client (any implementation)
  • Support EHR and Composition data (processing)

I guess the point is: CONFORMANCE validation is more complicated than just creating a set of tests.

Of course, we can discuss each test case idea to analyze whether it corresponds to CONFORMANCE validation or to another test suite, and improve the documentation. The key is just to keep the documentation abstract / platform independent and to avoid any hard rules on areas not all systems should implement. That is why we have statements like these:

  1. The server under test should support at least OPTs, 1.4 or 2, but OPT 1.4 is more frequent since modeling tools supporting it have been around for a long time. It could also support ADL, 1.4 or 2.

  2. The server should support at least one of the XML or JSON representations of COMPOSITIONs for committing data, and integrate the corresponding schemas (XML or JSON) to validate data syntactically (before validating against an OPT).

Note the current test cases have at least one success case, one fail case and the required border cases; they are not focused on testing every single possibility for each service, which is something for a different suite such as integration testing or unit testing. Conformance testing should verify that a system complies with openEHR, not that a system is free of bugs. We need to be strict about this to avoid generating problems for any implementer.
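
As a hypothetical illustration of that “one success, one fail, plus border cases” shape, for a “get EHR” service call (the base URL and the existing_ehr_id fixture are assumptions, and the expected status for a malformed id is deliberately asserted loosely):

```python
import uuid

import requests

BASE_URL = "https://cdr.example.com/openehr/v1"    # assumption for this sketch

def get_ehr(ehr_id: str) -> requests.Response:
    return requests.get(f"{BASE_URL}/ehr/{ehr_id}",
                        headers={"Accept": "application/json"})

def test_get_ehr_success(existing_ehr_id):
    assert get_ehr(existing_ehr_id).status_code == 200        # success case

def test_get_ehr_unknown_id():
    assert get_ehr(str(uuid.uuid4())).status_code == 404      # fail case

def test_get_ehr_malformed_id():
    # Border case: syntactically invalid id; 400 or 404 depending on the
    # reading of the spec, so the assertion is intentionally loose here.
    assert get_ehr("not-an-id").status_code in (400, 404)
```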

Hope that helps!

Hi Thomas, exactly.

That is why the documentation has this:

  1. The server under test should support at least OPTs, 1.4 or 2, but OPT 1.4 is more frequent since modeling tools supporting it have been around for a long time. It could also support ADL, 1.4 or 2.

  2. The server should support at least one of the XML or JSON representations of COMPOSITIONs for committing data, and integrate the corresponding schemas (XML or JSON) to validate data syntactically (before validating against an OPT).

The specification of the test suites is independent of any technology profile, but in the implementation we could have some parameters that program the execution of the tests with a specific profile, so what is tested is still the functionality and the specific SM implementation, while the specific serialization formats used will be the ones the System Under Test supports.

As an output we will get the validation of the functionality, independently of the tech profile, so that is comparable between vendors that have different tech profiles.
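
A small sketch of what such a profile parameter could look like (the field names and values are illustrative only):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TechProfile:
    """Technology profile of the System Under Test. Only the serialization
    formats change which payloads the conformance run uses; OS / database are
    recorded for the report but do not affect the functional assertions."""
    name: str
    composition_formats: tuple[str, ...]    # e.g. ("application/xml",)
    opt_versions: tuple[str, ...]           # e.g. ("1.4",)
    os: str = "any"
    database: str = "any"

# Vendor B from Thomas's example: XML only, any mainstream RDBMS.
VENDOR_B = TechProfile(
    name="Vendor B",
    composition_formats=("application/xml",),
    opt_versions=("1.4",),
    database="any mainstream RDBMS",
)

def content_type_for(profile: TechProfile) -> str:
    # The runner picks a serialization the SUT supports; the functional test
    # cases themselves stay identical across profiles.
    return profile.composition_formats[0]
```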

Yes, but we need to differentiate the request from the proposal. In general, the request doesn’t include a specific tech infrastructure. For instance, it might state “we need an openEHR CDR”, so they will need something, the CONF spec, to state what an openEHR CDR is or should contain. Then the proposal can add the result of the conformance test suite execution, using a specific tech profile, and state what infrastructure the vendor is offering.

Actually, the CONF spec helps to define the request, and the CONFORMANCE test implementation helps to validate the proposal. We need to be sure we are differentiating those two cases.

This helped me a lot, I hadn’t thought about tech profiles 🙂
