openEHR Conformance / Conformance Levels / Conformance Scopes

Hi all,

I’ve been working for a while on the CONFORMANCE framework for HiGHmed, and now Solit-Clouds has shown interest in this area too, so we are having some extended conformance discussions internally. There are some areas we need to discuss with the community, since this will be a CONFORMANCE framework anyone can use to validate their implementations.

Current conformance framework

The current framework is focused on testing conformance with the Service Model via its only standard ITS spec, the REST API.

This framework is composed of a test suite design (ehrbase/doc/conformance_testing at develop · ehrbase/ehrbase · GitHub) and an implementation in Robot Framework (ehrbase/tests/robot at develop · ehrbase/ehrbase · GitHub).

The rationale behind these tests is:

  1. The design should be technology agnostic (it doesn’t mention any specific implementation or even a REST API).
  2. It is independent of the interchange format (it just states the system under test should support at least one openEHR exchange/serialization format).
  3. It is based on the Service Model (openEHR Platform Service Model), so it defines test cases over each interface and function there.

For Solit-Clouds it is important to test the specific formats an openEHR system supports.

This opens the door for different discussion topics:

a. Should testing for formats be part of a CONFORMANCE test suite?

b. Should we differentiate levels of CONFORMANCE? For instance, conformance with functionality vs. support for exchange formats.

c. Should we differentiate scopes of CONFORMANCE? The format specifications are still part of the standard, but are a different level of spec (ITS).

Another thing mentioned today was the requirements for openEHR data validation, that is, which data errors are checked by systems receiving openEHR data and how errors are reported back to the source.

Conceptually, I like to divide this into two categories: syntactic (validation against schemas), and semantic (validation against templates). In this context we are thinking mostly of COMPOSITIONs, but this should include FOLDERs, EHR_STATUS, EHR, CONTRIBUTION, PARTY, etc.

There is of course the challenge of having templates for classes that are not COMPOSITION, which is an area we have shelved for a while.

So there could be another area of CONFORMANCE about how data is validated and error reports are sent back to the source.

Internally we are trying to figure out what is important for different stakeholders.

For instance, developers would want to know if a system is compliant with a certain format, but is that required when checking CONFORMANCE for a procurement?

So is “technical conformance” at the same level as “functional conformance”? Or do we need to separate CONF specs in terms of the needs of the person trying to evaluate an openEHR implementation?

Besides those questions, I have an opinion on some of the topics.

For instance, regarding procurements, I think functional CONFORMANCE is key here: the system should provide certain functionality via one implementation (any) of the Service Model, because that is the spec that defines what an openEHR system should provide.

IMO it’s better to have an abstract / platform independent conformance test suite specification, so the implementation can vary. Today we have Robot, but, if needed, tomorrow this could be implemented in anything, even as an Insomnia or Postman script.

So that alone could work for procurements to express requirements and to check what is actually provided by a solution presented for that procurement.

Though we need to improve the current SM to use it as a formal definition of what an openEHR system could/should provide (one single system might not implement all the services).

Then another test suite is required for technical conformance checks. That would be a set of tests to verify ITS specs. This includes formats, data validation, validating schemas against BMM, …

From that, another interesting idea: for each openEHR spec component we might need to define a test suite! (to verify that systems implementing those components are really implementing things right).

I know this is long, but we would like to hear from the community to get more input. I think we have enough experience in this area to move it further and get closer to saying “how compliant” a system is with openEHR. This is also a step closer to formalizing procurements, having an official openEHR system certification in place, and improving the required specs to get this right (especially SM and CONF).

What do you think?

Hi, Pablo

We have some thoughts about the CONFORMANCE tests:
Could we separate the tests into CORE, STANDARD, and OPTIONAL groups using tags? We think this would allow us to concentrate on the CORE tests first, for example, and expand the test suite in a next step.
We also think it would be great to add some test suites that haven’t been included earlier, so some changes to the test documentation would be required.
What do you think about it?

Good post - I agree with nearly everything you said. I’d suggest that, since at least one concrete serial format (or some particular mixture of them) is implicated in actually performing a conformance test, the serial formats are one dimension of a ‘technology profile’.

For example, vendor A’s system might run on Oracle only, and support JSON and XML.
Vendor B might run on any mainstream RDBMS, but only support XML.
Vendor C might only run over Linux/MongoDB/etc, and support JSON.
Vendor D - .Net, …
and so on.

From a procurement perspective, the particular technology infrastructure usually includes things like OS, DB, virtualisation / container management, and arguably, serial formats.

I’m sure this is not the only way to analyse this…

Hi Nataliya,

Using tags is a good way of separating what to test in a Robot implementation. In terms of CONFORMANCE, I think we still need to work on defining what is CORE, STANDARD and OPTIONS, or some other kind of tiered checks.

One consideration mentioned in the initial post is that different kinds of systems might implement different parts of the openEHR specification, so defining a CORE as a set of functionality ANY system should provide is a little risky. If we want to be 100% formal about this, we might need to define a classification for those kinds of systems, then define what is CORE for each class. For instance, this could be a classification:

  • CDRs
  • Communication brokers
  • Analytics / BI
  • Information recording applications (e.g. an EMR)
  • Information visualization applications (e.g. reporting)

Then for a CDR, what is core might be:

  • openEHR VNA
  • Manage EHRs and Compositions
  • Versioning
  • Service Model (any implementation, and maybe only a subset)
  • Query Formalism (e.g. AQL or any other formalism)

But for an Analytics / BI system:

  • Service Model Client (any implementation)
  • Support EHR and Composition data (processing)

I guess the point is: CONFORMANCE validation is more complicated than just creating a set of tests.

Of course, we can discuss each test case idea to analyze whether it corresponds to CONFORMANCE validation or to another test suite, and improve the documentation. The key is to keep the documentation abstract / platform independent and to avoid hard rules in areas not all systems should implement. That is why we have statements like these:

  1. The server under test should support at least OPTs, 1.4 or 2, but OPT 1.4 is more frequent, since modeling tools supporting it have been around for a long time. It could also support ADL, 1.4 or 2.

  2. The server should support at least one of the XML or JSON representations of COMPOSITIONs for committing data, and integrate the corresponding schemas (XML or JSON) to validate data syntactically (before validating against an OPT).
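As a rough illustration of statement 2, the syntactic step can be as simple as checking the serialized instance before any template validation. This is a minimal sketch with a hand-rolled, illustrative subset of required attributes; a real implementation would validate against the published openEHR XML or JSON schemas instead:

```python
# Minimal sketch of a syntactic validation step for a COMPOSITION
# serialized as JSON, run before semantic validation against an OPT.
# REQUIRED_KEYS is an illustrative subset, NOT the real openEHR schema.
import json

REQUIRED_KEYS = {"_type", "name", "archetype_details", "content"}

def syntactic_errors(serialized: str) -> list:
    """Return a list of syntactic errors; an empty list means the instance passed."""
    try:
        instance = json.loads(serialized)
    except json.JSONDecodeError as e:
        return [f"malformed JSON: {e.msg}"]
    if instance.get("_type") != "COMPOSITION":
        return ["_type is not COMPOSITION"]
    missing = REQUIRED_KEYS - instance.keys()
    return [f"missing attribute: {k}" for k in sorted(missing)]
```

Only instances with an empty error list would proceed to validation against the OPT.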

Note the current test cases have at least one success case, one fail case and the required border cases, and are not focused on testing every single possibility for each service; that is something for a different suite, like integration or unit testing. Conformance testing should verify that a system complies with openEHR, not that a system is free of bugs. We need to be strict about this to avoid generating problems for any implementer.
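The “one success, one fail, plus border cases” shape can be sketched like this; the service name and case values are hypothetical illustrations, not taken from the suite:

```python
# Hypothetical sketch of how one SM conformance item expands into the
# minimal case set: one success, one fail, and the required border cases.
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    kind: str          # "success" | "fail" | "border"
    expected_ok: bool  # whether the SUT should accept the request

def cases_for_get_ehr() -> list:
    """Minimal case set for an I_EHR.get_ehr-style conformance item."""
    return [
        Case("existing EHR id", "success", True),
        Case("unknown EHR id", "fail", False),
        Case("malformed EHR id", "border", False),
    ]
```

The point is that the case set is closed: exhaustive input exploration stays in unit/integration suites.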

Hope that helps!

Hi Thomas, exactly.

That is why the documentation has this:

  1. The server under test should support at least OPTs, 1.4 or 2, but OPT 1.4 is more frequent, since modeling tools supporting it have been around for a long time. It could also support ADL, 1.4 or 2.

  2. The server should support at least one of the XML or JSON representations of COMPOSITIONs for committing data, and integrate the corresponding schemas (XML or JSON) to validate data syntactically (before validating against an OPT).

The specification of the test suites is independent of any technology profile, but in the implementation we could have parameters that configure the execution of the tests with a specific profile. What is tested is still the functionality of the specific SM implementation, but the serialization formats used will be the ones the System Under Test supports.
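The parameterization idea can be sketched like this; all names here (TechProfile, content_type_for, the vendor profiles) are hypothetical illustrations, not part of any spec:

```python
# Sketch of a technology profile driving the conformance test
# implementation: the test logic stays format-agnostic, and the
# profile selects which serialization the SUT supports.
from dataclasses import dataclass

CONTENT_TYPES = {"json": "application/json", "xml": "application/xml"}

@dataclass(frozen=True)
class TechProfile:
    """Serialization capabilities declared for a System Under Test."""
    name: str
    formats: tuple  # e.g. ("json",), ("xml",), ("json", "xml")

def content_type_for(profile: TechProfile) -> str:
    """Pick the first format the SUT supports; the functional checks
    are identical regardless of which one is chosen."""
    for fmt in profile.formats:
        if fmt in CONTENT_TYPES:
            return CONTENT_TYPES[fmt]
    raise ValueError(f"profile {profile.name} supports no known format")
```

So a vendor supporting only XML and a vendor supporting only JSON run the same functional cases, and their results stay comparable.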

As an output we get a validation of the functionality that is independent of the tech profile, so it is comparable between vendors that have different tech profiles.

Yes, but we need to differentiate the request from the proposal. In general, the request doesn’t include a specific tech infrastructure. For instance, it might state “we need an openEHR CDR”, so they will need something, the CONF spec, to state what an openEHR CDR is or should contain. Then the proposal can add the results of the conformance test suite execution, using a specific tech profile, and state what infrastructure the vendor is offering.

Actually the CONF spec helps to define the request, and the CONFORMANCE tests implementation helps to validate the proposal. We need to be sure we are differentiating those two cases.

This helped me a lot, I hadn’t thought about tech profiles 🙂

Hi, everyone!

It’s not clear to me what exactly we understand by a CONFORMANCE test. If we have said it’s not about bug detection, then what about a ‘bug’ that is a deviation from the specification (for example, an invalid uid that doesn’t match the 8-4-4-4-12 pattern)? Is that a conformance test or bug checking? Can you describe the particular qualities of a CONFORMANCE test so we understand each other? I think we are speaking different languages now.
Can we use one standard format for checking all vendors (for example, canonical JSON)?
Could you also describe the main problem with conformance testing? It’s not clear to me at all, because we have a CONFORMANCE specification that describes conformance behavior.

Hi @natfliusman

The specification has different levels of constraints, and testing has different levels too.

Some considerations:

  1. We need to separate data (formats, validation) from behavior.
  2. There is internal behavior and external (SM/API).
  3. What is tested on a test layer shouldn’t, in general, overlap what is tested in another layer.
  4. In unit tests, internal behavior and data should be tested (relating to each component independently of the other components).
  5. In integration tests, internal behavior, data and external behavior should be tested (relating to many components working together). This could be anything you need to verify your system works as expected, and can include your specific APIs and data exchange formats. It’s your choice whether you want to retest things already covered by unit tests for internal behavior and data.
  6. In conformance tests, only external behavior is tested, and it is tested against a definition, which is the SM.
  7. Since the SM could be implemented in different ways by an ITS spec (today we only have the REST API), the definition of the conformance test cases [CONF TEST SUITE DEF] shouldn’t include ITS-specific test cases; only the SM is considered.
  8. So the [CONF TEST SUITE DEF] should be platform independent (it works for any openEHR system, and the test cases could be implemented on any technology).
  9. The [CONF TEST SUITE DEF] should have test cases for each conformance item defined in the SM, no more, no less. Because of this and 7., the cases in the [CONF TEST SUITE DEF] do not include any hard requirements on exchange formats.
  10. Since external behavior in the SM can be implemented in many ways, technology profiles need to be included in the conformance test implementation [CONF TEST SUITE IMPL]; as we mentioned, this includes the exchange formats between the [CONF TEST SUITE IMPL] and the [SYSTEM UNDER TEST].
  11. When running the [CONF TEST SUITE IMPL] against a [SYSTEM UNDER TEST], a technology profile should be selected, since a system might only support one exchange format, which is enough for testing external behavior conformance.
  12. For conformance testing, the minimal requirement would be to have a positive/success or negative/fail result, so if something fails, a general reason why it fails is enough. The specific reason, like a wrong UUID format, is not so important for conformance at this moment, since that is not included in the SM; on the other hand, that should already be tested by unit/integration tests. That is why the goal of conformance is not to find bugs in an implementation, but to generally validate that an implementation is complete and behaves as expected.
  13. In the example of the UUID format, I wouldn’t add a test case for that in the [CONF TEST SUITE DEF] if it is not explicitly mentioned in the SM, though I would have unit and integration test cases for my system to verify it. On the other hand, nothing stops you from creating your own [CONF TEST SUITE DEF] including that case at the conformance test layer. This decision is totally arbitrary. I guess the point is: if we do this for all the test cases we have in the integration tests, then we are mixing testing scopes, and every time we add a new integration test to our suite, that will count for conformance, and sometimes it shouldn’t.
  14. Also, what works for you might not work for others, and the conformance layer should work for any implementation. A case would be: some implementations could use OIDs instead of UUIDs, because both are UIDs. An OID would be considered an invalid UUID in terms of format, but is a valid UID. So how can we make this work for any implementation without biasing the conformance spec? IMO, and I might be wrong, it helps to keep the scopes between unit/integration/conformance separated and to keep the [CONF TEST SUITE DEF] platform independent.
  15. Though that could be extended, the place to extend it is in the SM and CONF specs, not in the [CONF TEST SUITE DEF], which is based on the SM and CONF specs and defines at least one success case, one fail case, and the required border cases for each conformance item in the SM.
  16. To your question: you can’t use one format to test all vendors; that is why the technology profiles in the [CONF TEST SUITE IMPL] are needed, so before running the conformance tests you can say “the SUT uses these formats”. But that belongs in the [CONF TEST SUITE IMPL], not in the [CONF TEST SUITE DEF], which is platform independent. Remember formats are ITS specs, so they are platform specific.
  17. The CNF spec is a work in progress, incomplete, and not viable for current use. There was an open discussion on how the specific test cases should be defined; the initial idea was to post scripts with requests to the REST API, which at some point helps move things forward, but at the same time ties conformance testing to an ITS spec, the REST API, and that IMO is not correct. CNF should be tied to the SM. Actually, the work I did for HiGHmed to create a [CONF TEST SUITE DEF] was initially based on the CNF spec, but extended it a lot, since it was incomplete and we needed a full CNF spec to verify conformance of the systems used in HiGHmed. The side goal of this work (ehrbase/doc/conformance_testing at develop · ehrbase/ehrbase · GitHub) is to help finalize the CNF spec, since my work is like CNF but more complete.

This is the whole rationale behind it. Hope that helps.
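Point 14 above (accepting any UID subtype rather than only UUIDs) could be sketched like this; the patterns are simplified illustrations, not the exact RM grammar:

```python
# Sketch of an unbiased UID check: at conformance level a UID may be a
# UUID or an OID (both are UID subtypes in the RM), so a test that only
# accepts the 8-4-4-4-12 UUID pattern would wrongly fail an OID-based
# implementation. Patterns are simplified for illustration.
import re

UUID_RE = re.compile(r"^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$")
OID_RE = re.compile(r"^\d+(\.\d+)+$")

def is_valid_uid(value: str) -> bool:
    """Accept any UID subtype, not just UUIDs."""
    return bool(UUID_RE.match(value) or OID_RE.match(value))
```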

I think we can make a long story short:
Users need to know what a CDR provides regarding functionality. This can be tested using any format, but the formats themselves can be subject to testing. We then have to decide whether we want to be able to execute all tests with all the different formats, or use one format and then do some testing for the serialisation formats as well. Of course this decision will have an impact on the measure of completeness (the more tests we define for one format, the more will fail when that format is not supported but a different one is).

Hi @natfliusman I was talking to Birger and he mentioned Solit was interested in data validation conformance testing.

This is my first try at this kind of conformance test spec: ehrbase/CONSTRAINTS.md at feature/529_constraint_tests · ehrbase/ehrbase · GitHub

This is not complete, but after trying several options I think the structure is correct and the rest should be very similar.

Please let me know if that helps and if you have any questions.

Best,
Pablo.

Hi @pablo
I’m reading your test spec and I have a question about COMPOSITION:
You’ve written about content in the titles (2.1–2.12), but in the paragraphs you’re describing a context entry. I get the idea about isolating tests, but I don’t understand which kind of entity you had in mind, so it’s a little confusing to me. Could you describe this a little bit more?
And could you describe how the context entry is affected by the content entry, if I have understood you correctly? Is there any dependence or correlation?
Same question with OBSERVATION.protocol: how does this affect the OBSERVATION itself, and why have you used it for tests?
Would you suggest combining the EVENT tests, for example, with the parent-class HISTORY tests and their cardinality, or are these expected to stay completely isolated?

Best,
Nataliya

Hi @natfliusman

In the COMPOSITION section, the test cases describe the intended constraints present in the archetype/template for the COMPOSITION attributes.

Then, in the table, there are different combinations of COMPOSITION instances that are committed to a server. These combinations relate to the constrained attributes in the test case description; for the COMPOSITION, those are content and context. The values are independent of each other; for instance, “no entries”, “no context” would be an empty COMPOSITION. “Entries” refers to the “content” attribute, which is in the column title of the table. So each line in the table describes a COMPOSITION instance to be tested against the archetype/template constraints described in the test case title.

The same happens for each internal class. OBSERVATION has three attributes that can be archetyped: data, state and protocol. So the test case title says which constraints are defined over those attributes, and the tables specify combinations of values for each of those three attributes of OBSERVATION. The same pattern is repeated for EVENT, etc.

I’m not sure what you mean by “how the context entry is affected by the content entry”, because ‘content’ and ‘context’ are valued independently.

Hope that helps.

Hi @pablo !

We would like to thank you for the specifications of the validation tests. We analyzed them further, encountered some difficulties and technical limitations, and decided to rework the tests in the way we see them. @natfliusman and I have prepared a list of tests based on your specification. Could you please review our tests, and also look at the template we have put together for their implementation?

Best,
Aleksey

Hi Aleksey, from what I understand reading your doc, you are defining cases for data validation against the RM, not against the template.

For instance, take into account the first table on page 2. The attributes language, territory and composer are not part of the template. Category is the only one that is part of the template, because it tells the type of composition (event, persistent).

For the section/entry parts I see test cases similar to the ones I defined. It would be interesting to know what you modified and why; maybe I missed some test cases. Do you have a diff that would allow me to compare your doc to mine?

One comment about formality in test case design: in the Preconditions, instead of specifying actions, it’s better to specify the state of the SUT before executing the test case. With that specified, each test case can create its own setup operation, then run the steps.

Thanks!

Hi @amolkov I’m working on a script that takes an OPT and generates alternatives for the internal constraints of existence and cardinality; later I will extend it to the rest.

Not sure if this would be useful for you, since you picked a different path for testing data validation. I think there is a huge number of possibilities for OPT constraints, and creating test data by hand just consumes a lot of time.

The big goal would be to have a generator of complete OPTs with alternative constraints, without providing an OPT sample, but it is just easier to start with a sample and make the script produce all the possible combinations of constraints.
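As a sketch of the kind of alternative generation involved, for a single cardinality interval one can derive neighbouring intervals that flip the validity of previously valid data. The strategy below is my illustration, not the actual script, which would rewrite the constraint nodes inside the OPT itself:

```python
# Sketch of generating alternative cardinality intervals for one OPT
# attribute constraint, so combinations don't have to be written by
# hand. upper=None represents an unbounded interval (0..*).
def cardinality_alternatives(lower, upper):
    """Return the original interval plus nearby intervals that tighten
    or loosen each bound, each of which yields a new test variant."""
    alts = [(lower, upper)]
    if lower > 0:
        alts.append((lower - 1, upper))      # loosen the lower bound
    if upper is None or lower + 1 <= upper:
        alts.append((lower + 1, upper))      # tighten the lower bound
    if upper is not None:
        alts.append((lower, upper + 1))      # loosen the upper bound
        if upper > lower:
            alts.append((lower, upper - 1))  # tighten the upper bound
    else:
        alts.append((lower, lower))          # make unbounded bounded
    return alts
```

Each alternative interval, applied back into the sample OPT, produces one variant template to validate data against.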

@amolkov I have added a JSON Composition generator that generates Compositions with errors to be able to test the cardinality constraints. I will try to add more error generators to test other constraints.