Visual Acuity Archetype Discussion

Hi Lars,

Great progress. A few more thoughts, if I may…

One of the key considerations in designing this archetype is how we define and represent a single ‘test’. The answer may vary depending on clinical expectations versus implementation and data sharing needs.

From my first principles perspective, a single test should include all variations and results for each eye assessed under a shared protocol or method—captured as multiple events representing each eye/state combination. While this may result in numerous events, it preserves the integrity of one coherent test result set conducted using a single, consistent protocol. That said, I’m also not an expert in eye testing, so maybe we do need to adapt the approach as you suggest.

I remain concerned, conceptually, that modelling each eye as a separate test instance is counterintuitive. It adds complexity for implementers, risks confusion by fragmenting related results, and could compromise clinical safety if test components derived from the same method are stored across separate instances.

Ultimately, what would other ophthalmologists expect? How would a single test typically be recorded on paper? And from a data extraction and querying perspective—particularly for national health records or shared care—what structure would be safest and most intuitive to use at scale?

The original archetype draft included a ‘Test name’ that effectively identified the selected protocol configuration. This ensured results were captured in a clearly defined context, supported coherent ordering and display in user interfaces, and provided essential context when shared with other systems or providers. I think there is value in adding this back in, if it is possible to create a relevant value set. It also fits with other test result archetype design patterns.

The protocol should therefore capture the fixed context or configuration for the test (e.g., chart used). If the chart changes, that likely constitutes a different test altogether. What other protocol-level elements would trigger a new test result definition, as opposed to simply recording parameter variations within a single test?

Within this structure, the ‘state’ attributes could then describe the variable conditions under which each result is captured.

And each full named test result could then comprise multiple events—each event recording results for a particular configuration of the ‘state’ variables, per eye tested.
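To make this concrete, here is a minimal sketch in Python of how one test could hold a fixed protocol and multiple per-eye/state events. It is purely illustrative; the class and field names (Protocol, State, Event, chart, eye_examined, correction) are placeholders of mine, not the actual archetype element names:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Purely illustrative: these names are placeholders, not the archetype's element names.

@dataclass
class Protocol:
    """Fixed configuration shared by every result in one test."""
    chart: str                       # e.g. "Snellen", "ETDRS"
    test_name: Optional[str] = None  # named protocol configuration, if reinstated

@dataclass
class State:
    """Variable conditions under which an individual result is captured."""
    eye_examined: str                # "Right eye" / "Left eye" / "Both eyes"
    correction: str                  # e.g. "Unaided", "Own glasses", "Pinhole"

@dataclass
class Event:
    """One result for one eye/state combination."""
    state: State
    result: str                      # e.g. "6/9"

@dataclass
class VisualAcuityTest:
    """A single coherent test: one protocol, many events."""
    protocol: Protocol
    events: List[Event] = field(default_factory=list)

# One test recorded as four events: two eyes x two correction states.
test = VisualAcuityTest(
    protocol=Protocol(chart="Snellen"),
    events=[
        Event(State("Right eye", "Unaided"), "6/12"),
        Event(State("Left eye", "Unaided"), "6/9"),
        Event(State("Right eye", "Own glasses"), "6/6"),
        Event(State("Left eye", "Own glasses"), "6/6"),
    ],
)
```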

Silje and I have had similar discussions with a group working on hearing test data—it’s complex! It’s not easy to explain in writing, so hopefully Silje can help convey the idea more clearly when you next meet in person.

I would strongly push for the 'Eye examined' to be mandatory - it makes no sense to record results without stating which eye was examined. If a legacy system holds identical results for each eye, even where the eye examined is not identified, then perhaps it could be recorded using a null flavour. However, if the results vary and we are not sure what they mean, it may be better to record the legacy data simply as a text blob.
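A rough sketch of that rule, if it helps (hypothetical function and field names, not the archetype's elements):

```python
# Hypothetical sketch of the legacy-data rule above; names are illustrative only.
def migrate_legacy_record(eye_identified: bool, right_result: str, left_result: str) -> dict:
    if eye_identified:
        # Eye is stated: record normal per-eye events.
        return {"events": [
            {"eye_examined": "Right eye", "result": right_result},
            {"eye_examined": "Left eye", "result": left_result},
        ]}
    if right_result == left_result:
        # Identical results, eye not stated: use a null flavour on 'Eye examined'.
        return {"events": [
            {"eye_examined": None, "null_flavour": "unknown", "result": right_result},
        ]}
    # Results vary and their meaning is unclear: keep the legacy data as a text blob.
    return {"legacy_text": f"Right: {right_result}; Left: {left_result}"}
```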

Remember this archetype is designed for use in all contexts. If the eye examined needs to be anonymised for research, then getting the research data set fit for use should be managed by an external export process, which would include removal of the 'Eye examined' data.

Again from first principles, so please correct me if I’m wrong, the terms ‘right eye position’ or ‘left eye position’ feel ambiguous to me. Position may also refer to the anatomical alignment and orientation of the eye, particularly in relation to the other eye, as well as to the standard gaze ‘positions’.
Do we need to create a cluster with all the parameters/adjustments described in the trial frame spec you suggested, and each type of lens used?
Does ‘Trial frame’ need to be included as part of the naming? Is it considered a recording of subjective refraction results?

If we design the archetype to separate the concepts, it does not preclude implementers from presenting them in a single UI widget. However, conflating value sets for quite separate concepts into a single data point in the archetype is problematic if we need to become more specific in the future. I’d advise against this.
If this were only free-text recording against ‘Confounding factors’, it would not be such an issue. In principle, the reliability of the participant is often critical to the interpretation of the results, e.g. a child who is distracted or a patient with early dementia. In that situation, the reliability is relevant for the whole test and possibly should be recorded in protocol rather than state!

Hope this is useful food for thought…

Regards

Heather


Hi Heather, thank you for your very helpful reply!

I’ll make sure we prioritize these issues at today’s meeting and will get back to you.

About the “Test Name”: I’m just worried about trying to broadly categorize a test like VA that is inherently “pick and mix” regarding its different components, as trying to pick which concepts should or should not be part of the category definition, and pre-coordinating their combinations, is a rabbit hole that it would be great to avoid.

If I had to choose just 2 concepts to combine with each other in the “Test name” categories, they would be “Type of Correction” and “Test Distance”, to have categories like “Far distance vision with own glasses”. But even that may become difficult in cases with 2 correction types, such as own glasses + pinhole. Plus we already have 2 separate elements for these, “Correction Type” and “Distance”, the latter of which is even able to record the distance as a quantity and not just “Far”.

Another way to explain my thinking: in my mind each VA test is a Lego house made up of 5-8 component “bricks”, and there are many different ways to partly take it apart, so it is very difficult to agree upon which bricks should “stick together” when being categorized. Depending on the clinical circumstances, ophthalmologists disagree about which combinations of concepts are the “important” ones that should be part of categorical definitions like “Test Name”; I don’t really see any consensus on this that would be useful to us at the moment.
There was an attempt to design an ophthalmic data exchange standard in the 90s in Germany that came up with 34 different pre-coordinated “Test names” for VA testing, which is not ideal if you ask me, and it is very difficult to agree on whether we should sort VA tests into 4, 34, or 120 categories. So my approach with the archetype was to completely split the VA test up into the “atomic” bricks, as these are much easier to agree upon than combinations. I hope that makes sense.

And if implementers want a drop-down menu of exactly 10 different “types” of VA test by combining 3 concept elements, can’t they construct that at the application level, with each drop-down option assigned to a specific combination of the 3 elements in the composition?
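For instance, a minimal sketch of such an application-level mapping; the preset labels and element names below are made up for illustration, not taken from the archetype:

```python
# Hypothetical application-level presets mapping drop-down labels to atomic element values.
TEST_TYPE_PRESETS = {
    "Unaided distance VA":          {"correction_type": "Unaided",     "test_distance": "Far"},
    "Distance VA with own glasses": {"correction_type": "Own glasses", "test_distance": "Far"},
    "Pinhole distance VA":          {"correction_type": "Pinhole",     "test_distance": "Far"},
    "Unaided near VA":              {"correction_type": "Unaided",     "test_distance": "Near"},
}

def apply_preset(label: str) -> dict:
    """Return the atomic element values the form should populate for a chosen preset."""
    return dict(TEST_TYPE_PRESETS[label])

# A UI drop-down selects "Pinhole distance VA" and fills two atomic elements in the composition.
print(apply_preset("Pinhole distance VA"))
```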

Or maybe I just need to understand better how such a “Test name” categorization could harmonize with more specific “one-concept” data elements within the same archetype?
Will keep you up to date on the discussions.

Again, many thanks. I really appreciate your advice and will rack my brain trying to come up with compromises.

Hi Lars,

Again, great discussion!

Happy to withdraw that suggestion if it makes no sense to you. The thought was generated by a previous iteration of the archetype containing ‘Test name’, but that was actually a category and may be best represented by events!
Test Name (Optional): The name of the exact visual acuity test performed. This generally represents a broad category of applied refraction. Specific refraction details can be described using ‘Refractive Correction’.
Comment: Details of the exact correction applied, or where multiple corrections apply, should be captured via ‘Refractive Correction’.
Choice of:

- Pinhole visual acuity [The test is performed with pinhole refraction applied.]
- Usual corrected visual acuity [The test is performed with the patient’s usual refractive correction, i.e. spectacles or contact lenses.]
- Best corrected visual acuity [The test is performed with the patient’s optimal refractive correction.]
- Unaided visual acuity [The test was performed without visual aid.]

I assume these concepts are also being considered?

Another way to consider this is to add a ‘Test label’, i.e. something the clinician user or system wants to use to label a particular or commonly used configuration of the ‘bricks’ you describe, so that all test configurations of the same kind can be queried or displayed.
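Roughly what I have in mind, as an illustrative sketch only (the labels and field names are invented, not archetype elements):

```python
from collections import defaultdict

# Hypothetical records carrying a free-text 'test_label' alongside the atomic elements.
records = [
    {"test_label": "Standard pinhole check", "eye": "Right eye", "result": "6/9"},
    {"test_label": "Standard pinhole check", "eye": "Left eye",  "result": "6/12"},
    {"test_label": "Pre-op best corrected",  "eye": "Right eye", "result": "6/6"},
]

# Group by label so every test recorded with the same configuration can be queried or displayed together.
by_label = defaultdict(list)
for record in records:
    by_label[record["test_label"]].append(record)

for label, rows in by_label.items():
    print(label, rows)
```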

Perhaps this is heading in the same direction as your comment below…

We haven’t got an example of this in any other archetype, but may be a useful consideration.

Regards

Heather