dependent evaluations

Good to see the lists hotting up with some questions..

here is one I have had to ponder recently.

I have an evaluation being made directly from an observation.
Specifically - I have a particular type of athletic session
from which a 'maximum power' number is being derived.
Whilst the calculation is automatic, the coach is
'evaluating' the automatically generated score before it is
put into the system (to make sure that the number is
sensible given what he knows of the athlete).

Some questions

1) is this even an evaluation?? Where does observation finish
   and evaluation start in a world where computers do more and
   more analysis..

2) how can this linkage between evaluation and observation be
   recorded. If someone does a delete on the composition
   containing the Observation (perhaps they uploaded the wrong
   athlete data), the evaluation should also clearly be deleted. Can
   this linkage be expressed?

3) would tracking linkages cause us to descend into madness??
    Obviously many evaluations are based on the sum total of information
    present in the EHR - the clinician doesn't explicitly indicate each
    piece of information that led to the evaluation.. so the general case
    seems to not require linkage data. Is the case of
    Specific Observation -> Specific Evaluation even worth worrying about?

Andrew

My go at the correct interpretation.

Good to see the lists hotting up with some questions..

here is one I have had to ponder recently.

I have an evaluation being made directly from an observation.
Specifically - I have a particular type of athletic session
from which a ‘maximum power’ number is being derived.

The ‘evaluation’ is done by an algorithm and creates a second (derived) observation (‘maximum power number’) via post-processing.
For the healthcare provider (coach) this was and remains an observation.
This observation sits in the In-Tray of the record.

It is my opinion that only the owner (a human) can commit data and information to the record proper.
Admission to the record must be a conscious human act.
Until formally committed it stays in the In-Tray, in a state of organised limbo (status = received, not committed).

Whilst the calculation is automatic, the coach is
‘evaluating’ the automatically generated score before it is
put into the system (to make sure that the number is
sensible given what he knows of the athlete).

The coach sees the data item in the In-Tray with the state ‘received, not committed’ and commits it.
The next step is that he makes a judgement about whether or not to change the value.
This does or does not create a new version of the value.
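The commit-then-amend workflow Gerard describes can be sketched as a small state machine. This is a minimal illustration only - the class and field names are invented for the sketch and are not openEHR RM types.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the In-Tray workflow: data arrives as
# "received, not committed"; a human commits it; any later change
# creates a new version. Names are illustrative, not RM class names.

@dataclass
class DataItem:
    value: float
    status: str = "received"              # "received" -> "committed"
    versions: list = field(default_factory=list)

    def commit(self, committer: str) -> None:
        # Admission to the record is a conscious human act
        self.status = "committed"
        self.versions.append((committer, self.value))

    def amend(self, committer: str, new_value: float) -> None:
        # Changing the value after commit creates a new version
        assert self.status == "committed"
        self.value = new_value
        self.versions.append((committer, new_value))

item = DataItem(value=412.0)   # automatically derived score
item.commit("coach")           # coach accepts the number into the record
item.amend("coach", 405.0)     # coach corrects it -> second version
```

The key point the sketch makes is that the automatic derivation never moves data past "received" on its own; only the human act does.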

Some questions

  1. is this even an evaluation?? Where does observation finish
    and evaluation start in a world where computers do more and
    more analysis..

We must remember: the EHR is not an exact record of everything that happened with and around the patient in the healthcare provision process.
In the EHR we do not have to model the complete healthcare process in its most fine-grained detail.
It is what gets documented by a responsible person.

In the use case above we only see observations with a committed and not-committed state,
observations that are or are not changed (versioned).

Evaluation is what happens as the result of the interpretation in the head of a healthcare provider (coach).
The observation is processed in the context of the patient, and of the expertise and knowledge of the coach, with the aim of reducing the list of possible diagnoses, making a plan, or setting a goal.

Computers are computers. In general a human must always be in the loop to commit data to the EHR.
EHRs support the obligation of a responsible person to document the treatment of their patient.

  2. how can this linkage between evaluation and observation be
    recorded. If someone does a delete on the composition
    containing the Observation (perhaps they uploaded the wrong
    athlete data), the evaluation should also clearly be deleted. Can
    this linkage be expressed?

Any change will lead to a new version of the composition. Reasons for change can be recorded via the functionality in the Reference Model.
In addition the RM allows semantic links between any parts of the record. This is an extra facility to record which things recorded earlier played a role in a decision, a change, etc.
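A semantic link of the kind Gerard mentions can be sketched as a small value object, loosely modelled on the openEHR LINK class (which carries a meaning, a type, and a target reference). The field types here are simplified strings rather than RM data values, and the URI is invented for illustration.

```python
from dataclasses import dataclass

# Minimal sketch of a semantic link between two parts of a record,
# loosely modelled on the openEHR LINK class. The target URI below
# is a hypothetical example, not a real record reference.

@dataclass
class Link:
    meaning: str   # why the link exists, e.g. "based on"
    type: str      # category of link, e.g. "evidence"
    target: str    # URI of the record item linked to

# An Evaluation could carry a link back to the Observation it rests on:
evaluation_link = Link(
    meaning="based on",
    type="evidence",
    target="ehr://1234/compositions/session-2007-10-12#max-power",
)
```

If the target composition is deleted (new deleted version), anything holding such a link can be found and reviewed, which addresses Andrew's cascade-delete concern at the application level.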

  3. would tracking linkages cause us to descend into madness??
    Obviously many evaluations are based on the sum total of information
    present in the EHR - the clinician doesn’t explicitly indicate each
    piece of information that led to the evaluation.. so the general case
    seems to not require linkage data. Is the case of
    Specific Observation → Specific Evaluation even worth worrying about?

Many times, links between observed facts and evaluations are implicit.

The composition indicating, as an evaluation, that the patient is dead is by itself enough.
I imagine that in certain circumstances (contexts) extra information must be given.
Extra information explicitly (fully structured): this can take the form of semantic links to supporting items in the record, when we have to (or want to) record the reasoning of the author explicitly.
Extra information implicitly (not or partially structured): this extra information can also be given in a comment field, or structured via a specific template that does not use semantic links.

Remark: Although the openEHR RM has the feature of inserting semantic links, this feature is not present in most systems.

Further to Gerard’s comments:

The value ‘maximum power’ is always an observation: a value that it is possible to derive from the data at the device level - but you want the data to be entered specifically by the clinician. I would then have the device data on power (information provider = device), and then another observation of power holding the maximum over the period (i.e. an interval event), with the information provider set to the clinician who is saving the entry.

As Gerard says - if the clinician commits the data, it is possible for the application to ask them to verify the max power specifically as part of this process. In the record the verification would then be implicit - but the application would know it was explicit.
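Sam's two-observation modelling can be sketched as follows: the raw power data attributed to the device, and a second observation holding the maximum over the interval, attributed to the clinician who commits it. The class, field names, and provider identifiers are illustrative stand-ins, not openEHR RM classes.

```python
from dataclasses import dataclass

# Sketch of the two-observation pattern: one observation per
# information provider. Names are invented for illustration.

@dataclass
class Observation:
    name: str
    value: float
    units: str
    information_provider: str

device_samples = [300.0, 350.0, 420.0, 410.0, 390.0]  # raw device data (W)

device_obs = Observation(
    "power", max(device_samples), "W", "device:ergometer"
)
clinician_obs = Observation(
    "maximum power", device_obs.value, "W", "clinician:coach"
)
```

The two entries carry the same number, but the attribution differs: one records what the device reported, the other records what the clinician vouched for.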

No need for an evaluation - if you want to state in the assessment that the readings appear valid, this could be recorded as a clinical synopsis.

Cheers, Sam


Sam, and colleagues,

A somewhat tangential addendum to Sam's observations in the dependent
evaluations thread, and because we've been silent on Devices since the
Brisbane meetings (31 August 2007 02:05:39 UTC).

Here's an update on how we (ISO TC215 WG7 - Devices, CEN TC251 WGIV -
Devices, IEEE11073, HL7-DEV-SIG, IHE-PCD and (indirectly) Continua)
might anticipate the ISO/IEEE 11073 medical device terms getting into
clusters then archetypes - without getting into the quagmire that David
Rowed alluded to.

Although we believe the first implementation focus will be on simple
devices such as NIBP, SpO2, Glucose, etc. we are aware that the generic
representation has to be adequate to permit a linked set of device
observations to be conveyed when presented by a composite device (e.g.
NIBP + SpO2). The 11073 terminology explicitly differentiates measured
data from computed data in the terms, but the source of the computation
is important because it can make a big difference to the uncertainty
applied to any reported metric.

We are at present examining the flattened device model that we report in
HL7 v2.5 to see if there is more that we can safely strip out for the
purposes of clustering into clinical archetypes. However, the end-point
of this is *likely* to be that for very simple devices like those for
NIBP there will be 3 or 4 metric values (for Sys, Mn, Dia, Puls), a
timestamp (ISO format), a device attribution (NIBP monitor) and a
globally unique device ID (EUI-64). For clinical purposes the last two
are not important until traceability is required.
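Melvin's flattened NIBP record could look something like the sketch below: a few metric values, an ISO-format timestamp, a device attribution, and an EUI-64 device ID. Every key name, value, and the device ID itself is invented for illustration; this is not a published 11073 or openEHR structure.

```python
# Hypothetical flattened NIBP cluster, per the description above.
# Units use UCUM-style strings; all values are made up.

nibp_cluster = {
    "systolic":  {"value": 120, "units": "mm[Hg]"},
    "mean":      {"value": 93,  "units": "mm[Hg]"},
    "diastolic": {"value": 80,  "units": "mm[Hg]"},
    "pulse":     {"value": 72,  "units": "/min"},
    "timestamp": "2007-10-19T16:16:00Z",          # ISO 8601
    "device":    "NIBP monitor",                  # device attribution
    "device_id": "70-B3-D5-FF-FE-12-34-56",       # EUI-64 (example)
}
```

As noted, the last two fields matter only when traceability is required; a clinical view could drop them without losing the measurement.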

Why is this taking us so long? Well, because we don't like to have a
dozen different solutions to a single problem, so (as implied by the
long list of participating parties above) we are consulting across the
board. The last stage of the first round is likely to be at the
Continua Summit next week.

I hope that this update helps - the next stage is likely to be a couple
of structured samples based on a set of generic cluster data.

Best regards,

Melvin.

In mail of Fri, 19 Oct 2007 16:16:00, Sam Heard wrote:

The value maximum power is always an observation: a value that is possible to derive
from the data at the device level - but you want to enter the data specifically by the
clinician. I would then have the device data on power - (Information provider = device) and
then another observation of power which has the maximum (over the period ie an interval event) that has the Information provider set to the clinician that is saving the entry.

I probably over simplified my example - the maximum power is not the straight
'maximum' as recorded over the time period by the device - it is a computation
based on rolling averages of power over a time period, combined with
various fitness data also in the patient's record as
observations (measures of fitness - VO2max etc).

Does it become an evaluation if it is a computed score based on other
observations (with the observations not necessarily made at the same time, i.e.
they only do the fitness scoring observations every 6 months and then use them
for all the computations of 'maximal power' in that period)?
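The kind of derivation Andrew describes - a best rolling average rather than a straight maximum - can be sketched as below. The window length, sample values, and function name are invented for illustration; his actual computation also folds in fitness data, which is omitted here.

```python
# Hypothetical sketch: 'maximum power' as the best mean power over
# any contiguous window of samples, not the single highest reading.

def max_rolling_average(samples, window):
    """Best mean over any contiguous run of `window` samples."""
    if len(samples) < window:
        return None
    return max(
        sum(samples[i:i + window]) / window
        for i in range(len(samples) - window + 1)
    )

power = [300, 350, 420, 410, 390, 360, 430, 440, 400]   # watts, made up
peak = max_rolling_average(power, window=3)
```

Note that the straight maximum of these samples is 440, while the best 3-sample rolling average is lower - which is exactly why the derived number is not just the device's recorded maximum.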

Andrew

Hi Andrew

Yes - that sounds a little more complicated. I am not concerned about derivation - we use a height measured years earlier as a basis for BMI with most people, for instance - but rather whether the measurement relates to a point in time and could itself have some summary statement made about it. If it is a genuine summary based on human opinions, it stands for a very long period (such as employment) and you might like to have an aggregated summary with more information attached - then I would see that as moving towards an evaluation. I would have to talk about this particular measurement more, but helpful questions where yes → observation and no → evaluation are:

  • Could you aggregate this result in terms of modal, sum, average etc?
  • Could you reasonably repeat the ‘finding’ tomorrow and the day after?
  • Does the ‘finding’ have a very clear unitary time point?
  • Would it be silly to see this value having persistent relevance without repeating it?
    There are probably many more…

Sam
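The yes/no heuristics Sam lists above can be sketched as a simple tally; the answers come from a human, and the function only counts them. Entirely illustrative - neither the function nor the threshold comes from openEHR.

```python
# Hypothetical helper: a majority of "yes" answers to Sam's heuristic
# questions leans toward Observation, otherwise toward Evaluation.

def leans_observation(answers):
    """answers: dict of heuristic -> bool (True = yes)."""
    yes = sum(answers.values())
    return yes > len(answers) / 2

answers = {
    "aggregatable (mode/sum/average)": True,
    "repeatable tomorrow and the day after": True,
    "clear unitary time point": True,
    "silly to persist without repeating": False,
}
```

For Andrew's 'maximum power' the answers above would lean toward Observation, matching Sam's conclusion earlier in the thread.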


Andrew Patterson wrote:

Good to see the lists hotting up with some questions..

here is one I have had to ponder recently.

I have an evaluation being made directly from an observation.
Specifically - I have a particular type of athletic session
from which a 'maximum power' number is being derived.
Whilst the calculation is automatic, the coach is
'evaluating' the automatically generated score before it is
put into the system (to make sure that the number is
sensible given what he knows of the athlete).

Some questions

1) is this even an evaluation?? Where does observation finish
   and evaluation start in a world where computers do more and
   more analysis..

Hi Andrew,
Observation v Evaluation in openEHR is not to do with computers v
humans. Observations are to do with gathering evidence, Evaluations are
to do with inferring something from the evidence. Either can be done by
human or machine. However the activities are different: Observation is a
measurement / data sensing activity; Evaluation is an activity of
comparing data to a knowledge base (may be in the physician's head) to
determine how to classify the patient (i.e. diagnose, otherwise describe
them).

2) how can this linkage between evaluation and observation be
   recorded. If someone does a delete on the composition
   containing the Observation (perhaps they uploaded the wrong
   athlete data), the evaluation should also clearly be deleted. Can
   this linkage be expressed?
  

An Observation and an Evaluation (if both are needed) can be recorded in
the same Composition.
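Thomas's point - that the Observation and the Evaluation can travel in the same Composition, so a delete removes both together - can be sketched as follows. The classes are simplified stand-ins, not openEHR RM types.

```python
from dataclasses import dataclass, field

# Sketch: one Composition carrying both ENTRY kinds, so their
# lifecycles are tied. Class and field names are illustrative.

@dataclass
class Entry:
    kind: str      # "OBSERVATION" or "EVALUATION"
    content: str

@dataclass
class Composition:
    name: str
    entries: list = field(default_factory=list)

session = Composition("training session 2007-10-12")
session.entries.append(Entry("OBSERVATION", "maximum power 412 W"))
session.entries.append(Entry("EVALUATION", "power consistent with fitness"))
# Deleting `session` (e.g. wrong athlete's data) removes both entries,
# which is exactly the cascade behaviour Andrew asked about.
```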

- thomas beale

This FAQ is also pertinent to those who have not seen it -
http://www.openehr.org/FAQs/t_entry_types_FAQ.htm
