At Karolinska we are also stumbling upon these issues now, so reviving this thread…

We will want to run our entire backend platform (CDR + PMI + FHIR services and a LOT of other things) on UTC to make e.g. event/log correlation and many other things easier.

I have (like @thomas.beale above) argued that the times inside the main clinical data, e.g. COMPOSITION.context.start_time / end_time and the timing in ENTRYs, should be in local time, with timezone info stored (including daylight saving changes etc.). I have also said, like @sebastian.iancu and others, that it is the responsibility of a client to supply the timezone suffix in clinical content when storing data.

On the other hand I believe it is reasonable to use UTC (our backend platform standard), with the Z suffix, in the AUDIT_DETAILS.time timestamp that I believe will always be set by the CDR server, not the client.
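To make the split concrete, here is a minimal Python sketch of the two conventions described above (the Stockholm example timestamp is hypothetical, not from any real system):

```python
from datetime import datetime, timezone, timedelta

# Hypothetical clinical event recorded in Stockholm during CET (UTC+1):
# stored as local time with explicit offset, as suggested for clinical content.
clinical_time = datetime(2024, 1, 15, 14, 30, tzinfo=timezone(timedelta(hours=1)))
print(clinical_time.isoformat())  # 2024-01-15T14:30:00+01:00

# Server-side AUDIT_DETAILS.time, normalised to UTC with the Z suffix.
audit_time = clinical_time.astimezone(timezone.utc)
print(audit_time.isoformat().replace("+00:00", "Z"))  # 2024-01-15T13:30:00Z

# Both representations refer to the same instant.
assert clinical_time == audit_time
```

The point is that the two conventions are lossless relative to each other as instants; only the local-time form preserves where/when-in-the-day the event happened.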

Does the above split between “clinical content” and AUDIT_DETAILS timezone usage sound reasonable, or just confusing?

P.S. I or my colleagues will likely get back with more questions regarding e.g.:

  • configurability of default server timezone behaviours for missing mandatory timestamps (e.g. in POST using simplified formats)
  • default AQL query/execution timezone behaviours with and without timezone suffixes in the AQL WHERE parameter (it looks like some implementations just do partial string handling rather than implementing real time-zone-independent comparable time, e.g. @joshua.grisham knows more)

@joshua.grisham just supplied more details in the thread: Should Compositions be written with local timezone or UTC datetimes? So we can continue with relevant details there instead of here.

From a clinical application building perspective, agree wholeheartedly. Clinically relevant datetimes in UTC would lead to a lot of confusion. I don’t care as much about the datetimes above the level of context/start_time.


That’s how we do it (how being local time with tz). Our view is that clinicians should always get the format that has the least cognitive load. For various purposes, especially for analytics, we end up normalising to UTC alongside the local date/time.


I completely agree that users should always visualise dates in their local timezone, including daylight saving time; however, my understanding is that it is now straightforward to translate between local TZ and UTC, even for historical dates.

My preference would be to standardise the data and the querying on UTC, but the mixed approach described should work. As long as the software rules are all described clearly, enforced and understood, both approaches should work. We did have a recent issue with an integration where one part of the incoming data had been exported as ‘normalised to UTC’ by simply replacing the local timezone with ‘Z’!! Chaos!!
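For anyone who hasn’t hit this failure mode: relabelling a local timestamp as UTC without actually converting it silently shifts the instant by the size of the offset. A small illustration (the values are made up):

```python
from datetime import datetime

local = "2024-06-01T10:00:00+02:00"   # genuine local time with offset
corrupted = local[:19] + "Z"          # naive "normalisation": relabel as UTC
# -> "2024-06-01T10:00:00Z", which is actually 12:00 local time, two hours off

real = datetime.fromisoformat("2024-06-01T10:00:00+02:00")
fake = datetime.fromisoformat("2024-06-01T10:00:00+00:00")
skew = (real - fake).total_seconds()
print(skew)  # -7200.0 : the relabelled value points two hours after the true instant
```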

Do you mean you carry both dates in the database?

As I remember we do the same thing. Datetimes are normalized to UTC to share a common variant for analytics and queries.

This will make most use cases work out of the box. For some cases, like querying for events happening at e.g. 10:00 local time, you need to make specific query plans.
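The “10:00 local time” case needs care because the local-to-UTC offset varies with daylight saving, so the same wall-clock hour maps to different UTC instants depending on the date. A sketch, assuming a Norwegian deployment and an available tz database:

```python
from datetime import date, datetime, time, timezone
from zoneinfo import ZoneInfo  # assumption: tzdata is installed/available

OSLO = ZoneInfo("Europe/Oslo")

def local_hour_to_utc(day: date, hour: int) -> datetime:
    """Translate '<hour>:00 local time on <day>' into the UTC instant to query on."""
    return datetime.combine(day, time(hour)).replace(tzinfo=OSLO).astimezone(timezone.utc)

# The same local wall-clock hour maps to different UTC hours across the DST switch:
print(local_hour_to_utc(date(2024, 1, 10), 10).isoformat())  # 2024-01-10T09:00:00+00:00 (UTC+1)
print(local_hour_to_utc(date(2024, 7, 10), 10).isoformat())  # 2024-07-10T08:00:00+00:00 (UTC+2)
```

So a query plan against a UTC index has to compute the boundary instant per date rather than once per query.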

@bna, in the most common DIPS use cases where I assume you control all the UIs, I guess converting from UTC to local time every time is easier to enforce than in a more diverse ecosystem with many developers/applications from different suppliers.

Where does DIPS usually make the conversion - in app UI code, reused UI widget code, middleware, the CDR server or elsewhere?

@pablo did I understand your previous post correctly that you, like DIPS, also store everything as UTC in the CDR, but convert it to local time on the fly (server-side?) before sending it to clients - so that client application code only sees local time?

And repeating a previous question to all, but especially @thomas.beale, who conceived and wrote things like section 4.6 in the EHR Information Model:


Normally (= in the past) times in audits for commits, reads etc. of the contents of a DB would be in the local time of the ‘location’ of the DB, but of course today that actually means the ‘location of the tenant owner’. Even then, with a) data being served from anywhere, and b) mergers & acquisitions among hospital systems and so on, it is no longer safe unless you use either UTC or explicit TZs everywhere.

Personally, the TZ approach seems more logical to me, since it provides an indication of the location where the data was originally created, whereas UTC doesn’t.

Note: some databases don’t support storing timezones in their native datetime types. Some support TZs via extensions, and some fully support TZs in datetime types.

With that being said, the DBMS used might lead to a technical need to store everything in UTC, or to have a separate field/column, with the corresponding size penalty, to store the TZ.

In any case, if there is a TZ, implicit or explicit, in the datetime, then it can be converted to any other TZ, so data query, retrieval, analysis and visualization shouldn’t be an issue when the TZ is known or it has a sensible default. Of course, conversions are always needed.

This is a long discussion in informatics in general. My conclusion is that there is no best solution, it’s arbitrary, and any solution can work as long as the TZ is known.


Datetime is one of the hardest things to get right.

We don’t have timezones in Norway. Or more precisely, we have timezone changes twice a year, when switching between winter and summer time. Summer time in Norway is UTC+2, while winter time is UTC+1. Healthcare professionals working the night when the clock is changed need to be able to see the chronological events of that night.
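The night-shift case is exactly where naive local timestamps break down: when the clock falls back, the same wall-clock time occurs twice, and only the offset (or a UTC conversion) disambiguates the two occurrences. A sketch using Python’s `fold` attribute for the 2024 switch (assuming the tz database is available):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # assumption: tzdata is installed/available

OSLO = ZoneInfo("Europe/Oslo")

# On 2024-10-27 the clock in Norway went from 03:00 CEST back to 02:00 CET,
# so the wall-clock time 02:30 occurred twice that night.
first  = datetime(2024, 10, 27, 2, 30, tzinfo=OSLO)          # fold=0: still UTC+2
second = datetime(2024, 10, 27, 2, 30, fold=1, tzinfo=OSLO)  # fold=1: now UTC+1

print(first.utcoffset(), second.utcoffset())  # 2:00:00 1:00:00
# Only the UTC instants put the two events in the right chronological order:
assert first.astimezone(timezone.utc) < second.astimezone(timezone.utc)
```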

In some of our customers’ infrastructure we also had issues related to how the servers were configured with regard to TZ. This led to situations where data suddenly ended up a few hours wrong. This was soon identified and fixed. Based on these situations we had to make the openEHR stack of components “self-contained” and robust when it comes to datetime handling.

We save the datetimes with their timezone and index the data in the CDR using UTC time. When reading data, the client is then able to sort/order the chronological sequence of events.

So to answer your question: the conversion is made deep in the server, and the clients need to be TZ-aware when displaying data.
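The “store with TZ, index as UTC” pattern can be illustrated in a few lines: sorting mixed-offset timestamps by their string form gives the wrong order, while a UTC-derived sort key restores the true chronology (the example values are mine, again around a fall-back night):

```python
from datetime import datetime

# Mixed-offset timestamps as stored (original TZ preserved) ...
stored = [
    "2024-10-27T02:45:00+02:00",  # before the clock change (00:45 UTC)
    "2024-10-27T02:10:00+01:00",  # after the clock change  (01:10 UTC)
]
# ... sorted lexically by the local string they come out in the wrong order,
# but sorting on a UTC index key restores the true chronological sequence:
ordered = sorted(stored, key=lambda s: datetime.fromisoformat(s).timestamp())
print(ordered[0])  # 2024-10-27T02:45:00+02:00 happened first
```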

Disclaimer: It’s a few years since I worked with the technical details about datetime handling - which I found challenging to get my head around. If you need a more in-detail and precise description on how we handle this I need to consult one of the system architects who actually implemented this.

@bna so if we look at the DIPS CDR from an openEHR REST API perspective, do I understand correctly that all the timestamps in the data going in and out via that API are in local time including timezone info (suffixed with +01:00 or +02:00 in your case), and that the UTC index is used mostly internally (inside the CDR) for things like efficiently sorting the returned data in e.g. AQL queries that request time sorting?

Yes, please ask your system architect if needed, I think this discussion could be helpful to many.

Same question to @Pablo what time format (UTC or local time with TZ info) is seen via the openEHR REST API in your openEHR products?

IIRC in Atomik, if you do a GET /composition the compo has the original time expressions, and when you run a data query, the datetime data comes back all in UTC, since the datetimes are indexed transformed to UTC. The idea is to avoid results that are in different TZs (getting a time series where the client needs to standardize the times every time is painful).
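That normalisation step is essentially this: collapse whatever offsets the stored compositions happened to carry into one consistent UTC series before handing results to the client (the input values here are illustrative, not Atomik output):

```python
from datetime import datetime, timezone

# Query results could come back with whatever TZ each composition was stored in;
# normalising at index time means the client sees one consistent UTC series.
raw = ["2024-05-01T10:00:00+02:00", "2024-05-01T09:30:00+01:00"]
utc_series = [
    datetime.fromisoformat(s).astimezone(timezone.utc).isoformat().replace("+00:00", "Z")
    for s in raw
]
print(utc_series)  # ['2024-05-01T08:00:00Z', '2024-05-01T08:30:00Z']
```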


Great that the /composition maintains the original time expressions, I was a bit worried it didn’t.

Also, it’s the easiest implementation, since it doesn’t require doing TZ transformations on incoming data. Though when data is indexed for querying, it’s standardized, at least in our implementation.
