Semi-structured narrative data

Thank you for your elaborate response. I also like that you proposed a syntax. It makes it easier to discuss.
What I don’t like is that it’s yet another micro syntax and yet another way of dealing with mappings in openEHR. And, as stated before, it’s less feature-rich than TERM_MAPPINGs. But mostly that it will require a custom parser for DV_TEXT data instead of just treating it as markdown. Besides the coding work (we’ll have to do some anyway), it creates ugliness if, e.g., you sync the text to a non-openEHR system: you’d have to clean out the openEHR micro syntax. Or what if markdown later assigns a different function to the openEHR-specific syntax?
A thought that just came to mind: could we do it the other way around? Instead of annotating part of the text with a mapping, could we add an attribute to the TERM_MAPPING that’s part of the DV_TEXT, recording which part of the text the mapping refers to?

Welcome to the (horrible) world of markdown :wink:

Theoretically yes. You’d have a single DV_TEXT for the sentence
“dysuria warranting a urinary sediment to exclude a UTI” with TERM_MAPPINGs carrying some sort of position data like:

mappings:

  • [1]
    • charpos = |1..7|
    • target = [snomedct::49650001]
  • [2]
    • charpos = |51..53|
    • target = [snomedct::68566005]

That would not be super-reliable, since people would get mixed up on whether the character positions referred to the number of characters in the output, or in the input (potentially full of other markdown text like links etc).
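To make that ambiguity concrete, here is a small Python sketch (the strings and slice positions are illustrative, not RM code): positions recorded against the rendered output stop matching as soon as the stored text contains markdown markup.

```python
# Illustrative sketch: why character positions are fragile.
# Positions recorded against the rendered text break once the stored
# (input) text contains markdown that the renderer strips.

rendered = "dysuria warranting a urinary sediment to exclude a UTI"
stored = "dysuria warranting a **urinary sediment** to exclude a UTI"

# A charpos covering "dysuria" happens to work in both:
assert rendered[0:7] == "dysuria"
assert stored[0:7] == "dysuria"

# A charpos covering "UTI", recorded against the rendered output:
assert rendered[51:54] == "UTI"
# ...no longer points at "UTI" in the stored text, because the '**'
# markers shifted every later position:
assert stored[51:54] != "UTI"
```

The same drift would occur with links, footnotes, or any other markdown in the stored text, which is why implementers would inevitably disagree about which form the positions refer to.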

Another possibility might be to adopt a kind of referencing/citation approach. E.g.

“dysuria[1] warranting a urinary sediment to exclude a UTI[2]”

The numbers [1] and [2] refer to the 1st and 2nd items in the mappings list. When a single word carries a mapped term, we just write “word[n]”. If the mapping spans several words, we need some delimiters, e.g. “dysuria[1] warranting a urinary sediment to exclude a (urinary tract infection)[2]”.

We’d have to mess around to figure out which kind of brackets or other delimiters would work best, but this is at least readable, and would not even break the RM.
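As a sketch of how such citation-style markers might be resolved against the mappings list (the marker syntax and delimiters here are hypothetical, exactly the part that would still need to be agreed):

```python
import re

# Illustrative sketch: resolve "word[n]" and "(multi word phrase)[n]"
# citation markers against a mappings list. Marker syntax is hypothetical.

MARKER = re.compile(r"(?:\(([^)]+)\)|(\S+))\[(\d+)\]")

def resolve(text, mappings):
    """Yield (phrase, mapping) pairs for each citation marker in the text."""
    for m in MARKER.finditer(text):
        phrase = m.group(1) or m.group(2)
        yield phrase, mappings[int(m.group(3)) - 1]  # markers are 1-based

text = ("dysuria[1] warranting a urinary sediment to exclude a "
        "(urinary tract infection)[2]")
mappings = ["snomedct::49650001", "snomedct::68566005"]

print(list(resolve(text, mappings)))
# [('dysuria', 'snomedct::49650001'),
#  ('urinary tract infection', 'snomedct::68566005')]
```

Because the markers only reference list positions, the stored DV_TEXT stays readable and the RM is untouched, as noted above.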

1 Like

I’ll be honest and say right away that capturing these SNOMED codes like this, without context and just as context-free mappings, is probably a very bad idea!

But (if you insist!), why not just use something like my simple URI type of idea, and just lob the list of SNOMED codes into mappings against the whole DV_TEXT element? I don’t think you really need character positions; the codes themselves align with the correct positions. And if there are duplicate SNOMED codes, well, so what!

1 Like

Yes, my first thought was something like the charpos as you described, but I agree it can get mixed up pretty easily. Maybe implementers could solve it in an acceptable way, since it doesn’t have to be super reliable for today’s use case. But I also like the other possibility, since mappings are conceptually quite close to citations, and it reuses TERM_MAPPINGs. I agree it’s very readable. Would there be a way not to break the markdown syntax: not just keeping it valid, but also something that makes sense outside openEHR? There is the footnote syntax ([^1]), but it doesn’t have delimiters as far as I can tell. And we’d still have to go from footnote to TERM_MAPPING. We could put a URI in the footnote that references the term mapping.
But how about putting an ehr-scheme URI in the URL part of the markdown link syntax?
e.g.
"[dysuria](ehr://system_id/ehr_id/top_level_structure_locator/path_inside_top_level_structure|mapping1) warranting a [urinary sediment to exclude a urinary tract infection](ehr://system_id/ehr_id/top_level_structure_locator/path_inside_top_level_structure|mapping2)"
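A rough sketch of how a consumer might pull the mapping references back out of such links (the ehr:// URI layout and the '|mappingN' suffix are this proposal’s, and the paths are shortened hypothetical placeholders):

```python
import re

# Illustrative sketch: extract TERM_MAPPING references from markdown links
# whose target is an ehr:// URI with a '|mapping-id' suffix, as proposed.
# The URI scheme layout is the proposal's, not an existing openEHR standard.

LINK = re.compile(r"\[([^\]]+)\]\((ehr://[^)|]+)\|([^)]+)\)")

def extract_mappings(markdown):
    """Return (link text, ehr URI, mapping id) triples for each link."""
    return [(m.group(1), m.group(2), m.group(3))
            for m in LINK.finditer(markdown)]

doc = ("[dysuria](ehr://system_id/ehr_id/path|mapping1) warranting a "
       "[urinary sediment](ehr://system_id/ehr_id/path|mapping2)")

print(extract_mappings(doc))
```

A standard markdown renderer would still treat these as ordinary links, so the text survives round-tripping through non-openEHR tooling; only the ehr:// targets would dangle.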

Why do you think it’s a bad idea? The whole of SNOMED is without context, right? The value of the feature is clear, right? Or would you solve it another way?

You could be right about not needing character positions. It is indeed mostly the report as a whole that should be highlighted in a query. But you’d probably still want to highlight the matching characters in the text. That could be done by matching again when rendering the query result, so it may not be necessary to store the characters the original match relates to. But I struggle to accept that such a simple problem cannot be solved well :roll_eyes:
I’m curious for the view of others, maybe @bna has an opinion?

dysuria warranting a urinary sediment to exclude a urinary tract infection

Bad idea: it is very, very easy for the wrong SNOMED codes to be picked up. This is a good example, because this is arguably not a diagnosis at all but an indication for an investigation.

Or a better example:

[Painless haematuria](openehr_mapping://snomed.info/sct/197938001) but warrants a urinary sediment to exclude a urinary tract infection

In passing, I should note that I agree with this.

At an HL7 meeting years ago, I was asked to throw out an example by the very eminent lead of a research group that had produced a product which NLP-processed text notes and added SNOMED codes to them, each code connected to specific words in the text.

I proposed: ‘patient expresses a fear of lung cancer’.

The software, hitherto having been tested on thousands of notes at the Mayo Clinic supposedly without error, coded for ‘lung cancer’.

When it should have coded for anxiety.

I rest my case :wink:

3 Likes

This is a very interesting topic which we have visited many times over the last decade. Currently we are doing work on NLP capabilities for a smart editor. We call it “EHR Notes”. The “EHR” could be read as a metaphor for “air” as well as the EHR itself. What we want to achieve is an editing capability which feels as lightweight as air, yet still has the power to detect structure in the content.

The editor works in the space above openEHR data. Since the content might address any type of clinical concept, it has to be able to inject any type of archetype/template/clinical model. E.g. “Patient admitted with pain in left knee. Temp. 37 C, BP: 120/80.”

We’ve learned from many research programs that the training of AI robots takes lots of time and resources. When finished, they only cover specific domains within health and care. The lesson learned from this is that the editor must be able to support different types of robots (functions, etc.). This is why we are exploring a way to define a generic API which takes a corpus as input and produces a structured result as output. The output will be handled by the editor to add links in the content and also to create structured content for the openEHR CDR.

As many have commented above, this is a very complex problem, and the specificity of the NLP functions is not that good. This is why we think of them as assistants. They are like a newly educated doctor, and you should treat them as such. The output from an NLP function should be considered potential advice. We have to let the clinician be the one to decide whether the advice is to be used.

I will publish a video showing this kind of features later.

As part of this work I implemented a very simple NLP service. The source code is here: GitHub - bjornna/ehrnotes-ask: The NLP service

The NLP engine is based on spaCy and is trained to do NER (named entity recognition). There are multiple input sources, like SNOMED CT for anatomy, ICNP with its axes, a medication list, etc.

Currently we are involved in research programs to work out improved NLP functions. They will be trained and developed by real NLP experts. The source code above is done by an amateur (me). Still, it works reasonably well.
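For flavour, here is a toy dictionary-based recogniser in plain Python illustrating the idea behind such a NER service (the real service uses spaCy and real terminology sources; the term lists below are invented for illustration):

```python
# Toy dictionary-based entity recogniser, sketching the idea behind the
# NER service described above (term lists from SNOMED CT, ICNP, etc.).
# The terms and their labels here are made up.

TERMS = {
    "knee": ("anatomy", "SNOMED-CT"),
    "pain": ("finding", "ICNP"),
    "blood pressure": ("observation", "SNOMED-CT"),
}

def recognise(text):
    """Return (term, label, source) for every dictionary term found."""
    lowered = text.lower()
    return sorted((term, label, source)
                  for term, (label, source) in TERMS.items()
                  if term in lowered)

print(recognise("Patient admitted with pain in left knee."))
# [('knee', 'anatomy', 'SNOMED-CT'), ('pain', 'finding', 'ICNP')]
```

A real NER model would of course handle inflection, context and disambiguation; the point here is only the shape of the input (a corpus) and output (labelled spans with a terminology source).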

5 Likes

I finally found time to create and publish a video on our NLP based EHR Notes: https://twitter.com/bjornna/status/1456359961383026694?s=20

3 Likes

Hi Thomas,

As Ian remarked, the example I gave intentionally has an ambiguous SNOMED coding. Probably 314940005 Suspected urinary tract infection (situation) would have been better. But my point is: since SNOMED is only terminology, not information, an archetype is indeed the smallest unit of information. So when building queries you can never draw conclusions or compute based on only the SNOMED code. You’ll need to check that the SNOMED code is recorded as part of an EVALUATION.problem_diagnosis to compute that the patient has a UTI. But you can suggest to the user that a certain narrative report has ‘something to do with’ a UTI. In that case it’s not a big problem that the wrong SNOMED code was recorded. And there is still a lot of value here, right? Or how would you solve the problem I’ve shown with the prototype screenshot?

2 Likes

Hi Bjørn, thanks for the great prototype on EHR Notes, this is very similar to what we have in mind. Could you please share a bit more about how you technically record the mappings from the plain text in the note, to ICNP and openEHR OBSERVATIONs?

Yes, within openEHR system environments whose software and models were written by semantically conscious people, this is all pretty reasonable.

Just be aware that when the data get sucked into some other environment, users there may make the assumption that the codes embedded in the data express the whole and true semantics of the data. If they do, incautious use of codes in our nice openEHR environment may have unintended consequences later.

This is not to say don’t do it; just that this is the kind of risk being run. It might be a low / no risk.

4 Likes

Aah yes, that’s a valid concern. And another reason I hate mapping to outside systems: assumptions that make sense in one system are crazy dangerous in another. I hope we can agree that in an openEHR system, mapping (a piece of) a DV_TEXT in an EVALUATION.clinical_synopsis.synopsis to a SNOMED clinical finding does not make it a diagnosis. (And I would argue the same for other SNOMED uses: a terminology is not a fully computable information system, or why else would we need openEHR archetypes?)
Then I’m willing to take the risk other people do something stupid.
But having said that, could it help to add a character to TERM_MAPPING.match indicating an approximate match, for example ‘~’? This would make the intention of the mapping even clearer in openEHR. And if we used a URI like Ian suggested, via a markdown URL with the protocol set to openehr_mapping://, there would be an indication for an implementer in another system to have a look at the information in the openEHR TERM_MAPPING class, and the ‘~’ match would be a second warning not to read too much into it.
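A sketch of how a consumer might honour such a code (note: ‘~’ is only this thread’s proposal; the match codes currently defined for TERM_MAPPING in the openEHR data types spec are ‘>’, ‘=’, ‘<’ and ‘?’):

```python
# Sketch of the proposal: treat '~' as an "approximate" match code and
# have consumers skip approximate mappings when computing. The dicts
# below stand in for TERM_MAPPING instances; this is not RM code.

COMPUTABLE = {"=", "<", ">"}  # '~' (proposed) and '?' are advisory only

mappings = [
    {"target": "snomedct::68566005", "match": "~"},  # approximate NLP tag
    {"target": "snomedct::49650001", "match": "="},
]

def computable_targets(mappings):
    """Keep only mappings that are safe to reason over automatically."""
    return [m["target"] for m in mappings if m["match"] in COMPUTABLE]

print(computable_targets(mappings))  # the '~' mapping is filtered out
```

That way a downstream system that respects the match code gets the “don’t compute on this” warning for free.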

No. Most parts of SNOMED CT have a context, although the context is often expressed as a default context. See for example 6.2.3 Default Context in the Search and Data Entry Guide.

Btw, I think that the entire SNOMED CT Search and Data Entry Guide would be interesting for this discussion.

1 Like

I think that you oversimplify SNOMED CT here. SNOMED CT is both a terminology and an ontology, and you can therefore perfectly well interpret and draw conclusions based on the meaning of a SNOMED CT concept.

Hi Mikael,

I agree. My concern was not so much about the power of SNOMED CT itself but about the ability of any NLP to correctly pick up the appropriate context and apply it, or to associate other parts of attribution, like dates. I know there has been a lot of interest in this approach, and I have a UK colleague working on it; I’ll see if I can get him to do a demo of their narrative → SNOMED CT solution.

1 Like

Hi @mikael , interesting, I didn’t know about snomed default contexts. Thank you for educating me.
I read the default context for a finding (e.g. UTI) to be:

* The finding has actually occurred (vs. being absent or not found).
* It is occurring to the subject of the record (the patient).
* It is occurring currently or at a stated past time.

But this still leaves out a lot of the context you need to programmatically conclude a patient ‘has’ a UTI. E.g. is it a diagnosis? Who made the diagnosis (doctor/nurse/neighbour/Facebook)? Is the diagnosis clinically significant, or just a mild bacteriuria? Etc.
Otherwise we wouldn’t need information models at all, right?

The downside of this default context is that terms that do not match that context, like ‘family history of UTI’, are not codeable in SNOMED.

The Search and Data Entry Guide sure seems interesting. Any recommendation on how to approach it? Aside from starting at page 1 and spending multiple weekend days before you end up at page 65?
I do now better appreciate Ian’s concern about automated SNOMED encoding. But this also goes for average users: they won’t understand default context, which means the scope of usage of SNOMED is much smaller than I hoped.

1 Like

Yes @ian.mcnicoll, I know that there are good NLP solutions that can tag text with SNOMED CT concepts, and I agree that capturing the context is the hard(est) part of the process.

However, I have also seen less than good NLP solutions for SNOMED CT tagging that have missed the default context and similar SNOMED CT features. Hence my comments.

3 Likes

Keep your comments coming. I think we are somewhat out of date on some aspects of recent SNOMED technology, so don’t be afraid to correct us.

1 Like

Hi @joostholslag,

Yes, you have understood the default context correctly. I also agree that SNOMED CT, despite the default context, leaves quite a lot to the information model to specify.

It is also perfectly fine to override the default context of the clinical finding and procedure concepts in SNOMED CT; that is why the context is only a default and not a stated context. However, I would argue that the override needs to be done in some machine-readable format. It would be perfectly fine to use the SNOMED CT concept 254837009 | Malignant neoplasm of breast (disorder) | inside a Family history attribute in an archetype, and it would be formally interpreted as Family history of malignant neoplasm of breast. However, I would strongly advise against doing some tagging in free text like

The patient has a family history of <code="254837009 | Malignant neoplasm of breast (disorder) |">malignant neoplasm of breast</code>.

Because then the override of the default context would not be stored in a machine-readable format, and information systems would, for good reasons, assume that the default context applies. This is the main reason why I think we should be very careful with allowing partial tagging of free text.

It is true that quite few Family history of X concepts exist as stand-alone concepts in SNOMED CT. (Currently there are 680 of them. :smiley:) However, it is possible to use the SNOMED CT Compositional Grammar to express Family history of X with a post-coordinated expression, like

416471007 | Family history of clinical finding (situation) | : 
     246090004 |Associated finding (attribute)| = 254837009 |Malignant neoplasm of breast (disorder)|

for all clinical findings and procedures.

(In this specific case, there actually exists a SNOMED CT concept that expresses this, 429740004 | Family history of malignant neoplasm of breast (situation) |, and a classifier would automatically understand that this concept is semantically equivalent to the post-coordinated expression above.)
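The post-coordination pattern above is mechanical enough to generate in code, e.g. (a sketch; the whitespace inside the pipe-delimited terms is simplified relative to the expression shown above):

```python
# Sketch: mechanically build the post-coordinated "family history of X"
# SNOMED CT Compositional Grammar expression shown above.

FAMILY_HISTORY = "416471007 |Family history of clinical finding (situation)|"
ASSOCIATED_FINDING = "246090004 |Associated finding (attribute)|"

def family_history_of(finding_expr):
    """Wrap a clinical finding in the family-history situation context."""
    return f"{FAMILY_HISTORY} : {ASSOCIATED_FINDING} = {finding_expr}"

expr = family_history_of(
    "254837009 |Malignant neoplasm of breast (disorder)|")
print(expr)
```

This is how a system could carry the context override in a machine-readable form, rather than burying it in free-text tagging.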

I haven’t read the Search and Data Entry Guide for a while, but I think that chapter 6. Data Entry is the most relevant for this use case.

2 Likes