Well firstly, it has proved nearly impossible to get HL7 to be involved in anything like that, so it is something we would have to do in our community. FHIR as a target is not easy - like HL7 CDA and HL7v3, it is an ad hoc design. There are some well-known resources (e.g. to do with medication) that map not too badly, but many others that don’t, and there is no coverage at all of a vast amount of the clinical data space.
It is worth keeping one key thing in mind: FHIR is designed for data retrieval, not data commit. It is thus factored to obtain one or a few data items on an ad hoc query basis, not to specify the full data sets you would need for committing. That’s why the FHIR Observation resource, for example, carries one logical ‘value’, whereas models designed for persisting data allow for much more complexity. This is a perfectly correct design aim, by the way; I’m just pointing out that it arrives at significantly different results from a commit / persistence design.
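To make that concrete, here is a minimal sketch (in Python, purely illustrative) of a single-valued Observation instance; the LOINC code, patient reference and values are placeholders, not data from any real system:

```python
# Illustrative only: a FHIR Observation as a Python dict, showing the single
# value[x] choice (here valueQuantity) that suits ad hoc retrieval of one
# data point at a time.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8310-5",                      # body temperature (LOINC)
            "display": "Body temperature",
        }]
    },
    "subject": {"reference": "Patient/example"},   # hypothetical patient id
    "effectiveDateTime": "2024-01-01T10:00:00Z",
    "valueQuantity": {                             # the one logical 'value' slot
        "value": 37.2,
        "unit": "Cel",
        "system": "http://unitsofmeasure.org",
        "code": "Cel",
    },
}
```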
Another complicating factor is the lack of inheritance. This is particularly acute with the Admin resources, and indeed companies like Mitre have created a whole new layer on top of FHIR to compensate for this (standardhealthrecord.org).
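Just to illustrate what an inheritance layer buys you (this is a hypothetical sketch, not SHR’s or FHIR’s actual model): with a common ancestor, shared demographic attributes are declared once, whereas in FHIR each admin resource repeats its own name / telecom / address fields.

```python
# Hypothetical sketch of an inheritance layer over admin data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Party:                      # hypothetical common ancestor
    names: List[str] = field(default_factory=list)
    telecoms: List[str] = field(default_factory=list)
    addresses: List[str] = field(default_factory=list)

@dataclass
class Patient(Party):             # demographics inherited, not re-declared
    birth_date: str = ""

@dataclass
class Practitioner(Party):
    qualifications: List[str] = field(default_factory=list)
```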
Practically, what this means is that there is no straight path to providing a high-fidelity FHIR layer over the top of data created from the 11,000 or so clinical data points in the 600 archetypes in CKM, at least not without a) new resources and b) refactoring to correct the model issues in FHIR (many examples documented here).
This does not mean that specific data sets for FHIR-based apps cannot be extracted from openEHR systems; in fact it’s not hard, using AQL and the rich data underneath. It’s just that every such application is a new piece of manual development work. Right now, I don’t see any machine-based way to simply generate FHIR profiles from archetypes, unless they are all profiles of some generic structure like FHIR Questionnaire. But that would just be supplying data trees, using FHIR data types and a few other bits of meta-data; such trees don’t really help with any semantic computing.
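As a rough sketch of the kind of per-application mapping I mean (the CDR base URL, archetype paths and LOINC mapping below are assumptions for illustration, and the AQL-over-REST call follows the shape of the openEHR REST API’s query endpoint):

```python
# Hedged sketch: run an AQL query against an openEHR CDR and hand-map the
# rows into FHIR Observation dicts. Every new data set needs its own query
# and its own mapping; nothing here is generated from the archetypes.
import requests

CDR_BASE = "https://cdr.example.org/rest/openehr/v1"   # hypothetical CDR

AQL = """
SELECT o/data[at0002]/events[at0003]/data[at0001]/items[at0004]/value/magnitude AS temp,
       o/data[at0002]/events[at0003]/time/value AS time
FROM EHR e CONTAINS OBSERVATION o[openEHR-EHR-OBSERVATION.body_temperature.v2]
"""

def temperature_observations():
    resp = requests.post(f"{CDR_BASE}/query/aql", json={"q": AQL})
    resp.raise_for_status()
    for temp, time in resp.json().get("rows", []):
        # Hand-crafted mapping: each application repeats work like this.
        yield {
            "resourceType": "Observation",
            "status": "final",
            "code": {"coding": [{"system": "http://loinc.org", "code": "8310-5"}]},
            "effectiveDateTime": time,
            "valueQuantity": {"value": temp, "unit": "Cel",
                              "system": "http://unitsofmeasure.org", "code": "Cel"},
        }
```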
Anyway, @yampeku may have other things to say on this topic, as they have done a lot of work at UPV / Veratech.