paul: Management
will take the input and propose a possibly altered plan … This is our opportunity as members to give
input on where the org should put our energy. … Multi-year
paul: Comments are
required by Apr 7. … Revised doc will go to the board in Madrid
Next meeting
paul: April 2
ADJOURNED
Minutes manually created (not a transcript), formatted by scribe.perl
version 244 (Thu Feb 27 01:23:09 2025 UTC).
dbooth: The JSON and XML DICOM media types explicitly do not define fragment semantics. Therefore anything goes, and we could also propose to the DICOM folks to add fragment semantics to a future version of the mime type specs. https://www.iana.org/assignments/media-types/application/dicom+json
dbooth: In summary, we think we're on the right track.
ACTION: Erich to add a comment to the issue, describing the proposed solution for the application/dicom media type to define syntax and semantics of the fragment
David Booth, Erich Bremer, Jaleh Malek, Jim Balhoff
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
Intros
Jaleh: Post-doc researcher, U of Luxembourg. Background: stds for interoperability, PhD in Med Informatics, and background in software eng. … Working on an EU project for metadata sharing.
Verify round-trippability of FHIR RDF
jim: Round trip check had been disabled for a while. Asked Grahame (a while back) if that needs to be updated, and he said no. … The RDF output that I've done for the example, is very hand crafted, because they want it to look a specific way. Would be better to work on Jena. … I thought EricP was updating that for R5, but don't know status. … I think the round-tripping test should use that library. … The one called HAPI FHIR uses Jena. But the one in the FHIR build code org.hl7.fhir.core (called by kindling) doesn't use any std RDF libs. It takes the FHIR data model and turns it into Turtle. Very specific -- not generic RDF lib.
jaleh: Want to link DICOM metadata to other metadata
erich: You're preaching to the choir! … A couple weeks ago I released my software: ebremer/dcm2rdf … We work with ~400TB of image data
dbooth: Where can people learn about our proposed DICOM RDF? … A draft proposed DICOM RDF spec?
erich: Will be described in the paper I'm writing … Also want to build a SHACL representation
erich: to validate those who are constructing stuff from scratch. Also to validate DICOM files. Sometimes they're broken. … Also would be nice to have a DICOM RDF Playground.
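As a concrete illustration of the kind of SHACL Erich describes, a shape for validating DICOM RDF might look something like the sketch below. The dcm: namespace and property names are hypothetical placeholders, not a published vocabulary:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dcm: <http://example.org/dicom/> .   # hypothetical namespace

dcm:PatientModuleShape
    a sh:NodeShape ;
    sh:targetClass dcm:Patient ;
    sh:property [
        sh:path dcm:PatientID ;          # (0010,0020); property IRI is illustrative
        sh:minCount 1 ;
        sh:datatype xsd:string ;
    ] ;
    sh:property [
        sh:path dcm:PatientBirthDate ;   # (0010,0030); broken files often fail here
        sh:maxCount 1 ;
        sh:datatype xsd:date ;
    ] .
```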
erich: Want to be careful of the politics though. … Want them to work with us, not give them a fait accompli … If they don't take it on, then we should draft a doc.
jaleh: Does it also link to other metadata?
erich: No, it gets the DICOM metadata into RDF. … DICOM RDF can also be crosswalked back to the original.
https://www.w3.org/groups/wg/lws … SOLID project is about federated storage and identity. … W3C Linked Web Storage WG is standardizing it.
jaleh: One part of our project is for lung cancer. Also about genomics data.
erich: DICOM was extended to handle pathology whole-slide images, but it's not very mature yet.
jaleh: We're working on secondary use of data. DICOM is more for primary use. We're more interested in extracting more research-related data in a federated env. … Currently working on genomic extension of OMOP.
erich: I've implemented the geosparql approach, using ^^geo:wktLiteral datatype.
dbooth: The polygons would be opaque to SPARQL processors that do not implement the geosparql extension.
erich: Need to define coordinate system in the file. … I'd have to convert those to meters to interpret them correctly. … geosparql was designed for geo data, but there are efforts to make it work for cartesian coordinate systems. … They're trying to push QTD coordinate systems. … Then you could use microns, and life would be good. But they're not fully compliant yet.
detlef: Would that be a problem with current data?
erich: It would parse, but I'd need to change the coordinates to meters, and use the default geographical system. … If the coordinate system is not specified, it defaults to meters (I believe). I need to change the numbers to comply w the default coord system. … geosparql shows how to specify the coordinate system, but not sure what implementations support it.
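For reference, a geosparql WKT literal can name its coordinate reference system with a leading IRI inside the literal (the default is CRS84); the resource names in this sketch are illustrative:

```turtle
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix ex:  <http://example.org/> .   # hypothetical namespace

# A polygon annotation; the leading IRI inside the literal names the CRS.
ex:region1 a geo:Feature ;
    geo:hasGeometry ex:geom1 .
ex:geom1 a geo:Geometry ;
    geo:asWKT "<http://www.opengis.net/def/crs/OGC/1.3/CRS84> POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))"^^geo:wktLiteral .
```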
eric: Do we need to preserve the lexical values?
erich: When we bring this to the DICOM group, we'll see what they say
erich: DICOM JSON allows numbers as strings to be turned into native JSON integers … so that gives precedent.
dbooth: Would we be closing off other possible future solutions with this approach?
erich: No, because the type would still be indicated, even if it is different.
ACTION: I'll make this change (of numbers to meters), and see if I can use exponent notation in the numbers.
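The DICOM JSON precedent Erich cites could be sketched as follows: a toy routine (assuming a DICOM-JSON-style attribute layout; the tag keys are illustrative and this is not a conformant codec) that turns IS/DS string values into native numbers:

```python
def normalize_numeric_strings(attrs: dict) -> dict:
    """For attributes whose VR is numeric (IS = integer string,
    DS = decimal string), convert string values to native numbers.
    Illustrative sketch only, not a conformant DICOM JSON codec."""
    numeric_vrs = {"IS": int, "DS": float}
    out = {}
    for tag, attr in attrs.items():
        attr = dict(attr)
        cast = numeric_vrs.get(attr.get("vr"))
        if cast and "Value" in attr:
            attr["Value"] = [cast(v) if isinstance(v, str) else v
                             for v in attr["Value"]]
        out[tag] = attr
    return out

example = {"00201209": {"vr": "IS", "Value": ["5"]},
           "00100010": {"vr": "PN", "Value": [{"Alphabetic": "Doe^Jane"}]}}
result = normalize_numeric_strings(example)
```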
Round tripping of FHIR RDF examples
jim: When working on R5 changes, we identified two libs that produce Turtle from FHIR. One is for the FHIR build. It is very bespoke. Doesn't use std RDF lib. … I didn't update the parser side. It had been turned off for a long time. … The other lib is in HAPI FHIR, based on jena.
eric: Based on 1% of jena -- interface for having a graph store. Mostly it's idiomatic to HAPI that has a parser and serializer. … But it's very much a slave to the FHIR metamodel. … Having a resource with attributes that contain attributes, etc.
jim: That lib saves you from having to read/write RDF formats, so you can work only at the triple level?
eric: Correct. But there's no JSON in it. … For parsing, you pass it a jena triple store, and it will populate a POJO for it. … Going the other way, all we did was add triples, and at some point there must be calls to turn it into Turtle, which you could extend to output FHIR JSON.
eric: There are FHIR parser/serializers for XML, JSON and Turtle. … In the HAPI code base.
David Booth, Detlef Grittner, Erich Bremer, Jim Balhoff
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
DICOM
erich: I've updated my code to convert values to meters. Need to test it. … I'm only converting the polygon data. … Maybe should use complex datatypes that Olaf proposed. … We might have a complete conversion to RDF for all of the metadata, but not the data itself, like images. … The polygons make sense to convert, to use geosparql.
detlef: If we don't know what an array is, then wonder what to do with it. … The proposed complex data type puts the data into a list. There's also a proposal to add sparql operators to work on them. … Is there a way to detect these arrays of data?
detlef: Every time the data value multiplicity is > 1, then it's an array.
erich: I'll take a look at it, and try it. … As a command-line option … When we discuss with the DICOM group, the options will be discussion points with them.
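Two candidate representations for a multi-valued (VM > 1) attribute, sketched with hypothetical property names: a plain RDF list, and a single composite-datatype literal in the style of the CDT proposal (the cdt: namespace IRI should be verified against the spec):

```turtle
@prefix dcm: <http://example.org/dicom/> .  # hypothetical namespace
@prefix cdt: <http://w3id.org/awslabs/neptune/SPARQL-CDTs/> .  # verify against the CDT proposal

# VM > 1, e.g. ImageOrientationPatient (0020,0037), as a plain RDF list:
dcm:image1 dcm:ImageOrientationPatient ( 1.0 0.0 0.0 0.0 1.0 0.0 ) .

# The same values as a single composite-datatype literal (CDT style):
dcm:image1 dcm:ImageOrientationPatient
    "[1.0, 0.0, 0.0, 0.0, 1.0, 0.0]"^^cdt:List .
```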
DICOM issue Using the entity-relationship model of the real world of DICOM #159
detlef: If there is a link, it should be to the whole object, not the part.
erich: Could we use this to factor out data that is common to all? detlef: Yes … E.g., study
detlef: Everyone knows which attributes belong to the series. … Want the triplestore to auto-merge the items in the series -- factoring out the redundant info. … That makes querying much faster.
erich: I need to look at this. We have > 25M files. Some files have illegal xsd datatypes. When the data is generated (>5B triples), having the redundant data won't be efficient. What you're proposing is the way to go.
detlef: That's what we've done.
erich: I wasn't aware of the DICOM DIR files previously. … But I don't yet apply the transforms that you're proposing.
detlef: The SOP instance UID is the UID of a very large object. … We have a series resource, and say that it contains some other resource.
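A sketch of the factoring Detlef describes, with hypothetical names: series-level attributes stated once on the series resource, instance-level attributes left on the SOP instances:

```turtle
@prefix dcm: <http://example.org/dicom/> .  # hypothetical namespace
@prefix ex:  <http://example.org/data/> .

# Series-level attributes stated once, instead of on every instance:
ex:series1 a dcm:Series ;
    dcm:SeriesInstanceUID "1.2.840.113619.2.55.3" ;
    dcm:Modality "CT" ;
    dcm:contains ex:inst1 , ex:inst2 .

# Instance-level attributes stay on the SOP instances:
ex:inst1 a dcm:SOPInstance ;
    dcm:SOPInstanceUID "1.2.840.113619.2.55.3.1" ;
    dcm:InstanceNumber 1 .
```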
ACTION: Detlef to add an example to the github issue
erich: Things like age need to stay with the SOP instance.
detlef: Patient age belongs to a study, not to the patient. … But you need to know which attributes belong to study, etc. They have normative XML files that define the DICOM schema
detlef: The OWL representation that we generate from the normative XML files is pretty much 1:1. … Wonder if a SHACL representation might be more suitable.
David Booth, Deepak Sharma, EricP, Gaurav Vaidya, Jim Balhoff, Rob Hausam
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
Validating FHIR RDF examples
ericp: Iovka and Claude and I are getting together in Lille to work on the latest implementation of ShEx in Jena, to improve error messages. … I don't expect it to be perfect before our face-to-face. Then we'll proceed to Madrid for the FHIR connectathon, and work on the HAPI FHIR implementation. … We'll work on running the examples and the ShEx that Deepak generated from the structure defs, through the Jena ShEx impl. … The plan is to validate all the examples, and maybe do some fuzz testing and negative testing too. … We expect to iteratively get bad error messages, improve them, and try again. … We'll be cycling on examples, the ShEx that validates them, and the code for the ShEx.
jim: I'll be in Geneva around that time, but it isn't close enough to Lille.
(EricP and Jim exchange contact info for talking when Jim is in Geneva)
deepak: I'm recommending the use of FHIR, and want to use shex. But the FHIR servers only return XML or JSON
ericP: Because you're using profiles, we can verify conformance using a 5 line HAPI FHIR program.
deepak: How to search patient data?
ericp: You want to use the schema language as a query language? Deepak: Yes.
ericp: That will depend on the Jena API. How many records in the search?
deepak: Mayo has millions of records. … Changes happen daily … The AI model works with features, i.e., constraints on the data. Each feature will be one set of constraints. … Thinking of finding those data measures as small shex data elements, then figure out the cohort. Like SQL but using ShEx. … The data is not in RDF.
ericp: You either need to translate the data into RDF, or translate the shex constraint into something that depends on the native store. … You have a scaling problem w multiple millions of records. … Shex could be useful. Or SPARQL. And there's a FHIR SPARQL tool that I wrote in typescript. … It sees what kind of query it is. It takes the sparql query, breaks it into resources, then does FHIR REST API queries over the parts of that query that are expressible that way. … That would work if Mayo has an efficient way to do FHIR REST API queries.
deepak: They have decently fast ones.
ericp: Then you could use SPARQL instead of shex. … Look for FHIR-SPARQL. But the downside is it's written in typescript, using an inefficient JS SPARQL engine. Best would be to finish porting that code to java. … Claude started that, but didn't have funding to finish.
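As an illustration of the approach, here is the kind of SPARQL such a tool might decompose into FHIR REST API calls. The property paths follow the R5 FHIR RDF style (fhir:v leaf values, fhir:link for resolved references, RDF lists for repeating elements), but the details are illustrative:

```sparql
PREFIX fhir: <http://hl7.org/fhir/>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

# Find Observations and the family name of their subject Patient.
# A FHIR-SPARQL style tool could split this into REST queries
# (GET [base]/Observation?..., GET [base]/Patient?...) and join.
SELECT ?obs ?family WHERE {
  ?obs a fhir:Observation ;
       fhir:subject/fhir:link ?patient .
  ?patient a fhir:Patient ;
       fhir:name/rdf:rest*/rdf:first/fhir:family/fhir:v ?family .
}
```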
dbooth: Maybe you could get funding to do that?
ericp: Claude arrives May 7 in Lille. Madrid is May 9. … I don't have funding to go, so I need to figure out how much I can couch surf. … Want to loop in Jose Labra too, because he's doing another implementation that is portable across platforms, written in Rust.
dbooth: What help will you need from Deepak?
ericp: Need to be able to contact him for guidance on fixing problems we find.
jim: I'll be leaving Geneva Fri May 9.
ericp: We'll want to start on this while we're in Lille, because Jim will be in the same timezone.
erich: I'm writing a java lib to make Amazon storage look like linked web storage. … 30-40 billion triples … Stress testing the use of RDF lists. … Complex datatypes would help, such as being discussed now.
ericP: We used all turtle in Janeiro Digital. One thing that will be confusing: FHIR is using weak ETags but claiming them as strong. … They're violating the rules of strong ETags, in a way that makes them weak.
erich: How FHIR compliant are EHRs like Cerner?
ericP: A lot of systems are pretty far behind, like DSTU 2 or 3. … Paris system is using FHIR R4, but have extensions for R5. … Business interests want vendor lock-in, rather than standards compliance.
dbooth: I've heard that R5 doesn't have much uptake, but they expect R6 to have more.
gaurav: Playground has a bunch of IRI code, and some tests. I wrote a PR that fixes that.
jim: round-trip validation has been disabled for a long time, because it is too slow.
ericP: ShEx validation is also disabled. … Grahame made a change that broke it, and it took a lot of time to figure out why. … But it also took a long time to run. … Iovka is working on an efficient validator for current FHIR RDF ShEx (using EXTENDS). … Claude and I will work w her in Lille on that (in a few days). … Then it could be re-enabled in the build process.
dbooth: EricP is planning to get this working again soon, and hoping to get it back into the FHIR build process.
Add a top level node for each FHIR code ontology? #57
ericP: The way forward would be to come up w a compelling use case, then go to Grahame.
dbooth: That's assuming we don't change the URIs. Another possibility would be to change those URIs, to put them under subdirectories that we can more easily control.
erich: Complex Data Types (CDT) is implemented in Jena … And RDF lists can be turned into CDT arrays. … CDT is a proposal … We also need a standard binary representation of RDF. … RDF HDT allows query on it. … They treat all literals as strings, which doesn't work well for a lot of numeric data, but it's very fast. … Great for huge amounts of data. … I used it with Apache Arrow that I made. … HDF5 is used in the scientific world, like an intelligent zip file. Chunks can be compressed, but you can pull out parts. … I'm putting the guts of HDT into HDF5. … OpenLink Software won't touch it if it isn't a standard. … I harvested all of the imaging commons data and it was 90 TB of data, and pulled out all the metadata in a week. … I'm working on a SOLID project -- Java library that wraps a read/write storage for linked web storage. … Want to use it for image annotations and displays.
erich: The HDT source code is GPL. I'd rather have it MIT or Apache 2 license. … but it squeezes the data down a lot. … It doesn't need to be serialized/deserialized. Like a direct memory copy. … HDF5 is used for a lot of learning models. … The ordering of the HDT data is not currently usable. … I'm stuffing HDT data into HDF5.
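The core HDT idea discussed here can be sketched in a few lines: replace RDF terms with integer IDs drawn from a sorted dictionary, so triples become compact ID tuples. This is a toy illustration only; real HDT additionally bit-packs the IDs and builds indexes:

```python
def build_dictionary_encoding(triples):
    """Toy sketch of HDT-style dictionary encoding: RDF terms become
    integer IDs from a sorted dictionary, and triples become ID tuples.
    Real HDT also bit-packs the IDs and builds search indexes."""
    terms = sorted({t for triple in triples for t in triple})
    term_to_id = {term: i for i, term in enumerate(terms)}
    encoded = [tuple(term_to_id[t] for t in triple) for triple in triples]
    return terms, encoded

def decode(terms, encoded):
    """Recover the original triples from the dictionary + ID tuples."""
    return [tuple(terms[i] for i in triple) for triple in encoded]

triples = [
    ("ex:img1", "rdf:type", "dcm:SOPInstance"),
    ("ex:img1", "dcm:Modality", "CT"),
]
terms, encoded = build_dictionary_encoding(triples)
```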
ADJOURNED
David Booth, Detlef Grittner, Erich Bremer, Jim Balhoff
Regrets
Gaurav Vaidya
Chair
David Booth
Scribe
dbooth
Meeting minutes
DICOM
erich: Running DCM to RDF converter on Stony Brook's largest DICOM repo.
Representation of DICOM "arrays" #149
erich: I've been putting in array handling as part of CDT. … including multidimensional array access … Also asked if CDT slicing could be added, supported by Apache Arrow … Want to extend to multidimensional case. … Olaf saw the value, but doesn't want to change the spec yet.
detlef: There's an implementation in Apache Jena, but no SPARQL support outside Jena. … Virtuoso said "when it's a standard we'll implement it"
dbooth: How much of a show stopper is it for you, to not (yet) have those multi-dim access functions?
erich: Not much, because I'll implement it myself. … CDT is a possible future solution. … Another suggestion is what detlef does, to represent it as JSON. … which is similar to CDT, but you don't have the SPARQL functions to connect it with other data
erich: Still want DICOM to take this on, but we're doing a starting point.
Using the entity-relationship model of the real world of DICOM #159
dbooth: To gen a DICOM ont, is there a machine readable rep of the ER def?
detlef: Yes, the authoritative version is in a set of XML files of the ER model. … We did it using XSLT, but I cannot give you the code. … And we use it to factor the DICOM metadata.
erich: I'm trying to express the DICOM std in SHACL. It would be interesting if we produce the same result.
ACTION: Erich to try generating an OWL ont from DICOM ER model, in addition to SHACL
detlef: HL7 v2 is connected with DICOM, so FHIR must have at least that same connection to DICOM. … There's a workflow for our hospital info sys, to create a study instance ID, and used by all images. … And for reporting, from DICOM perspective, they can put HL7 v2 as a document, so maybe there is also a way for it to ref FHIR doc.
ACTION: Detlef to look into linking FHIR with DICOM
detlef: You can rely on the patient ID in DICOM, and can link that way to the FHIR graph. … That's already possible. … Might want more workflow support. … with RDF support.
dbooth: Also suggest figuring out who in the FHIR community is closest to this.
Relationship of FHIR and DICOM RDF to Linked Web Storage #162
erich: Every POD has its own metadata to expose the data it holds. How do we advertise what's available in a particular POD? … Maybe advertise the SHACL shapes for the POD. … Do we need to put anything into FHIR RDF to make LWS people happy?
erich: EricP is one of the LWS chairs, and I'm in the WG.
HDT code
detlef: The libraries and code are actually licensed under the Lesser GPL (LGPL) license.
erich: Greg Williams is also interested in HDT, but some of the docs don't align w the implementation. … I'm reimplementing HDT, with a slight variation, carrying the spirit of the design into an HDF5 file. That makes it easier to put the guts of HDT into it, but in an HDF5 file. … I'm debugging the writing, then will build the reader. … If that works well, I'll weave it into Jena to do SPARQL queries, and extend to support quads. … Want to park the HDT into other kinds of file formats. Could even park it into DICOM itself. Blocks, super blocks, etc. all end up being vectors in the end. Could park them in the DICOM and then have software that knows how to interpret them. … I'm using the HDF viewer as a debugger. … I think the world needs a binary RDF to deal with the volume of medical data.
erich: HDT gives you index-sorted memory structures. … Very compact, and doesn't require deserialization. But it's read only. Great for bulk data.
David Booth, Detlef Grittner, Erich Bremer, EricP, Gaurav Vaidya
Regrets
-
Chair
David Booth
Scribe
ericP, dbooth_
Meeting minutes
IRI Stems
gaurav: chasing votes on adding a notice saying that IRIs should not be used as values in the system; only meant to be used as types on Codings … looking for new proposals for IRI prefixes … e.g. SNOMED, because we don't have key/value for subselection
dbooth: Private elements are uniquely identified by PrivateCreatorID + dcm:id … Do we want to pursue the idea of making an algorithm to make a unique IRI from that pair?
erich: Maybe not make the perfect be the enemy of the good.
ACTION: Erich to implement this in his DICOM RDF converter
erich: This is a conversation placeholder. How should this data look in a SOLID pod?
eric: Janeiro Digital did a project putting FHIR in a SOLID pod. … We put FHIR data into pods, and other health data into pods. … Solid application interoperability (SAI) app store on android. App store would have resource trees that associated a dir with a shex schema. … That allowed you to find the file. … We extended that. We never dealt w imaging. … If we had a FHIR resource we'd shove it in there. … Also: TermInfo problem. In terminologies, you have varying levels of detail. SNOMED has a compositional grammar and precoordinated terms. … Therefore even if you had a code, it could have laterality, severity, body site, etc. all packaged into one code. … One code saying 3 out of 4 allergic reactions. In other words, it has things that would not normally be in the observation. … How can you uniformly query that? … Do you make your queries really smart? Or do you normalize the data on ingestion? … Some of the DICOM extensions are for diagnostic info, right?
detlef: yes
erich: I'm interested in pathology whole slide imaging. They need to be loaded, managed, marked up for deep learning pipelines. … Calculated polygons overlaid … System I'm building is RDF-based. Started aligning to SOLID. … Need to load DICOM images and metadata. … How can SOLID pod extend the metadata?
eric: Metadata would be markup of regions? erich: Yes.
erich: How should this look in the POD? I'm using geosparql feature groups for polygons, annotated w SNOMED URIs. … Want to know what the pixel width and height are, tile sizes, microns per pixel (for scaling).
eric: Do you want that on a HEAD request or a GET?
erich: The pathology files are huge. You're not going to download the entire thing. … I want to extend metadata to select deeper into the files. … Want to expose that metadata
eric: You make every dir have a catalog.ttl w metadata, but nobody would find their way back from the resource to that catalog.
erich: I could show folks what I've made, and get feedback.
David Booth, Detlef Grittner, Erich Bremer, EricP, Gaurav Vaidya, Jim Balhoff
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
DICOM
erich: Still working on implementing. The spec is complicated.
Validating FHIR RDF examples
eric: We got validation working, with Iovka and Lia and Claude. Working in Jena. Never heard back from Deepak on how to execute the ShEx generator, but the remaining work is to run the examples round-trip through the HAPI implementation and through the validator.
ACTION: DBooth to follow up w Deepak on connecting w Eric (and Jim?)
jim: Looks like everything I submitted was merged in March. … If the shorthand syntax for boolean, int and decimals is in the examples, then they're up to date.
Erich's slides: https://lists.w3.org/Archives/Public/www-archive/2025Jun/att-0001/Halcyon-2025-06-12.pdf
erich: Halcyon is whole slide imaging. Biopsy, high-res scanning. Tissue samples are often stained to highlight materials. … Over 100kx100k image size. … Pathologists mark up parts of it to make ground truth, then use that as training data. … After training, we want to apply these models to new data. … Models made from deep learning. … How to build a system to handle all this? … For me, it's a matter of rebuilding it. … RDF all the way. It's now bringing in both image data and other kinds of patient data. … Basic arch uses jena w TDB2, Fuseki, Jetty. Storage based on SOLID. Using IIIF protocol, tiling engine. Zephyr multiview. … Previously had keycloak, but pulled it out. … Use SPARQL queries. … Big images are split into tiles, typically 256x256 images. … Also using image pyramids for scaling up and down. … Halcyon uses HTTP range requests. … a particular area at a particular scale. … Lots of polygons. I use well-known-text, GeoSparql. … Want to be able to do a query for a male smoker with a tumor within 10 nm of a specified region. … I use Hilbert curves. … Nice property of 2D locality. … There's also the z-curve technique. … Basically it maps 2D space into 1D space. … A polygon can be expressed as a series of intersections w the hilbert curve. … Hilbert curves can be extended to n-dimensional space. … This helps w query performance. … Using GeoSparql, with prov. … Annotating w classification based on probability threshold. … Problem is that the amount of data is growing. One image 100M triples. … Some devices generate multiple channels, bringing it up over a B triples per image. … Don't know how I'll handle trillions. … Everything is being indexed together. … Maybe move each feature set into its own store. … Looked at HDT. … But it represents all literals as strings, and lots of my data is numbers. … I went off on my own working on it. … BeakGraph is backed by Arrow. … Halcyon recognizes this as a research object.
… Central jena store has a ref to BeakGraphs, and loads them when needed. … I've approached limits even with Arrow. … Now working on a new version, that's HDT wrapped in HDF5. … This also allows me to focus on the theoretical concepts to get them right. … to get HDT implemented right. … HDT allows bit packing, and that helps a lot. … What about stuffing RDF into DICOM? … Would like to be able to stuff billions of triples into an HDF5 file. … For UI, I'm using Apache Wicket -- java driven. … When you pass a java object to the framework, it generates the display. … I'm adding in jena, making Vandegraph. … Wicket has an interface that I used to make RDF a first-class citizen. … Object passed to Wicket would be a jena graph, or a resource. … Like a resource "MyURI" … It might list the properties in alphabetical order. … SHACL comes into play when I want it to display a certain way. … When the data means a particular something, I can make it display how I want. … SPARQL result set gets passed to Wicket. … Trying to set up login to be able to switch it to SOLID. … After login I can see results of a sparql select using jena. … RDF wrapper around dimensional renderer … There's a sparql endpoint too. … Not full compliance w SOLID, but it's the architectural plan. … It will fire up sparql endpoints as needed when going to beakgraph. … Federating queries. … TDB2 is basically a dataset of datasets. … At the moment jena does those federated queries one by one. I'd rather do them in parallel. … Found another group also working on similar stuff. Meeting w them tomorrow. … Josh Moore is the connecting agent. … They want to stuff Zarr files into HDT. He's an RDF fan. … I also developed a 3D graph viewer, to see if I was doing the data correctly. … We can look at images w multiviewer. … It talks to the tiling viewer. If you navigate one image, the others move in sync. … Working well. But it was a dead end, because a microscope can focus at different levels.
Now we have stacks of whole-slide images representing layers. … I can zoom in and out fast because it's low res. As you zoom in, you'll ask for higher res, but constraining your field of view. That multiscale pyramid helps w that. … But the problem is shifting to 3D space. … Zephyr is the new viewer, seeing an image tilted in 3D. … So it's a mix of multi resolution images, blended. … Now having 3D I can also use it for 2D.
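The 2D-to-1D Hilbert mapping Erich mentions can be sketched with the standard iterative algorithm (n is the grid side length, a power of two); this is a textbook illustration, not Halcyon's actual code:

```python
def _rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so the sub-curve is oriented consistently."""
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x  # swap
    return x, y

def xy2d(n, x, y):
    """Map (x, y) on an n-by-n grid to its distance along the Hilbert
    curve.  Consecutive distances map to adjacent cells, which is the
    2D-locality property useful for spatial indexing."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d
```

Polygon boundaries can then be expressed as runs of these 1D indices, turning 2D containment tests into range queries.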
eric: Can you pan at an angle?
erich: Yes.
erich: Using SNOMED URIs for annotating … Rendering is controlled by SHACL applied to the display data. … Also using Dash. … Still need to do modeling of image and feature stack, DICOM pathology WSI support, GeoSparql operators, and code cleanup.
eric: Any interest beyond 3D, like adding time dimension, or different study axis?
erich: I haven't had to deal with more than 3D yet, but keeping it in mind. … Want to implement composite datatypes. … and map it to HDF5. … There's a group that did pyradiomics. Extracted those features from DICOM images. … They defined the operators well, but we discovered we can use them for pathology. … Might make RDF version of it. … Roadmap: 1. Federated learning; 2. LLM-driven query interface using SHACL-RAG. 3. Alignment of storage w W3C LWS.
(erich does demo)
erich: I like Wicket because I can use both java and JS.
David Booth, Detlef Grittner, Erich Bremer, EricP, Gaurav Vaidya
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
DICOM
(Erich does demo)
(Erich discusses RDF representation of private data elements issue #145)
detlef: You can use the same private creator ID across groups, but only one in each group. … You need at a minimum the group and the private creator ID.
ACTION: Erich will update his code, per discussion w Detlef
gaurav: No progress on the FHIR request. It has enough votes, but no action yet. … Continuing to come up w list of next IRI stems to request.
DICOM
dbooth: Maybe want to have a constant string like "CreatorID:" at the beginning of the fragID: urn:oid:1.2.3#CreatorID:CTP/0013 … The fragID might need to be percent encoded, if it is coming from a DICOM long string.
erich: If we use the private element ID instead of the creator ID, it will be unique and will not need percent encoding in the fragID
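A sketch of the fragment scheme discussed: the "CreatorID:" label and urn:oid base are from the discussion, but the exact encoding rules and helper name are assumptions. Creator IDs taken from DICOM long strings get percent-encoded:

```python
from urllib.parse import quote

def private_element_frag(base_urn, creator_id, element_id):
    """Sketch of a fragment identifier for a private data element:
    a constant label plus the private creator ID and element ID.
    DICOM long strings may contain characters that are not legal in
    a URI fragment, so the creator ID is percent-encoded.
    Naming is illustrative, not a spec."""
    frag = "CreatorID:" + quote(creator_id, safe="") + "/" + element_id
    return f"{base_urn}#{frag}"

iri = private_element_frag("urn:oid:1.2.3", "CTP", "0013")
```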
ACTION: Detlef to add a proposed example to the github issue
dbooth: Should we generalize the generation of a fhir:link everywhere the value is of type xsd:anyURI ?
ACTION: EricP to look at what other properties are affected by this
FHIR RDF questions
detlef: Confused by seeing fhir:name.family in one place but fhir:family in another place. Which is it?
ericP: Originally we used fully qualified property names, because FHIR originally did not enforce consistent property naming. But that's now changed in v5, to shorter property names.
ericP: I think you could use HAPI to convert R4 to R5 FHIR RDF.
dbooth: Should we try to provide something to convert R4 to R5?
ericP: You would have to pull in a POJO for each type you want to handle.
detlef: I have R4 data, but need to also now use R5. … I'll try writing the R4 data as JSON and see if I can read it as R5.
David Booth, Detlef Grittner, Erich Bremer, EricP, Jim Balhoff, Tim Prudhomme
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
DICOM
detlef: Re URI for private elements. I put a proposal in issue 145 … I think you can just use the element name. Create a URI from these two elements. … But to search it, you need to go back to the private creator ID and the ID of that element. … But since we don't use the URI for anything else, it's okay.
tim: This is where fhir:link is used (as different properties with the same name)
"TestScript.metadata.capability.link: Links to the FHIR specification" ,
"GraphDefinition.link: Links this graph makes rules about" ,
"DeviceDefinition.link: An associated device, attached to, used with, communicating with or linking a previous or new device model to the focal device" ,
"Person.link: Link to a resource that concerns the same actual person" ,
"Bundle.entry.link: Links related to this entry" ,
"Bundle.link: Links related to this Bundle" ,
"DiagnosticReport.media.link: Reference to the image or data source" ,
"TestScript.metadata.link: Links to the FHIR specification" ,
"Patient.link: Link to a Patient or RelatedPerson resource that concerns the same actual individual" ,
"URI of a reference" .
tim: If you are adding an RDF fhir:link, you should not do it where the other kind of link is used.
ericP: How about calling it fhir:n instead of fhir:link?
dbooth: If we put our fhir:link directly under capital R Reference, then there is the possibility that the other FHIR folks could put their own link property there, and we would have a clash. … If so, could we distinguish them by type?
jim: Need to put the version in the query string for fhir:canonical
AGREED: generate a fhir:link as a sibling of fhir:v in each primitive uri object.
AGREED: as a special case, the version needs to be moved to a query string in canonical
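In Turtle, the two agreements might look like the sketch below; the resource and element names are illustrative, and the exact query parameter name for the canonical version is an assumption:

```turtle
@prefix fhir: <http://hl7.org/fhir/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# A primitive uri object: fhir:link generated as a sibling of fhir:v.
<http://example.org/fhir/Patient/pat1> a fhir:Patient ;
    fhir:managingOrganization [
        fhir:reference [
            fhir:v "Organization/org1"^^xsd:anyURI ;
            fhir:link <http://example.org/fhir/Organization/org1>
        ]
    ] .

# Canonical special case: the |version suffix moves to a query string.
[] fhir:instantiatesCanonical ( [
    fhir:v "http://example.org/fhir/PlanDefinition/pd1|2.0"^^xsd:anyURI ;
    fhir:link <http://example.org/fhir/PlanDefinition/pd1?version=2.0>
] ) .
```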
ericP: I'll also proto the fhir:n option
169 Make R5 (and R6) HAPI parser/serializer for FHIR RDF easier to find
ericP: I was waiting until the shex and example fixes were done. … I'll make the PR after those are done, to get the right version into HAPI
tim: The java core library does this to produce the spec. Does HAPI use the core one?
ericP: No, they do two different things. The one for the spec does pretty printing, keeping comments, etc.
jim: The core parser does not parse any more. Not used. It only serializes.
jim: There are round trip tests for RDF, but they're disabled.
dbooth: We should make sure the HAPI round tripping works.
ericP: It parses JSON, serializes as turtle, parses as turtle. Then it compares both of those objects for equality. … It failed on a bunch of things because of the core POJOs regarding contained. The operator equals test requires the same order of elements in a list. … But that is fixed now by James Agnew.
David Booth, Eric Jahn, EricP, Gaurav Vaidya, Jim Balhoff, Rita Torkzadeh, Tim Prudhomme
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
Intros
ericP: Long time RDF geek
gaurav: Semantic web technologist at U of NC
jim: Also at U of NC. We've been contributing to FHIR RDF through a grant under Mayo clinic. Done a lot of bio ontology work.
tim: Healthcare software dev. Strong interest in ont and RDF. Working for Firely.
erich: Work at Stony Brook U, long time RDF geek. Been working on a DICOM RDF representation. Also interested in FHIR RDF.
eric-jahn: Interop arch at BitFocus. Moving heavily into OWL. Implementing FHIR more and more. Also interested in how FHIR and other RDF representations relate. Been following since 2014.
Rita_Torkzadeh: On a call w Brian Pech and David Booth before. Interested in DICOM RDF rep.
DBooth: Started this FHIR RDF effort with EricP and others.
ericP: canonical and Reference don't use the same resolution path. … They require different kinds of queries. … We invented fhir:link … RDF itself does not have relative URLs, but each RDF format does. … So in converting them to absolute URIs, we had to prepend a base with "../" … Values of FHIR "uri", "url" and "canonical" types need to have a fhir:link generated as an RDF node, like <http://example.org/fhir/..foo> … There was a debate about how to do this with Reference.
(Jim shows code changes he did to implement this) … Something like "uri" needs a prefix added, to be absolute.
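A minimal sketch of generating the absolute IRI for such a fhir:link node, assuming plain RFC 3986 resolution and a hypothetical base `http://example.org/fhir/`:

```python
from urllib.parse import urljoin

# Hypothetical base; in practice this would be the document's base IRI
BASE = "http://example.org/fhir/"

def link_node(uri_value):
    """Return the absolute IRI to use as the fhir:link node for a FHIR
    uri/url/canonical value (assumes standard RFC 3986 resolution)."""
    return urljoin(BASE, uri_value)

print(link_node("Patient/123"))           # relative value gets the prefix added
# -> http://example.org/fhir/Patient/123
print(link_node("http://hl7.org/fhir/"))  # absolute values pass through
# -> http://hl7.org/fhir/
```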
ericP: Not sure that profiles are being generated properly anyway, so I suggest disabling profile generation of examples.
dbooth: Does the problem only appear in profiles?
jim: I will check.
ericP: I noticed that MolecularDefinition has a problem. … Wonder how example errors are handled in the FHIR build
dbooth: Would be good if there's an error in an example, to generate an RDF example with a comment saying the example could not be generated due to an error in the original.
dbooth: Should ask the other FHIR folks what to do if an error is found in an example.
tim: This has a valueset binding. … ValueSets are not yet linked to Code systems, but adding fhir:link would handle that. … Wouldn't be able to use OWL to validate, because of OWA, but ShEx could validate it. … Right now the shex only says it needs to be a CodeableConcept. … Could improve the shex to check it. … But would probably be worth putting it into the ontology. … Also related to another issue: 168
tim: When you define a valueset, you can do it w a rule. … E.g., if you have a code here, it needs to be a descendant of a SNOMED code. … Would be nice to represent the descendant rel as an RDF subclass rel. … That would allow OWL to do valueset expansion.
tim: For finding the rel, OWL could do that. But for validation shex would still be needed, for CWA.
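The idea of valueset expansion via the descendant relation can be sketched as a transitive walk; the code hierarchy below is a toy example for illustration, not real terminology bindings:

```python
# Toy hierarchy (child -> parent); illustrative only
parent = {
    "22298006": "57809008",   # e.g. myocardial infarction -> myocardial disease
    "57809008": "56265001",   # e.g. myocardial disease -> heart disease
}

def expand_descendants_or_self(root):
    """Expand a "descendant-or-self of <root>" valueset rule by walking
    the subclass/descendant relation transitively."""
    children = {}
    for child, p in parent.items():
        children.setdefault(p, set()).add(child)
    result, stack = {root}, [root]
    while stack:
        for ch in children.get(stack.pop(), ()):
            if ch not in result:
                result.add(ch)
                stack.append(ch)
    return result

print(sorted(expand_descendants_or_self("56265001")))
# -> ['22298006', '56265001', '57809008']
```

As noted above, an OWL reasoner could compute this closure, while a closed-world validator (ShEx) would still be needed to check membership.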
ericP: I don't think you'll get validation without connecting to a terminology server. … Could use a semantic action to talk to a terminology server.
tim: Could put this into FHIR R6. They want R6 changes by Nov 2025.
ericP: The addition of fhir:link breaks a lot of my shex code. Want to consider changing the name of fhir:link to fhir:n .
ericP: Some FHIR resources have a "link" property, which causes shex problems for the fhir:link property that we added … I propose we rename our fhir:link to fhir:n
dbooth: How much will this disturb existing FHIR RDF users?
ericP: In convo w Dutch folks who were using FHIR RDF R4, they never adopted R5.
dbooth: Also concerned that FHIR folks might use n as a property name.
ericP: We could also put fhir:v and fhir:n in a separate namespace.
dbooth: Sounds annoying.
dbooth: Could also use fhir:l (letter ell)
ericP: Intending to update HAPI and shex tooling, to pull whatever name we decide upon, from one source.
ericP: Would be good to ask Grahame also.
dbooth: We might want to use fhir:n for some other purpose in the future, such as if we do something else with lists in the future. … I'm leaning toward fhir:l
AGREED: Change from fhir:link to fhir:l
ACTION: Jim to change fhir:link to fhir:l in HAPI
jim: Haven't yet looked at handling those relative URIs in HAPI … Also wondering about the ones that are fragment identifiers.
dbooth: they are also relative URIs
dbooth: Looks to me like the bnode on line 26 in the example Jim is showing, needs to be a <#1111> relative URI instead of a bnode.
Minutes manually created (not a transcript), formatted by scribe.perl version 244 (Thu Feb 27 01:23:09 2025 UTC).
Diagnostics
Warning: ‘i/EricP to draft/jim: Looking at this example: https://github.com/fhircat/org.hl7.fhir.core-turtle-examples-writer/blob/hcls-fhir-rdf-issue-121-link-diff/fhir-examples-out/plandefinition-example-kdn5-simplified(KDN5).ttl’ interpreted as inserting ‘jim: Looking at this example: https://github.com/fhircat/org.hl7.fhir.core-turtle-examples-writer/blob/hcls-fhir-rdf-issue-121-link-diff/fhir-examples-out/plandefinition-example-kdn5-simplified(KDN5).ttl’ before ‘EricP to draft’
Succeeded: i/EricP to draft/jim: Looking at this example: https://github.com/fhircat/org.hl7.fhir.core-turtle-examples-writer/blob/hcls-fhir-rdf-issue-121-link-diff/fhir-examples-out/plandefinition-example-kdn5-simplified(KDN5).ttl
dbooth: Looks good, but I wonder if it is safe to use a relative URI there. Does the base need to be set specially for it, or not?
tim: They're relative to the server
(Jim looked at the FHIR documentation for IDs, and verified that it seems safe to use relative URIs.)
dbooth: We lose visual correspondence between the Turtle and the JSON
ericP: Round tripping, those relative URIs will no longer be relative. … The root can be chopped off when round tripping, and to check it, you would have to verify that the base matches.
dbooth: What if the base doesn't match, when someone makes FHIR RDF?
Resolving relative references against a RESTful base.
If the Bundle entry containing the reference does not have a fullUrl that matches the [RESTful URL regex] and the Bundle is a batch or transaction and the entry.request.method is POST, PUT or PATCH
take the base URL of the server that is the target of the batch/transaction and append the relative reference to it (e.g., if the transaction is being posted to "https://fhir.somewhere.org", then the expanded reference would be "https://fhir.somewhere.org/Patient/123")
Follow the steps for Resolving absolute references above
tim: Note that the rules for resolving references in contained resources are the same as those for resolving references in the resource that contains the contained resource. I.e. the fullUrl of the containing resource is used when determining the base for relative references, etc.
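The quoted resolution step can be sketched in Python; `resolve_in_transaction` is a hypothetical helper name, not part of any FHIR library:

```python
from urllib.parse import urljoin

def resolve_in_transaction(server_base, relative_ref):
    """Append the relative reference to the base URL of the server that
    is the target of the batch/transaction (sketch of the step above)."""
    # Ensure the base ends with "/" so urljoin appends rather than replaces
    if not server_base.endswith("/"):
        server_base += "/"
    return urljoin(server_base, relative_ref)

print(resolve_in_transaction("https://fhir.somewhere.org", "Patient/123"))
# -> https://fhir.somewhere.org/Patient/123
```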
ericP: The HAPI parser of FHIR RDF can ignore those absolute URIs, and it can just use the fhir:v value that references the ID.
AGREED: To Jim's proposed solution.
IRI stems
gaurav: Making a spreadsheet. Next time we add IRI stems, we should do multiple at once.
jim: Made a couple more changes to issue 120 also. Started outputting IDs as hash relative IRIs. … Now working on the harder part of swapping in the IRIs for the contained parts.
jim: I looked at RDF-star for annotating a triple; in Turtle it will generate a bnode with a reified triple. Or you can provide an ID for the generated node, using a tilde with the bracket syntax.
ACTION: EricP to mention to RDF-star group that bracket-equals would align better w N3 than tilde.
jim: Why change the model for RDF-star annotation? Seems the same as the old reification.
ericP: Their canonical example was "Bob works for IBM", annotated with "in 2002".
jim: There's a proposal for composite data types.
erich: It caters to single dimension lists and hashes, but could be extended to multi-D arrays. … CDT allows you to SPARQL over that data.
ericP: It allows you to create new data types. … Whereas if there were a first-class list object in RDF, then you could do it.
HAPI
ericP: Haven't gotten to it yet. Still working on fhir:l. Did the shex generator. … Started an AMIA demonstrator, but got hung up on packaging issues.
ericP: I think code system is a URL. That could also have a fhir:l. Right now canonical extends URI, and URL extends URI. And we put special rules on canonical to add fhir:l, but we could change it to any place you see a URI.
ericP: I tried to get Deepak to make a small version of FHIR that covers most of the def of structure def, e.g., 5 resources with most of the properties, having all of the language features.
dbooth: Only sustainable if it is auto-generated.
ericP: Or the code could throw an error if there's a new feature found.
ericP: Would be nice to have an auto generated list of IRI stems, produced from the terminology server.
gaurav: We have a script that generates an expanded version of this.
dbooth: What about a github action in our repo, to pull from THO and gen the list?
gaurav: But it would miss the ones that are in flight (i.e., not yet in THO, but going through the process) … Concern about github action is that UTG repo is quite big. … Start w a manual markdown version.
erich: the properties can be given either as numbers or as names:
dcm:Modality "CT";
dcm:00080060 "CT"
erich: I used owl:sameAs between them
detlef: suggest using equivalentProperty instead
dbooth: agree. owl:sameAs could cause problems with distinguishing them.
erich: I'm also catching invalid empty VR tm (time value) and marking them as invalid in the RDF … 2276: (0018,1200) DS #0 [] DateOfLastCalibration … Jena riot detected the invalid value. … Good use case for SHACL to validate things in DICOM.
erich: The little bit of DICOM that is RDF, want to use my DICOM-to-SHACL to tease out the keywords and rules. Want to handle that keyword set to map those terms to URIs. … That mapping is part of the DICOM spec. … I drafted the DICOM paper that covers the work so far. Want your review. … Hannah Bast (QLever author) has her own DICOM model. … She called ours "unclean". … Want to remove the inline binaries and see if the data loads faster. … I'll publish a new version w these changes. … Then benchmark QLever against Virtuoso, … starting w load times, though query times also matter.
detlef: I prefer the tags rather than the names. They're usually shorter.
erich: I'll try it.
detlef: I'll try your code.
erich: I also want to try geosparql queries.
erich: Next step is to get the paper done. That will advertise it, and get more input. Then see if DICOM is interested.
FHIR RDF R5
detlef: Colleague of mine is very unhappy with it.
dbooth: Please capture that feedback in our issues list so that we can address it.
tim: Also it looks like it uses shex 2.2. It would be good to know what version of shex it uses.
ericp: It uses a version we haven't yet merged back into jena. … We're still working on usability for it.
tim: Also the FHIRPath constraints are not in the public shex.
ericp: Yes, they need a lot of debugging. … Also the Jena shex version that Iovka put together is much faster, so we hope to use that one.
tim: So far I've been getting around these issues. … I'm using it for encoding more of the valueset bindings. … this issue: w3c/hcls-fhir-rdf#167 … using a reasoner with valuesets … I think I have a solution for doing that w an ontology, e.g., SNOMED … Basically because of the OWA you need to use shex for the validation part. … What's the status on making a SHACL version?
ericp: Probably possible, but wouldn't give you as good validation because of no notion of closed shapes. … Would probably work for base resources, but not well for profiles. … You'd have to customize it for each profile. … SHACL was meant to be easy to implement; ShEx was meant to be easy on users.
detlef: Want to receive FHIR R5 and convert it to FHIR R5 RDF. … But found that HAPI does not contain a real R5 parser. … Tim said I should look at the FHIR core for the real parser. … But then we ran into multiple issues. … Went back to HAPI FHIR base, and found that this parser is generic, and creates the correct RDF and JSON in R5 but not R4. … And we asked it for a FHIR context for R5, but it turns out it is a hybrid version that uses R4 syntax but has some R5 contexts. Broken. … Tried to replace it w the R5 parser, but it has an implementation of 28k lines in one class.
ericp: That's Grahame's implementation.
detlef: I was guessing it was generated. … Also serialization is completely missing. … Lots of comments of implementation missing. … My colleague changed the generic parser to support proper R5 (we think). That work is ongoing. … HAPI FHIR base has some tests for it. Converted XML to RDF and checked whether that creates the correct XML again. … And my colleague was checking against the examples, but they're created from a third implementation. … My colleague collected issues believing he would get an R5 parser. E.g., still uses fully qualified property names, still uses the R4 list syntax, etc. … Looks like nobody actually implemented the R5 parser. Should have thrown an exception if it isn't implemented.
tim: We switched to using a class called turtle parser.
jim: Should contained resources be identified the same way as Bundled resources? Our proposed resolution for issue 170 (contained) differs from how Bundles are handled currently.
Contained resources share the same internal id resolution space as the parent resource (for id attributes, see Narrative references).
When resolving references, references are resolved by looking through the 'container' resource - the one that contains the other resources. Since there are no nested contained resources, there is only one container resource.
tim: Fragment-only URL References to contained resources are never resolved outside the container resource. Specifically, resolution stops at the elements Bundle.entry.resource and Parameters.parameter.resource, but not at DomainResource.contained. To reference a contained resource in a different entry within a Parameters or Bundle (or anywhere else), the reference must include the containing resource. E.g. Observation/123#pat.
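That "Observation/123#pat" rule can be sketched with standard URI resolution (the server base is a placeholder):

```python
from urllib.parse import urljoin

SERVER = "https://fhir.example.org/"  # hypothetical server base

# A fragment-only reference like "#pat" never resolves outside its
# container resource. To reach a contained resource from elsewhere,
# the reference must include the containing resource:
print(urljoin(SERVER, "Observation/123#pat"))
# -> https://fhir.example.org/Observation/123#pat
```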
David Booth, Gaurav Vaidya, Jim Balhoff, Ken Lord, Tim Prudhomme
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
FHIR R6
ken: Continuing efforts w Eric Jahn, who recommended that I attend.
ken: Company MDIX, solutions for exchanging data in healthcare market. … Clinical healthcare, FHIR, v2, CDA models. … Also other clinical areas, x12. … Also in healthcare interop -- not just clinical care, but also social care. Provide interop btwn different exchange formats.
jim: U of NC, bio ontologies. Worked on the FHIR RDF serialization.
gaurav: Also work w Jim at the Renaissance Computing Institute at U of NC. … Working on IRI stems, and getting IRI prefixes registered.
tim: Worked w ontologies in the past, now working for Firely, hoping to use FHIR RDF in the future.
There are plans for the R6 version. W each version, they update the maturity level of everything in the spec, based on how it's being used in the real world. … In this next release, I learned that they want to only include parts of the spec that are at the last level of maturity. … They're removing everything that is not completely mature, which sounds like it would include RDF and most things in the FHIR spec, but they'll skip many things ahead in maturity to keep them in the spec. … So they did this analysis, and I think it would include RDF. Most things that are at least at level 3 they'll keep. … The things they're taking out, they'll put in other impl guides (IGs), or in an "Additional Resources" part of the spec. … Anything at R6 should not introduce breaking changes going forward. … For RDF R6, that means there cannot be any breaking changes going forward. … So we should review all issues that are still affecting R5. Anything new would have to be in by Nov. … But if we do add something, as long as it's not a breaking change going forward, then it's okay. … They're still working on how this will work.
tim: Maturity level: https://build.fhir.org/versions.html#maturity … This defines what they mean by forward and backward compatible. … There are things in shex that don't fully represent valueset bindings. … So making it reflect how those valueset bindings work, I don't think that would be a breaking change, but would make it more consistent w other parts of FHIR. … They really want R6 to be more stable, and forward compatible. … We're also fixing a few things w R5, and I assume that would be okay. But I've also proposed some things to go into R6, and I hope we can finish those by Nov.
ken: My opinion: there's been lots of talk over the last couple years. Business motivation is to get more of the FHIR resources normative in R6, and that's an effort to move people off of 4.0, which is 5 years old. Motivation is to get people to have more normative resources.
ACTION: DBooth to review R6 criteria and issues list
Issue 175:
Slash notation should be defined in a notation key at the beginning of the Built-In Functions section
tim: Currently we gen OWL for every resource in the spec. But many profiles are in use. Want to gen OWL classes for those profile resources, … as a subclass of the original resource. … Want to do that as a piece of code that people can use to gen that OWL. … Some of these profiles are in the main spec, some are in IGs. … E.g., want to create owl for the heart rate profile: https://build.fhir.org/heartrate.html … Also should gen shex for the profiles in the main spec, but also make that code available. … Because if someone wants to validate, they'll want both OWL and shex. … Profiles can add more constraints. … IDK if it should go into R5, but I'd like it for R6.
ken: It's a lot to do this. In a profile, there are constraints, but also different rules, like different terminology bindings, new data concepts that don't exist in the base resource. … Cardinality issues. Do you anticipate covering those issues?
tim: A lot of that would already be covered, because right now we have a process to translate a structure def to an OWL class. … And we have bindings to valuesets in shex, but there's another issue that we don't fully represent them for codeable concepts. Still need to do that, even for the current RDF and shex. But if we fix that, I think profiles would work the same way.
dbooth: I think the OWL and shex are informative, rather than normative, so not subject to maturity levels.
tim: US Core is a very important IG -- required by all in the USA. … It has its own namespace
dbooth: Why? Are there name clashes?
tim: Because there could be clashes between IGs. … Not sure there is an automatable way to determine the IRI prefix for a given profile.
dbooth: Next steps?
tim: We should define them for profiles that are in the main spec. … This would enable people to use RDF the way FHIR should be used.
dbooth: Should the shex and/or owl be considered authoritative?
AGREED: View shex as authoritative iff FHIR views xml schema as authoritative
dbooth: What about owl?
gaurav: owl should be authoritative unless it conflicts
detlef: Might be hard to get owl to a reliable level
ericp: shex is authoritative for content model, owl is authoritative for inference, rdf page describes the mapping that shex gives you .... … Like in W3C you need two independent impls … I don't think any format pages for FHIR have made the claim that you only need to read one page. … Format pages give a flavor of what to do, and additional rules, but a valid doc is almost entirely defined by the schema, modulo semantics that need to be expressed in prose
dbooth: So a valid piece of fhir rdf needs to conform to rdf page prose, shex and owl
ericp: But anything that conforms to the shex will conform to the owl.
tim: They want to make R6 as backward and forward compatible as possible. … If we were to make changes to the OWL or shex after R6, that would break usage of OWL or shex, then that would go against the idea of it being "normative" maturity level. … That's why I thought, for R6, we should ensure that any new changes would be backward and forward compatible.
ericp: If we make something inconsistent, can we do bug fixes?
tim: I think so. Look at what was done for R5. They did new releases, like v4.01
dbooth: If something in the spec is mutually inconsistent, it would have to be fixable.
tim: Shex doesn't have a full representation of valueset bindings. … If we add that, technically it adds constraints to previous versions, but it's more consistent w FHIR.
tim: We're changing fhir:link to fhir:l . That's a big change. Should get that into R6, and not change it after R6.
ericp: Adding constraints to shex makes some existing RDF invalid.
dbooth: agreed.
tim: The real breaking changes are things like fhir:link to fhir:l , and any turtle serializations.
AGREED: Focus on turtle changes and consequent shex and owl changes for R6, and other shex and owl enhancements are lower priority for R6
dbooth: Needing update: examples (Jim done, but not yet PR), shex, owl, HAPI.
tim: I can do the shex and the owl generation.
ericp: I tried to do a test suite with a few FHIR resources, with some datatypes, exploiting most of the struct def functionality.
ericp: I can do HAPI
ADJOURNED
Minutes manually created (not a transcript), formatted by scribe.perl version 244 (Thu Feb 27 01:23:09 2025 UTC).
Diagnostics
Succeeded: s/normative/authoritative/
Succeeded: i/ADJOURNED/ericp: I can do HAPI
Succeeded: s/consequent shex and owl/consequent shex and owl changes/
No scribenick or scribe found. Guessed: dbooth
Maybe present: AGREED, dbooth, detlef, gaurav, tim
All speakers: AGREED, dbooth, detlef, ericp, gaurav, tim
David Booth, Detlef Gritner, Eric Jahn, Erich Bremer, EricP, Jim Balhoff, Ken Lord, Tim Prudhomme
Regrets
-
Chair
David Booth
Scribe
dbooth
Meeting minutes
R6 triage
dbooth: I think #172 (Issues with using FHIR R5 ShEx) can be done after R6, because it does not affect the validity of any FHIR RDF data. It only affects ShEx usability and precision.
dbooth: Similarly, I think #177 (OWL polymorphic types + references), #178 (OWL punning issues) and #179 (OWL hierarchy fixes) can be done after R6, because they do not affect the validity of any FHIR RDF data. They only affect OWL usability and precision.
tim: It includes both bug fixes and improvements … I've already implemented fixes for these.
tim: These changes also implement the fhir:link --> fhir:l
dbooth: Can we approve #177 for inclusion in R6?
dbooth: the fhir:l change needs to be in R6
AGREED: the fhir:l change needs to be in R6
AGREED: Other than the fhir:l change, 177, 178 and 179 could wait until after R6
AGREED: 167 and 168 can be done after R6
dbooth: Also #171 (Add a documentation page for all the IRI prefixes currently on THO) after R6? I think it can be an "informative" addition, rather than normative/authoritative.
jim: There are some hard-coded URIs for LOINC and MeSH, but they don't use the IRI stems in THO
dbooth: I think THO repo can update independently of the FHIR core versioning
ken: Yes, that's the intent.
AGREED: #171 can be after R6
Issue 120: Syntax of added fhir:l links
dbooth: At
w3c/hcls-fhir-rdf#120 (comment) Tim replied: "Yes and we should recommend here that they SHOULD be generated, but not specify how."
dbooth: Should we specify the URI syntax in the rdf.html page?
tim: For References, they should be constrained by the Reference types, e.g., for Patient, "Patient" should be in the link. … But for URIs the literals should just be turned into links directly.
dbooth: We'll specify the location of where the fhir:l links go, but what about the syntax of those links?
AGREED: Yes, specify what the syntax SHALL be
dbooth: And that will include the special case for canonical, from pipe to query string.
Scheduling extra calls
dbooth: Nov 1 R6 deadline is rapidly approaching. Should we schedule more FHIR RDF calls? Add Mondays at 11am Boston time?
AGREED: Add Tuesdays 11am Boston time
Issue 120
tim: Might be able to do HAPI changes after R6 deadline
FHIR Ontology
ken: This work on the FHIR ont could be helpful to us, because the FHIR data model has been critical in our ont. … Would be really helpful to have a powerpoint presentation on it. … What are the objectives and mission of this work?
ericp: Slides I did for SWAT4LS might be helpful?
ACTION: DBooth to send Ken previous status slides
tim: Also copy me
Yellow schemas
ACTION: Dbooth to create an issue to add Turtle to XML column on datatypes page
AGREED: The yellow schema in github #120 is good, modulo the above tweaks
Issue 177: Fix OWL polymorphic types + references
dbooth: The fhir:l for the canonical type is generated by the general case of generating a fhir:l for any kind of uri datatype.
dbooth: What should be said about the base for the fhir:l ?
tim: The base should be whatever FHIR server you are using.
jim: In the java code, if the lower-case reference has a slash in it, and it's not a URN or http URI and doesn't start with a hash, then it's assumed to be relative to the current FHIR server
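A hedged Python sketch of that heuristic (the actual code is Java; the function name and server base here are placeholders):

```python
from urllib.parse import urljoin

def resolve_reference(ref, server_base="https://fhir.example.org/"):
    """Sketch of the heuristic described for the Java code."""
    if ref.startswith("#"):
        return ref                          # fragment: contained resource
    lower = ref.lower()
    if lower.startswith(("urn:", "http:", "https:")):
        return ref                          # already absolute
    if "/" in ref:
        return urljoin(server_base, ref)    # relative to the current FHIR server
    return ref                              # leave anything else alone

print(resolve_reference("Patient/123"))
# -> https://fhir.example.org/Patient/123
```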
tim: the canonical datatype gets constrained by a Reference type. So want to add shex and OWL restrictions to indicate that.
ericp: Use case: you want to use FHIR's schema abilities. FSH encourages you to build your own models using FHIR types.
tim: There are actually FHIR Logical models that are part of the core spec, just as there are profiles, like VitalSigns … We could gen OWL classes for them
tim: The fact that resources, datatypes and all other structure defs are under the StructureDefinition/ subdir ensures uniqueness. … Every US insurance company has to support the US Core profile.
AGREED: Go ahead with issue 175, and put the profile and logical model ontologies in separate files.
dbooth: It feels like this may be too much to fully implement for R6, but it sounds like you're suggesting putting hooks for it into the R6 ont
tim: Yes
dbooth: I like the idea a lot, but concerned about what we put into R6
AGREED: This would be non-breaking change, so wait for after R6 for it.
R6 timeline
dbooth: Propose getting ITS approval next week, probably Wed 3pm Boston time.
tim: I think I can make the namespace change for datatypes everywhere.
Next meeting
tim: Out Thursday
HAPI
ericp: I'm digging through HAPI FHIR, and tried R4 parser on current examples (R6?), and it failed a whole bunch. … It doesn't parse extensions on extensions. Fixing that. … Want to eliminate all errors and then rebase for R5 changes. … Detlef has a version that works on old tests (R4)
detlef: But it produces something that looks like R5
Minutes manually created (not a transcript), formatted by scribe.perl version 246 (Wed Oct 1 15:02:24 2025 UTC).
Diagnostics
Warning: ‘i/How to generate/https://github.com/w3c/hcls-fhir-rdf/issues/178’ interpreted as inserting ‘https://github.com/w3c/hcls-fhir-rdf/issues/178’ before ‘How to generate’
Succeeded: i/How to generate/https://github.com/w3c/hcls-fhir-rdf/issues/178
ericp: For Base types, I suggest we defer to Tim, because it only affects the ont … Also for Meta datatypes … For any polymorphic property that has that type as an option, it would show up in instance data, and should use the fhirdt: prefix
AGREED: Go ahead with fhirdt: for all 5 categories of datatypes
Issue 185 Misc rdf.html corrections and editorial updates