This is an index of Do This First tests defined for profiles in the IHE Radiology (RAD) domain.
The AI Results (AIR) Profile specifies how AI Results are encoded as DICOM Structured Reports (SRs). Depending on the AI algorithms implemented on the AIR Evidence Creator (EC) actor, the EC will create/encode one or more of the different result primitives in its SRs, e.g. qualitative findings, measurements, locations, regions, parametric maps, tracking identifiers, image references.
For the Connectathon, there is a set of no-peer tests to evaluate how the Evidence Creator encodes its AI results; the tests follow the naming pattern AIR_Content_*. Each of these tests aligns with a different result primitive included in an AI results SR. We have created separate tests for the different result primitives to make test execution and evaluation more manageable. The Evidence Creator will perform Connectathon tests that are applicable to the SRs and primitives it has implemented.
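To illustrate what "result primitives" means in practice, the sketch below mimics the nesting of a DICOM SR content tree for a single measurement group. This is illustrative only, not the normative AIR encoding: the concept names, codes, and UIDs are hypothetical placeholders, whereas real SRs use the coded concepts defined in the AIR templates.

```python
# Illustrative sketch only -- NOT the normative AIR encoding.
# A measurement result primitive with a tracking identifier,
# a coded finding site, a numeric measurement, and an image reference.
measurement_group = {
    "concept": "Measurement Group",          # container content item
    "children": [
        {"concept": "Tracking Identifier", "type": "TEXT",
         "value": "nodule-001"},             # hypothetical value
        {"concept": "Finding Site", "type": "CODE",
         "value": ("SCT", "39607008", "Lung")},
        {"concept": "Diameter", "type": "NUM",
         "value": 8.2, "units": ("UCUM", "mm", "millimeter")},
        {"concept": "Source image", "type": "IMAGE",
         "value": {"sop_class_uid": "1.2.840.10008.5.1.4.1.1.2",  # CT Image Storage
                   "sop_instance_uid": "1.2.3.4"}},               # hypothetical UID
    ],
}

def primitives_in(group):
    """List the result-primitive concepts encoded in a measurement group."""
    return [child["concept"] for child in group["children"]]

print(primitives_in(measurement_group))
```

Each AIR_Content_* test evaluates one of these child primitives in isolation.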
The purpose of this Preparatory test is to have the Evidence Creator describe in narrative form the nature of its AI results implementation. Reading this description will help the Connectathon monitor have the proper context to evaluate your Evidence Creator application, the AI results you produce, and the result primitives included in your AI SR instances.
For this test you (the Evidence Creator) will produce a short document describing your implementation in the context of the AI Results Profile specification. The format of the document is not important. It may be a PDF, a Word or Google doc, or some other narrative format.
Your document shall include the following content:
There is no "pass/fail" for this test. However, you must complete it because it is a prerequisite for several Connectathon tests. The Connectathon monitor will be looking for the document you produce here and use it when s/he is evaluating your AI result content.
This Preparatory test is informational. It is intended to prepare the AIR Evidence Creator for Connectathon tests that will be used to evaluate the AI result SRs produced by the Evidence Creator.
Another Preparatory test, AIR_Sample_Exchange, instructs the Evidence Creator to upload AI result SRs into the Samples area of Gazelle Test Management. In that test, you will also use the Pixelmed DICOM validator to perform DICOM validation of your SRs. The Pixelmed validator checks the baseline requirements of the DICOM SR, including the requirements of the Template IDs (TIDs) within the SR. The tool does not, however, check the requirements and constraints that are part of the content specification in the AIR Profile.
In Gazelle Test Management, on your Test Execution page, you will find a set of no-peer Connectathon tests used to evaluate encoding of AI results; these Connectathon tests follow the naming pattern AIR_Content_*. The different tests align with different result primitives that are included in an AI results SR, e.g. qualitative findings, measurements, locations, regions, parametric maps, tracking identifiers, image references.
Depending on the AI algorithms it implements, we expect an Evidence Creator to create/encode one or more of these types of result primitives. We have created separate tests for the different result primitives to make test execution and evaluation more manageable during the Connectathon.
Prior to the start of the Connectathon, we highly recommend that the Connectathon participant that will test the Evidence Creator actor read each AIR_Content_* Connectathon test.
>>Note: There is a Content test for each of the AI result primitives. The AI algorithm(s) on your Evidence Creator may not include all of the defined result primitives (e.g. you may not produce parametric maps). For the Connectathon, you will only be required to perform the AIR_EC_Content* and AIR_Display* tests that are applicable to your system. (This separation of capabilities into separate tests results in some redundant test steps, but one large test for all primitives would have been difficult for testers and monitors to manage.)
In each AIR_Content_* test, you will find test steps and evaluation criteria for specific encoding requirements for the different result primitives. We recommend that you examine your AI result SR content using these test steps. If you find discrepancies, you may need to update your software to be compliant with the AIR content requirements. If you disagree with any of the tests or test steps, you should contact the IHE Radiology Domain Technical Project Manager to resolve your concern.
If you use the tests to review the SRs during the Preparatory phase, you can be confident that the Connectathon monitor will find no errors when s/he evaluates your SRs during the Connectathon.
There is no result file to submit into Gazelle Test Management for this informational test.
The AI Results (AIR) Profile requires the Image Display to demonstrate specific display capabilities when rendering AI Result SRs. These requirements are in Display Analysis Result [RAD-136].
At the Connectathon, a monitor will sit down at your AIR Image Display and run through a set of tests to evaluate the display requirements in [RAD-136].
In this preparatory test, we are providing you with some test data in advance of the Connectathon that you will use to demonstrate AIR display requirements. The test data includes:
NOTE: During the Connectathon, the Image Display will be required to perform tests with AI Result IODs from the Evidence Creator test partners at that Connectathon. The Image Display may also be asked to use AI Result IODs in this test data, especially where this sample data contains DICOM object types or AIR primitives that the 'live' test partners do not produce.
For AIR IMAGE DISPLAY systems:
>> AIR_Display_Analysis_Result
>> AIR_Display_Parametric_Maps
>> AIR_Display_Segmentation_IOD
>> AIR_Display_* (etc...)
For ALL OTHER AIR ACTORS:
It is OPTIONAL for non-Image-Display actors to access the samples, but we recognize the value of test data to all developers, so you are welcome to access the samples.
IMAGE DISPLAY SYSTEMS: Create a text file that briefly describes your progress in using the SRs with your Image Display. Upload that file into Gazelle Test Management as the result file for this test. There is no pass/fail for this preparatory test. We want to make sure you're making progress toward what is expected during evaluation of your Image Display at the Connectathon.
The AI Workflow for Imaging (AIW-I) Profile specifies how to request, manage, perform, and monitor AI Inference on digital image data.
Both the sequence of transactions in AIW-I and the content of the workitem(s) created by the Task Requester depend on the AI inferences and workflows implemented on the AIW-I Task Performer actor. Therefore, the purpose of this Preparatory test is to gather information from the Task Performer which will influence how it will interact with its test partners during the Connectathon. The Task Performer will describe:
This description will help the Task Requester ensure that the workitems it creates are adequately populated for you, and that you test the workflow(s) you support with your partners at the Connectathon.
For this test you (the Task Performer) will produce a short document describing your implementation in the context of the AIW-I Profile specification. According to AIW-I, Section 50.4.1.1, a DICOM Conformance Statement is the ideal home for these details. If you have one, great! But, for the purpose of this preparatory test, the format of the document is not important. It may be a PDF, a Word or Google doc, or some other narrative format.
Your document shall include the following content:
Find and read the document provided by the Task Performer above.
There is no "pass/fail" for this test. However, you must complete it because it is a prerequisite for several Connectathon tests. Your AIW-I test partners, plus the Connectathon monitor, will be looking for the document produced here.
The Image Display actor in the Basic Image Review (BIR) Profile is unlike other IHE actors in that its requirements are primarily functional and do not require exchange of messages with other actors.
At the Connectathon, a monitor will sit down at your system and run through a set of tests to evaluate the requirements in the BIR profile. In this preparatory test, we are providing you with the test plan and the accompanying images in advance of the Connectathon. To prepare, we expect you to load the test data (images) and run these tests in your lab in preparation for the Connectathon itself.
After loading the test images onto your Image Display, run the test in the BIR Test Plan document using your display application.
Create a text file that briefly describes your progress in running these tests. Upload that file into Gazelle Test Management as the result file for this test. There is no pass/fail for this preparatory test. We want to make sure you're making progress toward what is expected during evaluation of your Image Display at the Connectathon.
To enable Connectathon testing, the Image Display is required to host studies on its system.
There is one Connectathon test -- IID Invoke Display -- to exercise the Image Display and Image Display Invoker in the IID profile. The 'Special Instructions' for that test ask you to host a set of studies. This preparatory 'test' ensures you have the proper data loaded on your system prior to arriving at the Connectathon.
We do not provide specific studies for you, but rather define the characteristics of the studies you should bring.
Come to the Connectathon with:
There are no result files to upload into Gazelle Test Management for this test. Preloading these prior to the Connectathon is intended to save you precious time during Connectathon week.
The goal of this “test” is for the Portable Media Creator system to prepare, in advance of the Connectathon, your PDI media that the Portable Media Importer partners will test with during the Connectathon. Doing this in your home lab will save you valuable time during Connectathon week.
All PDI Portable Media Creators must support CD media; USB and DVD are optional. The media you create should contain a “representative sample” of the data produced by your system. Complete and representative data on your media makes for a better interoperability test.
At a Connectathon Online, it is not possible for test partners to exchange physical PDI media. In that case, we ask the Portable Media Creator (PMC) to:
Prior to the Connectathon, you should create two copies of your media: CD, USB, and/or DVD, depending on what you support. On the first day of the Connectathon, you will give one copy to the Connectathon monitor who is evaluating PDI tests. You will keep the other copy and use it for your peer-to-peer tests with your Importer partners.
Use the following guidelines when creating your media:
Note that you may not have the information to make your label until you arrive at Connectathon.
Optional:
Starting in 2019, the ITI and Radiology Technical Framework contains specifications for including PDI and XDM content on the same media. If your Portable Media Creator supports both the PDI and XDM Profile, you should create media with the appropriate content. For details, see:
There are no test steps to execute for this test.
Instead, create a text file which documents the type of DICOM images your modality creates and lists the DICOM Baseline Template your Acquisition Modality uses when creating Dose SRs for the REM profile.
CT modalities which report on irradiation events shall be capable of producing an SR compliant with TID 10011.
Actors which report on irradiation events for Modalities of type XR, XA, RF, MG, CR, or DX shall be capable of producing an SR compliant with TID 10001.
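The two rules above can be summarized as a simple mapping from acquisition modality to the required dose SR template. The sketch below is a hedged illustration of that mapping (the `required_tid` helper and its dictionary are ours, not part of the REM profile text):

```python
# Sketch: modality -> DICOM dose SR template required by the REM profile,
# per the two rules above. TID 10011 covers CT; the projection X-ray
# modalities (XR, XA, RF, MG, CR, DX) share TID 10001.
REQUIRED_DOSE_SR_TID = {
    "CT": "TID 10011",
    "XR": "TID 10001", "XA": "TID 10001", "RF": "TID 10001",
    "MG": "TID 10001", "CR": "TID 10001", "DX": "TID 10001",
}

def required_tid(modality: str) -> str:
    """Return the dose SR template a REM Acquisition Modality must produce."""
    return REQUIRED_DOSE_SR_TID[modality.upper()]

print(required_tid("ct"))   # CT -> TID 10011
print(required_tid("MG"))   # projection X-ray -> TID 10001
```

Your text file should state which of these templates your modality produces.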
Your text file should have the following naming convention: CompanyName_SystemName_REM.txt.
Submit the text file into Gazelle Test Management as the result for this test.
To prepare for testing the RAD Encounter-based Imaging Workflow (EBIW) Profile, the EBIW actors must prepare to use a common set of DICOM codes.
The codes you need are identified in the peer-to-peer test that you will perform at the Connectathon.
1. In Gazelle Test Management, find the test "EBIW_10_Read_This_First" on your main Test Execution page.
2. Read the entire Test Description to understand the test scenario.
3. For each of the DICOM attributes listed in the Test Description, the Encounter Manager should configure its system to be able to use the values in the bullet lists. This ensures that consistent values will be returned in modality worklist responses for EBIW tests during the Connectathon.
There is no file to upload to Gazelle Test Management for this preparatory test. If you do not load the codes you need on your test system prior to the Connectathon, you may find yourself wasting valuable time on the first day of Connectathon syncing your codes with those of your test partners.
To prepare for testing workflow profiles in RAD, CARD, LAB, and EYECARE domains, and also for the ITI PAM Profile, it is helpful for systems that send HL7 messages (e.g. patient registration and orders) and/or DICOM messages (modality worklist, storage) to work with a common set of codes.
We ask ADT, Order Placer, Order Filler and Acquisition Modality actors, plus PAM and PLT actors, to load codes relevant to their system in advance of the Connectathon.
These codes include, for example:
The codes that you need depend on the profile/actors you support. HL7 and DICOM codes used for Connectathon testing are the same set that is used in the Gazelle OrderManager tool. OrderManager contains simulators for some actors in workflow profiles.
** HL7 codes ** - are documented here:
Some of these codes are also mapped into DICOM messages. Use the spy-glass icon in the right column to view the value set for each code. (Note that the format of these files is compliant with the IHE SVS Sharing Value Sets profile.)
** DICOM codes ** - Order Filler and Acquisition Modality actors need a mapping between Requested Procedure codes, Scheduled Procedure codes, and Protocol Codes.
For RAD and CARD, that hierarchy is here: https://gazelle.ihe.net/common/order-manager/orderHierarchy4Radiology.xml
For EYECARE, that hierarchy is here: https://gazelle.ihe.net/common/order-manager/orderHierarchy4Eyecare.xml. (Note that this is documented in excel form here.)
There is no result file to upload to Gazelle Test Management for this preparatory test. If you do not load the codes you need on your test system prior to the Connectathon, you may find yourself wasting valuable time on the first day of Connectathon syncing your codes with those of your test partners.
This test gives you access to DICOM studies used to test XDS-I Query & Retrieve, and the QIDO-RS Query [RAD-129] transaction that is used by actors in several profiles (WIA, AIR, ...). The data is also used to test the RAD-14 transaction with the Enterprise Identity option in SWF.b.
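To show the kind of request these studies support, here is a hedged sketch of a study-level QIDO-RS query a [RAD-129] Requester might issue against a Responder hosting this data. The base URL is a placeholder (your Responder's actual DICOMweb endpoint will differ); only the query-parameter shape follows DICOMweb (DICOM PS3.18).

```python
# Sketch of a QIDO-RS study-level query against the preloaded test data.
# The endpoint is hypothetical; the parameters use standard DICOMweb names.
from urllib.parse import urlencode

base = "https://responder.example.org/dicom-web"   # hypothetical endpoint
params = {
    "PatientID": "C3L-00277",        # one of the preloaded test patients
    "ModalitiesInStudy": "DX",
    "includefield": "StudyInstanceUID",
}
query_url = f"{base}/studies?{urlencode(params)}"
print(query_url)
```

A Responder that has loaded the four studies should return a match for this Patient ID.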
Location of the studies
There are four DICOM studies available. The Responder system (e.g. an Image Manager, Imaging Document Source or Imaging Document Responder) must load these four studies onto its system.
Summary of the DICOM studies
The contents of the studies are summarized in the "XDS-I,b XCA-I and WIA studies" google sheet.
There are 3 tabs in the sheet:
Patient ID     Procedure Code   Modality   Series Count   Image Count
-------------  ---------------  ---------  -------------  -----------
C3L-00277      36643-5          DX         1              1
C3N-00953      42274-1          CT         3              11
TCGA-G4-6304   42274-1          CT         3              13
IHEBLUE-199                     CT         1              1
Prior to the Connectathon, the Imaging Document Source should:
There is no file to upload to Gazelle Test Management for this preparatory test. If you do not load the studies you need on your test system prior to the Connectathon, you may find yourself wasting valuable time on the first day of Connectathon.
This test is for Imaging Document Source actors in the XDS-I.b and XCA-I Profiles that support the "Set of DICOM Instances" option. (If your Imaging Document Source only supports PDF or Text Reports, then this test does not apply to you.)
For this test, we ask you to create manifests for 3 studies that Connectathon Technical Managers provide. This enables us to check both the metadata and manifest for expected values that match data in the images and in the XDS metadata affinity domain codes defined for the Connectathon (i.e. codes.xml). (For other peer-to-peer tests during Connectathon, you will be able to also test with studies that you provide.)
The manifests you create for these 3 studies will be used for some XDS-I/XCA-I tests during Connectathon week.
Before you prepare the manifests using the Instructions below, first load the DICOM Studies in the Test Data. See Preparatory Test DICOM_QR_Test_Data.
Prior to the Connectathon, the Imaging Document Source should:
During Connectathon, a monitor will examine your Manifest; there are two verifications that Connectathon Monitors will perform:
(1) examine the DICOM Manifest for the study
(2) examine the metadata for the submitted manifest
We do not duplicate the Evaluation details here, but we encourage the Imaging Document Source to read those details now to ensure its manifest will pass verification during Connectathon. Find those details in Gazelle Test Management on your Test Execution page in Connectathon test "XDS-I.b_Manifest_and_Metadata".