RAD 'Do This First' Tests

This is an index of Do This First tests defined for profiles in the IHE Radiology (RAD) domain.

 

AIR_EC_Capabilities

Overview

The AI Results (AIR) Profile specifies how AI Results are encoded as DICOM Structured Reports (SRs).   Depending on the AI algorithms implemented on the AIR Evidence Creator (EC) actor, the EC will create/encode one or more of the different result primitives in its SRs, e.g. qualitative findings, measurements, locations, regions, parametric maps, tracking identifiers, image references.

For the Connectathon, there is a set of no-peer tests to evaluate how the Evidence Creator encodes its AI results; the tests follow the naming pattern AIR_Content_*. Each of these tests aligns with a different result primitive included in an AI results SR.  We have created separate tests for the different result primitives to make test execution and evaluation more manageable.  The Evidence Creator will perform Connectathon tests that are applicable to the SRs and primitives it has implemented.

The purpose of this Preparatory test is to have the Evidence Creator describe in narrative form the nature of its AI results implementation.     Reading this description will help the Connectathon monitor have the proper context to evaluate your Evidence Creator application, the AI results you produce, and the result primitives included in your AI SR instances.

Instructions for Evidence Creators

For this test you (the Evidence Creator) will produce a short document describing your implementation in the context of the AI Results Profile specification.  The format of the document is not important.  It may be a PDF, a Word or Google doc, or some other narrative format.

Your document shall include the following content:

  1. Your system name in Gazelle Test Management (e.g. OTHER_XYZ-Medical)
  2. AI Algorithm Description - this should be a sentence or two describing what your algorithm does (e.g. detect lung nodules)
  3. DICOM IODs implemented (one or more of:  Comprehensive 3D SR Storage IOD, Segmentation Storage IOD, Parametric Map Storage IOD, Key Object Selection (KOS) Document Storage IOD)
  4. Result primitives encoded in the AI Result SR. (one or more of: qualitative findings, measurements, locations, regions, parametric maps, tracking identifiers, image references)
  5. If you encode measurements, indicate whether your measurements reflect a planar region of an image (i.e. use TID 1411), a volume (TID 1410), or are measurements that are not tied to a planar region or volume (TID 1501). (Refer to RAD TF-3: 6.5.3.3 in the AIR TI Supplement for details.)
  6. If you encode regions, indicate whether they are contour-based regions (i.e. use TID 1410 or 1411) or pixel/voxel-based regions (i.e. use the DICOM Segmentation Storage SOP Class) (Refer to RAD TF-3: 6.5.3.5 for details).
  7. Please add any additional information (e.g. screen shots) that would help the reader understand your algorithm, and output.
  8. REPEAT 2 - 7 for each AI Algorithm that produces result(s) on your Evidence Creator
  9. Finally, in order to share it with your test partners, upload your document as a Sample in Gazelle Test Management.  On the 'List of Samples' page, use the dropdowns to find your test system, and on the 'Samples to share' tab, add a new "AIR_EC_Capabilities" sample and upload your document there.   When you save your sample, it will be visible to your test partners.
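For item 3, each IOD corresponds to a standard DICOM SOP Class UID.  As a rough illustration (not part of the test itself), here is a minimal Python sketch that maps a Media Storage SOP Class UID to the AIR IOD name; the helper function name is our own, not from the profile:

```python
# Sketch: map standard DICOM SOP Class UIDs to the AIR IOD names in item 3.
# The UID constants are standard DICOM values; classify_iod() is illustrative.
AIR_IODS = {
    "1.2.840.10008.5.1.4.1.1.88.34": "Comprehensive 3D SR Storage",
    "1.2.840.10008.5.1.4.1.1.66.4":  "Segmentation Storage",
    "1.2.840.10008.5.1.4.1.1.30":    "Parametric Map Storage",
    "1.2.840.10008.5.1.4.1.1.88.59": "Key Object Selection Document Storage",
}

def classify_iod(sop_class_uid: str) -> str:
    """Return the AIR IOD name for a SOP Class UID, or 'unknown'."""
    return AIR_IODS.get(sop_class_uid, "unknown")
```

A capabilities document that lists these UIDs alongside the IOD names makes it easy for monitors and partners to cross-check your stored instances.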

 

Instructions for AIR 'consumer' systems

    1. Each Evidence Creator should have uploaded a description of its AI algorithms, inputs, and outputs into Gazelle Test Management (under Testing -> Sample exchange).  Look on the 'Samples available for rendering' tab under the AIR_EC_Capabilities entry.  This page will evolve as your partners add samples, so be patient. 
    2. Retrieve each uploaded document. The purpose is to give you an understanding of the types of AI content that your Image Display, Image Manager or Imaging Document Consumer will store/display/process.  Refer to these help pages for details on this task.

Evaluation

There is no "pass/fail" for this test.  However, you must complete it because it is a prerequisite for several Connectathon tests.  The Connectathon monitor will be looking for the document you produce here and use it when s/he is evaluating your AI result content.

AIR_EC_Content_Test_Overview

Overview

This Preparatory test is informational.  It is intended to prepare the AIR Evidence Creator for Connectathon tests that will be used to evaluate the AI result SRs produced by the Evidence Creator.

Another Preparatory test, AIR_Sample_Exchange, instructs the Evidence Creator to upload AI result SRs into the Samples area of Gazelle Test Management.  In that test, the Evidence Creator will also use the Pixelmed DICOM validator to perform DICOM validation of its SRs.   The Pixelmed validator checks the baseline requirements of the DICOM SR, including the requirements of the Template IDs (TIDs) within the SR.   The tool does not, however, check the requirements and constraints that are part of the content specification in the AIR Profile.

In Gazelle Test Management, on your Test Execution page, you will find a set of no-peer Connectathon tests used to evaluate encoding of AI results; these Connectathon tests follow the naming pattern AIR_Content_*.  The different tests align with different result primitives that are included in an AI results SR, e.g. qualitative findings, measurements, locations, regions, parametric maps, tracking identifiers, image references.

Depending on the AI algorithms it implements, we expect an Evidence Creator to create/encode one or more of these types of result primitives.  We have created separate tests for the different result primitives to make test execution and evaluation more manageable during the Connectathon.

Instructions

Prior to the start of the Connectathon, we highly recommend that the Connectathon participant that will test the Evidence Creator actor read each AIR_Content_* Connectathon test.

>>Note:  There is a Content test for each of the AI result primitives.   The AI algorithm(s) on your Evidence Creator may not include all of the defined result primitives (e.g. you may not produce parametric maps).  For the Connectathon, you will only be required to perform the AIR_EC_Content* and AIR_Display* tests that are applicable to your system.  (This separation of capabilities into separate tests results in some redundant test steps, but one large test for all primitives would have been difficult for testers and monitors to manage.)

In each AIR_Content_* test, you will find test steps and evaluation criteria for specific encoding requirements for the different result primitives.  We recommend that you examine your AI result SR content using these test steps.    If you find discrepancies, you may need to update your software to be compliant with the AIR content requirements.   If you disagree with any of the tests or test steps, you should contact the IHE Radiology Domain Technical Project Manager to resolve your concern.

If you use the tests to review the SRs during the Preparatory phase, you can be confident that the Connectathon monitor will find no errors when s/he evaluates your SRs during the Connectathon.

Evaluation

There is no result file to submit into Gazelle Test Management for this informational test.

AIR_Test_Data

Overview

The AI Results (AIR) Profile requires the Image Display to demonstrate specific display capabilities when rendering AI Result SRs.  These requirements are in Display Analysis Result [RAD-136].

At the Connectathon, a monitor will sit down at your AIR Image Display and run through a set of tests to evaluate the display requirements in [RAD-136].

In this preparatory test, we are providing you with some test data in advance of the Connectathon that you will use to demonstrate AIR display requirements.   The test data includes:

  • binary DICOM Structured Reports (SRs) that encode the AI result examples documented in RAD TF-3, Appendix A Example Analysis Result Encodings (currently in the AIR Trial Implementation Supplement).
  • vendor samples from prior IHE Connectathons

NOTE:  During the Connectathon, the Image Display will be required to perform tests with AI Result IODs from the Evidence Creator test partners at that Connectathon.  The Image Display may also be asked to use AI Result IODs in this test data, especially where this sample data contains DICOM object types or AIR primitives that the 'live' test partners do not produce.

Instructions

 For AIR IMAGE DISPLAY systems:

  1. Access the test data and load the SRs, and accompanying images, onto your Image Display.  See: https://github.com/IHE/connectathon-artifacts/tree/main/profile_test_data/RAD/AIR
  2. Review requirements in the Connectathon tests listed below
  3. Use the test data in your own lab to prepare to demonstrate those display requirements to a monitor during Connectathon.

>> AIR_Display_Analysis_Result

>> AIR_Display_Parametric_Maps

>> AIR_Display_Segmentation_IOD

>> AIR_Display_*  (etc...)

 

For ALL OTHER AIR ACTORS:

It is OPTIONAL for non-Image-Display actors to access the samples, but we recognize the value of test data to all developers, so you are welcome to do so.

  1. To access the test data, see: https://github.com/IHE/connectathon-artifacts/tree/main/profile_test_data/RAD/AIR.  Samples are arranged in sub-directories by Connectathon and then by vendor
  2. To download a file in one of the sub-directories, click on the individual file name, then use the links on the right side of the page to download the 'Raw' file.

Evaluation

IMAGE DISPLAY SYSTEMS:  Create a text file that briefly describes your progress in using the SRs with your Image Display. Upload that file into Gazelle Test Management as the result file for this test. There is no pass/fail for this preparatory test.  We want to make sure you're making progress toward what is expected during evaluation of your Image Display at the Connectathon.

AIW-I_Task_Performer_Capabilities

Overview

The AI Workflow for Imaging (AIW-I) Profile specifies how to request, manage, perform, and monitor AI Inference on digital image data.  

Both the sequence of transactions in AIW-I and the content of the workitem(s) created by the Task Requester depend on the AI inferences and workflows implemented on the AIW-I Task Performer actor.  Therefore, the purpose of this Preparatory test is to gather information from the Task Performer which will influence how it will interact with its test partners during the Connectathon.   The Task Performer will describe:

  • AIW-I workflows it supports. See AIW-I Section 50.1.1.3, Section 50.4.1.5, and Section 50.4.2 .  One or more of:
    • Pull workflow
    • Triggered pull workflow
    • Push workflow
  • the AI algorithms (inferences) it has implemented
  • the inputs each algorithm needs when it is triggered.  See AIW-I Section 50.4.1.1 and 50.4.1.2.

This description will help the Task Requester ensure that the workitems it creates are adequately populated for you, and that you test the workflow(s) you support with your partners at the Connectathon.

Instructions for Task Performers

For this test you (the Task Performer) will produce a short document describing your implementation in the context of the AIW-I Profile specification.  According to AIW-I, Section 50.4.1.1, a DICOM Conformance Statement is the ideal home for these details.  If you have one, great!  But, for the purpose of this preparatory test, the format of the document is not important.  It may be a PDF, a Word or Google doc, or some other narrative format.

Your document shall include the following content:

  1. Your system name in Gazelle Test Management (e.g. OTHER_XYZ-Medical)
  2. Technical Contact name/email - this is someone who can be contacted if there are questions about what you provide below
  3. The AIW-I Workflow(s) you support:   Pull Workflow,  Triggered-pull Workflow, and/or Push Workflow
  4. AI Algorithm Description - this should be a sentence or two describing what your algorithm does (e.g. detect lung nodules)
  5. The Workitem Code that will trigger this AI Algorithm - refer to AIW-I Section 50.4.1.2.   You may use a code in Table 50.4.1.2-1 if there is one that applies.  Otherwise, suggest a code that is appropriate for your AI algorithm
  6. The Input Parameters  & Values required by your AI Algorithm - you will need to be very specific in answering this.  Please refer to AIW-I Section 50.4.1.1 and Table 50.4.1.1-1.  Identify the UPS attribute(s) your algorithm relies on, and the value(s) you expect for each.
  7. The Input Information Sequence content your AI algorithm requires - Please refer to Section 50.4.1.3 and identify the DICOM images (if any) that your AI algorithm expects to see in the Input Information Sequence (0040,4021) of the UPS Workitem that triggers your AI algorithm.
  8. Any other information that will help the reader understand your algorithm and how it is triggered.
  9. REPEAT 3 - 8 for each AI Algorithm that can be triggered on your Task Performer
  10. Finally, in order to share it with your test partners, upload your document as a Sample in Gazelle Test Management.  On the 'List of Samples' page, use the dropdowns to find your test system, and on the 'Samples to share' tab, find the "AIW-I_Performer_Capabilities" entry and upload your document there.   When you save your sample, it will be visible to your test partners.
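The capability details that items 3-7 ask for could be collected along the following lines.  This is a hypothetical sketch only; the workitem code and attribute values shown are illustrative assumptions, not values mandated by the profile:

```python
# Hypothetical sketch of an AIW-I Task Performer capabilities summary.
# All codes and values below are illustrative, not profile-mandated.
performer_capabilities = {
    "system_name": "OTHER_XYZ-Medical",
    "workflows": ["Pull", "Triggered-pull"],          # item 3
    "algorithms": [
        {
            "description": "Detect lung nodules on chest CT",  # item 4
            "workitem_code": "LUNG-NODULE-DETECT",             # item 5 (illustrative)
            "input_parameters": {                              # items 6-7
                # UPS attribute (tag) -> expected content
                "Scheduled Workitem Code Sequence (0040,4018)": "LUNG-NODULE-DETECT",
                "Input Information Sequence (0040,4021)": "chest CT series",
            },
        }
    ],
}
```

A structured summary like this makes it straightforward for a Task Requester to verify that the workitems it creates carry everything your algorithm needs.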

 

Instructions for Task Requesters, Managers, Watchers

You will find and read the document provided by the Task Performer above.

  1. In Gazelle Test Management, on the 'List of Samples' page, use the dropdowns to find your test system, and on the 'Samples available for rendering' tab, find the "AIW-I_Performer_Capabilities" entry with the document provided by your Task Performer partner(s) above.  
  2. Download that document and use it to configure your system for items such as workitem codes, input parameters, etc., that you will need in order to create a UPS workitem [RAD-80] for that Performer.

Evaluation

There is no "pass/fail" for this test.  However, you must complete it because it is a prerequisite for several Connectathon tests.  Your AIW-I test partners, plus the Connectathon monitor, will be looking for the document produced here.

BIR_Test_Data

Overview

The Image Display actor in the Basic Image Review (BIR) Profile is unlike other IHE actors in that its requirements are primarily functional and do not require exchange of messages with other actors.  

At the Connectathon, a monitor will sit down at your system and run through a set of tests to evaluate the requirements in the BIR profile. In this preparatory test, we are providing you with the test plan and the accompanying images in advance of the Connectathon.   To prepare, we expect you to load the test data (images) and run these tests in your lab in preparation for the Connectathon itself.

Instructions

  1. Find the test plan and test data for BIR in Google Drive in IHE Documents >Connectathon > test_data > RAD-profiles > bir_data_sets .   From that folder download the following:
  • The Connectathon Test Plan for BIR Image Display: BIR_Image_Display_Connectathon_Tests-2023*.pdf
  • The BIR test images in file BIRTestData_2015.tar.bz
  • The index to the BIR test images in _README_BIR_dataset_reference.xls

After loading the test images onto your Image Display, run the test in the BIR Test Plan document using your display application.

Evaluation

Create a text file that briefly describes your progress in running these tests. Upload that file into Gazelle Test Management as the result file for this test. There is no pass/fail for this preparatory test.  We want to make sure you're making progress toward what is expected during evaluation of your Image Display at the Connectathon.

IID_Prepare_Test_Data

Overview

To enable Connectathon testing, the Image Display is required to host a set of studies.

There is one Connectathon test -- IID Invoke Display -- to exercise the Image Display and Image Display Invoker in the IID profile. The 'Special Instructions' for that test ask you to host a set of studies. This preparatory 'test' ensures you have the proper data loaded on your system prior to arriving at the Connectathon.

We do not provide specific studies for you, but rather define the characteristics of the studies you should bring.

Instructions

Come to the Connectathon with:

    • At least 3 studies for the same patient, i.e. they will have the same value in Patient ID, but the accession number and study dates will be different for each of these studies.
    • At least one other study for a different patient
    • A study containing a KOS object that identifies at least one image in the study as a 'key image'
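The required study characteristics can be sketched as data.  All identifier values below are hypothetical; use whatever studies your system already has that fit the pattern:

```python
# Hypothetical study records matching the IID test-data requirements:
# same Patient ID across all three, distinct Accession Numbers and Study Dates.
studies_same_patient = [
    {"PatientID": "IID-PAT-001", "AccessionNumber": "ACC-1001", "StudyDate": "20240105"},
    {"PatientID": "IID-PAT-001", "AccessionNumber": "ACC-1002", "StudyDate": "20240212"},
    {"PatientID": "IID-PAT-001", "AccessionNumber": "ACC-1003", "StudyDate": "20240330"},
]

# One study for a different patient.
other_patient_study = {
    "PatientID": "IID-PAT-002", "AccessionNumber": "ACC-2001", "StudyDate": "20240401",
}
```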

Evaluation

There are no result files to upload into Gazelle Test Management for this test.  Preloading these prior to the Connectathon is intended to save you precious time during Connectathon week.

PDI_Prepare_Media

Overview

The goal of this “test” is for the Portable Media Creator system to prepare, in advance of the Connectathon, your PDI media that the Portable Media Importer partners will test with during the Connectathon.   Doing this in your home lab will save you valuable time during Connectathon week.

All PDI Portable Media Creators must support CD media; USB and DVD are optional. The media you create should contain a “representative sample” of the data produced by your system.  Complete and representative data on your media makes for a better interoperability test.

Special Instructions for Connectathon Online:

At a Connectathon Online, it is not possible for test partners to exchange physical PDI media.  In that case, we ask the Portable Media Creator (PMC) to:

  1. create an ISO image of your CD media
  2. upload that ISO file into the Sample area of Gazelle Test Management
  • On the 'Samples to share' tab for your test system, find the 'PDI' entry
  • Upload and save your ISO image on that sample page

Instructions for PDI Portable Media Creators (face-to-face Connectathon):

Prior to Connectathon, you should create two copies of your media: CD, USB, and/or DVD, depending on what you support.  On the first day of the Connectathon, you will give one copy to the Connectathon monitor who is evaluating PDI tests.  You will keep one copy and use it for your peer-to-peer tests with your Importer partners.

Use the following guidelines when creating your media:

  1. Modality systems shall put all IOD types on the media that they are capable of creating (e.g. MG, US, KOS, SR, CAD-SR, etc.).  If you can create GSPS or KOS objects, these should also be included.
  2. PACS vendors & multi-modality workstations shall put at least 5 different image types on their media.  If they support SR, KOS, etc, they shall also put those types on the media.
  3. Media creators will create two copies of appropriate media with your images and other DICOM objects.
  4. Label your physical media.  The label should contain:
  • your system name in Gazelle Test Management
  • your table location
  • and the name of a technical contact at your table at the Connectathon

Note that you may not have the information to make your label until you arrive at Connectathon.

Optional:

Starting in 2019, the ITI and Radiology Technical Frameworks contain specifications for including PDI and XDM content on the same media.  If your Portable Media Creator supports both the PDI and XDM Profile, you should create media with the appropriate content.   For details, see:

  • RAD TF-2: 4.47.4.1.2.3.3 "Content when Grouping with XDM"
  • ITI TF-1: 16.1.1 "Cross profile considerations - RAD Portable Data for Imaging (PDI)"
  • ITI TF-2b: 3.32.4.1.2.2. "Content Organization Overview"
  • Connectathon test "PDI_with_XDM_Create"

Evaluation

  1. There is no file to upload to Gazelle Test Management for this test.
  2. There is no specific evaluation for this test.  Feedback will come when your partners import the contents of your media during Connectathon week.
  3. Make sure you pack up the media you created and bring it to Connectathon!

 

REM_Modality_Type_and_Template_Support

Instructions

There are no test steps to execute for this test.

Instead, create a text file which documents the type of DICOM images your modality creates and lists the DICOM Baseline Template your Acquisition Modality uses when creating Dose SRs for the REM profile.

CT modalities which report on irradiation events shall be capable of producing an SR compliant with TID 10011.

Actors which report on irradiation events for modalities of type XR, XA, RF, MG, CR, or DX shall be capable of producing an SR compliant with TID 10001.
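The two rules above amount to a simple modality-to-template mapping, which can be sketched as:

```python
# Sketch of the REM modality-to-dose-SR-template mapping described above.
REM_TEMPLATE_BY_MODALITY = {
    "CT": "TID 10011",  # CT Radiation Dose
    # Projection X-ray modalities use TID 10001 (Projection X-Ray Radiation Dose)
    "XR": "TID 10001", "XA": "TID 10001", "RF": "TID 10001",
    "MG": "TID 10001", "CR": "TID 10001", "DX": "TID 10001",
}
```

Your text file should simply state which of these rows apply to your Acquisition Modality.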

Your text file should have the following naming convention: CompanyName_SystemName_REM.txt.

Evaluation

Submit the text file into Gazelle Test Management as the result for this test.

Preload_Codes_for_EBIW

Introduction

To prepare for testing the RAD Encounter-based Imaging Workflow (EBIW) Profile, the EBIW actors must prepare to use a common set of DICOM codes. 

  • Encounter Manager:  Please complete the configuration and preparation described below prior to performing any peer-to-peer Connectathon tests for the EBIW profile.
  • Image Manager, Results Aggregator, Modality actors:   There is no work for you to perform, but this test contains a description of the procedures and patients that will be used in peer-to-peer EBIW tests.   You will benefit from reading them prior to the Connectathon. 
  • Modalities:  If you find that the proposed DICOM codes do not adequately match what your application would use, please contact the Connectathon Radiology Technical Manager **well in advance** of the Connectathon so that the set of codes can be expanded to meet your needs.

Instructions

The codes you need are identified in the peer-to-peer test that you will perform at the Connectathon.

1.  In Gazelle Test Management, find the test "EBIW_10_Read_This_First" on your main Test Execution page.

2.  Read the entire Test Description to understand the test scenario.

3.  For each of the DICOM attributes listed in the Test Description, the Encounter Manager should configure its system to be able to use the values in the bullet lists. This ensures that consistent values will be returned in modality worklist responses for EBIW tests during the Connectathon.

 

Evaluation

There is no file to upload to Gazelle Test Management for this preparatory test.   If you do not load the codes you need on your test system prior to the Connectathon, you may find yourself wasting valuable time on the first day of Connectathon syncing your codes with those of your test partners.

Preload_Codes_for_HL7_and_DICOM

Introduction

To prepare for testing workflow profiles in RAD, CARD, LAB, and EYECARE domains, and also for the ITI PAM Profile, it is helpful for systems that send HL7 messages (eg patient registration and orders) and/or DICOM messages (modality worklist, storage) to work with a common set of codes. 

We ask ADT, Order Placer, Order Filler, and Acquisition Modality actors, and PAM and PLT actors, to load codes relevant to their system in advance of the Connectathon.

These codes include, for example:

  • Administrative sex codes in PID-8
  • Doctors sent in PV1
  • Facility codes sent in HL7 PV1-3
  • Universal Service ID (order codes) sent in OBR-4
  • Priority codes sent in OBR-27 or TQ1-9
  • Acquisition Modality code sent in OBR-24 and (0008,0060)
  • ...and more
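As an illustration of where these codes travel, here is a small stdlib-only sketch that extracts fields from a pipe-delimited HL7 v2 message.  The sample message content is hypothetical, not Connectathon-mandated:

```python
# Sketch: pull coded fields (e.g. PID-8, OBR-4) out of a pipe-delimited
# HL7 v2 message using only the standard library.
def hl7_field(message: str, segment: str, index: int) -> str:
    """Return field `index` (1-based, counted after the segment name) of `segment`.

    Note: MSH fields are offset by one relative to this counting,
    because MSH-1 is the field separator character itself.
    """
    for line in message.strip().splitlines():
        fields = line.split("|")
        if fields[0] == segment:
            return fields[index] if index < len(fields) else ""
    return ""

# Hypothetical order message fragment (values are illustrative only).
SAMPLE_ORM = (
    "MSH|^~\\&|PLACER|HOSP|FILLER|RAD|20240101||ORM^O01|1|P|2.5.1\n"
    "PID|1||PAT123||DOE^JOHN||19700101|M\n"
    "OBR|1|PLACER123|FILLER456|71020^CHEST XRAY^LOCAL"
)

sex = hl7_field(SAMPLE_ORM, "PID", 8)          # administrative sex code
service_id = hl7_field(SAMPLE_ORM, "OBR", 4)   # Universal Service ID (order code)
```

Loading agreed value sets for these fields in advance means your partners' parsers see expected codes on day one.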

Instructions

The codes that you need depend on the profile/actors you support.  HL7 and DICOM codes used for Connectathon testing are the same set that is used in the Gazelle OrderManager tool. OrderManager contains simulators for some actors in workflow profiles.

** HL7 codes ** - are documented here:

Some of these codes are also mapped into DICOM messages.  Use the spy-glass icon in the right column to view the value set for each code.  (Note that the format of these files is compliant with the IHE SVS Sharing Value Sets profile.)

  • ADT, Order Placer, and Order Filler plus PAM Supplier systems should review the link above and load codes relevant to the HL7 messages they support

** DICOM codes ** - Order Filler and Acquisition Modality actors need a mapping between Requested Procedure codes, Scheduled Procedure codes, and Protocol Codes. 

For RAD and CARD, that hierarchy is here: https://gazelle.ihe.net/common/order-manager/orderHierarchy4Radiology.xml   
For EYECARE, that hierarchy is here: https://gazelle.ihe.net/common/order-manager/orderHierarchy4Eyecare.xml. (Note that this is documented in excel form here.)

  • An Order Filler system should load codes relevant to the domain(s) it is testing. 
  • An Acquisition Modality system should load codes relevant to the acquisitions it can perform. 

Evaluation

There is no result file to upload to Gazelle Test Management for this preparatory test.   If you do not load the codes you need on your test system prior to the Connectathon, you may find yourself wasting valuable time on the first day of Connectathon syncing your codes with those of your test partners.

DICOM_QR_Test_Data

Introduction

This test gives you access to DICOM studies used to test XDS-I Query & Retrieve, and the QIDO-RS Query [RAD-129] transaction that is used by actors in several profiles (WIA, AIR, ...).  The data is also used to test the RAD-14 transaction with the Enterprise Identity option in SWF.b.
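For context, a QIDO-RS study-level query as used in [RAD-129] is an HTTP GET against the Responder's /studies resource, typically requesting application/dicom+json.  A minimal stdlib sketch of building such a query URL for one of the test patients; the base URL is a placeholder, not a real endpoint:

```python
from urllib.parse import urlencode

# Sketch: build a QIDO-RS study-level query URL.  The base URL is a
# placeholder; the request would be sent with Accept: application/dicom+json.
def qido_studies_url(base: str, **params: str) -> str:
    """Return a QIDO-RS /studies query URL with the given match parameters."""
    return f"{base}/studies?{urlencode(params)}"

url = qido_studies_url("https://responder.example.com/dicom-web",
                       PatientID="C3N-00953", ModalitiesInStudy="CT")
```

Running such queries against your own system with the four test studies loaded is a quick way to confirm the data is in place before the Connectathon.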

Location of the studies

There are four DICOM studies available.  The Responder system (e.g. an Image Manager, Imaging Document Source, or Imaging Document Responder) must load these four studies onto its system.  

Summary of the DICOM studies

The contents of the studies are summarized in the "XDS-I,b XCA-I and WIA studies" google sheet. 

There are 3 tabs in the sheet:

  1. Tab 1 identifies values in key attributes in the DICOM header for each study.
  2. Tabs 2 and 3 can be ignored; they apply to the XDS-I profile that re-uses these studies.

Patient ID   Procedure Code  Modality   Series Count    Image Count
---------------------------------------------------------------------
C3L-00277           36643-5     DX                 1              1
C3N-00953           42274-1     CT                 3             11
TCGA-G4-6304        42274-1     CT                 3             13
IHEBLUE-199                     CT                 1              1

 

 

Instructions

Prior to the Connectathon, the Imaging Document Source should:

  1. Load the 4 DICOM studies onto your test system. (See 'Location of the Studies' above.)

 

Evaluation

There is no file to upload to Gazelle Test Management for this preparatory test.   If you do not load the studies you need on your test system prior to the Connectathon, you may find yourself wasting valuable time on the first day of Connectathon.

XDS-I.b_Prepare_Manifests

Introduction

This test is for Imaging Document Source actors in the XDS-I.b and XCA-I Profiles that support the "Set of DICOM Instances" option.  (If your Imaging Document Source only supports PDF or Text Reports, then this test does not apply to you.)

For this test, we ask you to create manifests for 3 studies that Connectathon Technical Managers provide.  This enables us to check both the metadata and manifest for expected values that match data in the images and in the XDS metadata affinity domain codes defined for the Connectathon (i.e. codes.xml).  (For other peer-to-peer tests during Connectathon, you will be able to also test with studies that you provide.)

The manifests you create for these 3 studies will be used for some XDS-I/XCA-I tests during Connectathon week.

Prerequisite test

Before you prepare the manifests using the Instructions below, first load the DICOM Studies in the Test Data.  See Preparatory Test DICOM_QR_Test_Data.

Instructions

Prior to the Connectathon, the Imaging Document Source should:

  1. Load the 3 DICOM studies onto its test system. (See 'Prerequisite test' above.)
  2. Construct 3 XDS-I Manifests, one for each of the studies.
  3. Submit one DICOM Manifest, for the CT study for patient C3N-00953, as a sample in Gazelle Test Management, and perform DICOM validation:
    • Log in to Gazelle Test Management for the Connectathon
    • Access the samples page:  menu Testing-->Samples exchange
    • On the "Samples to share" tab, find the entry for "XDS-I_Manifest"
    • Upload the .dcm file for your manifest
    • Under "Actions", use the green triangle icon to perform DICOM validation of your manifest using Gazelle EVS.  We expect a validation result with no errors.

 

Evaluation

During Connectathon, a monitor will examine your Manifest; there are two verifications that Connectathon Monitors will perform:

(1) examine the DICOM Manifest for the study

(2) examine the metadata for the submitted manifest

We do not duplicate the Evaluation details here, but we encourage the Imaging Document Source to read those details now to ensure its manifest will pass verification during Connectathon.  Find those details in Gazelle Test Management on your Test Execution page in Connectathon test "XDS-I.b_Manifest_and_Metadata".