AIR_EC_Content_Test_Overview

Overview

This Preparatory test is informational. It is intended to prepare the AIR Evidence Creator for the Connectathon tests that will be used to evaluate the AI result SRs produced by the Evidence Creator.

Another Preparatory test, AIR_EC_Capabilities, instructs the Evidence Creator to upload AI result SRs into the Samples area of Gazelle Test Management. In that test, the Evidence Creator will also use the Pixelmed DICOM validator to perform DICOM validation of your SRs. The Pixelmed validator checks the baseline requirements of the DICOM SR, including the requirements of the Template IDs (TIDs) within the SR. The tool does not, however, check the requirements and constraints that are part of the content specification in the AIR Profile.
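If you want to script this validation during your preparation, the sketch below shows one way to batch-run the validator from Python. It is a minimal sketch, assuming a local copy of the PixelMed toolkit jar (here called pixelmed.jar), a hypothetical folder srs_to_validate holding your SRs, and the toolkit's com.pixelmed.validate.DicomSRValidator entry point; check the PixelMed documentation for the exact classpath and arguments in your installation.

    # Minimal sketch: batch-run the PixelMed SR validator over a folder of SRs.
    # Assumptions: "pixelmed.jar" is a locally downloaded PixelMed toolkit jar,
    # "srs_to_validate" is a hypothetical folder of your AI result SRs, and
    # com.pixelmed.validate.DicomSRValidator is the validator entry point.
    import pathlib
    import subprocess

    for sr_path in sorted(pathlib.Path("srs_to_validate").glob("*.dcm")):
        result = subprocess.run(
            ["java", "-cp", "pixelmed.jar",
             "com.pixelmed.validate.DicomSRValidator", str(sr_path)],
            capture_output=True, text=True,
        )
        print(f"--- {sr_path.name} ---")
        print(result.stdout)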

In Gazelle Test Management, on your Test Execution page, you will find a set of no-peer tests used to evaluate the encoding of AI results; these Connectathon tests follow the naming pattern AIR_Content_*. The different tests align with the different result primitives that may be included in an AI result SR, e.g. qualitative findings, measurements, locations, regions, parametric maps, tracking identifiers, and image references.
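If you are unsure which of these primitives your SRs actually contain, inspecting the SR content tree is a quick way to find out. The following is a minimal sketch, assuming the pydicom library and a hypothetical file name ai_result_sr.dcm; it tallies the content-item value types, which correspond roughly to the primitives listed above (CODE for qualitative findings, NUM for measurements, SCOORD/SCOORD3D for locations and regions, IMAGE for image and parametric map references, TEXT/UIDREF for tracking identifiers).

    # Minimal sketch: tally the content-item value types in an AI result SR.
    # Assumes pydicom and a hypothetical file name "ai_result_sr.dcm".
    from collections import Counter

    import pydicom

    def tally_value_types(item, counts):
        # Count this content item's value type, then recurse into its children.
        if "ValueType" in item:
            counts[item.ValueType] += 1
        for child in item.get("ContentSequence", []):
            tally_value_types(child, counts)

    ds = pydicom.dcmread("ai_result_sr.dcm")
    counts = Counter()
    tally_value_types(ds, counts)
    for value_type, n in sorted(counts.items()):
        print(f"{value_type}: {n}")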

Depending on the AI algorithms it implements, we expect an Evidence Creator to create/encode one or more of these types of result primitives. We have created separate tests for the different result primitives to make test execution and evaluation more manageable during the Connectathon.

Instructions

Prior to the start of the Connectathon, we highly recommend that the Connectathon participant who will test the Evidence Creator actor read each AIR_Content_* Connectathon test.

>>Note: There is a Content test for each of the AI result primitives. The AI algorithm(s) on your Evidence Creator may not produce all of the defined result primitives (e.g. you may not produce parametric maps). For the Connectathon, you will only be required to perform the AIR_EC_Content* and AIR_Display* tests that are applicable to your system. (This separation of capabilities into separate tests results in some redundant test steps, but one large test covering all primitives would have been difficult for testers and monitors to manage.)

In each AIR_Content_* test, you will find test steps and evaluation criteria for specific encoding requirements for the different result primitives. We recommend that you examine your AI result SR content using these test steps. If you find discrepancies, you may need to update your software to comply with the AIR content requirements. If you disagree with any of the tests or test steps, you should contact the IHE Radiology Domain Technical Project Manager to resolve your concern.
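As a concrete example of such a spot check, the sketch below verifies that the SR's root content item declares the template it claims to follow. It is a minimal sketch, assuming pydicom and a hypothetical file name; the expected TID 1500 (Measurement Report) root is our assumption about the AIR content requirements, so substitute whatever template the relevant test step names.

    # Minimal sketch: check the root content template declaration of an SR.
    # Assumes pydicom; the expected TID ("1500", Measurement Report) is an
    # assumption; use the template identified in the relevant test step.
    import pydicom

    ds = pydicom.dcmread("ai_result_sr.dcm")  # hypothetical file name
    template = ds.ContentTemplateSequence[0]
    assert template.MappingResource == "DCMR", "unexpected mapping resource"
    assert template.TemplateIdentifier == "1500", "root template is not TID 1500"
    print("Root template declaration:", template.TemplateIdentifier)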

If you use the tests to review your SRs during the Preparatory phase, you can be confident that the Connectathon monitor will find no errors when evaluating your SRs during the Connectathon.

Evaluation

There is no result file to submit into Gazelle Test Management for this informational test.