Terms and abbreviations
Abbreviations
C
- CAT : Connectathon
- CDA : Clinical Document Architecture
E
- EVS : External Validation Services
G
- GMM : Gazelle Master Model
I
- IHE : Integrating the Healthcare Enterprise
P
- PAT : Projectathon
- PDT : Test Plan
S
- SUT : System Under Test
T
- TM : Test Management
X
- XDS : Cross-Enterprise Document Sharing
Glossary
B
- Benchmark Test : A standard against which measurements or comparisons can be made. A test that is used to compare components or systems to each other or to a standard
- Beta testing : Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market
- Black box testing : Testing, either functional or non-functional, without reference to the internal structure of the component or system
- Black box test design techniques : Documented procedure to derive and select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure (see the sketch after this list)
- Blocked test case : A test case that cannot be executed because the preconditions for its execution are not fulfilled
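A minimal sketch of black box test design, using a hypothetical feeFor() function and an invented specification ("fees are waived for orders of 100.00 or more"): the test cases below are derived from that specification alone, with no reference to the function's internal structure.

```java
// Black box test design (hypothetical names and values): test cases are
// chosen from the specification, not from feeFor()'s implementation.
public class BlackBoxExample {

    // Component under test: only its specified behavior matters here.
    static double feeFor(double orderTotal) {
        return orderTotal >= 100.0 ? 0.0 : 5.0;
    }

    public static void main(String[] args) {
        // One test case per specified partition, plus the boundary value.
        check(99.99, 5.0);   // below the threshold: fee applies
        check(100.0, 0.0);   // boundary value named in the specification
        check(250.0, 0.0);   // above the threshold: fee waived
    }

    static void check(double input, double expected) {
        double actual = feeFor(input);
        System.out.println("feeFor(" + input + "): " + (actual == expected ? "PASS" : "FAIL"));
    }
}
```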
C
- Certification : The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam
- Code coverage : An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage
- Compliance : The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions
- Component : A minimal software item that can be tested in isolation
- Coverage : The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite (a worked example follows this list)
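A worked example of the coverage percentage, with illustrative numbers only: a suite that exercises 45 of a component's 60 statements achieves 75% statement coverage.

```java
// Coverage formula: coverage (%) = exercised items / total items * 100.
public class CoverageExample {
    public static void main(String[] args) {
        int totalStatements = 60;      // coverage items in the component
        int executedStatements = 45;   // items exercised by the test suite
        double statementCoverage = 100.0 * executedStatements / totalStatements;
        System.out.println("Statement coverage: " + statementCoverage + "%"); // 75.0%
    }
}
```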
D
- Defect : A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system
- Driver : A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system (see the sketch after this list)
- Dynamic analysis : The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution
- Dynamic testing : Testing that involves the execution of the software of a component or system
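A minimal sketch of a test driver, with hypothetical names: the TaxCalculator component has no real caller yet, so the driver takes care of calling it and controlling its inputs.

```java
// Test driver (hypothetical names): the driver replaces the missing
// caller, feeding the component under test controlled input.
public class TaxCalculatorDriver {

    // The component under test, normally developed elsewhere.
    static class TaxCalculator {
        double taxFor(double amount) {
            return amount * 0.20;
        }
    }

    public static void main(String[] args) {
        TaxCalculator component = new TaxCalculator();
        double actual = component.taxFor(100.0);
        double expected = 20.0;
        System.out.println(actual == expected ? "PASS" : "FAIL: got " + actual);
    }
}
```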
E
- Exit criteria : The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing
- Expected result : The behavior predicted by the specification, or another source, of the component or system under specified conditions
F
- Fail : A test is deemed to fail if its actual result does not match its expected result
- Failure : Actual deviation of the component or system from its expected delivery, service or result. [After Fenton] The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered
- Functional requirement : A requirement that specifies a function that a component or system must perform
I
- Impact analysis : The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements
- Incident : Any event occurring during testing that requires investigation
- Interoperability : The capability of the software product to interact with one or more specified components or systems
N
- Non-functional requirement : A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability
P
- Pass : A test is deemed to pass if its actual result matches its expected result
- Precondition : Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure
R
- Requirement : A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document
S
- Simulator : A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs
- Static analysis : Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool
- Stub : A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component (see the sketch after this list)
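A minimal sketch of a stub, with hypothetical names: the component under test depends on a document repository that is unavailable during testing, so a skeletal implementation with canned responses replaces the called component.

```java
import java.util.List;

// Stub (hypothetical names): a skeletal replacement for a component the
// code under test depends on.
public class StubExample {

    // Dependency of the component under test.
    interface DocumentRepository {
        List<String> findDocumentIds(String patientId);
    }

    // Component under test: counts a patient's documents.
    static class DocumentCounter {
        private final DocumentRepository repository;
        DocumentCounter(DocumentRepository repository) { this.repository = repository; }
        int countFor(String patientId) {
            return repository.findDocumentIds(patientId).size();
        }
    }

    public static void main(String[] args) {
        // The stub returns canned data instead of querying a real repository.
        DocumentRepository stub = patientId -> List.of("doc-1", "doc-2");
        DocumentCounter counter = new DocumentCounter(stub);
        System.out.println(counter.countFor("patient-42") == 2 ? "PASS" : "FAIL");
    }
}
```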
T
- Test case : A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement (see the sketch after this list)
- Test condition : An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element
- Test level : A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test
- Test strategy : A high-level document defining the test levels to be performed and the testing within those levels for a programme (one or more projects)
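A minimal sketch of a test case as defined above, with hypothetical names: input values, an execution precondition, an expected result and an execution postcondition, developed to verify one specific lookup requirement.

```java
// Test case structure (hypothetical names): precondition, input values,
// expected result, and postcondition check.
public class PatientLookupTestCase {

    // System under test, assumed to exist elsewhere (hypothetical API).
    static class PatientRegistry {
        private final java.util.Map<String, String> patients = new java.util.HashMap<>();
        void register(String id, String name) { patients.put(id, name); }
        String lookup(String id) { return patients.get(id); }
        int size() { return patients.size(); }
    }

    public static void main(String[] args) {
        PatientRegistry registry = new PatientRegistry();

        // Precondition: the patient is already registered.
        registry.register("patient-42", "Doe^John");

        // Input values and execution.
        String actual = registry.lookup("patient-42");

        // Expected result: the test passes only if actual matches expected.
        String expected = "Doe^John";
        System.out.println(expected.equals(actual) ? "PASS" : "FAIL: got " + actual);

        // Postcondition: the lookup must not have modified the registry.
        System.out.println(registry.size() == 1 ? "postcondition holds" : "postcondition violated");
    }
}
```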
U
- Use case : A sequence of transactions in a dialogue between an actor and a component or system with a tangible result, where an actor can be a user or anything that can exchange information with the system
V
- Validation : Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled
- Verification : Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled
W
- White box test design technique : Documented procedure to derive and select test cases based on an analysis of the internal structure of a component or system