The Lung Image Database Consortium (LIDC): ensuring the integrity of expert-defined "truth"

Samuel G Armato, Rachael Y Roberts, Michael F McNitt-Gray, Charles R Meyer, Anthony P Reeves, Geoffrey McLennan, Roger M Engelmann, Peyton H Bland, Denise R Aberle, Ella A Kazerooni, Heber MacMahon, Edwin J R van Beek, David Yankelevitz, Barbara Y Croft, Laurence P Clarke

Research output: Contribution to journal › Article › peer-review

Abstract

RATIONALE AND OBJECTIVES: Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish "truth" for algorithm development, training, and testing. The integrity of this "truth," however, must be established before investigators commit to this "gold standard" as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the "truth" collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database.

MATERIALS AND METHODS: One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the "blinded read phase"), radiologists independently identified and annotated lesions, assigning each to one of three categories: "nodule ≥3 mm," "nodule <3 mm," or "non-nodule ≥3 mm." For the second read (the "unblinded read phase"), the same radiologists independently evaluated the same CT scans, but with all of the annotations from the previously performed blinded reads presented; each radiologist could add to, edit, or delete their own marks; change the lesion category of their own marks; or leave their marks unchanged. The post-unblinded read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of identification of potential errors introduced during the complete image annotation process and correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional.
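The QA step described above amounts to scanning the pooled post-unblinded marks against a fixed set of error criteria and routing any hit back to the originating radiologist. The following is a minimal illustrative sketch of that flagging logic; the `Mark` structure, the single "category/size mismatch" check, and all names are hypothetical stand-ins, not the LIDC's actual seven error categories or data model.

```python
# Hypothetical sketch of an LIDC-style QA flagging pass: each mark from the
# unblinded reads is checked against example error criteria, and any hit is
# referred back to the radiologist who made the mark. The category labels and
# the mismatch check below are illustrative assumptions, not the LIDC's own.
from dataclasses import dataclass

@dataclass
class Mark:
    radiologist: str
    lesion_category: str   # "nodule>=3mm", "nodule<3mm", or "non-nodule>=3mm"
    diameter_mm: float

def qa_issues(marks):
    """Return (mark, issue) pairs for marks that trip an example QA check."""
    issues = []
    for m in marks:
        # Example check: assigned lesion category inconsistent with size.
        if m.lesion_category == "nodule>=3mm" and m.diameter_mm < 3:
            issues.append((m, "category/size mismatch"))
        elif m.lesion_category == "nodule<3mm" and m.diameter_mm >= 3:
            issues.append((m, "category/size mismatch"))
    return issues

marks = [
    Mark("R1", "nodule>=3mm", 5.2),
    Mark("R2", "nodule>=3mm", 2.1),  # flagged: labeled >=3 mm, measures 2.1 mm
]
for mark, issue in qa_issues(marks):
    print(f"refer to {mark.radiologist}: {issue}")
```

In the study itself, flagged marks were not silently corrected: each was returned to its radiologist, who could either fix the mark or confirm it was intentional, preserving the expert-defined nature of the "truth."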

RESULTS: A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process.

CONCLUSIONS: The establishment of "truth" must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems.

Original language: English
Pages (from-to): 1455-63
Number of pages: 9
Journal: Academic Radiology
Volume: 14
Issue number: 12
DOIs
Publication status: Published - Dec 2007

Keywords

  • Databases as Topic
  • Diagnosis, Computer-Assisted
  • Humans
  • Knowledge Bases
  • Lung Neoplasms
  • Observer Variation
  • Quality Assurance, Health Care
  • Radiology
  • Radiology Information Systems
  • Solitary Pulmonary Nodule
  • Tomography, X-Ray Computed
