
Coronavirus Disease 2019 (COVID-19) and Diagnostic Error

Angel N. Desai, MD, MPH and Patrick S. Romano, MD, MPH, on behalf of the AHRQ PSNet team | January 11, 2022

Originally published July 30, 2020. Updated January 11, 2022.


Diagnostic error has been increasingly recognized as an important and evolving patient safety issue. Amid a global pandemic, delayed diagnosis of infection with SARS-CoV-2, the virus that causes COVID-19, may lead to preventable transmission1 and delayed initiation of effective treatment.2 Perhaps even more importantly, other treatable diagnoses may be missed as clinicians focus on suspected or confirmed COVID-19.3 As state4 and federal5 governments relax COVID-19-related restrictions, the healthcare system faces increased pressure to diagnose cases rapidly and accurately.6

This primer has been updated to incorporate new information and evidence. The embedded links are intended to refer readers to relevant resources published on PSNet, while resources from other federal agencies and the peer-reviewed literature are listed at the end.

Biases in the Diagnostic Process

Clinician decision-making often relies on cognitive thought processes—called heuristics—that facilitate rapid diagnosis and treatment in stressful circumstances. However, these same heuristics that support “fast and frugal” decision-making can lead to diagnostic errors. To avoid such errors, it is useful to understand why these heuristics potentially lead to biases in the diagnostic process, and how these biases can be mitigated. This process has been described as “cognitive debiasing” or “meta-cognition.” It entails switching from “automatic thinking” (also called “intuitive” or System 1 reasoning) to more “reflective thinking” (also called “analytical” or System 2 reasoning), informed by communication with other team members and use of online resources.

Availability Bias

The availability heuristic may lead clinicians to overdiagnose conditions that are available in their memories, based on recent reading or clinical encounters. At the same time, clinicians may tend to underdiagnose conditions with which they have little direct or indirect experience. Specifically, availability affects how healthcare providers estimate the pre-test or prior probability of disease. In two WebM&M cases published on Patient Safety Network (PSNet), the diagnosis of aortic dissection—a diagnosis with which emergency department physicians have little experience—was missed as physicians focused on more common causes of “crushing chest pain” and right-sided abdominal pain. In communities where the incidence of COVID-19 is low, the availability heuristic may cause clinicians to miss the diagnosis of COVID-19. Many communities in the United States and elsewhere have seen sudden, exponential increases in the incidence of COVID-19,7 particularly in the context of variants of concern,8 illustrating the risk of relying on prior experience in estimating the probability of disease.

With heavy coverage of COVID-19 in both lay media and professional journals, availability bias may lead clinicians to miss other respiratory infections (e.g., Legionella, Pneumococcus, Mycoplasma, Chlamydia), exacerbations of asthma or chronic obstructive lung disease, or acute cardiovascular or neurologic disease because of so much recent experience with COVID-19.1 One commentator recently described this phenomenon as “COVID blindness.”9 Given the availability of effective therapies for these other conditions, it is important to consider alternative diagnoses that may present with similar symptoms.10

Anchoring Bias

The anchoring heuristic may lead clinicians to resist altering their initial diagnostic impression, despite subsequent information that contradicts that impression. This phenomenon has also been described as premature closure on a diagnosis that turns out to be incorrect. Specifically, anchoring affects how much healthcare providers adjust their post-test or posterior probability estimates after new findings appear or new test results become available. In a previous WebM&M case, a patient was treated six times over several months for presumed diabetic neuropathy while a more serious diagnosis of peripheral artery disease was missed. Other PSNet WebM&M cases have discussed a patient with glioblastoma whose physicians prematurely closed on a diagnosis of vasculitis, and a patient with a perforated esophagus whose physicians prematurely closed on a diagnosis of pneumonia.

In current circumstances, this cognitive (anchoring) bias may lead clinicians to miss the diagnosis of COVID-19 by putting insufficient weight on new findings that emerge after the patient’s initial presentation, failing to repeat diagnostic testing after an initially negative result, or failing to consider SARS-CoV-2 after another pathogen has been identified. Recent case series have shown that 2-6% of patients hospitalized with COVID-19 can have co-infections with other respiratory pathogens such as rhinoviruses, parainfluenza virus 3, respiratory syncytial virus, Mycoplasma pneumoniae, or Chlamydia pneumoniae.11,12 A positive test result for one of these pathogens may lead to premature closure and failure to consider the possibility of co-infection with SARS-CoV-2, particularly in patients who had a previous SARS-CoV-2 infection or are fully vaccinated.

Premature closure may also lead clinicians to make a presumptive diagnosis of COVID-19 (allowing the patient to self-quarantine at home) while failing to order diagnostic tests that would point to other diagnoses. The emergence of the Delta variant, the Omicron variant, and other variants of concern (VOCs) has resulted in an increasing number of cases across the United States, which may also contribute to premature closure.13 Even when the diagnosis of COVID-19 is confirmed, anchoring bias may impede clinicians from recognizing secondary bacterial infections and other treatable complications.1,10 To minimize anchoring and avoid premature closure, healthcare providers may consider approaches such as the following:

  • Take a diagnostic time-out, or a deliberate pause to reassess the working diagnosis and consider other possibilities.
  • Deliberately look for evidence that would question or challenge the working diagnosis, recognizing that clinicians tend to look for prototypical manifestations of disease through pattern recognition and fail to consider atypical variants (a problem known as “representativeness restraint”).  
  • Explicitly consider the risk of two co-occurring diagnoses, as in a WebM&M case where the diagnosis of sepsis was missed because of co-occurring tumor lysis syndrome.
  • Use “artificial intelligence” (e.g., computerized clinical decision support system [CDSS] or other predictive analytics) to provide clinical decision support that estimates the probabilities of alternative diagnoses, electronically triggers testing for COVID-19, and/or queries patients automatically regarding symptoms (including symptoms recently identified by Centers for Disease Control and Prevention).
  • Carefully assess the response to initial treatment, such as antibiotic therapy for presumed bacterial pneumonia, and consider alternative diagnoses if that response is poor.
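The clinical decision support idea in the list above can be illustrated with a minimal sketch. The symptom set, threshold, and function name below are illustrative assumptions, not a validated rule or any vendor's actual CDSS logic; the point is that an electronic trigger can deliberately ignore a positive result for another pathogen, guarding against premature closure when co-infection is possible.

```python
# Minimal, hypothetical sketch of a rule-based electronic trigger for
# SARS-CoV-2 testing. Symptom list and threshold are illustrative only.

COVID_SYMPTOMS = {"fever", "cough", "shortness_of_breath", "fatigue",
                  "loss_of_taste_or_smell", "sore_throat", "congestion"}

def should_trigger_sars_cov_2_test(symptoms, known_exposure,
                                   other_pathogen_identified):
    """Flag a patient for SARS-CoV-2 testing.

    Note that other_pathogen_identified is deliberately NOT used to
    suppress the trigger: co-infection is possible, so a positive test
    for another respiratory pathogen should not close the question.
    """
    symptom_count = len(set(symptoms) & COVID_SYMPTOMS)
    if known_exposure and symptom_count >= 1:
        return True
    return symptom_count >= 2

# Another pathogen has been identified, yet the trigger still fires:
print(should_trigger_sars_cov_2_test(
    ["cough", "fever"], known_exposure=False,
    other_pathogen_identified=True))  # True
```

A real trigger would draw on structured EHR data and a validated prediction model rather than a hand-written threshold, but the design choice shown here (never letting one confirmed diagnosis suppress testing for another) is the anti-anchoring principle the bullet describes.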

Implicit Biases

The disproportionate impact of COVID-19 on African-American and Hispanic communities, older adults, and people with disabilities in the United States has highlighted the importance of conscious efforts to identify and address implicit biases that may be contributing to these disparities.14-16 In contrast to explicit bias, which is more overt and is not addressed in this Primer, implicit bias involves associations outside conscious awareness that lead to a negative evaluation of a person based on irrelevant characteristics such as race and ethnicity, nationality, disability, and socio-economic status. Recognizing the presence of implicit biases and developing clinical interventions to train medical personnel may reduce biases in the diagnostic process.17 In the context of COVID-19, implicit biases may hinder honest and accessible patient-provider communication about potential exposures and high-risk signs and symptoms.18

Diagnostic Testing for Active SARS-CoV-2 Infection

Viral tests such as nucleic acid amplification tests (NAATs) are molecular tests that are used to detect SARS-CoV-2 infection. Reverse-transcription polymerase chain reaction (RT-PCR) is a type of NAAT that is used to amplify nucleic acids and detect evidence of viral RNA. Laboratory-based evaluations of currently used RT-PCR tests show high analytical sensitivity and near-perfect specificity with no misidentification of other common respiratory pathogens,19 but clinical sensitivity is typically lower due to variation in how specimens are obtained and handled, the stage of illness when testing is performed, and potential mutations or deletions in the viral genome that the assay targets for detection.20,21 In recent research and clinical practice, serial testing has been used to establish the diagnosis of COVID-19 in the setting of high clinical suspicion. The false negative rate on initial testing of symptomatic patients, based on any positive result on serial testing (when clinically indicated), has been reported in several studies as follows:

  • 2.5% to 29%22-27 (in six Chinese studies),
  • 3.2% in a study from New York,10 and
  • 4.3% in a large study from Seattle.28

According to a literature review and pooled analysis of data from seven studies with 1,330 respiratory samples, the estimated false-negative rate was 38% on the day of symptom onset, decreasing to a nadir of 20% on day 8.29 A subsequent study from New York reported on 3,432 patients who had repeated testing; the clinical sensitivity of a single RT-PCR test for SARS-CoV-2 was estimated between 58% and 96%.30

Point-of-care molecular SARS-CoV-2 testing has been increasingly employed in certain settings for quicker turnaround as compared to standard laboratory-based RT-PCR tests. One study comparing an early point-of-care NAAT to laboratory-based RT-PCR found the median reporting time to be 2.6 versus 26.4 hours with an effective sensitivity of 96.9% (95% CI 84.2-99.9) and specificity of 100% (95% CI 96.9-100).31

Viral antigen testing has been widely employed to detect infection with respiratory pathogens, including SARS-CoV-2.  While RT-PCR detects RNA, antigen tests are immunoassays that detect the presence of viral antigens in nasal or nasopharyngeal specimens. The most commonly used are rapid lateral flow assays that can be employed at home or at the point-of-care, although laboratory-based SARS-CoV-2 antigen assays are also available at some institutions.32 One benefit of antigen testing is that the turnaround time for reporting is typically faster compared with standard molecular testing and it is often less expensive, although the sensitivity of such tests can vary. A recent Cochrane Review of rapid, point-of-care tests for diagnosis of SARS-CoV-2 infection reported that the average sensitivity across 48 published studies evaluating antigen tests was 72.0% (95% CI 63.7%-79.0%) among symptomatic participants and 58.1% (95% CI 40.2% to 74.1%) among asymptomatic participants.33 In addition, sensitivity was found to vary across tests (34.1%-88.1%) and to be associated with timing of symptoms (78.3% in the first week after symptom onset versus 51.0% in the second week). Overall specificity was high (99.6%, 95% CI 99.0% to 99.8%) for both symptomatic and asymptomatic participants.33 More frequent, serial testing may offset the decreased sensitivity of point-of-care antigen tests.34 Given variable sensitivity, viral antigen testing is sometimes considered adjunctive to RT-PCR testing,35 and current CDC guidelines recommend considering confirmatory testing using laboratory-based NAAT in certain settings. The Infectious Diseases Society of America continues to recommend NAATs as the diagnostic method of choice for SARS-CoV-2 given these limitations;31 however, antigen testing can be helpful when molecular testing is not readily available. 
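The claim that serial testing can offset the lower sensitivity of a single antigen test can be sketched with a back-of-the-envelope calculation. This assumes test results are independent across repeats, which is a simplification (false negatives early in infection are correlated with low viral load), so treat the numbers as an upper bound rather than a clinical estimate.

```python
# Back-of-the-envelope sketch: combined sensitivity of serial testing,
# assuming independent tests (a simplifying assumption).

def serial_sensitivity(per_test_sensitivity: float, n_tests: int) -> float:
    """Probability that at least one of n tests is positive in a truly
    infected person, under the independence assumption."""
    return 1 - (1 - per_test_sensitivity) ** n_tests

# Using the Cochrane pooled estimate for symptomatic participants (72%):
for n in (1, 2, 3):
    print(n, round(serial_sensitivity(0.72, n), 3))
# 1 test:  0.72
# 2 tests: 0.922
# 3 tests: 0.978
```

Even under this optimistic independence assumption, the calculation shows why guidance for antigen self-tests often recommends repeating a negative test after 2 to 3 days rather than relying on a single result.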

In general, viral shedding appears to be greater in the nasopharynx than in the oropharynx,36 and more prevalent in specimens obtained from the lower respiratory tract (e.g., bronchoalveolar lavage fluid, sputum) than in specimens obtained from the upper respiratory tract.37-39 Clinical studies have confirmed that nasopharyngeal sampling is more sensitive than oropharyngeal sampling.39,40 A recent systematic review and meta-analysis examining specimen collection methods for SARS-CoV-2 RT-PCR testing found that “using nasopharyngeal swabs as the gold standard, pooled nasal and throat swabs gave the highest sensitivity of 97% (95% CI 93-100), whereas lower sensitivities were achieved by saliva (85%, 75-93) and nasal swabs (86%, 77-93) and a much lower sensitivity by throat swabs (68%, 35-94).” High specificities (97-99%) and negative predictive values (95-99%) were observed among different clinical specimens, although substantial heterogeneity limited the conclusions of this study41 and other recent reviews.42-45

Technical guidance regarding specimen acquisition is available from CDC:

  • For initial diagnostic testing for SARS-CoV-2, CDC recommends collecting an upper respiratory tract specimen using a synthetic fiber swab with a plastic or wire shaft.
  • Do not use calcium alginate swabs or swabs with wooden shafts, as they may contain substances that inactivate some viruses and may inhibit molecular tests.
  • Swabs should be placed immediately into a sterile transport tube containing 2-3 mL of an appropriate transport medium to preserve viral nucleic acid.
  • Patients with productive cough may also have sputum collected and tested when available, although sputum induction is not recommended due to concerns for aerosol production. For patients for whom it is clinically indicated (e.g., those receiving invasive mechanical ventilation), a lower respiratory tract aspirate or bronchoalveolar lavage sample may also be collected and tested. Collect 2-3 mL into a sterile, leak-proof, screw-cap sputum collection cup or sterile dry container.

Self-tests have become widely available in recent months. Self-tests or “at-home” tests can be antigen or molecular-based, and typically require either a nasal or saliva specimen.46 Early studies have suggested that some methods of self-collection may be comparable to clinician-collected samples.47 Certain at-home diagnostic tests have been granted Emergency Use Authorizations by the FDA; however, sensitivities of these tests vary and can depend on sample collection, timing, local COVID-19 epidemiology, and characteristics of the tests themselves.48,49 Recent guidance from the CDC recommends isolation and discussion with a healthcare provider after a positive self-test, particularly in the setting of a fully vaccinated, asymptomatic, and/or unexposed individual. A negative antigen-based self-test result may represent a false negative; for this reason, some kits recommend repeat testing 2 to 3 days after an initial negative test, as serial testing may improve overall diagnostic accuracy.47

Clinical Implications

CDC has reported marked variation in the incidence of SARS-CoV-2 infection across the United States over time.50 Factors contributing to this variation may include differences in the timing of introduction and transmission of SARS-CoV-2 and its recent variants, population density and resulting exposure to respiratory droplets, uptake of vaccination and adherence with other community mitigation strategies, availability of SARS-CoV-2 testing, and the risk profile of the exposed populations (e.g., older adults or those with underlying medical conditions).

Tracking of the community-specific cumulative incidence and point prevalence of SARS-CoV-2 infection is available from state and local health departments and from data aggregators such as the CDC, the University of Washington’s Institute for Health Metrics and Evaluation, and Johns Hopkins’ Coronavirus Resource Center. Although these data sources do not provide identical information, due to gaps and lags in data capture, they can be used to help clinicians avoid availability bias by supporting better estimation of the pre-test or prior probability of COVID-19. In addition, CDC’s National Healthcare Safety Network (NHSN) has a COVID-19 Module that enables hospitals to report daily counts of patients with suspected or confirmed COVID-19, current use and availability of hospital beds and mechanical ventilators, as well as both healthcare worker staffing and supply status and availability. Antigen tests reported to public health departments must be distinguished from NAAT and antibody tests.

A simple example illustrates the importance of availability bias and how incorrect assumptions about disease prevalence lead to incorrect interpretations of test results (also known as post-test or posterior probabilities). These relationships are further explained in several open access papers and online tutorials.51,52 If a diagnostic test has 90% sensitivity and 100% specificity (i.e., on the upper end of values reported across the studies cited here),16,53 the accuracy or predictive value of a negative test result is about 99.5% when the true prevalence among tested patients is 5%, but only 71% when the true prevalence among tested patients is 80%. If the test sensitivity is only 70%, as may occur with inadequate specimen collection or early infection, then the false negative rate would be 23% when the pre-test probability of COVID-19 is 50%.54 Hence, the Infectious Diseases Society of America suggests repeating viral RNA testing when the initial test is negative in symptomatic individuals with an intermediate or high clinical suspicion of COVID-19.55 In individuals with a low clinical suspicion of COVID-19, a single negative test result may suffice, and serial testing may not be an effective use of resources.55
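The arithmetic behind this example can be worked through directly with Bayes’ theorem. This sketch reproduces the figures quoted above from sensitivity, specificity, and pre-test probability alone:

```python
# Worked version of the example above: predictive value of a negative
# test result as a function of sensitivity, specificity, and pre-test
# probability (prevalence among tested patients), via Bayes' theorem.

def negative_predictive_value(sens: float, spec: float, prev: float) -> float:
    true_neg = spec * (1 - prev)          # P(test -, no disease)
    false_neg = (1 - sens) * prev         # P(test -, disease)
    return true_neg / (true_neg + false_neg)

# 90% sensitivity, 100% specificity:
print(round(negative_predictive_value(0.90, 1.00, 0.05), 3))      # 0.995
print(round(negative_predictive_value(0.90, 1.00, 0.80), 3))      # 0.714
# 70% sensitivity, 50% pre-test probability: false negative rate = 1 - NPV
print(round(1 - negative_predictive_value(0.70, 1.00, 0.50), 3))  # 0.231
```

The same three lines make the availability-bias point numerically: with the test held constant, only the clinician’s estimate of pre-test probability changes, yet the meaning of a negative result swings from near-certainty to roughly a coin flip’s worth of reassurance.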

CDC guidance further recommends that a positive NAAT diagnostic test should not be repeated within 90 days as people can continue to shed detectable RNA long after the risk of transmission has passed.56 Moreover, recent guidelines suggest that prior receipt of a COVID-19 vaccine should not affect the use or interpretation of SARS-CoV-2 viral tests, specifically NAATs or antigen tests.

Routine serologic testing is not recommended to assess for immunity in either unvaccinated persons or persons who have received currently available COVID-19 vaccines because a positive antibody test for spike protein IgM/IgG may not differentiate previous infection from the anticipated effect of vaccination.56 In certain settings, such as for public health surveillance, antibody testing that specifically evaluates for IgM/IgG to the nucleocapsid protein may be appropriate.

The COVID-19 pandemic reinforces the importance of accurate and timely communication of test results to both patients and healthcare providers, with appropriate interpretive guidance. Best practices regarding communication of clinically important test results, and design and implementation of electronic alerts for healthcare providers, have been published. At least two WebM&M cases previously discussed in PSNet have demonstrated the potential negative consequences of delayed or incorrect communication of abnormal test results, and suggested system improvements to prevent them.

COVID-19 Testing Guidance from CDC

Angel N. Desai, MD, MPH

Assistant Clinical Professor
Adult Infectious Disease Specialist
Department of Internal Medicine
UC Davis Health

Patrick S. Romano, MD, MPH
Co-Editor-in-Chief, PSNet
Departments of General Medicine and Pediatrics
UC Davis Health


  1. West CP, Montori VM, Sampathkumar P. COVID-19 testing: the threat of false-negative results. Mayo Clin Proc. 2020;95(6):1127-1129. [Free full text]

  2. Wiersinga WJ, Rhodes A, Cheng AC, et al. Pathophysiology, transmission, diagnosis, and treatment of Coronavirus Disease 2019 (COVID-19): a review. JAMA. 2020;324(8):782-793. [Free full text]

  3. Gandhi TK, Singh H. Reducing the risk of diagnostic error in the COVID-19 era. J Hosp Med. 2020;15(6):363-366. [Free full text]

  4. Office of the Governor. California Roadmap to Modify the Stay at Home Order. Accessed August 30, 2021.

  5. Guidelines: Opening Up America Again. The White House. Accessed January 7, 2022.

  6. McClellan M, Gottlieb S, Mostashari F, et al. A National COVID-19 Surveillance System: Achieving Containment. Durham, NC: Duke Margolis Center for Health Policy; April 7, 2020. [Free full text]

  7. COVID-19 Dashboard. Center for Systems Science and Engineering, Johns Hopkins University. Accessed August 30, 2021.

  8. Monitoring Variant Proportions. COVID Data Tracker. Centers for Disease Control and Prevention. Accessed August 30, 2021.

  9. Brown L. COVID blindness. Diagnosis (Berl). 2020;7(2):83-84. [Free full text]

  10. Coleman JJ, Manavi K, Marson EJ, et al. COVID-19: to be or not to be; that is the diagnostic question. Postgrad Med J. 2020;96(1137):392-398. [Free full text]

  11. Richardson S, Hirsch JS, Narasimhan M, et al; the Northwell COVID-19 Research Consortium. Presenting characteristics, comorbidities, and outcomes among 5700 patients hospitalized with COVID-19 in the New York City area. JAMA. 2020;323(20):2052-2059. [Free full text]

  12. Vaughn VM, Gandhi TN, Petty LA, et al. Empiric antibacterial therapy and community-onset bacterial coinfection in patients hospitalized with Coronavirus Disease 2019 (COVID-19): a multi-hospital cohort study. Clin Infect Dis. 2021;72(10):e533-e541. [Free full text]

  13. Variants of the Virus. Centers for Disease Control and Prevention. Updated February 6, 2023. Accessed June 2, 2023.

  14. Peek ME, Simons RA, Parker WF, et al. COVID-19 among African Americans: an action plan for mitigating disparities. Am J Public Health. 2021;111(2):286-292. [Free full text]

  15. Sabatello M, Burke TB, McDonald KE, et al. Disability, ethics, and health care in the COVID-19 pandemic. Am J Public Health. 2020;110(10):1523-1527. [Free full text]

  16. National Council on Disability. The Impact of COVID-19 on People with Disabilities. Washington, DC: National Council on Disability; October 2021. [Available at]

  17. FitzGerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18(1):19. [Free full text]

  18. Hall WJ, Chapman MV, Lee KM, et al. Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. Am J Public Health. 2015;105(12):e60-e76. [Free full text]

  19. Kubina R, Dziedzic A. Molecular and serological tests for COVID-19: a comparative review of SARS-CoV-2 Coronavirus laboratory and point-of-care diagnostics. Diagnostics (Basel). 2020;10(6):434. [Free full text]

  20. Sethuraman N, Jeremiah SS, Ryo A. Interpreting diagnostic tests for SARS-CoV-2. JAMA. 2020;323(22):2249-2251. [Free full text]

  21. SARS-CoV-2 viral mutations: impact on COVID-19 tests. United States Food and Drug Administration. Accessed January 3, 2022. [Available at]

  22. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):e115-e117. [Free full text]

  23. Li Y, Yao L, Li J, et al. Stability issues of RT-PCR testing of SARS-CoV-2 for hospitalized patients clinically diagnosed with COVID-19. J Med Virol. 2020;92(7):903-908. [Free full text]

  24. Long C, Xu H, Shen Q, et al. Diagnosis of the coronavirus disease (COVID-19): rRT-PCR or CT? Eur J Radiol. 2020;126:108961. [Free full text]

  25. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing in Coronavirus Disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32-E40. [Free full text]

  26. Bernheim A, Mei X, Huang M, et al. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020;295(3):200463. [Free full text]

  27. Xie X, Zhong Z, Zhao W, et al. Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. Radiology. 2020;296(2):E41-E45. [Free full text]

  28. Long DR, Gombar S, Hogan CA, et al. Occurrence and timing of subsequent severe acute respiratory syndrome coronavirus 2 reverse-transcription polymerase chain reaction positivity among initially negative patients. Clin Infect Dis. 2021;72(2):323-326. [Free full text]

  29. Kucirka LM, Lauer SA, Laeyendecker O, et al. Variation in false-negative rate of reverse transcriptase polymerase chain reaction-based SARS-CoV-2 tests by time since exposure. Ann Intern Med. 2020;173(4):262-267. [Free full text]

  30. Green DA, Zucker J, Westblade LF, et al. Clinical performance of SARS-CoV-2 molecular tests. J Clin Microbiol. 2020;58(8):e00995-20. [Free full text]

  31. Collier DA, Assennato SM, Warne B, et al. Point of care nucleic acid testing for SARS-CoV-2 in hospitalized patients: a clinical validation trial and implementation study. Cell Rep Med. 2020;1(5):100062. [Free full text]

  32. Hanson KE, Altayar O, Caliendo AM, et al. IDSA guidelines on the diagnosis of COVID-19: antigen testing. Infectious Diseases Society of America; Version 1.0.0. Accessed August 30, 2021.

  33. Dinnes J, Deeks JJ, Berhane S, et al; Cochrane COVID-19 Diagnostic Test Accuracy Group. Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. Cochrane Database Syst Rev. 2021;3(3):CD013705. [Free full text]

  34. Smith RL, Gibson LL, Martinez PP, et al. Longitudinal assessment of diagnostic test performance over the course of acute SARS-CoV-2 infection. J Infect Dis. 2021;224(6):976-982. [Free full text]

  35. Mak GC, Cheng PK, Lau SS, et al. Evaluation of rapid antigen test for detection of SARS-CoV-2 virus. J Clin Virol. 2020;129:104500. [Free full text]

  36. Zou L, Ruan F, Huang M, et al. SARS-CoV-2 viral load in upper respiratory specimens of infected patients. N Engl J Med. 2020;382(12):1177-1179. [Free full text]

  37. Lin C, Xiang J, Yan M, et al. Comparison of throat swabs and sputum specimens for viral nucleic acid detection in 52 cases of novel coronavirus (SARS-CoV-2)-infected pneumonia (COVID-19). Clin Chem Lab Med. 2020;58(7):1089-1094. [Free full text]

  38. Wu J, Liu J, Li S, et al. Detection and analysis of nucleic acid in various biological samples of COVID-19 patients. Travel Med Infect Dis. 2020;37:101673. [Free full text]

  39. Wang W, Xu Y, Gao R, et al. Detection of SARS-CoV-2 in different types of clinical specimens. JAMA. 2020;323(18):1843-1844. [Free full text]

  40. Wang X, Tan L, Wang X, et al. Comparison of nasopharyngeal and oropharyngeal swabs for SARS-CoV-2 detection in 353 patients received tests with both specimens simultaneously. Int J Infect Dis. 2020;94:107-109. [Free full text]

  41. Tsang NNY, So HC, Ng KY, et al. Diagnostic performance of different sampling approaches for SARS-CoV-2 RT-PCR testing: a systematic review and meta-analysis. Lancet Infect Dis. 2021;21(9):1233-1245. [Free full text]

  42. Cheng MP, Papenburg J, Desjardins M, et al. Diagnostic testing for severe acute respiratory syndrome-related Coronavirus-2: a narrative review. Ann Intern Med. 2020;172(11):726-734. [Free full text]

  43. Yan Y, Chang L, Wang L. Laboratory testing of SARS-CoV, MERS-CoV, and SARS-CoV-2 (2019-nCoV): current status, challenges, and countermeasures. Rev Med Virol. 2020;30:e2106. [Free full text]

  44. Loeffelholz MJ, Tang YW. Laboratory diagnosis of emerging human coronavirus infections - the state of the art. Emerg Microbes Infect. 2020;9(1):747-756. [Free full text]

  45. Tang YW, Schmitz JE, Persing DH, et al. The laboratory diagnosis of COVID-19 infection: current issues and challenges. J Clin Microbiol. 2020;58(6):e00512-20. [Free full text]

  46. Self Testing. Centers for Disease Control and Prevention. Accessed August 30, 2021.

  47. McCulloch DJ, Kim AE, Wilcox NC, et al. Comparison of unsupervised home self-collected midnasal swabs with clinician-collected nasopharyngeal swabs for detection of SARS-CoV-2 infection. JAMA Netw Open. 2020;3(7):e2016382. [Free full text]

  48. In vitro diagnostics EUAs - antigen diagnostic tests for SARS-CoV-2. Food and Drug Administration. Accessed August 30, 2021.

  49. In vitro diagnostic EUAs - molecular diagnostic tests for SARS-CoV-2. Food and Drug Administration. Accessed January 3, 2022.

  50. CDC COVID-19 Response Team. Geographic differences in COVID-19 cases, deaths, and incidence - United States, February 12-April 7, 2020. MMWR Morb Mortal Wkly Rep. 2020;69(15):465-471. [Free full text]

  51. Vetter TR, Schober P, Mascha EJ. Diagnostic testing and decision-making: beauty is not just in the eye of the beholder. Anesth Analg. 2018;127(4):1085-1091. [Free full text]

  52. Montori VM, Wyer P, Newman TB, et al; Evidence-Based Medicine Teaching Tips Working Group. Tips for learners of evidence-based medicine: 5. The effect of spectrum of disease on the performance of diagnostic tests. CMAJ. 2005;173(4):385-390. [Free full text]

  53. Nalla AK, Casto AM, Huang MW, et al. Comparative performance of SARS-CoV-2 detection assays using seven different primer-probe sets and one assay kit. J Clin Microbiol. 2020;58(6):e00557-20. [Free full text]

  54. Woloshin S, Patel N, Kesselheim AS. False negative tests for SARS-CoV-2 infection - challenges and implications. N Engl J Med. 2020;383(6):e38. [Free full text]

  55. Hanson KE, Caliendo AM, Arias CA, et al. IDSA guidelines on the diagnosis of COVID-19: molecular diagnostic testing. Infectious Diseases Society of America; Version 2.0.0. Accessed August 30, 2021.

  56. Overview of Testing for SARS-CoV-2 (COVID-19). Centers for Disease Control and Prevention. Accessed January 7, 2022.
This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report. View AHRQ Disclaimers