Deficiencies were also noted in study design and data analysis when articles reported the results of head-to-head comparisons of two or more tests relative to a reference standard. Head-to-head comparisons of diagnostic tests require either that all of the evaluated tests be applied to all of the study patients or that different tests be applied to randomly selected patients; 12% of studies did not fulfill this standard. Difficulties in data analysis were also evident: 76% of articles did not use standard statistical tests to compare the relative diagnostic accuracies of the evaluated tests. The most common flaw was the use of hypothesis-testing statistics, such as the χ² test, Student's t test, or analysis of variance, to analyze test results. The p values generated by these analyses supported or rejected the null hypothesis that no differences existed among the patient groups identified by the competing diagnostic tests. They did not, however, provide a measure of the magnitude of differences in the overall diagnostic properties of the evaluated tests. This information would be more adequately presented by confidence intervals for measures of test efficacy, or by comparative statistics based on ROC analysis for tests with ordinal or continuous results. Results from these statistical analyses were provided for only a small proportion of the evaluated studies.
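To illustrate the reporting style recommended above, the following sketch computes Wilson score confidence intervals for sensitivity. The counts (45/50 and 40/50 detected) are hypothetical, chosen only to show how an interval conveys the magnitude and precision of a difference in a way a p value alone cannot.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion
    (e.g. sensitivity or specificity)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical example: test A detects 45 of 50 diseased patients,
# test B detects 40 of 50 -- report intervals, not just a p value.
lo_a, hi_a = wilson_ci(45, 50)
lo_b, hi_b = wilson_ci(40, 50)
print(f"Test A sensitivity 0.90, 95% CI ({lo_a:.2f}, {hi_a:.2f})")
print(f"Test B sensitivity 0.80, 95% CI ({lo_b:.2f}, {hi_b:.2f})")
```

Overlapping intervals here would signal that the apparent difference in sensitivity is imprecisely estimated, which is exactly the information hypothesis-testing statistics fail to convey.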
Flaws were also noted in the techniques used to select decision thresholds ("cutoff points") in articles that evaluated tests with continuous or ordinal results. Only 11 of 21 such articles described their methods for choosing decision threshold values; 8 of those 11 used a standard technique. Likelihood ratios calculated at varying decision thresholds have particular relevance for clinical decision making, but they were not reported by any of the study articles.
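The likelihood-ratio reporting described above can be sketched as follows. The test-result values and the three candidate cutoffs are hypothetical; at each cutoff, LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity.

```python
# Likelihood ratios at several candidate decision thresholds for a
# continuous test result. All values below are hypothetical.
diseased    = [3.2, 4.1, 5.0, 5.5, 6.3, 7.8, 8.4, 9.0]
nondiseased = [1.0, 1.8, 2.2, 2.9, 3.5, 4.0, 4.6, 5.2]

for cutoff in (3.0, 4.5, 6.0):
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in nondiseased) / len(nondiseased)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
    print(f"cutoff {cutoff}: sens={sens:.2f} spec={spec:.2f} "
          f"LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```

Tabulating likelihood ratios across cutoffs, rather than reporting a single dichotomized accuracy, lets clinicians apply the test result to an individual patient's pretest probability.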