# Evaluating Virtual Screening Methods: Good and Bad Metrics for the “Early Recognition” Problem

Dataset posted on 26.03.2007 by Jean-François Truchon and Christopher I. Bayly.

Many metrics are currently used to evaluate the performance of ranking methods in virtual screening (VS),
for instance, the area under the receiver operating characteristic curve (ROC), the area under the accumulation
curve (AUAC), the average rank of actives, the enrichment factor (EF), and the robust initial enhancement
(RIE) proposed by Sheridan et al. In this work, we show that the ROC, the AUAC, and the average rank
metrics have the same inappropriate behaviors that make them poor metrics for comparing VS methods
whose purpose is to rank actives early in an ordered list (the “early recognition problem”). In doing so, we
derive mathematical formulas that relate those metrics to one another. Moreover, we show that the EF metric is
not sensitive to ranking performance before and after the cutoff. Instead, we formally generalize the ROC
metric to the early recognition problem, which leads us to propose a novel metric, the Boltzmann-enhanced
discrimination of receiver operating characteristic (BEDROC), that contains the discrimination
power of the RIE metric but incorporates the statistical significance and well-behaved
boundaries of the ROC.
boundaries. Finally, two major sources of errors, namely, the statistical error and the “saturation effects”,
are examined. This leads to practical recommendations for the number of actives, the number of inactives,
and the “early recognition” importance parameter that one should use when comparing ranking methods.
Although this work is applied specifically to VS, it is general and can be used to analyze any method that
needs to segregate actives toward the front of a rank-ordered list.
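To make the metrics discussed above concrete, the following is a minimal Python sketch of the EF, RIE, and BEDROC formulas as they appear in the literature; the function names, the default exponential weight α = 20, and the 1% EF cutoff are illustrative choices, not part of this abstract. Note how `enrichment_factor` only counts hits inside the cutoff, so it is blind to how actives are ordered within (or beyond) that window, while the exponential weight in `rie` rewards earlier ranks continuously.

```python
import math

def enrichment_factor(ranks, n_total, fraction=0.01):
    """Enrichment factor: ratio of the hit rate within the top
    `fraction` of the list to the hit rate of the whole list.
    `ranks` are the 1-based ranks of the actives."""
    cutoff = max(1, int(round(fraction * n_total)))
    hits = sum(1 for r in ranks if r <= cutoff)
    return (hits / cutoff) / (len(ranks) / n_total)

def rie(ranks, n_total, alpha=20.0):
    """Robust initial enhancement (Sheridan et al.): an exponentially
    weighted sum over active ranks, normalized by its expected value
    for a uniformly random ordering."""
    n_actives = len(ranks)
    weighted = sum(math.exp(-alpha * r / n_total) for r in ranks)
    random_mean = (n_actives / n_total) * (1.0 - math.exp(-alpha)) \
        / (math.exp(alpha / n_total) - 1.0)
    return weighted / random_mean

def bedroc(ranks, n_total, alpha=20.0):
    """BEDROC: RIE rescaled onto [0, 1] so that the worst and best
    possible orderings (for the given ratio of actives) map to the
    interval boundaries."""
    ra = len(ranks) / n_total  # ratio of actives
    factor = ra * math.sinh(alpha / 2.0) / (
        math.cosh(alpha / 2.0) - math.cosh(alpha / 2.0 - alpha * ra))
    return rie(ranks, n_total, alpha) * factor \
        + 1.0 / (1.0 - math.exp(alpha * (1.0 - ra)))
```

For example, with 5 actives out of 1000 compounds, an ideal ranking (`ranks = [1, 2, 3, 4, 5]`) yields a BEDROC close to 1 and an EF at 1% of 100 (its saturated maximum here), whereas actives at the bottom of the list yield a BEDROC near 0.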