Today’s paper [PMC] is Hert et al. (2009), Quantifying biogenic bias in screening libraries. At issue for today’s class is one of the first steps in drug discovery: compound library selection and generation. The authors pose a very interesting question: given the massive size of available chemical space, how do high-throughput screening (HTS) efforts for drug discovery ever succeed?
Chemical space—that is, all possible molecules—is estimated to exceed 10^60 molecules with 30 or fewer heavy atoms; 10 µg of each would exceed the mass of the observable universe. This figure decreases if criteria for synthetic accessibility and drug-likeness are taken into account, and increases steeply if up to 35 heavy atoms (about 500 Da) are allowed. Positing even a modest specificity of proteins for their ligands, the odds of a hit in a random selection of 10^6 molecules from this space seem negligible.
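To make "negligible" concrete, here is a back-of-the-envelope sketch of the odds. The 10^60 and 10^6 figures come from the passage above; the number of actives per target is a hypothetical assumption chosen generously for illustration, not a value from the paper.

```python
# Odds of a hit when drawing a screening library at random from chemical space.
# CHEMICAL_SPACE and LIBRARY_SIZE are from the text; N_ACTIVES is an assumed,
# deliberately generous guess at how many molecules a target might recognize.

CHEMICAL_SPACE = 10**60   # estimated molecules with <= 30 heavy atoms
LIBRARY_SIZE = 10**6      # a typical HTS library
N_ACTIVES = 10**6         # hypothetical actives for one target

# Probability that a single random pick is active.
p_single = N_ACTIVES / CHEMICAL_SPACE

# For such a tiny per-pick probability, the chance of at least one hit
# in the whole library is well approximated by LIBRARY_SIZE * p_single.
p_hit = LIBRARY_SIZE * p_single

print(f"P(one random pick is active) = {p_single:.0e}")
print(f"P(>=1 hit in the library)   ~ {p_hit:.0e}")
```

Even with a million actives assumed per target, the chance of a single hit in a million random picks is on the order of 10^-48, which is the sense in which random screening "should" never work.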
So, given this seemingly impossible complexity, how does HTS ever succeed in the first place? The authors offer at least two hypotheses:
First, molecules that are formally chemically different can be degenerate to a target, and many derivatives of a chemotype may have little effect on affinity. This behavior, along with the polypharmacology of small molecules, undoubtedly contributes to screening hit rates. Such chemical degeneracy seems unlikely, however, to overcome the long odds against screening. The second explanation is that screening libraries are far from random selections; rather, they are biased toward molecules likely to be recognized by biological targets. This second hypothesis seems more plausible, as many accessible molecules are likely to resemble or derive from metabolites and natural products. Some of these will have been synthesized to resemble such biogenic molecules, while others will have used biogenic molecules as a starting material.