to Montreal! Having lived in Montreal for 3 years as a postdoc at McGill, I’m about as excited as I can be to be headed back to my old stomping grounds for the International Association for the Study of Pain meeting. Mrs. JP and I will be there for a week and are both attending the conference. This will be Mrs. JP’s first time at the conference and our first time going to a conference together (the basic and clinical mix at this conference has provided the opportunity). Should be exciting!
This is the first IASP meeting on the new two-year rotation (the next is in Japan in 2012 and then Argentina in 2014). The schedule for this meeting is very strong (IASP is always good) and the growth in the field is readily evident from this year’s line-up of talks and posters. I may try to blog the conference, but more likely I’ll be sending out tweets.
Pharm 551A class, there will be no posts this week for class content since I won’t be around.
Today’s paper [PMC] is Hert et al. (2009), Quantifying biogenic bias in screening libraries. At issue for today’s class is one of the first steps in drug discovery: compound library selection and generation. The authors of this paper pose a very interesting question: given the available chemical space (which is massive), how do high-throughput screening (HTS) efforts for drug discovery ever succeed?
Chemical space—that is, all possible molecules—is estimated to be greater than 10^60 molecules with 30 or fewer heavy atoms; 10 ug of each would exceed the mass of the observable universe. This figure decreases if criteria for synthetic accessibility and drug-likeness are taken into account and increases steeply if up to 35 heavy atoms (about 500 Da) are allowed. Positing even a modest specificity of proteins for their ligands, the odds of a hit in a random selection of 10^6 molecules from this space seem negligible.
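Those numbers are worth a back-of-envelope check. Here is a minimal sketch (my own arithmetic, not from the paper; the mass-of-the-universe comparison depends on which estimate you prefer):

```python
# Back-of-envelope numbers for the chemical-space argument above.
# Constants come from the estimates quoted in the text.
N_CHEMICAL_SPACE = 1e60   # estimated molecules with <= 30 heavy atoms
SAMPLE_MASS_G = 10e-6     # 10 micrograms of each molecule
SCREEN_SIZE = 1e6         # a typical HTS library

total_mass_kg = N_CHEMICAL_SPACE * SAMPLE_MASS_G / 1000.0
fraction_screened = SCREEN_SIZE / N_CHEMICAL_SPACE

print(f"total mass of samples: {total_mass_kg:.0e} kg")        # 1e+52 kg
print(f"fraction of space screened: {fraction_screened:.0e}")  # 1e-54
```

Ten micrograms of each of 10^60 compounds works out to roughly 10^52 kg, in the neighborhood of commonly cited estimates for the universe’s stellar mass, and a million-compound screen samples only about 10^-54 of the space, which is exactly the long-odds problem the authors are pointing at.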
So, given this seemingly impossible complexity, how does HTS ever succeed in the first place? The authors offer at least two hypotheses:
HTS nevertheless does return active molecules for many targets; how does it overcome the odds stacked against it? One might hazard two hypotheses. First, molecules that are formally chemically different can be degenerate to a target, and many derivatives of a chemotype may have little effect on affinity. This behavior, and the polypharmacology of small molecules, undoubtedly contributes to screening hit rates. Such chemical degeneracy seems unlikely, however, to overcome the long odds against screening. A second explanation is that screening libraries are far from random selections, but rather are biased toward molecules likely to be recognized by biological targets. This second hypothesis seems more plausible, as many accessible molecules are likely to resemble or derive from metabolites and natural products. Some of these will have been synthesized to resemble such biogenic molecules, while others will have used biogenic molecules as a starting material.
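For what it’s worth, the way this kind of biogenic bias gets quantified is by computing the 2D similarity of library compounds to known metabolites and natural products, typically with the Tanimoto coefficient on molecular fingerprints. A minimal sketch with made-up feature sets (real fingerprints come from cheminformatics tools like RDKit, not hand-written sets):

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient between two fingerprint feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical fingerprints: each integer stands for a substructure feature.
metabolite = {1, 2, 3, 4, 5}
library_compound = {2, 3, 4, 5, 6}   # shares most features with the metabolite
random_compound = {7, 8, 9}          # shares none

print(tanimoto(metabolite, library_compound))  # ~0.67 (4 shared / 6 total)
print(tanimoto(metabolite, random_compound))   # 0.0
```

A library biased toward biogenic molecules will show a distribution of such similarities shifted well above what random draws from chemical space would give.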
Today was the first day of class for Pharm 551A. Not much to talk about other than a brief discussion of the construction of the class and expectations followed up by a quick run through of the basics of drug discovery to prep for the papers we’ll be doing over the next month. We’ll start the online continuation of paper discussion here after Thursday’s class.
If you’re coming over to check out the blog and are in the class here are some links to the pharma blogs I told you about this morning:
Derek Lowe’s In The Pipeline
and another I forgot to mention: Pharma Conduct, although it may have gone dormant since nothing has been posted in some time.
Also, here are some of my previous discussions on drug discovery on this blog, in case you’re interested:
Drug Discovery in Academia
Drug Discovery in Academia and NIH funding
Regular readers of my blog (DM and someone else, I’m sure): this is the start of the incorporation of my class into the blog. Yes, it’s an experiment and, yes, the students agreed that it “might” be useful.
On the first day of class at UofA it’s a good time to commemorate one of the true giants in the history of pharmacology. Almost two years ago today Hank Yamamura passed away after a fight with lung cancer. Today, two PhD students started in our department funded by the Hank Yamamura Fellowship. The fellowship is a terrific way to remember a great scientist dedicated to understanding how drugs interact with receptors, training the next generation of scientists and mentoring junior faculty. He is one of my scientific heroes and was a dear friend and mentor. Here is what I had to say about Hank the day after he passed away.
Pharmacology has lost another giant. Hank Yamamura died last night after a long battle with cancer. It is hard to imagine that there is a pharmacologist alive today who is not familiar with the work of Dr. Yamamura. He is the author of nearly 500 papers, countless books and book chapters, and a mentor to a generation of pharmacologists. Hank did his PhD at the University of Washington and then headed to Sol Snyder’s lab for postdoctoral work. Hank played a major role in the original descriptions of muscarinic pharmacology while working with Snyder. In 1975 he moved to the University of Arizona, where he eventually became a Regent’s Professor in the Department of Pharmacology. At the University of Arizona, Hank practically wrote the book on opioid receptor pharmacology, with especially strong contributions to the area of delta-opioid peptides. Hank was an active and cherished member of the Department of Pharmacology from 1975 until the day of his passing.
I first learned about Hank Yamamura, like most pharmacology PhD students, from his book “Neurotransmitter Receptor Binding”. His contributions to cannabinoid pharmacology played an important role in my PhD work. For the past 9 months of my life I was lucky enough to work in the same department as Hank. When I first arrived here he was one of the first to greet me. Hank made a point to come visit my office (which was in a separate building) at least once a week and he was always eager to hear about what we were working on. He read all of my grant applications and gave me incredibly detailed comments. He shared advice on navigating the various channels at the University and we eventually developed a small collaboration (which will continue). In other words, in 9 short months Hank became one of the most important mentors I have ever had and became a dear friend. I am just one of hundreds of trainees and faculty who have been positively touched by Hank’s never-ending enthusiasm for science and boundless generosity. I think I can speak for the entire field in saying that we will all miss Hank.
This is almost unbelievable. Apparently a federal judge has blocked the Obama administration’s change to stem cell research policy.
There’s this gem of a paragraph at Forbes on the story:
A federal appeals court had ruled that two fellow plaintiffs – doctors who do research with adult stem cells, James Sherley of the Boston Biomedical Research Institute and Theresa Deisher of AVM Biotechnology – were entitled to sue over the new guidelines, prompting U.S. District Judge Royce Lamberth on Monday to reverse a decision he made in October when he dismissed the lawsuit.
Sherley and Deisher allege that the guidelines will result in increased competition for limited federal funding and will injure their ability to compete successfully for National Institutes of Health stem cell research money.
I almost fell over when I read that, so I went searching for confirmation and found this at USA Today:
Lamberth’s reversal follows a federal appeals court ruling that allowed two adult-stem-cell researchers to pursue a lawsuit, claiming that the new guidelines would increase competition for limited federal funds and that it violated federal law.
Lamberth said that the “injury” of increased competition that James Sherley of the Boston Biomedical Research Institute and Theresa Deisher of AVM Biotechnology would face “is not speculative. It is actual and imminent. Indeed, the guidelines threaten the very livelihood of plaintiffs Sherley and Deisher.”
I really don’t know what to say, but this seems like a dangerous precedent from Judge Lamberth to me. There is obviously more to the ruling than the dangers of competition for NIH funding (seriously, I can’t believe this), as part of the ruling is based on anti-stem-cell interests, but this NIH thing appears to have played a part. And, yes, if you think you recognize the name James Sherley from something else, you probably do.
UPDATE: In case you don’t go all the way down in the comments, look what DM found
My lab dabbles in mTOR work so I pretty regularly scan PubMed for new papers with mTOR in the title or abstract. It’s a fast-moving field so there’s lots to keep up with, and usually 20-30 new papers turn up for each of my Saturday morning mTOR searches. This morning I stumbled upon this little gem:
Burd et al., PLoS One: “Low-load high volume resistance exercise stimulates muscle protein synthesis more than high-load low volume resistance exercise in young men.”
Now, I’m no exercise physiologist so I’m not going to go into details on the background of the work, but I do know enough to know that stimulating protein synthesis in muscle through weight training is your goal. This builds muscle mass, makes you stronger/faster — all those reasons we go to the gym in the first place. The question is how best to do that. mTOR, for those who don’t know, is a kinase involved in regulating protein synthesis. It’s a complicated cascade, but the basic idea is that stimulating mTOR in a given cell type leads to more protein synthesis. If a specific type of exercise stimulates mTOR activity, that is probably a good thing for getting results from your workout. In the paper they had young men do three types of exercise (on a leg extension machine): 1) high weight / low rep, 2) medium weight / medium rep and 3) low weight / high rep. In a nutshell, they found that low weight / high rep was the best way to stimulate mTOR in muscle fibers. So what does this mean?
Can this get any worse? On Monday this story on financials and drugs coming off patent vs. the pipeline for Lilly ran in the NYTimes. Then, yesterday, this story came out about Lilly’s failed gamma-secretase trial for Alzheimer’s. Derek Lowe has a nice run-down on this latest failure, plus links to his previous musings on the subject, up today. Doesn’t sound good for Lilly. And what does this mean for all the hype about the Alzheimer’s biomarkers last week? Looks like major turmoil ahead to me.
Go read. I have nothing else to say other than: Bravo!!
I have a few minutes so I would like to highlight this paragraph written by editor in chief Dr. John Maunsell:
Another troubling problem associated with supplemental material is that it encourages excessive demands from reviewers. Increasingly, reviewers insist that authors add further analyses or experiments “in the supplemental material.” These additions are invariably subordinate or tangential, but they represent real work for authors and they delay publication. Such requests can be an unjustified burden on authors. In principle, editors can overrule these requests, but this represents additional work for the editors, who may fail to adequately referee this aspect of the review.
In my opinion this is absolutely correct and gets right to the heart of what I think is wrong with science today. Our papers are ultimately about ideas and the experiments that either support or reject those ideas. The constant “you just need one more piece of supporting evidence for everything” mindset of many reviewers (I include myself as falling into this trap) is not useful to the process. The ideas in a given paper may or may not stand the test of time, and that one more piece of supporting evidence is unlikely to have any influence on what that test of time will determine. The point is to get potentially influential ideas out there, and to get them out earlier rather than later. Post-publication experimental scrutiny is, and will always be, how the test of time determines the validity of new scientific concepts.
Genomic Repairman has a little rant up over at Labspaces in which he becomes exasperated at someone in his field who has held two R01s continuously for at least 26 years. Abel Pharmboy has already dealt with this in hilarious fashion (weedhopper? — haven’t heard that one before) so I won’t belabor the obvious; however, I would like to point out a few long-standing grants in my area — pain research — that have had a profound impact on our understanding of pain.
Let’s start with what, to my knowledge, is the granddaddy of them all in the pain field: Ed Perl’s 36-year-old R01 (it expired in 2008) entitled “SPINAL AND PROJECTION MECHANISMS RELATED TO PAIN”. If you’ve got a neuroscience or general medicine textbook handy, look up the word “nociceptor”. These are pain-sensing neurons in the peripheral nervous system. Now go read what the textbook says. There should be something there about how these neurons are noxious stimulus detectors that specifically respond to stimulation in the noxious range. They also respond to many chemicals and to temperature changes into the too-hot-or-cold-to-handle range. What you just read is work that was funded by this R01. Similarly, if you are interested in how noxious input is processed in the dorsal horn of the spinal cord, much of what you will find in the textbooks came from Ed Perl’s work funded by this R01.
How about another old one (the grant, not the person): Gerald (Jerry) Gebhart’s 28-year-old R01 entitled “MECHANISMS AND MODULATION OF VISCERAL PAIN”. Of his 300+ publications, many were funded by this long-standing R01. Like Ed Perl’s grant, to understand the contribution that this continuously funded grant has had on our appreciation of pain processing is quite simple — pick up a textbook. Much of what we know about visceral pain has come from work done under this grant. The work spans from understanding how descending modulation systems (the periaqueductal grey (PAG) and rostral ventromedial medulla (RVM)) amplify pain of visceral origin to the receptors and molecules responsible for causing visceral pain. Pretty important stuff and, I might add, if you are working in Pharma trying to develop drugs to target visceral pain, chances are you are relying pretty heavily on Jerry’s work, not only for target identification but also for the techniques you will use to validate whether your drugs are doing what you want them to do. You may have also noted from that link above that Dr. Gebhart is the current President of the International Association for the Study of Pain.
Let’s do one more, shall we… Allan Basbaum’s 33-year-old R37 (MERIT Award) entitled “BRAINSTEM CONTROL OF PAIN TRANSMISSION”. Where to start with this one? Anatomical mechanisms of the analgesic action of opioids: check. Anatomy of bulbospinal projections to the dorsal horn of the spinal cord: check. The list could go on and on. Again, this is all standard textbook stuff now. One thing that really interests me about this grant is the remarkable transformation the work has undergone as state-of-the-art techniques in biomedical science have changed. Perhaps more than anyone else I can think of in the field, Dr. Basbaum’s lab has not only kept up but has consistently led in pushing the edge, using the latest and greatest techniques to address problems in new and exciting ways. He’s also the current Editor-in-Chief of Pain, the most influential journal in the field.
There are many more such examples in the field but I think you get the point. If you search for all three of the researchers I have highlighted above in NIH RePORTER (all years, not just active projects) you will note that these long-standing grants represent the bulk of the funding that each of these PIs has held over their careers. In my view this adds considerable stability to the field with very little sign of stagnation. It’s interesting to check the abstracts for these grants in renewal years (generally every 5 years). You will note a remarkable change in the hypotheses being addressed and a real progression in the work from funding period to funding period. One might even argue (I would) that the titles don’t really fit the grants anymore, but that’s part of the beauty of choosing a very general-sounding title. I wish I had done that for my first R01.
I totally missed that DrdrA hit this up too…
I’m gonna get to a full post on the process soon but have no time for that now. I just want to briefly describe the steps from the perspective of a primarily in vivo pharmacologist:
1. find your target
2. get some idea of structure activity relationship (SAR) for your target
3. design a high-throughput screen (HTS) — preferably two of them (functional and binding)
4. start making and screening compounds
lots of time and frustration passes (maybe frustration is not the right word but it’s hurry up and wait to get to where you want to be)
5. validate hits from screening
6. validate hits some more, go back to SAR
7. scale up hits for in vivo test
8. do in vivo test (start screaming and yelling if it works)
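The steps above are essentially a funnel. To put toy numbers on it, here is a minimal sketch (all pass rates are invented for illustration; real campaigns vary wildly):

```python
# Toy model of the discovery funnel in the steps above. Every rate here
# is a made-up illustrative number, not data from any real campaign.
def funnel(n_screened, stages):
    """Apply successive pass rates; return compounds surviving each stage."""
    counts = []
    n = n_screened
    for name, pass_rate in stages:
        n = round(n * pass_rate)
        counts.append((name, n))
    return counts

stages = [
    ("primary HTS hits",         0.005),  # ~0.5% primary hit rate
    ("confirmed in both assays", 0.3),
    ("survive SAR follow-up",    0.1),
    ("scaled up for in vivo",    0.2),
]

for name, n in funnel(1_000_000, stages):
    print(f"{name}: {n}")
# primary HTS hits: 5000
# confirmed in both assays: 1500
# survive SAR follow-up: 150
# scaled up for in vivo: 30
```

A million compounds in, a few dozen out for the in vivo test at step 8 — which is why step 8 working merits the screaming and yelling.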
Number 8 would have been me (and a very important turbo grad student) today.
I am a happy man… (and that new Arcade Fire album is pretty good too)
More on all of this later.