January 2013

Q&A with Jeremy Nicholson


Jeremy Nicholson of Imperial College London is heading up the U.K.'s government-funded phenome center, which will undertake population-scale metabolic phenotyping and profiling. Nicholson is one of the pioneers of metabonomics, the systemic study of how the small molecules, or metabolites, present in a cell, tissue or organ respond to physiological or pathological stimulation or to genetic modification. The MRC-NIHR Phenome Centre is backed by the U.K. Medical Research Council and the National Institute for Health Research. Two instrument manufacturers, Waters Corp. and Bruker BioSpin, are funding partners in the venture and are developing new technologies for high-throughput metabolic analysis with Imperial College London.

Below is the interview ASBMB’s science writer, Rajendrani Mukhopadhyay, conducted with Nicholson in late 2012. The interview has been edited for length and clarity.

Is the new MRC-NIHR Phenome Centre opening in January on the site of the London Olympics antidoping laboratory as was reported in the media during the 2012 Olympic Games?
For operational reasons and in agreement with all of our funding partners, we decided to relocate much of the instrumentation used in the Olympic 2012 drug testing laboratory to one of the major Imperial College medical campuses, the Hammersmith Hospital. With the center at Imperial, we will have better high-speed data links with the Imperial computer networks as well as bring the facility into the middle of our new translational medicine and biobanking centers. So there are technical and scientific advantages to the Imperial location (for the MRC-NIHR Phenome Centre). We are undertaking a major lab refurbishment to allow the center to open in early 2013.

What’s the impetus for building a large-scale phenome center?
I’ve been working in metabolic phenotyping and metabolic profiling for the best part of 30 years. I’ve been thinking about trying to build a national center for about seven or eight years to broaden and extend the research capacity and capability [for metabolic phenotyping and profiling] to other universities in the U.K., even outside of the U.K. The infrastructure of the Olympic Games 2012 antidoping laboratory offered a window of opportunity. They had 45 mass spectrometers of various types working in parallel. They were doing up to 300 forensic assays for different drugs, metabolites, and other markers of abuse with a turnaround time of about six hours. There is no analytical laboratory in the world with that sort of capacity and throughput coupled with that level of forensic quality. The Olympics test lab was developed and run by David Cowan of King’s College, and King’s remains a strategic partner in the new phenome center.

Two surgeons with two of the new 600 MHz NMR spectrometers at the Imperial Clinical Phenome Center. Image provided by Jeremy Nicholson. 

What will the MRC-NIHR Phenome Centre do?
It has multiple functions. It will function as a research and development laboratory. The industrial investment is to develop the next generation of high-throughput technology so that we can go for faster, cheaper and more efficient analyses of complex biological mixtures.

We will also do population-level human phenotyping in partnership with multiple research groups in epidemiological research but will especially serve the National Institute for Health Research's biomedical research centers, such as those at Oxford, Cambridge, University College London, Imperial and King's. From a scientific point of view, we introduced the concept of the metabolomewide association study, or MWAS, in 2008 (1). The idea is that you can measure thousands of metabolic variables in urine or plasma samples taken from epidemiological studies, and you can regress those variables against disease risk factors, such as blood pressure, body mass index and visceral fat. In epidemiology, we try to find metabolic markers associated with the risk of getting a disease. In some ways, we are trying to rewrite the handbook of molecular epidemiology, because we're giving epidemiologists a much wider range of analytical metrics than they ever had before to describe physiological variation.
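[Editor's note: The MWAS idea Nicholson describes can be sketched in a few lines. The following is a hypothetical illustration on synthetic data, not the centre's actual pipeline: each metabolite is tested for association (here via Pearson correlation) against a single risk factor such as body mass index, and the thousands of resulting tests are screened with a Bonferroni correction.]

```python
# Illustrative MWAS-style scan on synthetic data (hypothetical sketch).
# One metabolite is given a planted association with BMI; the scan
# should recover it after multiple-testing correction.
import math
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 500                                # subjects, metabolites

bmi = rng.normal(27.0, 4.0, n)                 # risk factor
x = rng.normal(0.0, 1.0, (n, m))               # metabolite intensity matrix
x[:, 0] += 0.15 * bmi                          # plant one true association

def corr_pvalue(a, b):
    """Two-sided p-value for Pearson r via the Fisher z approximation."""
    r = np.corrcoef(a, b)[0, 1]
    z = math.atanh(r) * math.sqrt(len(a) - 3)
    return math.erfc(abs(z) / math.sqrt(2))

p = np.array([corr_pvalue(bmi, x[:, j]) for j in range(m)])
alpha = 0.05 / m                               # Bonferroni threshold
hits = np.flatnonzero(p < alpha)
print("candidate metabolites:", hits)
```

In a real MWAS, the regression would also adjust for confounders such as age, sex and population structure, and a false-discovery-rate procedure is often preferred to the conservative Bonferroni cut.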

In an ideal situation, you would have a genomewide association study plus a metabolomewide association study. You can look at those together statistically, and that's one of the things we'll be doing as part of the phenome center. We will be looking at populations that have been quite extensively genotyped and doing complementary metabolic profiling. Genes and environment combine to create your individual risk of getting a disease, and this also works at the population level. Your genes are like a hand of cards you are dealt when you are born. How you play them through your life, through lifestyle and environment, determines whether you win the game or not. The environment is the most important factor that we might be able to control in our lives, but we need to understand how the interactions work at the physiological level.

Another very important part of the metabolic phenotype is the contribution from the gut microbiome, another exploding area of biological science. The microbes inside us have an enormous influence on our biochemistry. It’s only in the last few years that we’ve discovered really how important those bugs are, in terms of our disease risk factor probabilities and how they are connected with many noninfectious diseases. ... Again, we’re getting a new set of information that previously hasn’t been available to epidemiologists — the output of the gut microbial activity in people who are sampled in epidemiological studies.

By the way, one of the things the MRC also wanted us to do is to set up a research training center linked to the phenome center. This will be to train the next generation of clinician-scientists who will be using this technology. We’ll be running new medical mass spectrometry/nuclear magnetic resonance courses so we have a national training capability for these technologies generously funded by our industrial partners.

How do you envision integrating all these different types of data and handling the sheer volume of data?
We plan to have a big computational cluster to handle the data volume. We are doing a lot of work, for instance, on using graphics processing unit calculations for ultrahigh-speed data analysis. If you go to get an Xbox or a Sony PlayStation, they have unbelievably fast graphics processors to deal with real-time simulations. For instance, a (central processing unit) that fits inside a (personal computer) might have four or eight cores or even 12 cores these days. A (graphics processing unit) has up to 3,000 cores in it, so if you can program it in the right way, the performance is outstanding. Some of my group have been working with other groups around the world stringing together 15, 20, 50 graphics processors. If you can program them correctly, you can do data processing and visualization, like an IBM Deep Blue, at one one-hundredth of the cost.
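[Editor's note: The core-count point rests on data parallelism, and a minimal sketch (my illustration, not the centre's software) makes it concrete: GPUs pay off when one operation is applied across thousands of independent data elements at once. NumPy's vectorized array operations express the same pattern on a CPU.]

```python
# Data-parallel processing sketch: apply one "kernel" (baseline
# correction plus total-intensity normalization) to thousands of
# synthetic spectra simultaneously, with no per-spectrum loop.
import numpy as np

rng = np.random.default_rng(1)
spectra = rng.random((3000, 1024))     # 3,000 synthetic spectra

baseline = spectra.min(axis=1, keepdims=True)      # per-spectrum baseline
corrected = spectra - baseline
normalised = corrected / corrected.sum(axis=1, keepdims=True)

print(normalised.shape)                # one normalized row per spectrum
```

On a GPU, each of the 3,000 rows could be handled by its own group of cores; the same independence is what makes the work easy to spread across linked processors.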

How did you get to where you are today?
I was an inorganic biochemist in the early 1980s working on complexation and dynamics of potentially toxic metals like mercury and cadmium in biological systems. In one set of experiments, we were trying to measure the kinetics of transport of metals into blood cells. We were using nuclear magnetic resonance spectroscopy. In red blood cells, for instance, you've got glutathione at an intracellular concentration of about 2 millimolar, and as certain metals get into the cell, they complex with the glutathione and the NMR signal starts to change. By measuring the rate of change of signal in the red blood cells, you could measure the rate of absorption of these toxic metals directly. I wanted to do the complexation study in a realistic situation, so we added the metals to whole blood rather than a suspension of red blood cells. Of course, there were lots of extra signals from the blood plasma metabolites. As soon as I saw this, the penny dropped. I thought, "This is a clinical diagnostic tool." Doing NMR of plasma allows you to get a very rapid fingerprint of plasma biochemistry in just a few minutes with no sample pretreatment, so it was easy to try out lots of experiments in just a few weeks. I ran a sample of my own urine in an NMR experiment — even more signals! Then I popped some paracetamol [Editor's note: Paracetamol is known as acetaminophen in the U.S.] and looked at the signals in my urine a few hours later and was able to follow its metabolism and excretion. I was driving my wife completely crazy, because I was doing experiments on myself and it was disrupting the household. I decided to fast completely for 48 hours and look at my urine every few hours. I watched my ketosis develop in near real time. She watched my temper get worse.

My focus then shifted to this new field of spectroscopic diagnostics. It was how many metabolites could you discover? How many diseases can you diagnose using this approach? That’s how metabonomics was born for me.

Apparently there is going to be another phenome center for clinical applications at St. Mary’s Hospital, which is part of Imperial College London.
The Imperial clinical phenome center is for patient phenotyping. It is funded by consolidating multiple grants from the NIHR, the Gates Foundation, the [U.S. National Institutes of Health] and drug companies. It’s a laboratory for patient-journey phenotyping (2). When you go into hospital, there’s a work-up procedure, the doctor decides what’s wrong with you, what treatment you’re going to have, and there is an outcome — recovery or possibly not. That is a patient journey. We’re putting together all the technologies we have into every stage of the patient journey and developing new tools for diagnosis and monitoring response to therapy.

The mathematics and analytical chemistry are the same [for the two phenome centers], but the information delivery timescales are different. In epidemiology studies, there are a large number of samples, and the analyses take a long time. In clinical situations, you’ve got a smaller number of samples, but you have to analyze and model them faster because the doctor needs the information to make decisions. [The two] present different sorts of modeling challenges. You also have to think about how you visualize data so a doctor can make sense of them and make a useful decision. Most systems biology information generated by genomics, proteomics and metabonomics is actually completely useless to doctors, because it cannot be visualized or presented in a medical framework. They need something very simple to help make a decision. Note that I say “help.” We are not trying to replace medical decision making, merely augment it. The data need to be built into a decision tree so you take complex data and link them to the therapeutic framework.

Sounds like you’ve covered all your bases.
We have a core research facility at Imperial, a new clinical phenome center, a new population phenome center and others. I think we have all the bits to do a good job at making systems medicine real. The translational task is still an enormous one but very worthwhile. Ask me how we are doing in three years’ time!
 

REFERENCES
  1. Holmes, E. et al. Nature 453, 396–400 (2008).
  2. Nicholson, J. K. et al. Nature 491, 384–392 (2012).
     
     

Rajendrani Mukhopadhyay (rmukhopadhyay@asbmb.org) is the senior science writer for ASBMB Today and the technical editor for The Journal of Biological Chemistry. Follow her on Twitter (www.twitter.com/rajmukhop), and read her ASBMB Today blog, Wild Types.
 
 

