Why aren’t there quality controls for antibody research?
Many people know antibodies as our bodies’ safeguard against infections. They keep us healthy by recognizing surface features on bacteria or viruses and alerting the immune system: attack! But they also play a crucial role in life science research. Their ability to recognize exactly one protein makes them ideal precision tools for detection, quantification and, in some cases, even treatments.
Of course, that’s only true if the antibodies do what they’re supposed to. Poor-quality biological reagents, antibodies chief among them, are a major driver of scientific studies that can’t be reproduced.
Reproducibility is one of the gold standards of sound science: to be considered trustworthy, a lab’s results have to be repeatable by others. Yet an estimated 36 percent of irreproducible preclinical research can be traced to bad biological reagents, accounting for roughly a third of the staggering $28 billion wasted every year on irreproducible preclinical studies.
The quality of everyday research reagents, an issue that seems fundamentally mundane, impedes the progress of medicine and costs every one of us dearly, financially as well as personally. Currently, no official standards govern the quality of commercially available antibodies, and no third-party institution evaluates them before they go on sale. Because unreliable preclinical studies, driven in part by poor-quality antibodies, distort entire scientific fields and endanger the patients who depend on future health care, distributors of commercial antibodies must act by imposing stricter quality standards.
Two studies that comprehensively investigated the quality of commonly used antibodies tell a disconcerting story. In the first, published in 2010 in the journal Nature Structural & Molecular Biology by the group of Jason D. Lieb at the University of North Carolina at Chapel Hill, researchers tested 246 antibodies commonly used in the field of DNA-binding protein modifications. Of these, 25 percent bound multiple targets instead of the single one they were supposed to recognize. Four antibodies were perfectly specific — to the wrong protein. In the second study, published in 2008 in Molecular & Cellular Proteomics by the laboratory of Mathias Uhlén at the Royal Institute of Technology in Stockholm, Sweden, researchers tested around 6,000 of the most commonly used commercial antibodies. The results were dismal: less than half of the antibodies were sufficiently specific for their supposed target protein.
Eleftherios Diamandis, a cancer researcher at the Mount Sinai Hospital in Toronto, Canada, is one researcher who fell prey to this widespread issue. He just wanted to contribute to the fight against pancreatic cancer. As in practically every type of cancer, early detection is key, as it can increase the five-year survival rate from 5 percent up to 20 percent. What better way to detect cancer early than by a biomarker, a molecule whose presence in unusual amounts indicates cancer, measurable by a quick blood test? He and his collaborators found a promising protein from the pancreas, CUB and zona pellucida-like domains protein 1, or CUZD1, which seemed to reliably indicate the presence of pancreatic cancer.
To determine elevated levels of CUZD1, they used a commercial testing kit, an enzyme-linked immunosorbent assay, or ELISA. Briefly put, in an ELISA, an antibody specific for the target protein, such as CUZD1, binds to it; then a second antibody (specific for the first antibody) coupled to an enzyme is added. The amount of enzyme is then measured, for example through a color change induced by an enzymatic reaction, and from that readout the initial amount of CUZD1 can be inferred. Commercial availability from a major distributor of laboratory reagents typically signals standardization and reliability, which is why such kits usually are preferred to homemade solutions.
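The last step of that readout, inferring the amount of protein from the measured signal, can be sketched in a few lines of code. This is a minimal illustration, not the kit’s actual procedure: all concentrations and absorbance values below are hypothetical, and the assay assumes a standard curve built from samples with known protein amounts, against which an unknown sample is interpolated.

```python
def interpolate(x, xs, ys):
    """Linearly interpolate x against the points (xs, ys); xs must be increasing."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("absorbance outside the range of the standard curve")

# Hypothetical standard curve: known CUZD1 concentrations (ng/mL)
# and the color-change absorbance each one produced in the assay.
standard_conc = [0.0, 1.0, 2.0, 4.0, 8.0]
standard_abs  = [0.05, 0.25, 0.45, 0.85, 1.65]

# Absorbance measured for an unknown sample; read its concentration
# off the standard curve.
sample_abs = 0.65
sample_conc = interpolate(sample_abs, standard_abs, standard_conc)
print(f"estimated concentration: {sample_conc:.2f} ng/mL")
# prints: estimated concentration: 3.00 ng/mL
```

The sketch also makes the essay’s point concrete: every number it reports is only as trustworthy as the antibody generating the signal. If the antibody binds a different protein, as in the CUZD1 case, the curve and the readout are internally consistent yet measure the wrong thing.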
But something was rotten in the state of biomarker detection. When Diamandis and his team investigated whether their ELISA truly recognized their protein of interest (something most researchers would never attempt to do with a purportedly reliable commercial product), they found the horrible truth: Their expensive product did not detect CUZD1 at all. Instead, it measured the cancer protein CA125.
No wonder it presented a good biomarker for cancer — it literally was cancer. A poor-quality antibody in the commercial ELISA kit cost the Diamandis lab around $500,000 and many months of hard work. Worst of all, disaster was only a hair’s breadth away: Had they not questioned their results, the obvious next step would have been a clinical trial to test whether early detection with this novel biomarker improved patient outcomes. People could have died from the error, because the assay would have flagged cancer not at an early stage but only later, once the cancer protein CA125 was present.
Tackling an abstract problem such as the reproducibility of preclinical research is notoriously hard, as actionable procedures are not immediately obvious. One approach is to apply practical solutions to the much more tangible root causes of this problem. Poor-quality antibodies are one of these root causes. So what can we do about it?
Every link in the chain of the great human endeavor of research has to play its part. Manufacturers of antibodies need to implement more stringent quality standards, testing their antibodies with multiple methods. These standards ideally should be set by an independent organization, at least partly made up of scientists, that certifies antibodies before they reach the lab bench. Funding organizations should require this certification before a project starts, and editors as well as reviewers of scientific journals should require it when the results are submitted.
We should demand an improvement in the groundwork for clinical trials, starting with mandatory quality controls of commercial antibodies. We all entrust our lives and health to these quality standards.
This essay was originally published in August on Massive Science.