Personalized Medicine Education and Advocacy

Thought leadership in personalized medicine



Advancing the Promise of Personalized Medicine With Liquid Biopsies and Analysis of ctDNA

Guest Blog
by John Beeler, Ph.D., Vice President, Corporate and Business Development, Inivata


The promise of personalized medicine gained momentum with the publication of the human genome more than 13 years ago. Enthusiasm grew over the potential to decode the genetic basis of disease and enable efficient utilization of a new generation of genomically targeted therapies. Precision medicine, whereby genomic information is integrated into clinical decision-making, is intended to realize a more personalized approach: treating the right patient with the right drug at the right time.

Molecular technologies can now identify genomic alterations with high sensitivity and specificity and guide the use of therapies targeting altered genes, including EGFR, BRAF, ALK and others. Despite these clear successes, a growing chorus of skeptics contends that applying these targeted therapies to the broader population harboring the respective genomic alterations is nothing more than hype. According to these skeptics, one of the primary reasons that precision oncology medicine is an illusion is the lack of data from randomized clinical trials supporting the use of genomically guided therapies in more diverse tumor populations. However, several factors have contributed to the paucity of data supporting advances in precision oncology medicine, particularly in patients with advanced-stage disease.

The lack of tumor tissue available for molecular profiling is a primary barrier to more robust clinical data. In some patients, poor performance status precludes the invasive procedure required to obtain a tissue specimen. In other cases, tumors are inaccessible to biopsy (e.g., bone metastases), and even when a specimen is obtained, the limited amount of biopsy material can be insufficient for molecular profiling. According to published reports, approximately one-third of advanced-stage cancer biopsies deliver tissue specimens that are of poor quality or contain insufficient tumor material for molecular analysis.

Tissue-based biopsies are further constrained by spatial and temporal limitations that may misrepresent the heterogeneous nature of the malignant growth, leading to treatment decisions based on an inaccurate molecular diagnosis. Finally, tissue biopsies are not conducive to serial sampling and thus cannot monitor the molecular evolution of a tumor as it progresses. The failure to obtain a high-quality tissue specimen that accurately reflects the complete tumor biology may be a contributing factor to the lack of data supporting the realization of precision oncology medicine.

Fortunately, we now find ourselves at a potential inflection point for the field of personalized medicine. Recent advances in the application of “liquid biopsies,” and the potential to harness molecular information from circulating tumor DNA (ctDNA) with the convenience of a simple blood draw, offer a “game-changer.” Analysis of ctDNA represents a new generation of molecular applications capable of producing data that was previously unavailable, thereby helping to deliver on the full promise of health care that is both more precise and more personal.

Cell-free DNA was first identified more than 60 years ago by Mandel and Métais. Advances in genomics and molecular methods now allow its analysis with unprecedented sensitivity and specificity, expanding the range of liquid biopsy applications that can impact the major aspects of a patient’s care. Analysis of ctDNA can identify genetic alterations that enable therapy selection, quantitatively monitor treatment progress, including disease recurrence, via serial sampling, and detect new resistance mutations as they emerge. This liquid biopsy approach has the potential to revolutionize cancer care and to mitigate or resolve many of the limitations inherent in current tissue-based standard protocols for providing a broad molecular profile.
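To make the serial-sampling idea concrete, below is a minimal sketch in Python of how rising variant allele fractions (VAFs) across successive blood draws might flag a candidate emerging resistance mutation. This is an illustration only, not Inivata’s pipeline; the variants, VAF values and thresholds are hypothetical.

```python
# Illustrative sketch only (not Inivata's pipeline): flag candidate emerging
# resistance mutations from variant allele fractions (VAFs) measured across
# serial ctDNA blood draws. Variants, values and thresholds are invented.

SERIAL_VAFS = {
    # variant: VAF (%) at successive draws, oldest first
    "EGFR T790M": [0.0, 0.1, 0.9, 2.4],  # rising -> candidate resistance
    "EGFR L858R": [5.2, 3.1, 1.0, 0.4],  # falling -> responding to therapy
}

def rising_variants(vafs_by_variant, min_gain=0.5, min_final=1.0):
    """Return variants whose VAF rose by >= min_gain points and ended >= min_final %."""
    flagged = []
    for variant, vafs in vafs_by_variant.items():
        if len(vafs) >= 2 and vafs[-1] - vafs[0] >= min_gain and vafs[-1] >= min_final:
            flagged.append(variant)
    return flagged

for variant in rising_variants(SERIAL_VAFS):
    print(f"Candidate emerging resistance mutation: {variant}")
```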

This is particularly relevant in non-small cell lung cancer (NSCLC), where a significant number of patients with advanced NSCLC are not receiving molecular testing for first-line therapy and even fewer receive a molecular profile at disease progression following first-line therapy. The opportunity to provide a molecular profile when one is not otherwise obtainable, and to deliver valuable molecular information that impacts clinical decision-making, offers to help fulfill the promise of personalized medicine.

Despite the current euphoria surrounding the liquid biopsy approach and analysis of ctDNA, significant work remains to generate data illustrating the benefits of ctDNA. These data are necessary to address the limitations of current tissue-based testing, to drive adoption and utilization of this innovative approach, and to build trust within the regulatory and reimbursement landscape. At Inivata we believe it is vital to get this aspect correct, most importantly for cancer patients, to fulfill the promise of personalized medicine.



An Infrastructure for Innovation: How the 21st Century Cures Bill Established a More Favorable Landscape for Personalized Medicine in 2017

by Daryl Pritchard, Ph.D., Vice President, Science Policy


Barack Obama’s administration was clearly committed to the advancement of personalized medicine.  In addition to launching the Precision Medicine Initiative and the Cancer Moonshot effort, during his tenure President Obama regularly described personalized medicine as the future of health care.

“I want the country that eliminated polio and mapped the human genome to lead a new era of medicine, one that delivers the right treatment at the right time,” he said in his 2015 State of the Union Address.

The election of Donald Trump as the 45th President has led to a great deal of uncertainty about the future of personalized medicine and the fate of President Obama’s signature personalized medicine programs.

Fortunately, Congress has confirmed its support for personalized medicine by passing the 21st Century Cures Act.  The law, designed to accelerate the pace of biomedical innovation, a goal that President-elect Trump has expressed interest in, provides continued momentum for personalized medicine. The House of Representatives voted 392 – 26 in favor of the legislation one week before the Senate passed the law 94 – 5. President Obama is expected to sign it.

“The 21st Century Cures bill supports personalized medicine,” PMC President Edward Abrahams said. “The Precision Medicine Initiative, the Cancer Moonshot and speedier access to innovative therapies based on molecular pathways, in particular, will all contribute to a healthier nation.”

Among other things, the bill:

  • Authorized $4.8 billion in funding over 10 years for programs at the National Institutes of Health (NIH) that include the All of Us Research Program and the Cancer Moonshot Research Program, as well as $500 million for FDA to implement provisions to improve innovation
  • Required FDA to make the patient experience a more central part of the drug development process
  • Established a review pathway at FDA for biomarkers and other drug development tools
  • Included provisions related to FDA’s oversight of diagnostics, albeit without addressing the longstanding debate on the regulation of laboratory-developed tests
  • Modernized clinical trial design and evidence development as it relates to the consideration of real-world data and other topics
  • Required FDA to pilot one or more inter-center institute(s) to help develop and implement processes for coordination of activities in major disease areas between the drug, biologics and device centers
  • Enhanced the country’s capacity to deliver personalized medicine through improvements and incentives in health information technology.

The law is not perfect. Policymakers have pointed out, for example, that the amount of NIH funding is half of what was proposed in the 2015 version, and that the programs are authorized rather than appropriated, thereby not guaranteeing that they will be funded. Others object to using the Affordable Care Act’s prevention funds to pay for Cures provisions.  Furthermore, although proponents claim the law’s new measures will not weaken FDA’s regulatory oversight, some critics disagree, especially regarding regenerative medicine.

But despite these concerns, the 21st Century Cures Act has provided a more favorable setting for personalized medicine amid ongoing uncertainty under a new administration.

The Personalized Medicine Coalition applauds Congress for passing the legislation and delivering it to the President, who has indicated his strong support for its provisions.  By allocating resources for personalized medicine programs and encouraging biomedical innovation, the 21st Century Cures Act will help drive personalized medicine’s enormous potential for patients and the health system in a future that otherwise remains uncertain.



A Chancellor’s Tale: Transforming Academic Medicine

Guest Blog
by Ralph Snyderman, M.D., James B. Duke Professor of Medicine, Chancellor Emeritus, Duke University


A Chancellor’s Tale: Transforming Academic Medicine is a personal and intimate story of my 15-year journey as Duke University’s Chancellor for Health Affairs during a time of major upheaval in medicine. The story, I hope, will help demonstrate the importance of planning, leadership and organizational change to the advancement of the personalized medicine paradigm.

My experience as a scientist and a physician prior to assuming my role as chancellor did little to prepare me to deal with a highly complex and entrenched institution that, unbeknownst to itself, was in need of disruptive change. The story describes the path the institution and I took during my tenure as chancellor, during which time the Duke University Medical Center became known for innovations in medicine and the conception of personalized health care.

As CEO of the Duke University Health System, I saw that despite our delivering outstanding, state-of-the-art care, treatments were generally directed towards the reversal of episodes of late-stage disease.  By 2000, anticipating the power of genomics and associated advances in technology, my colleagues and I began to envision an entirely new approach to care.  Rather than being reactive to disease, health care could be proactive, predictive, preventive and personalized.  As health and disease are a consequence of one’s genetics and environmental exposures over time, the availability of technologies to quantify health risks, track disease progression and identify specific disease mechanisms could be a game-changer for how care is delivered.  Rather than starting with a disease manifestation and working back, clinicians could, in conjunction with their patient, anticipate disease risks and work to mitigate them and to treat them precisely when needed.  As a consequence of this thinking, my colleagues and I conceived of an entirely new approach to health care and in doing so, laid the foundation for personalized health care.

A Chancellor’s Tale tells the story of the major transformation of Duke’s academic enterprise along with the concepts that resulted in the creation of care delivery models for personalized, proactive, patient-driven care. The book describes the difficulties of making change in a complex academic institution including what worked, what went wrong and lessons learned.  My hope is that the personalized medicine community will find the stories interesting and my learning experiences useful.

The book is available for purchase at https://www.dukeupress.edu/a-chancellors-tale.

Special offer: Use coupon code E16SNYDR to save 30 percent on the hardcover edition when you order from dukeupress.edu.



It’s Time to Protect Patient Care and Rethink the Way We Define and Assess Clinical Utility in Molecular Diagnostics

Guest Blog
by Elaine Lyon, Ph.D., Medical Director of Genetics, Genomics and Pharmacogenomics at ARUP Laboratories, Professor of Pathology at the University of Utah School of Medicine, Co-chair of the FEND task force, Senior author of the FEND publication


Recent advances in genomic medicine continue to provide many new opportunities to improve our modern health care system. However, before we can truly realize the full promise of precision medicine, we need a more practical and patient-centered approach for evaluating the clinical usefulness of these types of molecular testing procedures. Current models for clinical utility evaluation clearly fall short of ensuring the best care and usefulness for patients, their families, their providers, and the health care system. While personalized medicine is driving health care progress today, this restricted definition of clinical utility is putting on the brakes.

Since the new molecular pathology Current Procedural Terminology (CPT) codes were implemented, the roles of clinical validity and clinical utility have been the subject of intense discussion. Many stakeholders have adopted very narrow definitions that do not address all the important applications, including diagnosis, prognosis, risk assessment, prediction of future disease, and monitoring and selection of therapies.

The ongoing shift of payer expectations from a “reasonable and necessary” standard to a “demonstrable clinical utility” requirement imposes a difficult and unrealistic expectation on laboratories. Because the new requirement was created with therapeutic products in mind, applying the same standards to a test designed to establish a diagnosis for an inherited condition is difficult. Even if the test performs flawlessly, the patient may never be “cured” or “treated”; symptoms may only be managed. In addition, many of these procedures are for diseases so rare that a statistically valid study would be almost impossible, or would at least take many years to complete. Ultimately, we need to capture evidence for the clinical utility of these procedures outside of a traditional randomized controlled trial setting. To achieve maximum benefit, we need to recognize that any individual test result is an intermediate outcome that relies on proper clinical interpretation and utilization in context for that specific patient.

For example, if a patient with breast cancer tests negative for specific BRCA1/2 mutations, the physician may move forward with a specific drug therapy based on this information; however, the very same test may be used for someone who does not have breast cancer but is being tested because of a strong family history of the disease. In that case, the test serves as a screening tool before the onset of disease. The utility really depends on how the physician uses the information.

I currently co-chair the Association for Molecular Pathology (AMP) Framework for the Evidence Needed to Demonstrate (FEND) Clinical Utility Task Force, which was formed two years ago to address these specific challenges. The task force seeks to represent the views of the more than 2,300 AMP members, who are fully embedded in the various disciplines of molecular diagnostics, including infectious diseases, inherited conditions and oncology. AMP’s members include individuals from academic and community medical centers, government and industry, including pathologist and doctoral scientist laboratory directors, basic and translational scientists, technologists and trainees.

The FEND task force recently authored a new report published in The Journal of Molecular Diagnostics that establishes a new standard for clinical utility of molecular diagnostics for inherited diseases and cancer. One of our early goals was to have a peer-reviewed publication that could help start the next wave of discussions with all key stakeholders. In the report, we recommend a broad, patient-centered definition that takes into account the different ways that a test might be used, with an understanding that the test results may indicate follow-on activities that are not as simple as merely determining which drug to prescribe, or at what dose. Our inclusive approach utilizes a modified ACCE model and emphasizes that a clinical test result’s utility depends on the context in which it is used to classify a patient’s disease or disorder and/or guide management. We were very careful to make recommendations that can be extended to additional applications of molecular testing.

We believe our recommendations provide a reasonable and feasible path forward that puts patients and their families at the center of evaluating clinical utility for molecular testing procedures. We look forward to continued stakeholder engagement to further advance clinical genomics so that we can begin to realize the full promise of personalized medicine.

To read the full-text, free report, please visit http://dx.doi.org/10.1016/j.jmoldx.2016.05.007.



Can We Assess the Value of Personalized Medicine in Treating Cancer?

Guest Blog
from Dan Leonard, M.A., President, National Pharmaceutical Council


Although there is a bright spotlight on the field of personalized medicine thanks to the Obama administration’s Cancer Moonshot and Precision Medicine initiatives, there are real concerns about how targeted medicines will be considered in value assessment frameworks, which are geared toward evaluating treatments for a population rather than individual patients.

This week, National Pharmaceutical Council (NPC) President Dan Leonard sat down with Amy M. Miller, Ph.D., Executive Vice President, PMC, to discuss these issues.

Dan Leonard (DL): Amy, thank you for joining me to talk about personalized medicine and value frameworks. Tell us a little about personalized medicine and its impact on patient care, especially in light of ongoing debates about costs, coverage and value.

Amy Miller (AM): Thank you for this opportunity. Personalized medicine is an evolving field in which physicians use diagnostic tests to determine which medical treatments will work best for each patient. By combining the data from those tests with an individual’s medical history, circumstances and values, health care providers can develop targeted treatment and prevention plans. This concept challenges how health care products and services are discovered, developed, regulated, covered, paid for and delivered in the clinic. Therefore, it is no surprise that personalized medicine challenges how value assessments are conducted.

DL: The basis for personalized medicine is that every patient is unique and will respond to treatments differently, something NPC also has demonstrated in our research. How can a value assessment framework take that important concept into consideration?  

AM: Fortunately for value assessment framework designers, many organizations have published suggestions for them. It is important for the value assessment questions to accurately reflect available data. Furthermore, when looking at disease areas with targeted therapeutics, value assessment frameworks must look at the full complement of clinical tools, including the diagnostic test or tests and the clinical outcome differences between a targeted and non-targeted treatment approach.

DL: The Institute for Clinical and Economic Review (ICER) is currently evaluating treatments for non-small-cell lung cancer (NSCLC), a disease that is unique to each patient and requires targeted treatment. They’ll be hosting a public meeting about that evaluation on October 20. What kinds of factors should ICER be considering as part of its review of NSCLC treatments?

AM: NSCLC is a disease where personalized medicine has transformed care over the last decade, and patients have seen tremendous improvements in morbidity, mortality and quality of life as a result. We’ve seen evolution in treatments targeting many driver mutations in the tumor. We’ve seen evolution in the types of diagnostics used to select those treatments. I think ICER needs to carefully consider all that we’ve learned over the last decade using EGFR-mutation therapeutics, value those therapeutics from a patient’s perspective, and consider how we incentivize investments in PD-1 therapeutics so we can capitalize on their tremendous potential in a similar way. PD-1 inhibitors work quite well for a subset of patients, but we do not yet know how best to use them. We will figure it out, and when we do, it will likely have tremendous implications for patients, giving them longer, better lives than comparators like chemotherapy can offer.

DL:  What have you heard from your member companies about ICER’s review of NSCLC treatments? Are there ways that ICER could better integrate those comments, as well as patient input?

AM: ICER concurrently opened a public comment period where stakeholders could suggest process changes.  One example of a process change that ICER could make now is to include a more representative group of stakeholders on its advisory council. Furthermore, historically, ICER has discussed the implications of its decision after a vote on the value assessment. Simply listening to the public before voting would reassure stakeholders that ICER values their input.

DL: You had asked ICER to provide a longer comment period for NSCLC, which they granted. Do you think that was enough time? How could they improve the comment process in the future?

AM: PMC is a coalition representing pharmaceutical manufacturers, diagnostic companies, patients, providers, payers and other stakeholders, and the perspectives these groups share are often valuable to those who seek input on their work. However, for a coalition to engage a group like ICER, our members must consider the issues from their own perspectives before joining a conversation about how to support a concept. ICER’s comment period (30 days and, in this case, including holidays) did not allow us to engage. For a document of this magnitude and import, we suggested allowing a 60- to 90-day comment period, which conforms to other organizations’ timelines. We hope ICER will consider that.

DL: These are exciting times for personalized medicine, with rapid developments in understanding this science and finding new cures. With these developments, how could — or should — ICER update its reports to remain current?

AM: In the case of NSCLC, new clinical evidence was published during ICER’s comment period, and for PD-1 inhibitors, I think we’ll see new data coming in more than once a year going forward. Because ICER is not alone in the value assessment trade, I’d suggest that the field of value assessment coordinate and come up with best practices for updating their findings. That way, innovators, patients and providers will have a timetable to engage with the update.

DL: Thanks so much for speaking with me.

For more about value framework assessments, check out NPC’s Guiding Practices for Patient-Centered Value Assessment and Current Landscape: Value Assessment Frameworks and watch the video from the organization’s conference, Assessing Value: Promise and Pitfalls.



Beyond the Barriers: Deconstructing the Regulatory and Reimbursement Hurdles for Companion Diagnostics

Guest Blog
by Alessandra Cesano, M.D., Ph.D., Chief Medical Officer, NanoString Technologies


On November 16, a group of diagnostic industry representatives will convene to discuss the regulatory and reimbursement hurdles for personalized medicine diagnostics during the 12th Annual Personalized Medicine Conference at Harvard Medical School. These kinds of discussions have never been more important.

The push for personalized medicine came to the national forefront in January when President Obama announced the National Cancer Moonshot. This bold initiative aims to accelerate the discovery of personalized treatments tailored to an individual’s genetic profile and/or the tumor’s biology. Companion diagnostics (CDx) play an important role in precision medicine, as they are designed to enrich care toward the patients who will benefit from a “companion” drug by helping to characterize the disease’s biology and match it with the mechanism of action of a specific drug.

Because many of the new drugs in the pipeline work on a specific genetic or biological target that is present in some, but not all, patients with a certain kind of cancer, there is a need for an accompanying test to determine if the drug will or will not have a benefit for a specific patient. These tests may also point to which patients are at immediate risk for harmful side effects.

The promise of companion diagnostics is not under debate, but there are regulatory and reimbursement hurdles that need to be overcome before these tests achieve widespread acceptance and deliver on the promise. First is the cost. The development of a companion diagnostic requires a significant investment, along the lines of tens of millions of dollars. However, presently, the value that companion diagnostics bring to the health care system, specifically in terms of improving patient outcomes and effectiveness of the care delivered, is not appropriately recognized by the reimbursement system.

Supporting the development of CDx tests will require significant investment up front, but once adopted they will help the health care system realize considerable cost and time savings. Currently, the health care system favors the “one-size-fits-all” approach to drug delivery, despite the fact that the subgroup of patients benefiting from treatment is on average only 20 – 30 percent. By using CDx, we can enrich the patient population for which a specific drug is effective, resulting in better outcomes and significantly reduced costs for the health care system. Designing and executing appropriate clinical trials to demonstrate the “clinical utility” and cost-effectiveness of selection biomarkers in each particular clinical setting will be an important part of the evidence needed to obtain test reimbursement.
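To make the enrichment arithmetic concrete, the back-of-the-envelope sketch below works through hypothetical numbers: a 25 percent responder rate, a companion diagnostic that is 90 percent sensitive and 90 percent specific, and invented drug and test costs. None of these figures come from the post; they only illustrate how CDx-guided treatment can raise the share of treated patients who respond and lower the cost per responder.

```python
# Back-of-the-envelope sketch with hypothetical numbers: how biomarker
# enrichment via a companion diagnostic (CDx) changes responders per dollar.
# All figures below are illustrative assumptions, not data from the post.

PATIENTS = 1000
PREVALENCE = 0.25    # fraction of patients whose tumors respond to the drug
SENSITIVITY = 0.90   # CDx correctly flags responders
SPECIFICITY = 0.90   # CDx correctly excludes non-responders
DRUG_COST = 50_000   # cost of a course of therapy (hypothetical)
TEST_COST = 1_000    # cost of the CDx test (hypothetical)

# One-size-fits-all: treat everyone, regardless of biology.
responders_all = PATIENTS * PREVALENCE
cost_all = PATIENTS * DRUG_COST

# CDx-guided: test everyone, treat only test-positive patients.
true_pos = PATIENTS * PREVALENCE * SENSITIVITY
false_pos = PATIENTS * (1 - PREVALENCE) * (1 - SPECIFICITY)
treated = true_pos + false_pos
cost_cdx = PATIENTS * TEST_COST + treated * DRUG_COST

print(f"Treat-all:  {responders_all:.0f} responders, "
      f"${cost_all / responders_all:,.0f} per responder")
print(f"CDx-guided: {true_pos:.0f} responders, "
      f"${cost_cdx / true_pos:,.0f} per responder")
```

With these assumed inputs, treating all 1,000 patients yields 250 responders at $200,000 per responder, while CDx-guided treatment yields 225 responders at roughly $71,000 per responder, which is the enrichment effect the paragraph above describes.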

Another obstacle involves how CDx and personalized medicine have impacted the regulatory landscape. While the regulatory path to a CDx is relatively well defined by regulatory guidelines, a gray area remains around when and how “generic” versions of those tests (i.e., laboratory-developed tests, or LDTs, which are analytically developed by a single laboratory without a clinical validation requirement) will be regulated by the FDA. This uncertainty has negatively affected the investor community’s appetite for diagnostic companies.

In light of these hurdles, a promising solution for driving CDx adoption is partnership between diagnostic manufacturers and the pharmaceutical industry. Biopharmaceutical companies are developing many therapies and as a result need to enroll patients in many hundreds of clinical trials. The clinical development of their drugs will demand enrichment strategies based on biomarkers. In fact, I can see a future in which drugs that come to market without biomarkers are the exception rather than the rule.

The biopharmaceutical industry is going to work with in vitro diagnostic companies that have the technology and the capabilities to both analyze the tumor’s biology and build an in vitro diagnostic product that can win clearance from the FDA. Ideally, these tests will be able to cut across whole drug classes (targets). Because of the limited supply of biotissue, the tests will need to be as holistic as possible. Thus, multiplexed assays that allow investigation of multiple aspects of biology in a single sample would be preferred.

Because of their unique ability to “match-make” a tumor’s biology with the right therapeutic choice, companion diagnostics are important for the efficient and effective treatment of patients. If the National Cancer Moonshot and other initiatives are going to be successful, there needs to be an alignment among all the stakeholders — including regulators, payers, pharmaceutical companies, physicians, patients and advocacy groups — recognizing the value of companion diagnostics in making “precision medicine” not just a promise but finally a reality.




Think the Diagnostics Community Doesn’t Agree on Anything When It Comes to LDT Regulation? Think Again.

by Amy M. Miller, Ph.D., Executive Vice President, Personalized Medicine Coalition


As we enter a new round of discussions about laboratory-developed test (LDT) regulation, it is helpful to review where we’ve been. Many stakeholders have weighed in on the topic, and GenomeWeb’s comprehensive summary of the different proposals offers a useful comparison of them. At first glance, there appears to be little or no consensus. But listening to the community reveals more.

At the beginning of this year, I moderated a series of discussions on potential legislative solutions with representatives from the entire LDT community, including but not limited to those with an interest in personalized medicine. In short, the community agrees that a legislative solution to LDT regulation should take a risk-based approach and:

  1. Protect public health labs. Public health labs should be protected by any regulatory paradigm, which means that sentinel labs must be able to develop, deploy and use rapidly developed diagnostics to address critical public health needs.
  2. Allow flexibility and efficiency when managing modifications. As diagnostic device developers have long argued, the way modifications are managed by a regulatory system should be flexible and efficient to allow diagnostic tests to evolve with the clinical science that underpins them.
  3. Mitigate regulatory burdens for government and industry. To reduce regulatory burdens on government and industry, regulatory agencies should, when appropriate, recognize when certain safeguards are already in place. These mitigation strategies can help regulatory bodies keep pace with the rapid evolution of personalized medicine diagnostic testing.
  4. Design a grandfathering system for tests already on the market. When FDA published its draft framework for regulating LDTs, we had no clear appreciation of the number of tests that might be captured by it. While we still do not have an exact count, tech firm NextGxDx estimates that there are nearly 70,000 personalized medicine diagnostics offered by about 300 labs with another eight to 10 coming to market each business day. To manage such an enormous workload, a regulatory agency must design a grandfathering system that will allow most tests to remain on the market unless there is a compelling reason to remove them.
  5. Ensure regulatory burdens reflect testing volumes. For example, diagnostics designed for rare and unmet needs should be given careful and different consideration by any regulatory agency to ensure that tests continue to be developed for micro-markets.
  6. Accept valid scientific evidence for regulatory purposes, even if that evidence does not include data from a randomized controlled trial. Personalized medicine has challenged how health care products and services are conceived, developed, regulated, covered, paid for and used by physicians.  Evidentiary requirements for regulatory review must also evolve. The community agrees that for diagnostics, valid scientific evidence should be acceptable for regulatory review, even when that evidence does not include data from randomized controlled trials.

Understanding these points and the logic behind them is essential to progress on this topic. Fortunately, we are not starting from scratch.

While there is no consensus about which regulatory agency should manage LDT regulation, the House Energy and Commerce Committee has released a draft legislative solution designed to address the community concerns outlined above. That proposal builds on a rich dialogue that began when the first genetic tests entered the market and continued in 2007, when Senators Ted Kennedy (D-MA) and Gordon Smith (R-OR) released a bipartisan proposal for FDA to actively regulate lab tests. Soon after, then-Senator Obama and Senator Richard Burr (R-NC) released a draft legislative proposal that was not quite as burdensome. Finally, in 2010, Senator Orrin Hatch (R-UT) outlined a novel path at FDA for diagnostics, which opened up the conversation about the true difference between diagnostics and the medical devices on which their regulation had been modeled.

These historical efforts stimulated conversation, and what was learned has influenced how we consider the topic. The Committee’s next draft will be improved by stakeholder input, and we can expect the Senate to build on it in turn.

Addressing this dual path to market and the difficulties inherent in such a regulatory paradigm is essential to the field.  Once this debate is settled, we can all concentrate on the biggest issues in personalized medicine: coverage, payment and use of personalized medicine diagnostics to dramatically improve the care patients receive. Guided by these areas of agreement and a rich historical dialogue, we may be able to focus on those conversations sooner than we think.