Novel antioxidant makes old blood vessels seem young again

May 31, 2018

Older adults who take a novel antioxidant that specifically targets cellular powerhouses, or mitochondria, see age-related vascular changes reverse by the equivalent of 15 to 20 years within six weeks, according to new University of Colorado Boulder research.

The study, published this week in the American Heart Association journal Hypertension, adds to a growing body of evidence suggesting pharmaceutical-grade nutritional supplements, or nutraceuticals, could play an important role in preventing heart disease, the nation's No. 1 killer. It also resurrects the notion that oral antioxidants, which have been broadly dismissed as ineffective in recent years, could yield measurable health benefits if properly targeted, the authors say.

“This is the first clinical trial to assess the impact of a mitochondrial-specific antioxidant on vascular function in humans,” said lead author Matthew Rossman, a postdoctoral researcher in the department of integrative physiology. “It suggests that therapies like this may hold real promise for reducing the risk of age-related cardiovascular disease.”

For the study, Rossman and senior author Doug Seals, director of the Integrative Physiology of Aging Laboratory, recruited 20 healthy men and women age 60 to 79 from the Boulder area.

Half took 20 milligrams per day of a supplement called MitoQ, made by chemically altering the naturally occurring antioxidant Coenzyme Q10 to make it cling to mitochondria inside cells.

The other half took a placebo.

After six weeks, researchers assessed how well the lining of blood vessels, or the endothelium, functioned by measuring how much subjects' arteries dilated with increased blood flow.

Then, after a two-week “wash out” period of taking nothing, the two groups switched, with the placebo group taking the supplement, and vice versa. The tests were repeated.
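The crossover comparison described above can be sketched as a paired within-subject analysis; all numbers below are invented for illustration and are not data from the study:

```python
# Toy sketch of a crossover analysis: each subject is measured once on
# placebo and once on supplement, so the treatment effect is estimated
# from within-subject (paired) differences. All values are invented
# for illustration, not taken from the study.
from statistics import mean

# Hypothetical flow-mediated dilation (%) for six subjects.
placebo    = [4.0, 3.6, 4.4, 3.9, 4.1, 3.7]
supplement = [5.6, 5.2, 6.0, 5.5, 5.9, 5.3]

diffs = [s - p for s, p in zip(supplement, placebo)]
effect = mean(diffs)                      # average within-subject change
rel_improvement = effect / mean(placebo)  # relative improvement vs. placebo

print(f"mean change: {effect:.2f} points, relative: {rel_improvement:.0%}")
```

Because each subject serves as their own control, this design needs far fewer participants than a parallel-group trial to detect the same effect.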

The researchers found that when taking the supplement, dilation of subjects' arteries improved by 42 percent, making their arteries, at least by that measure, look like those of someone 15 to 20 years younger. An improvement of that magnitude, if sustained, is associated with about a 13 percent reduction in heart disease, Rossman said. The study also showed that the improvement in dilation was due to a reduction in oxidative stress.

In participants who, under placebo conditions, had stiffer arteries, supplementation was associated with reduced stiffness.

Blood vessels grow stiff with age largely as a result of oxidative stress, the excess production of metabolic byproducts called free radicals, which can damage the endothelium and impair its function. During youth, bodies produce enough antioxidants to quench those free radicals. But with age, the balance tips, as mitochondria and other cellular processes produce excess free radicals and the body's antioxidant defenses can't keep up, Rossman said.

Oral antioxidant supplements like vitamin C and vitamin E fell out of favor after studies showed them to be ineffective.

“This study breathes new life into the discredited theory that supplementing the diet with antioxidants can improve health,” said Seals. “It suggests that targeting a specific source, the mitochondria, may be a better way to reduce oxidative stress and improve cardiovascular health with aging.”

More information: Matthew J. Rossman et al, Chronic Supplementation With a Mitochondrial Antioxidant (MitoQ) Improves Vascular Function in Healthy Older Adults, Hypertension (2018).

This article was originally published by: https://medicalxpress.com/news/2018-04-antioxidant-blood-vessels-young.html


DNA Robots Target Cancer

May 17, 2018

DNA nanorobots that travel the bloodstream, find tumors, and dispense a protein that causes blood clotting trigger the death of cancer cells in mice, according to a study published February 12 in Nature Biotechnology.

The authors have “demonstrated that it’s indeed possible to do site-specific drug delivery using biocompatible, biodegradable, DNA-based bionanorobots for cancer therapeutics,” says Suresh Neethirajan, a bioengineer at the University of Guelph in Ontario, Canada, who did not participate in the study. “It’s a combination of diagnosing the biomarkers on the surface of the cancer itself and also, upon recognizing that, delivering the specific drug to be able to treat it.”

The international team of researchers started with the goal of “finding a path to design nanorobots that can be applied to treatment of cancer in human[s],” writes coauthor Hao Yan of Arizona State University in an email to The Scientist.

Yan and colleagues first generated a self-assembling, rectangular, DNA-origami sheet to which they linked thrombin, an enzyme responsible for blood clotting. Then, they used DNA fasteners to join the long edges of the rectangle, resulting in a tubular nanorobot with thrombin on the inside. The authors designed the fasteners to dissociate when they bind nucleolin—a protein specific to the surface of tumor blood-vessel cells—at which point, the tube opens and exposes its cargo.
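The trigger logic described above, a tube that stays closed until its fasteners bind nucleolin and then opens to expose its cargo, can be caricatured as a tiny state machine. This is a conceptual sketch only, not a model of the underlying DNA chemistry:

```python
# Conceptual state model of the nanorobot's trigger logic: the DNA
# fasteners hold the tube closed until they bind nucleolin, at which
# point the tube opens and exposes its thrombin cargo. This is only a
# caricature of the mechanism, not a simulation of DNA origami.
class DnaNanorobot:
    def __init__(self, cargo="thrombin"):
        self.cargo = cargo
        self.open = False

    def encounter(self, surface_protein: str):
        """Fasteners dissociate only on the tumor-vessel marker nucleolin."""
        if surface_protein == "nucleolin":
            self.open = True
        return self.cargo if self.open else None

bot = DnaNanorobot()
print(bot.encounter("albumin"))    # None: healthy vessel, tube stays shut
print(bot.encounter("nucleolin"))  # exposes the thrombin cargo at the tumor
```

The point of the design is that target recognition and payload release are one and the same event, which is why clotting occurred only in tumor vessels.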

Nanorobot design. Thrombin is represented in pink and nucleolin in blue. (S. Li et al., Nature Biotechnology, 2018)

The scientists next injected the nanorobots intravenously into nude mice with human breast cancer tumors. The robots grabbed onto vascular cells at tumor sites and caused extensive blood clots in the tumors’ vessels within 48 hours, but did not cause clotting elsewhere in the animals’ bodies. These blood clots led to tumor-cell necrosis, resulting in smaller tumors and a better chance for survival compared to control mice. Yan’s team also found that nanorobot treatment increased survival and led to smaller tumors in a mouse model of melanoma, and in mice with xenografts of human ovarian cancer cells.

The authors are “looking at specific binding to tumor cells, which is basically the holy grail for . . . cancer therapy,” says the University of Tennessee’s Scott Lenaghan, who was not involved in the work. The next step is to investigate any damage—such as undetected clots or immune-system responses—in the host organism, he says, as well as to determine how much thrombin is actually delivered at the tumor sites.

The authors showed in the study that the nanorobots didn’t cause clotting in major tissues in miniature pigs, which satisfies some safety concerns, but Yan agrees that more work is needed. “We are interested in looking further into the practicalities of this work in mouse models,” he writes.

Going from “a mouse model to humans is a huge step,” says Mauro Ferrari, a biomedical engineer at Houston Methodist Hospital and Weill Cornell Medical College who did not participate in the study. It’s not yet clear whether targeting nucleolin and delivering thrombin will be clinically relevant, he says, “but the breakthrough aspect is [that] this is a platform. They can use a similar approach for other things, which is really exciting. It’s got big implications.”

S. Li et al., “A DNA nanorobot functions as a cancer therapeutic in response to a molecular trigger in vivo,” Nature Biotechnology, doi:10.1038/nbt.4071, 2018.

This article was originally published by: https://www.the-scientist.com/?articles.view/articleNo/51717/title/DNA-Robots-Target-Cancer/

Longevity industry systematized for first time

April 02, 2018

UK aging research foundation produces roadmap for the emerging longevity industry in a series of reports to be published throughout the year

The Biogerontology Research Foundation has embarked on a year-long mission to summarise in a single document the various emerging technologies and industries which can be brought to bear on aging, healthy longevity, and everything in between, as part of a joint project between The Global Longevity Consortium, consisting of the Biogerontology Research Foundation, Deep Knowledge Life Sciences, Aging Analytics Agency and the Longevity.International platform.

GLOBAL LONGEVITY SCIENCE LANDSCAPE 2017

For scientists, policy makers, regulators, government officials, investors and other stakeholders, understanding of the field of human longevity remains fragmented: it has yet to be systematized by any coherent framework, and no analytical agency has profiled the field and industry as a whole. The consortium behind this series of reports hopes they will come to be used as a sort of Encyclopedia Britannica and specialized Wikipedia of the emerging longevity industry, serving as the foundation upon which the first global framework of the industry will be built, given the significant growth projected over the coming years.

Experts on the subject of human longevity, who tend to arrive at the subject from disparate fields, have failed even to agree on a likely order of magnitude for future human lifespan. Those who foresee a 100-year average in the near future are considered extreme optimists by some, while others have mooted the possibility of indefinite life extension through comprehensive repair and maintenance. As such, the longevity industry has often defied real understanding and has proved a complex and abstract topic in the minds of many, investors and governments in particular.

The first of these landmark reports, entitled ‘The Science of Longevity’, standing at almost 800 pages in length, seeks to rectify this.

Part 1 of the report ties together the progress threads of the constituent industries into a coherent narrative, mapping the intersection of biomedical gerontology, regenerative medicine, precision medicine and artificial intelligence, and offering a brief history and snapshot of each. Part 2 lists and individually profiles 650 longevity-focused entities, including research hubs, non-profit organizations, leading scientists, conferences, databases, books and journals. Infographics are used to illustrate where research institutions stand in relation to each other with regard to their disruptive potential: companies and institutions specialising in palliative technologies are placed at the periphery of circular diagrams, whereas those involved with more comprehensive, preventative interventions, such as rejuvenation biotechnologies and gene therapies, are depicted as central.

In this report great care was taken to visualize the complex and interconnected landscape of this field via state-of-the-art infographics so as to distill the many players, scientific subsectors and technologies within the field of geroscience into common understanding. Their hope was to create a comprehensive yet readily understandable view of the entire field and its many players, to serve a similar function that Mendeleev's periodic table did for the field of chemistry. While these are static infographics in the reports, their creators plan to create complementary online versions that are interactive and filterable, and to convene a series of experts to analyze these infographics and continually update them as the geroscience landscape shifts. Similar strategies are employed in Volume II to illustrate the many companies and investors within the longevity industry.

These reports currently profile the top 100 entities in each of the categories, but in producing them, analysts found that the majority of these categories have significantly more than 100 entities associated with them. One of their main conclusions upon finishing the report is that the longevity industry is indeed of substantial size, with many industry and academic players, but that it remains relatively fragmented, lacking a sufficient degree of inter-organization collaboration and industry-academic partnerships. The group plans to expand these lists in follow-up volumes so as to give a more comprehensive overview of the individual companies, investors, books, journals, conferences and scientists that serve as the foundation of this emerging industry.

Since these reports are being spearheaded by the UK’s oldest biomedical charity focused on healthspan extension, the Biogerontology Research Foundation is publishing them online, freely available to the public. While the main focus of this series of reports is an analytical report on the emerging longevity industry, the reports still delve deeply into the science of longevity, and Volume I is dedicated exclusively to an overview of the history, present and future state of ageing research from a scientific perspective.

The consortium of organizations behind these reports anticipates them to be the first comprehensive analytical reports on the emerging longevity industry to date. They hope to increase awareness and interest from investors, scientists, medical personnel, regulators, policy makers, government officials and the public-at-large in both the longevity industry and geroscience proper. To that end, the reports distill the complex network of knowledge underlying the industry and field into easily and intuitively comprehensible infographics, while providing a comprehensive backbone of chapters and profiles on the various companies, investors, organizations, labs, institutions, books, journals and conferences for those inclined toward a deeper dive into the vast foundation of the longevity industry and the field of geroscience.

It is hoped that this report will assist others in visualising the present longevity landscape and elucidating the various industry players and components. Volume 2, The Business of Longevity, which at approximately 500 pages aims to be as comprehensive as Volume 1, is set to be published shortly thereafter. It will focus on the companies and investors working in the field of precision preventive medicine with a focus on healthy longevity, a sector that will need to grow quickly to avert the impending crisis of global aging demographics.

These reports will be followed up throughout the coming year with Volume 3 (“Special Case Studies”), featuring 10 special case studies on specific longevity industry sectors, such as cell therapies, gene therapies, AI for biomarkers of aging, and more, Volume 4 (“Novel Longevity Financial System”), profiling how various corporations, pension funds, investment funds and governments will cooperate within the next decade to avoid the crisis of demographic aging, and Volume 5 (“Region Case Studies”), profiling the longevity industry in specific geographic regions.

These reports are, however, only the beginning, and will ultimately serve as a launching pad for an even more ambitious project: Longevity.International, an online platform that will house these reports and also serve as a virtual ecosystem for uniting and incentivizing the many fragmented stakeholders of the longevity industry, including scientists, entrepreneurs, investors, policy makers, regulators and government officials, in the common goal of healthspan extension and aversion of the looming demographic aging and Silver Tsunami crisis. The platform will use knowledge crowdsourcing of top-tier experts to connect scientists with entrepreneurs, entrepreneurs with investors, and investors with policy-makers and regulators. All stakeholders will be able to aggregate and integrate intelligence and expertise from one another using modern IT technologies for knowledge platforms of this type, and to be rewarded for their services.

THE SCIENCE OF PROGRESSIVE MEDICINE 2017 LANDSCAPE

The consortium behind these reports is interested in collaborating with contributors, institutional partners and scientific reviewers to assist with the ongoing production of these reports, to enhance their outreach capabilities and ultimately to enhance their overall impact upon the scientific and business communities operating within the longevity industry. It can be reached at info@longevity.international.

This article was originally published by:
http://bg-rf.org.uk/press/longevity-industry-systematized-for-first-time

Prosthetic memory system successful in humans

April 02, 2018

Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) have demonstrated the successful implementation of a prosthetic system that uses a person’s own memory patterns to facilitate the brain’s ability to encode and recall memory.

In the pilot study, published today in the Journal of Neural Engineering, participants' short-term memory performance showed a 35 to 37 percent improvement over baseline measurements.

“This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss,” said the study’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.

The study focused on improving episodic memory, which is the most common type of memory loss in people with Alzheimer’s disease, stroke and head injury. Episodic memory is information that is new and useful for a short period of time, such as where you parked your car on any given day. Reference memory is information that is held and used for a long time, such as what is learned in school.

The researchers enrolled epilepsy patients at Wake Forest Baptist who were participating in a diagnostic brain-mapping procedure that used surgically implanted electrodes placed in various parts of the brain to pinpoint the origin of the patients' seizures. In eight of those patients, the researchers used the team's electronic prosthetic system, based on a multi-input multi-output (MIMO) nonlinear mathematical model, to influence the firing patterns of multiple neurons in the hippocampus, a part of the brain involved in making new memories.

First, they recorded the neural patterns or ‘codes’ while the study participants were performing a computerized memory task. The patients were shown a simple image, such as a color block, and after a brief delay where the screen was blanked, were then asked to identify the initial image out of four or five on the screen.

The USC team led by biomedical engineers Theodore Berger, Ph.D., and Dong Song, Ph.D., analyzed the recordings from the correct responses and synthesized a MIMO-based code for correct memory performance. The Wake Forest Baptist team played back that code to the patients while they performed the image recall task. In this test, the patients’ episodic memory performance showed a 37 percent improvement over baseline.
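The published MIMO model is nonlinear and fit to each patient's own recordings. As a rough illustration of the underlying idea, learning a mapping from the firing of many input neurons to the firing of many output neurons, then using that mapping to synthesize target patterns, here is a minimal linear least-squares stand-in on synthetic data; it is not the study's actual method:

```python
# Minimal linear stand-in for the multi-input multi-output (MIMO) idea:
# learn a mapping from the firing of "input" hippocampal neurons to the
# firing of "output" neurons, then use it to synthesize expected output
# patterns. The real model is nonlinear; this sketch uses synthetic data.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_in, n_out = 200, 8, 5
X = rng.poisson(3.0, size=(n_trials, n_in)).astype(float)  # input spike counts
W_true = rng.normal(size=(n_in, n_out))                    # unknown coupling
Y = X @ W_true + rng.normal(scale=0.1, size=(n_trials, n_out))

# Fit the MIMO weights from the recorded trials (ordinary least squares).
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Play back": predict the output pattern expected for a new input pattern.
x_new = rng.poisson(3.0, size=n_in).astype(float)
y_pred = x_new @ W_hat
print(y_pred.round(2))
```

In the study, the analogous synthesized pattern was delivered as stimulation; here the fitted weights simply reproduce the input-output relationship present in the training trials.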

In a second test, participants were shown a highly distinctive photographic image, followed by a short delay, and asked to identify the first photo out of four or five others on the screen. The memory trials were repeated with different images while the neural patterns were recorded during the testing process to identify and deliver correct-answer codes.

After another longer delay, Hampson’s team showed the participants sets of three pictures at a time with both an original and new photos included in the sets, and asked the patients to identify the original photos, which had been seen up to 75 minutes earlier. When stimulated with the correct-answer codes, study participants showed a 35 percent improvement in memory over baseline.

“We showed that we could tap into a patient’s own memory content, reinforce it and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.

“To date we’ve been trying to determine whether we can improve the memory skill people still have. In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail.”

The current study is built on more than 20 years of preclinical research on memory codes led by Sam Deadwyler, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, along with Hampson, Berger and Song. The preclinical work applied the same type of stimulation to restore and facilitate memory in animal models using the MIMO system, which was developed at USC.

The research was funded by the U.S. Defense Advanced Research Projects Agency (DARPA).

Story Source:

Materials provided by Wake Forest Baptist Medical Center. Note: Content may be edited for style and length.

This article was originally published by:
https://www.sciencedaily.com/releases/2018/03/180327194350.htm

Suicide molecules kill any cancer cell

January 05, 2018

CHICAGO – Small RNA molecules originally developed as a tool to study gene function trigger a mechanism hidden in every cell that forces the cell to commit suicide, reports a new Northwestern Medicine study, the first to identify molecules that trigger a fail-safe mechanism that may protect us from cancer.

The mechanism — RNA suicide molecules — can potentially be developed into a novel form of cancer therapy, the study authors said.

Cancer cells treated with the RNA molecules never become resistant to them because they simultaneously eliminate multiple genes that cancer cells need for survival.

“It’s like committing suicide by stabbing yourself, shooting yourself and jumping off a building all at the same time,” said Northwestern scientist and lead study author Marcus Peter. “You cannot survive.”

The inability of cancer cells to develop resistance to the molecules is a first, Peter said.

“This could be a major breakthrough,” noted Peter, the Tom D. Spies Professor of Cancer Metabolism at Northwestern University Feinberg School of Medicine and a member of the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.  

Peter and his team discovered sequences in the human genome that when converted into small double-stranded RNA molecules trigger what they believe to be an ancient kill switch in cells to prevent cancer. He has been searching for the phantom molecules with this activity for eight years.

“We think this is how multicellular organisms eliminated cancer before the development of the adaptive immune system, which is about 500 million years old,” he said. “It could be a fail safe that forces rogue cells to commit suicide. We believe it is active in every cell protecting us from cancer.”

This study, published Oct. 24 in eLife, and two other new Northwestern studies in Oncotarget and Cell Cycle by the Peter group describe the discovery of the assassin molecules present in multiple human genes and their powerful effect on cancer in mice.

Looking back hundreds of millions of years

Why are these molecules so powerful?

“Ever since life became multicellular, which could be more than 2 billion years ago, it had to deal with preventing or fighting cancer,” Peter said. “So nature must have developed a fail safe mechanism to prevent cancer or fight it the moment it forms. Otherwise, we wouldn’t still be here.”

Thus began his search for natural molecules coded in the genome that kill cancer.

“We knew they would be very hard to find,” Peter said. “The kill mechanism would only be active in a single cell the moment it becomes cancerous. It was a needle in a haystack.”

But he found them by testing a class of small RNAs called small interfering (si)RNAs, which scientists use to suppress gene activity. siRNAs are designed by taking short sequences of the gene to be targeted and converting them into double-stranded RNA. When introduced into cells, these siRNAs suppress the expression of the gene they are derived from.

Peter found that a large number of these small RNAs derived from certain genes did not, as expected, only suppress the gene they were designed against. They also killed all cancer cells. His team discovered these special sequences are distributed throughout the human genome, embedded in multiple genes, as shown in the study in Cell Cycle.
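The design step described here, taking short sequences of the target gene and converting them into double-stranded RNA, can be sketched as tiling 21-nucleotide windows along the target mRNA and pairing each sense sequence with its reverse complement as the antisense (guide) strand. Real design pipelines add many further filters (GC content, seed-region off-target checks, thermodynamic asymmetry); this shows only the core idea, with an invented example sequence:

```python
# Sketch of the core siRNA design step: take 21-nt windows of the target
# sequence and pair each with its reverse complement (the antisense
# "guide" strand). Real pipelines add many filters on top of this.
COMPLEMENT = str.maketrans("ACGU", "UGCA")

def sirna_candidates(mrna: str, length: int = 21):
    """Yield (sense, antisense) 21-mers tiled along an mRNA sequence."""
    for i in range(len(mrna) - length + 1):
        sense = mrna[i:i + length]
        antisense = sense.translate(COMPLEMENT)[::-1]  # reverse complement
        yield sense, antisense

# Tiny invented example sequence, for illustration only.
mrna = "AUGGCUAGCUAGGCUUACGGAUCCGAUU"
pairs = list(sirna_candidates(mrna))
print(len(pairs), pairs[0])
```

Each such duplex, once loaded into the cell's silencing machinery, directs degradation of transcripts matching the sense sequence, which is why siRNAs derived from a gene normally silence that same gene.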

When converted to siRNAs, these sequences all act as highly trained super assassins. They kill the cells by simultaneously eliminating the genes required for cell survival. By taking out these survivor genes, the assassin molecules activate multiple cell death pathways in parallel.

The small RNA assassin molecules trigger a mechanism Peter calls DISE, for Death Induced by Survival gene Elimination.

Activating DISE in organisms with cancer might allow cancer cells to be eliminated. Peter’s group has evidence this form of cell death preferentially affects cancer cells with little effect on normal cells.

To test this in a treatment situation, Peter collaborated with Dr. Shad Thaxton, associate professor of urology at Feinberg, to deliver the assassin molecules via nanoparticles to mice bearing human ovarian cancer. The treatment strongly reduced tumor growth with no toxicity to the mice, reports the study in Oncotarget. Importantly, the tumors did not develop resistance to this form of cancer treatment. Peter and Thaxton are now refining the treatment to increase its efficacy.

Peter has long been frustrated with the lack of progress in solid cancer treatment.

“The problem is cancer cells are so diverse that even though the drugs, designed to target single cancer driving genes, often initially are effective, they eventually stop working and patients succumb to the disease,” Peter said. He thinks a number of cancer cell subsets are never really affected by most targeted anticancer drugs currently used.

Most of the advanced solid cancers such as brain, lung, pancreatic or ovarian cancer have not seen an improvement in survival, Peter said.

“If you had an aggressive, metastasizing form of the disease 50 years ago, you were busted back then and you are still busted today,” he said. “Improvements are often due to better detection methods and not to better treatments.”

Cancer scientists need to listen to nature more, Peter said. Immune therapy has been a success, he noted, because it is aimed at activating an anticancer mechanism that evolution developed. Unfortunately, few cancers respond to immune therapy and only a few patients with these cancers benefit, he said.

“Our research may be tapping into one of nature’s original kill switches, and we hope the impact will affect many cancers,” he said. “Our findings could be disruptive.”

Northwestern co-authors include first authors William Putzbach, Quan Q. Gao, and Monal Patel, and coauthors Ashley Haluck-Kangas, Elizabeth T. Bartom, Kwang-Youn A. Kim, Denise M. Scholtens, Jonathan C. Zhao and Andrea E. Murmann.

The research is funded by grants T32CA070085, T32CA009560, R50CA211271 and R35CA197450 from the National Cancer Institute of the National Institutes of Health.

This article was originally published by:
https://news.northwestern.edu/stories/2017/october/suicide-molecules-kill-any-cancer-cell/

Eugenics 2.0: We’re at the Dawn of Choosing Embryos by Health, Height, and More

November 18, 2017

Nathan Treff was diagnosed with type 1 diabetes at 24. It’s a disease that runs in families, but it has complex causes. More than one gene is involved. And the environment plays a role too.

So you don’t know who will get it. Treff’s grandfather had it, and lost a leg. But Treff’s three young kids are fine, so far. He’s crossing his fingers they won’t develop it later.

Now Treff, an in vitro fertilization specialist, is working on a radical way to change the odds. Using a combination of computer models and DNA tests, the startup company he’s working with, Genomic Prediction, thinks it has a way of predicting which IVF embryos in a laboratory dish would be most likely to develop type 1 diabetes or other complex diseases. Armed with such statistical scorecards, doctors and parents could huddle and choose to avoid embryos with failing grades.

IVF clinics already test the DNA of embryos to spot rare diseases, like cystic fibrosis, caused by defects in a single gene. But these “preimplantation” tests are poised for a dramatic leap forward as it becomes possible to peer more deeply at an embryo’s genome and create broad statistical forecasts about the person it would become.

The advance is occurring, say scientists, thanks to a growing flood of genetic data collected from large population studies. As statistical models known as predictors gobble up DNA and health information about hundreds of thousands of people, they’re getting more accurate at spotting the genetic patterns that foreshadow disease risk. But they have a controversial side, since the same techniques can be used to project the eventual height, weight, skin tone, and even intelligence of an IVF embryo.

In addition to Treff, who is the company’s chief scientific officer, the founders of Genomic Prediction are Stephen Hsu, a physicist who is vice president for research at Michigan State University, and Laurent Tellier, a Danish bioinformatician who is CEO. Both Hsu and Tellier have been closely involved with a project in China that aims to sequence the genomes of mathematical geniuses, hoping to shed light on the genetic basis of IQ.

Spotting outliers

The company’s plans rely on a tidal wave of new knowledge showing how small genetic differences can add up to put one person, but not another, at high odds for diabetes, a neurotic personality, or a taller or shorter height. Already, such “polygenic risk scores” are used in direct-to-consumer gene tests, such as reports from 23andMe that tell customers their genetic chance of being overweight.
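At its core, a polygenic risk score is just a weighted sum: for each variant, the number of risk alleles carried (0, 1, or 2) is multiplied by an effect weight estimated from population studies, and the products are added up. A minimal sketch, with invented variant IDs and weights rather than real association results:

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# Variant names and effect weights are invented for illustration; real
# scores use thousands to millions of variants with weights estimated
# from genome-wide association studies.
effect_weights = {       # per-allele effect (log-odds scale, hypothetical)
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
    "rs0000004": 0.08,
}

def polygenic_score(genotype: dict) -> float:
    """Sum of (risk-allele dosage 0/1/2) * (effect weight) over variants."""
    return sum(effect_weights[v] * genotype.get(v, 0) for v in effect_weights)

person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0, "rs0000004": 1}
print(round(polygenic_score(person), 3))  # 2*0.12 - 0.05 + 0.08 -> 0.27
```

The score itself says nothing deterministic about one individual; it only places them on a population risk curve, which is why its accuracy depends so heavily on the size of the training cohorts.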

For adults, risk scores are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.

“I remind my partners, ‘You know, if my parents had this test, I wouldn’t be here,’” says Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.

Genomic Prediction was founded this year and has raised funds from venture capitalists in Silicon Valley, though it declines to say who they are. Tellier, whose inspiration is the science fiction film Gattaca, says the company plans to offer reports to IVF doctors and parents identifying “outliers”—those embryos whose genetic scores put them at the wrong end of a statistical curve for disorders such as diabetes, late-life osteoporosis, schizophrenia, and dwarfism, depending on whether models for those problems prove accurate.

A days-old human embryo in an IVF clinic. Some cells can be removed to perform DNA tests. (dallasfertility.com)

The company’s concept, which it calls expanded preimplantation genetic testing, or ePGT, would effectively add a range of common disease risks to the menu of rare ones already available, which it also plans to test for. Its promotional material uses a picture of a mostly submerged iceberg to get the idea across. “We believe it will become a standard part of the IVF process,” says Tellier, just as a test for Down syndrome is a standard part of pregnancy.

Some experts contacted by MIT Technology Review said they believed it’s premature to introduce polygenic scoring technology into IVF clinics—though perhaps not by very much. Matthew Rabinowitz, CEO of the prenatal-testing company Natera, based in California, says he thinks predictions obtained today could be “largely misleading” because DNA models don’t function well enough. But Rabinowitz agrees that the technology is coming.

“You are not going to stop the modeling in genetics, and you are not going to stop people from accessing it,” he says. “It’s going to get better and better.”

Sharp questions

Testing embryos for disease risks, including risks for diseases that develop only late in life, is considered ethically acceptable by U.S. fertility doctors. But the new DNA scoring models mean parents might be able to choose their kids on the basis of traits like IQ or adult weight. That’s because, just like type 1 diabetes, these traits are the result of complex genetic influences the predictor algorithms are designed to find.

“It’s the camel’s nose under the tent. Because if you are doing it for something more serious, then it’s trivially easy to look for anything else,” says Michelle Meyer, a bioethicist at the Geisinger Health System who analyzes issues in reproductive genetics. “Here is the genomic dossier on each embryo. And you flip through the book.” Imagine picking the embryo most likely to get into Harvard like Mom, or to be tall like Dad.

For Genomic Prediction, a tiny startup based out of a tech incubator in New Jersey, such questions will be especially sharply drawn. That is because of Hsu’s long-standing interest in genetic selection for superior intelligence.

In 2014, Hsu authored an essay titled “Super-Intelligent Humans Are Coming,” in which he argued that selecting embryos for intelligence could boost the resulting child’s IQ by 15 points.

Genomic Prediction says it will only report diseases—that is, identify those embryos it thinks would develop into people with serious medical problems. Even so, on his blog and in public statements, Hsu has for years been developing a vision that goes far beyond that.

“Suppose I could tell you embryo four is going to be the tallest, embryo three is going to be the smartest, embryo two is going to be very antisocial. Suppose that level of granularity was available in the reports,” he told the conservative radio and YouTube personality Stefan Molyneux this spring. “That is the near-term future that we as a civilization face. This is going to be here.”

Measuring height

The fuel for the predictive models is a deluge of new data, most recently genetic readouts and medical records for 500,000 middle-aged Britons that were released in July by the U.K. Biobank, a national precision-medicine project in that country.

The data trove included, for each volunteer, a map of about 800,000 single-nucleotide polymorphisms, or SNPs—points where their DNA differs slightly from another person’s. The release caused a pell-mell rush by geneticists to update their calculations about exactly how much of human disease, or even routine behaviors like bread consumption, these genetic differences could explain.

Armed with the U.K. data, Hsu and Tellier claimed a breakthrough. For one easily measured trait, height, they used machine-learning techniques to build a strikingly accurate predictor. They reported that the model could, for the most part, predict people’s height from their DNA data to within three or four centimeters.

Height is currently the easiest trait to predict. It’s determined mostly by genes, and it’s always recorded in population databases. But Tellier says genetic databases are “rapidly approaching” the size needed to make accurate predictions about other human features, including risk for diseases whose true causes aren’t even known.

Tellier says Genomic Prediction will zero in on disease traits for which the predictors already perform fairly well, or will soon. Those include autoimmune disorders like the illness Treff suffers from. In those conditions, a smaller set of genes dominates the predictions, sometimes making them more reliable.

A report from Germany in 2014, for instance, found it was possible to distinguish fairly accurately, from a polygenic DNA score alone, between a person with type 1 diabetes and a person without it. While the scores aren’t perfectly accurate, consider how they might influence a prospective parent. On average, children of a man with type 1 diabetes have a one in 17 chance of developing the ailment. Picking the best of several embryos made in an IVF clinic, even with an error-prone predictor, could lower the odds.
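The intuition that even an error-prone predictor can lower those odds is easy to check with a toy liability-threshold simulation. Everything below (the normal liability model, the noise level, the number of embryos) is an illustrative assumption, not data from the German report:

```python
import random
from math import erf, sqrt

random.seed(0)

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def liability_threshold(risk):
    # find t with P(N(0,1) > t) == risk, by bisection
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 1 - norm_cdf(mid) > risk:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def simulate(n_embryos, base_risk=1/17, predictor_noise=1.0, trials=100_000):
    # each embryo gets a latent genetic liability ~ N(0,1); disease occurs
    # above a threshold calibrated to the baseline risk. The predictor sees
    # liability only through Gaussian noise, and the embryo with the lowest
    # *predicted* score is the one chosen.
    t = liability_threshold(base_risk)
    cases = 0
    for _ in range(trials):
        liabilities = [random.gauss(0, 1) for _ in range(n_embryos)]
        predicted = [x + random.gauss(0, predictor_noise) for x in liabilities]
        chosen = liabilities[predicted.index(min(predicted))]
        cases += chosen > t
    return cases / trials

print(simulate(1))  # no choice to make: close to the 1/17 baseline (~0.059)
print(simulate(5))  # picking the best of five cuts the risk substantially
```

Even with predictor noise as large as the genetic signal itself, selecting the best of five embryos drives the simulated risk well below the baseline, which is the statistical point the article is making.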

In the case of height, Genomic Prediction hopes to use the model to help identify embryos that would grow into adults shorter than 4’10”, the medical definition of dwarfism, says Tellier. There are many physical and psychological disadvantages to being so short. Eventually the company could also have the ability to identify intellectual problems, such as embryos with a predicted IQ of less than 70.

The company doesn’t intend to give out raw trait scores to parents, only to flag embryos likely to be abnormal. That is because the product has to be “ethically defensible,” says Hsu: “We would only reveal the negative outlier state. We don’t report, ‘This guy is going to be in the NBA.’”

Some scientists doubt the scores will prove useful at picking better people from IVF dishes. Even if they’re accurate on the average, for individuals there’s no guarantee of pinpoint precision. What’s more, environment has as big an impact on most traits as genes do. “There is a high probability that you will get it wrong—that would be my concern,” says Manuel Rivas, a professor at Stanford University who studies the genetics of Crohn’s disease. “If someone is using that information to make decisions about embryos, I don’t know what to make of it.”

Efforts to introduce this type of statistical scoring into reproduction have, in the past, drawn criticism. In 2013, 23andMe provoked outrage when it won a patent on the idea of drop-down menus parents could use to pick sperm or egg donors—say, to try to get a specific eye color. The company, funded by Google, quickly backpedaled.

But since then, polygenic scores have become a routine aspect of novelty DNA tests. A company called HumanCode sells a $199 test online that uses SNP scores to tell two people about how tall their kids might be. In the dairy cattle industry, polygenic tests are widely used to rate young animals for how much milk they’ll produce.

“At a broad level, our understanding of complex traits has evolved. It’s not that there are a few genes contributing to complex traits; it’s tens, or thousands, or even all genes,” says Meyer, the Geisinger professor. “That has led to polygenic risk scores. It’s many variants, each with small contributions of their own, but which have a significant contribution together. You add them up.” In his predictor for height, Hsu eventually made use of 20,000 variants to guess how tall each person was.
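Meyer’s “you add them up” description is literal: a polygenic score is an effect-weighted sum of allele counts. A minimal sketch, in which the SNP ids and per-allele effect sizes are invented for illustration:

```python
# Per-allele effect sizes (in centimeters of height) from a hypothetical
# GWAS; both the SNP ids and the numbers are invented for illustration.
EFFECTS = {"rs0001": 0.12, "rs0002": -0.08, "rs0003": 0.05}

def polygenic_score(genotype, effects):
    """genotype maps each SNP id to its count of effect alleles (0, 1 or 2);
    the score is simply the effect-weighted sum of those counts."""
    return sum(weight * genotype.get(snp, 0) for snp, weight in effects.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person, EFFECTS))  # 0.12*2 - 0.08*1 + 0.05*0 ≈ 0.16
```

A real height predictor like Hsu’s works the same way, just with some 20,000 variants instead of three.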

Measuring embryos

Around the world, a million couples undergo IVF each year; in the U.S., test-tube babies account for 1 percent of births. Preimplantation genetic diagnosis, or PGD, has been part of the technology since the 1990s. In that procedure, a few cells are plucked from a days-old embryo growing in a laboratory so they can be tested.

Until now, doctors have used PGD to detect embryos with major abnormalities, such as missing chromosomes, as well as those with “single gene” defects. Parents who carry the defective gene that causes Huntington’s disease, for instance, can use embryo tests to avoid having a child with the fatal brain ailment.

The obstacle to polygenic tests has been that with so few cells, it’s been difficult to get the broad, accurate view of an embryo’s genome necessary to perform the needed calculations. “It’s very hard to make reliable measurements on that little DNA,” says Rabinowitz, the Natera CEO.

Tellier says Genomic Prediction has developed an improved method for analyzing embryonic DNA, which he says will first be used to improve on traditional PGD, combining many single-gene tests into one. He says the same technique is what will permit it to collect polygenic scores on embryos, although the company did not describe the method in detail. But other scientists have already demonstrated ways to overcome the accuracy barrier.

In 2015, a team led by Rabinowitz and Jay Shendure of the University of Washington did it by sequencing in detail the genomes of two parents undergoing IVF. That let them infer the embryo’s genome sequence, even though the embryo test itself was no more accurate than before. When the babies were born, they found they’d been right.

“We do have the technology to reconstruct the genome of an embryo and create a polygenic model,” says Rabinowitz, whose publicly traded company is worth about $600 million, and who says he has been mulling whether to enter the embryo-scoring business. “The problem is that the models have not quite been ready for prime time.”

That’s because despite Hsu’s success with height, the scoring algorithms have significant limitations. One is that they’re built using data mostly from Northern Europeans. That means they may not be useful for people from Asia or Africa, where the pattern of SNPs is different, or for people of mixed ancestry. Even their performance for specific families of European background can’t be taken for granted unless the procedure is carefully tested in a clinical study, something that’s never been done, says Akash Kumar, a Stanford resident physician who was lead author of the Natera study.

Kumar, who treats young patients with rare disorders, says the genetic predictors raise some “big issues.” One is that the sheer amount of genetic data becoming available could make it temptingly easy to assess nonmedical traits. “We’ve seen such a crazy change in the number of people we are able to study,” he says. “Not many have schizophrenia, but they all have a height and a body-mass index. So the number of people you can use to build the trait models is much larger. It’s a very unique place to be, thinking what we should do with this technology.”

Smarter kids

This week, Genomic Prediction manned a booth at the annual meeting of the American Society for Reproductive Medicine. That organization, which represents fertility doctors and scientists, has previously said it thinks testing embryos for late-life conditions, like Alzheimer’s, would be “ethically justified.” It cited, among other reasons, the “reproductive liberty” of parents.

The society has been more ambivalent about choosing the sex of embryos (something that conventional PGD allows), leaving it to the discretion of doctors. Combined, the society’s positions seem to open the door to any kind of measurement, perhaps so long as the test is justified for a medical reason.

Hsu has previously said he thinks intelligence is “the most interesting phenotype,” or trait, of all. But when he tried his predictor to see what it could say about how far along in school the 500,000 British subjects from the U.K. Biobank had gotten (years of schooling is a proxy for IQ), he found that DNA couldn’t predict it nearly as well as it could predict height.

Yet DNA did explain some of the difference. Daniel Benjamin, a geno-economist at the University of Southern California, says that for large populations, gene scores are already as predictive of educational attainment as whether someone grew up in a rich or poor family. He adds that the accuracy of the scores has been steadily improving. Scoring embryos for high IQ, however, would be “premature” and “ethically contentious,” he says.

Hsu’s prediction is that “billionaires and Silicon Valley types” will be the early adopters of embryo selection technology, becoming among the first “to do IVF even though they don’t need IVF.” As they start producing fewer unhealthy children, and more exceptional ones, the rest of society could follow suit.

“I fully predict it will be possible,” says Hsu of selecting embryos with higher IQ scores. “But we’ve said that we as a company are not going to do it. It’s a difficult issue, like nuclear weapons or gene editing. There will be some future debate over whether this should be legal, or made illegal. Countries will have referendums on it.”

This article was originally published by:
https://www.technologyreview.com/s/609204/eugenics-20-were-at-the-dawn-of-choosing-embryos-by-health-height-and-more/

What Does It Cost to Create a Cancer Drug? Less Than You’d Think

October 18, 2017

What does it really cost to bring a drug to market?

The question is central to the debate over rising health care costs and appropriate drug pricing. President Trump campaigned on promises to lower the costs of drugs.

But numbers have been hard to come by. For years, the standard figure has been supplied by researchers at the Tufts Center for the Study of Drug Development: $2.7 billion each, in 2017 dollars.

Yet a new study looking at 10 cancer medications, among the most expensive of new drugs, has arrived at a much lower figure: a median cost of $757 million per drug. (Half cost less, and half more.)

Following approval, the 10 drugs together brought in $67 billion, the researchers also concluded — a more than sevenfold return on investment. Nine out of 10 companies made money, but revenues varied enormously. One drug had not yet earned back its development costs.
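Those two headline figures are simple summary statistics, which a short sketch makes concrete. The individual per-drug costs below are invented; only the $757 million median and the $67 billion total revenue come from the study:

```python
from statistics import median

# Hypothetical R&D costs in millions of dollars for ten drugs; the study
# reported only the median ($757M), so these individual figures are invented.
costs = [300, 420, 500, 630, 710, 804, 900, 1100, 1500, 2300]

# With an even number of drugs, the median is the average of the two
# middle values: half cost less, half cost more.
print(median(costs))  # (710 + 804) / 2 = 757.0

total_revenue = 67_000  # millions, as reported for the ten drugs combined
print(total_revenue / sum(costs))  # ≈ 7.3, a more-than-sevenfold return
```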

The study, published Monday in JAMA Internal Medicine, relied on company filings with the Securities and Exchange Commission to determine research and development costs.

“It seems like they have done a thoughtful and rigorous job,” said Dr. Aaron Kesselheim, director of the program on regulation, therapeutics and the law at Brigham and Women’s Hospital.

“It provides at least something of a reality check,” he added.

The figures were met with swift criticism, however, by other experts and by representatives of the biotech industry, who said that the research did not adequately take into account the costs of the many experimental drugs that fail.

“It’s a bit like saying it’s a good business to go out and buy winning lottery tickets,” Daniel Seaton, a spokesman for the Biotechnology Innovation Organization, said in an email.

Dr. Jerry Avorn, chief of the division of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital, predicted that the paper would help fuel the debate over the prices of cancer drugs, which have soared so high “that we are getting into areas that are almost unimaginable economically,” he said.

A leukemia treatment approved recently by the Food and Drug Administration, for example, will cost $475,000 for a single treatment. It is the first of a wave of gene therapy treatments likely to carry staggering price tags.

“This is an important brick in the wall of this developing concern,” he said.

Dr. Vinay Prasad, an oncologist at Oregon Health and Science University, and Dr. Sham Mailankody, of Memorial Sloan Kettering Cancer Center, arrived at their figures after reviewing data on 10 companies that brought a cancer drug to market in the past decade.

Since the companies also were developing other drugs that did not receive approval from the F.D.A., the researchers were able to include the companies’ total spending on research and development, not just what they spent on the drugs that succeeded.

One striking example was ibrutinib, made by Pharmacyclics. It was approved in 2013 for patients with certain blood cancers who did not respond to conventional therapy.

Ibrutinib was the only drug out of four the company was developing to receive F.D.A. approval. The company’s research and development costs for their four drugs were $388 million, the company’s S.E.C. filings indicated.

The drug ibrutinib was developed to treat chronic lymphocytic leukemia, shown here in a CT reconstruction of a patient’s neck. The manufacturer’s return on investment was quite high, according to a new study. Credit LLC/Science Source

After the drug was approved, AbbVie acquired its manufacturer, Pharmacyclics, for $21 billion. “That is a 50-fold difference between revenue post-approval and cost to develop,” Dr. Prasad said.

Accurate figures on drug development are difficult to find and often disputed. Although it is widely cited, the Tufts study also was fiercely criticized.

One objection was that the researchers, led by Joseph A. DiMasi, did not disclose the companies’ data on development costs. The study involved ten large companies, which were not named, and 106 investigational drugs, also not named.

But Dr. DiMasi found the new study “irredeemably flawed at a fundamental level.”

“The sample consists of relatively small companies that have gotten only one drug approved, with few other drugs of any type in development,” he said. The result is “substantial selection bias,” meaning that the estimates do not accurately reflect the industry as a whole.

Ninety-five percent of cancer drugs that enter clinical trials fail, said Mr. Seaton, of the biotech industry group. “The small handful of successful drugs — those looked at by this paper — must be profitable enough to finance all of the many failures this analysis leaves unexamined.”

“When the rare event occurs that a company does win approval,” he added, “the reward must be commensurate with taking on the multiple levels of risk not seen in any other industry if drug development is to remain economically viable for prospective investors.”

Cancer drugs remain among the most expensive medications, with prices reaching the hundreds of thousands of dollars per patient.

Although the new study was small, its estimates are so much lower than previous figures, and the return on investment so great, that experts say they raise questions about whether soaring drug prices really are needed to encourage investment.

“That seems hard to swallow when they make seven times what they invested in the first four years,” Dr. Prasad said.

The new study has limitations, noted Patricia Danzon, an economist at the University of Pennsylvania’s Wharton School.

It involved just ten small biotech companies whose cancer drugs were aimed at limited groups of patients with less common diseases.

For such drugs, the F.D.A. often permits clinical trials to be very small and sometimes without control groups. Therefore development costs may have been lower for this group than for drugs that require longer and larger studies.

But, Dr. Danzon said, most new cancer drugs today are developed this way: by small companies and for small groups of patients. The companies often license or sell successful drugs to the larger companies.

The new study, she said, “is shining a light on a sector of the industry that is becoming important now.” The evidence, she added, is “irrefutable” that the cost of research and development “is small relative to the revenues.”

When it comes to drug prices, it does not matter what companies spend on research and development, Dr. Kesselheim said.

“They are based on what the market will bear.”

Correction: September 14, 2017
An earlier version of this article incorrectly identified the company that acquired a drug maker. It was AbbVie, not Janssen Biotech (which jointly develops the drug). Additionally, the article incorrectly described what AbbVie acquired. It was the company Pharmacyclics, which developed the drug Imbruvica, not the drug itself.

Original source: https://www.nytimes.com/2017/09/11/health/cancer-drug-costs.html

Saturn moon Titan has chemical that could form bio-like ‘membranes’ says NASA

August 06, 2017

NASA researchers have found large quantities (2.8 parts per billion) of acrylonitrile* (vinyl cyanide, C2H3CN) in Titan’s atmosphere that could self-assemble as a sheet of material similar to a cell membrane.

Acrylonitrile (credit: NASA Goddard)

Consider these findings, published July 28, 2017 in the open-access journal Science Advances, based on data from the ALMA telescope in Chile (and confirming earlier observations by NASA’s Cassini spacecraft):

Azotosome illustration (credit: James Stevenson/Cornell)

1. Researchers have proposed that acrylonitrile molecules could come together as a sheet of material similar to a cell membrane. The sheet could form a hollow, microscopic sphere that they dubbed an “azotosome.”

A bilayer, made of two layers of lipid molecules (credit: Mariana Ruiz Villarreal/CC)

2. The azotosome sphere could serve as a tiny storage and transport container, much like the spheres that biological lipid bilayers can form. The thin, flexible lipid bilayer is the main component of the cell membrane, which separates the inside of a cell from the outside world.

“The ability to form a stable membrane to separate the internal environment from the external one is important because it provides a means to contain chemicals long enough to allow them to interact,” said Michael Mumma, director of the Goddard Center for Astrobiology, which is funded by the NASA Astrobiology Institute.

Organic rain falling on a methane sea on Titan (artist’s impression) (credit: NASA Goddard)

3. Acrylonitrile condenses in Titan’s cold lower atmosphere and rains onto the moon’s solid icy surface, ending up in its seas of liquid methane.

Illustration showing organic compounds in Titan’s seas and lakes (ESA)

4. A lake on Titan named Ligeia Mare could have accumulated enough acrylonitrile to form about 10 million azotosomes in every milliliter (quarter-teaspoon) of liquid. Compare that to roughly a million bacteria per milliliter of coastal ocean water on Earth.

5. Chemistry in Titan’s atmosphere. Nearly as large as Mars, Titan has a hazy atmosphere made up mostly of nitrogen with a smattering of organic, carbon-based molecules, including methane (CH4) and ethane (C2H6). Planetary scientists theorize that this chemical make-up is similar to Earth’s primordial atmosphere. The conditions on Titan, however, are not conducive to the formation of life as we know it; it’s simply too cold (95 kelvins or -290 degrees Fahrenheit). (credit: ESA)

6. A related open-access study published July 26, 2017 in The Astrophysical Journal Letters notes that Cassini has also made the surprising detection of negatively charged molecules known as “carbon chain anions” in Titan’s upper atmosphere. These molecules are understood to be building blocks towards more complex molecules, and may have acted as the basis for the earliest forms of life on Earth.

“This is a known process in the interstellar medium, but now we’ve seen it in a completely different environment, meaning it could represent a universal process for producing complex organic molecules,” says Ravi Desai of University College London and lead author of the study.

* On Earth, acrylonitrile is used in the manufacture of plastics.




Abstract of ALMA detection and astrobiological potential of vinyl cyanide on Titan

Recent simulations have indicated that vinyl cyanide is the best candidate molecule for the formation of cell membranes/vesicle structures in Titan’s hydrocarbon-rich lakes and seas. Although the existence of vinyl cyanide (C2H3CN) on Titan was previously inferred using Cassini mass spectrometry, a definitive detection has been lacking until now. We report the first spectroscopic detection of vinyl cyanide in Titan’s atmosphere, obtained using archival data from the Atacama Large Millimeter/submillimeter Array (ALMA), collected from February to May 2014. We detect the three strongest rotational lines of C2H3CN in the frequency range of 230 to 232 GHz, each with >4σ confidence. Radiative transfer modeling suggests that most of the C2H3CN emission originates at altitudes of ≳200 km, in agreement with recent photochemical models. The vertical column densities implied by our best-fitting models lie in the range of 3.7 × 1013 to 1.4 × 1014 cm−2. The corresponding production rate of vinyl cyanide and its saturation mole fraction imply the availability of sufficient dissolved material to form ~107 cell membranes/cm3 in Titan’s sea Ligeia Mare.

This article was originally published by:
http://www.kurzweilai.net/saturn-moon-titan-has-chemical-that-could-form-bio-like-membranes-says-nasa?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=0ad261ad5e-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-0ad261ad5e-282129417

CRISPR kills HIV and eats Zika ‘like Pac-man’. Its next target? Cancer

June 29, 2017

There’s a biological revolution underway and its name is CRISPR.

Pronounced ‘crisper’, the name stands for Clustered Regularly Interspaced Short Palindromic Repeats and refers to the way short, repeated DNA sequences in the genomes of bacteria and other microorganisms are organised.

These organisms defend themselves from viral attacks by storing strips of an invading virus’ DNA among those repeats. RNA copies of the stored sequences then guide Cas enzymes to recognise matching viral DNA and cut it, preventing future invasions.

This natural defence system was transformed into a gene-editing tool in 2012 and was named Science magazine’s 2015 Breakthrough of the Year. While it’s not the first DNA-editing tool, it has piqued the interest of many scientists, research and health groups because of its accuracy, relative affordability and far-reaching uses. The latest? Eradicating HIV.

At the start of May, researchers at the Lewis Katz School of Medicine at Temple University (LKSOM) and the University of Pittsburgh demonstrated how they can remove HIV DNA from the genomes of living animals – in this case, mice – to curb the spread of infection. Building on a 2016 proof-of-concept study, the breakthrough was the first time replication of HIV-1 had been eliminated from infected cells using CRISPR.

 

In particular, the team genetically inactivated HIV-1 in transgenic mice, reducing the RNA expression of viral genes by roughly 60 to 95 per cent, before trialling the method on infected mice.

“During acute infection, HIV actively replicates,” explained Dr. Kamel Khalili, who led the Temple team. “With EcoHIV mice, we were able to investigate the ability of the CRISPR/Cas9 strategy to block viral replication and potentially prevent systemic infection.”

Since the HIV research was published, a team of biologists at University of California, Berkeley, described 10 new CRISPR enzymes that, once activated, are said to “behave like Pac-Man” to chew through RNA in a way that could be used as sensitive detectors of infectious viruses.

These new enzymes are variants of a CRISPR protein, Cas13a, which the UC Berkeley researchers first reported in Nature last September, and could be used to detect specific sequences of RNA, such as from a virus. The team showed that once CRISPR-Cas13a binds to its target RNA, it begins to indiscriminately cut up all RNA, including a fluorescent reporter whose cleavage makes the sample “glow”, allowing signal detection.

 

Two teams of researchers at the Broad Institute subsequently paired CRISPR-Cas13a with the process of RNA amplification to reveal that the system, dubbed Sherlock, could detect viral RNA at extremely low concentrations, such as the presence of dengue and Zika viral RNA, for example. Such a system could be used to detect any type of RNA, including RNA distinctive of cancer cells.
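The sensing logic above can be caricatured in a few lines. This is a deliberately crude sketch: the guide and sample sequences are made up, and real Cas13a activation involves base-pairing kinetics and collateral cleavage of a quenched fluorescent reporter, none of which is modelled here.

```python
def sherlock_detect(sample_rnas, guide):
    """Toy model of Cas13a-based sensing: if the guide sequence matches a
    stretch of any RNA in the sample, the enzyme activates and then cuts
    RNA indiscriminately, including a fluorescent reporter whose cleavage
    produces the detectable signal."""
    activated = any(guide in rna for rna in sample_rnas)
    return "signal" if activated else "no signal"

# A guide targeting a (made-up) viral sequence fires only when it is present.
viral_guide = "GGUCAGAACGU"
print(sherlock_detect(["AAUGGUCAGAACGUCCA", "CCAGGUAA"], viral_guide))  # signal
print(sherlock_detect(["CCAGGUAA"], viral_guide))                       # no signal
```

The key design point, which the toy captures, is that one binding event unleashes many cutting events, which is what makes the system sensitive at very low target concentrations.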

This piece has been updated to remove copy taken from WIRED US.

http://www.wired.co.uk/article/crispr-disease-rna-hiv