MIT’s AlterEgo headset can read words you say in your head

May 20, 2018

I don’t want to alarm you, but robots can now read your mind. Kind of.

AlterEgo is a new headset developed by MIT Media Lab. You strap it to your face. You talk to it. It talks to you. But no words are said. You say things in your head, like “what street am I on,” and it reads the signals your brain sends to your mouth and jaw, and answers the question for you.

Check out this handy explainer video MIT Media Lab made that shows some of the potential of AlterEgo:

So yes, according to MIT Media Lab, you may soon be able to control your TV with your mind.

The institution explained in its announcement that AlterEgo communicates with you through bone-conduction headphones, which circumvent the ear canal by transmitting sound vibrations through your face bones. Freaky. This, MIT Media Lab said, makes it easier for AlterEgo to talk to you while you’re talking to someone else.

Plus, in trials involving 15 people, AlterEgo had an accurate transcription rate of 92 percent.

Arnav Kapur, the graduate student who led AlterEgo’s development, describes it as an “intelligence-augmentation device.”

“We basically can’t live without our cellphones, our digital devices,” said Pattie Maes, Kapur’s thesis advisor at MIT Media Lab. “But at the moment, the use of those devices is very disruptive.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

This article was originally published by: https://www.cnet.com/news/mit-alterego-headset-can-read-words-you-say-in-your-head/

Revolutionary 3D nanohybrid lithium-ion battery could allow for charging in just seconds

May 20, 2018

Cornell University engineers have designed a revolutionary 3D lithium-ion battery that could be charged in just seconds.

In a conventional battery, the anode and cathode* (the two sides of the battery connection) are stacked as separate columns. For the new design, the engineers instead used thousands of nanoscale (ultra-tiny) anodes and cathodes.

Putting those thousands of anodes and cathodes just 20 nanometers (billionths of a meter) apart allows for extremely fast charging (in seconds or less) and also lets the battery hold more power for longer.
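To see why shrinking the anode-cathode spacing matters so much, here is a rough back-of-the-envelope sketch in Python. Ion transport time across the electrolyte gap scales roughly with the square of the distance; the diffusivity value and the 20-micrometer “conventional” spacing below are illustrative assumptions, not figures from the Cornell paper.

```python
# Rough scaling estimate: the time for an ion to diffuse across the
# electrolyte gap grows with the square of the gap width, t ~ L^2 / (2*D).
# D below is an assumed, order-of-magnitude ionic diffusivity chosen for
# illustration; it is not a value reported by the Cornell team.

D = 1e-16  # assumed ionic diffusivity in m^2/s (illustrative only)

def diffusion_time(gap_m, diffusivity=D):
    """Characteristic time (s) for an ion to cross a gap of width gap_m (m)."""
    return gap_m ** 2 / (2 * diffusivity)

t_nano = diffusion_time(20e-9)   # ~20 nm spacing in the 3D nanohybrid design
t_conv = diffusion_time(20e-6)   # ~20 um spacing, an assumed conventional scale

print(f"20 nm gap: ~{t_nano:.1f} s")
print(f"20 um gap: ~{t_conv:.0f} s ({t_conv / t_nano:.0e} times slower)")
```

Because the time goes as the distance squared, cutting the spacing by a factor of a thousand cuts the transport time by roughly a factor of a million, which is the intuition behind second-scale charging.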

Left-to-right: The anode was made of self-assembling (automatically grown) thin-film carbon material with thousands of regularly spaced pores (openings), each about 40 nanometers wide. The pores were coated with a 10-nanometer-thick electrolyte* material (the blue layer coating the black anode layer, as shown in the “Electrolyte coating” illustration), which is electronically insulating but conducts ions (an ion is an atom or molecule that carries an electrical charge and is what flows inside a battery instead of electrons). The cathode was made from sulfur. (credit: Cornell University)

In addition, unlike traditional batteries, the new battery’s electrolyte material does not have pinholes (tiny holes), which can short-circuit a battery and give rise to fires in mobile devices such as cellphones and laptops.

The engineers are still perfecting the technique, but they have applied for patent protection on the proof-of-concept work, which was funded by the U.S. Department of Energy and in part by the National Science Foundation.

Reference: Energy & Environmental Science (open source with registration) March 9, 2018. Source: Cornell University May 16, 2018.

* How batteries work

Batteries have three parts: an anode (-) and a cathode (+) — the negative and positive sides at either end of a traditional battery — which are hooked up to an electrical circuit; and the electrolyte, which keeps the anode and cathode apart and allows ions (electrically charged atoms or molecules) to flow between them. (credit: Northwestern University Qualitative Reasoning Group)

This article was originally published by:  http://www.kurzweilai.net/revolutionary-3d-nanohybrid-lithium-ion-battery-could-allow-for-charging-in-just-seconds?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=50d67a312d-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-50d67a312d-282129417

Is nature continuous or discrete? How the atomist error was born

May 20, 2018

The modern idea that nature is discrete originated in Ancient Greek atomism. Leucippus, Democritus and Epicurus all argued that nature was composed of what they called ἄτομος (átomos) or ‘indivisible individuals’. Nature was, for them, the totality of discrete atoms in motion. There was no creator god, no immortality of the soul, and nothing static (except for the immutable internal nature of the atoms themselves). Nature was atomic matter in motion and complex composition – no more, no less.

Despite its historical influence, however, atomism was eventually all but wiped out by Platonism, Aristotelianism and the Christian tradition that followed throughout the Middle Ages. Plato told his followers to destroy Democritus’ books whenever they found them, and later the Christian tradition made good on this demand. Today, nothing but a few short letters from Epicurus remain.

Atomism was not finished, however. It reemerged in 1417, when an Italian book-hunter named Poggio Bracciolini discovered a copy of an ancient poem in a remote monastery: De Rerum Natura (On the Nature of Things), written by Lucretius (c99-55 BCE), a Roman poet heavily influenced by Epicurus. This book-length philosophical poem in epic verse puts forward the most detailed and systematic account of ancient materialism that we’ve been fortunate enough to inherit. In it, Lucretius advances a breathtakingly bold theory on foundational issues in everything from physics to ethics, aesthetics, history, meteorology and religion. Against the wishes and best efforts of the Christian church, Bracciolini managed to get it into print, and it soon circulated across Europe.

This book was one of the most important sources of inspiration for the scientific revolution of the 16th and 17th centuries. Nearly every Renaissance and Enlightenment intellectual read it and became an atomist to some degree (they often made allowances for God and the soul). Indeed, this is the reason why, to make a long and important story very short, science and philosophy even today still tend to look for and assume a fundamental discreteness in nature. Thanks in no small part to Lucretius’ influence, the search for discreteness became part of our historical DNA. The interpretive method and orientation of modern science in the West literally owe their philosophical foundations to ancient atomism via Lucretius’ little book on nature. Lucretius, as Stephen Greenblatt says in his book The Swerve (2011), is ‘how the world became modern’.

There is a problem, however. If this story is true, then modern Western thought is based on a complete misreading of Lucretius’ poem. It was not a wilful misreading, of course, but one in which readers committed the simple error of projecting what little they knew second-hand about Greek atomism (mostly from the testimonia of its enemies) onto Lucretius’ text. They assumed a closer relationship between Lucretius’ work and that of his predecessors than actually exists. Crucially, they inserted the words ‘atom’ and ‘particle’ into the translated text, even though Lucretius never used them. Not even once! A rather odd omission for a so-called ‘atomist’ to make, no? Lucretius could easily have used the Latin words atomus (smallest particle) or particula (particle), but he went out of his way not to. Despite his best efforts, however, the two very different Latin terms he did use, corpora (matters) and rerum (things), were routinely translated and interpreted as synonymous with discrete ‘atoms’.

Further, the moderns either translated out or ignored altogether the nearly ubiquitous language of continuum and folding used throughout his book, in phrases such as ‘solida primordia simplicitate’ (simplex continuum). As a rare breed of scholar interested in both classical texts and quantum physics, the existence of this material continuum in the original Latin struck me quite profoundly. I have tried to show all of this in my recent translation and commentary, Lucretius I: An Ontology of Motion (2018), but here is the punchline: this simple but systematic and ubiquitous interpretive error constitutes what might well be the single biggest mistake in the history of modern science and philosophy.

This mistake sent modern science and philosophy on a 500-year quest for what Sean Carroll in his 2012 book called the ‘particle at the end of the universe’. It gave birth to the laudable virtues of various naturalisms and materialisms, but also to less praiseworthy mechanistic reductionisms, patriarchal rationalisms, and the overt domination of nature by humans, none of which can be found in Lucretius’ original Latin writings. What’s more, even when confronted with apparently continuous phenomena such as gravity, electric and magnetic fields, and eventually space-time, Isaac Newton, James Maxwell and even Albert Einstein fell back on the idea of an atomistic ‘aether’ to explain them. All the way back to the ancients, aether was thought to be a subtle fluid-like substance composed of insensibly tiny particles. Today, we no longer believe in the aether or read Lucretius as an authoritative scientific text. Yet in our own way, we still confront the same problem of continuity vs discreteness originally bequeathed to us by the moderns: in quantum physics.

Theoretical physics today is at a critical turning point. General relativity and quantum field theory are the two biggest parts of what physicists now call ‘the standard model’, which has enjoyed incredible predictive success. The problem, however, is that they have not yet been unified as two aspects of one overarching theory. Most physicists think that such unification is only a matter of time, even though the current theoretical frontrunners (string theory and loop quantum gravity) have yet to produce experimental confirmations.

Quantum gravity is of enormous importance. According to its proponents, it stands poised to show the world that the ultimate fabric of nature (space-time) is not continuous at all, but granular, and fundamentally discrete. The atomist legacy might finally be secured, despite its origins in an interpretive error.

There is just one nagging problem: quantum field theory claims that all discrete quanta of energy (particles) are merely the excitations or fluctuations in completely continuous quantum fields. Fields are not fundamentally granular. For quantum field theory, everything might be made of granules, but all granules are made of folded-up continuous fields that we simply measure as granular. This is what physicists call ‘perturbation theory’: the discrete measure of that which is infinitely continuous and so ‘perturbs one’s complete discrete measurement’, as Frank Close puts it in The Infinity Puzzle (2011). Physicists also have a name for the sub-granular movement of this continuous field: ‘vacuum fluctuations’. Quantum fields are nothing but matter in constant motion (energy and momentum). They are therefore never ‘nothing’, but more like a completely positive void (the flux of the vacuum itself) or an undulating ocean (appropriately called ‘the Dirac sea’) in which all discrete things are its folded-up bubbles washed ashore, as Carlo Rovelli puts it in Reality Is Not What it Seems (2016). Discrete particles, in other words, are folds in continuous fields.

The answer to the central question at the heart of modern science, ‘Is nature continuous or discrete?’ is as radical as it is simple. Space-time is not continuous because it is made of quantum granules, but quantum granules are not discrete because they are folds of infinitely continuous vibrating fields. Nature is thus not simply continuous, but an enfolded continuum.

This brings us right back to Lucretius and our original error. Working at once within and against the atomist tradition, Lucretius put forward the first materialist philosophy of an infinitely continuous nature in constant flux and motion. Things, for Lucretius, are nothing but folds (duplex), pleats (plex), bubbles or pores (foramina) in a single continuous fabric (textum) woven by its own undulations. Nature is infinitely turbulent or perturbing, but it also washes ashore, like the birth of Venus, in meta-stable forms – as Lucretius writes in the opening lines of De Rerum Natura: ‘Without you [Venus] nothing emerges into the sunlit shores of light.’ It has taken 2,000 years, but perhaps Lucretius has finally become our contemporary.

This article was originally published by: https://aeon.co/ideas/is-nature-continuous-or-discrete-how-the-atomist-error-was-born?utm_source=Aeon+Newsletter&utm_campaign=bb63ea6739-EMAIL_CAMPAIGN_2018_05_16&utm_medium=email&utm_term=0_411a82e59d-bb63ea6739-70411565

A new generation of brain-like computers comes of age

May 17, 2018

Conventional computer chips aren’t up to the challenges posed by next-generation autonomous drones and medical implants. Kwabena Boahen has laid out a way forward.

For five decades, Moore’s law held up pretty well: Roughly every two years, the number of transistors one could fit on a chip doubled, all while costs steadily declined.

Today, however, transistors and other electronic components are so small they’re beginning to bump up against fundamental physical limits on their size. Moore’s law has reached its end, and it’s going to take something different to meet the need for computing that is ever faster, cheaper and more efficient.
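To put that doubling in perspective, here is a quick, purely illustrative Python calculation of the compound growth implied by “doubling roughly every two years” over five decades:

```python
# Illustrative only: compound growth implied by Moore's law,
# i.e. a doubling of transistor count roughly every two years.

years = 50
doubling_period = 2  # years per doubling

growth_factor = 2 ** (years / doubling_period)
print(f"Doublings: {years // doubling_period}")
print(f"Relative transistor count after {years} years: {growth_factor:,.0f}x")
# -> 25 doublings, roughly a 33,554,432-fold increase
```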

As it happens, Kwabena Boahen, a professor of bioengineering and of electrical engineering, has a pretty good idea what that something more is: brain-like, or neuromorphic, computers that are vastly more efficient than the conventional digital computers we’ve grown accustomed to.

This is not a vision of the future, Boahen said. As he lays out in the latest issue of Computing in Science and Engineering, the future is now.

30 years in the making

It’s a moment Boahen has been working toward his entire adult life, and then some. He first got interested in computers as a teenager growing up in Ghana. But the more he learned, the more traditional computers looked like a giant, inelegant mess of memory chips and processors connected by weirdly complicated wiring.

Both the need for something new and the first ideas for what that would look like crystalized in the mid-1980s. Even then, Boahen said, some researchers could see the end of Moore’s law on the horizon. As transistors continued to shrink, they would bump up against fundamental physical limits on their size. Eventually, they’d get so small that only a single lane of electron traffic could get through under the best circumstances. What had once been electron superfreeways would soon be tiny mountain roads, and while that meant engineers could fit more components on a chip, those chips would become more and more unreliable.

At around the same time, Boahen and others came to understand that the brain had enormous computing power – orders of magnitude more than what people have built, even today – even though it used vastly less energy and was built from remarkably unreliable components: neurons.

How does the brain do it?

While others have built brain-inspired computers, Boahen said, he and his collaborators have developed a five-point prospectus – manifesto might be the better word – for how to build neuromorphic computers that directly mimic in silicon what the brain does in flesh and blood.

The first two points of the prospectus concern neurons themselves, which unlike computers operate in a mix of digital and analog mode. In their digital mode, neurons send discrete, all-or-nothing signals in the form of electrical spikes, akin to the ones and zeros of digital computers. But they process incoming signals by adding them all up and firing only once a threshold is reached – more akin to a dial than a switch.

That observation led Boahen to try using transistors in a mixed digital-analog mode. Doing so, it turns out, makes chips both more energy efficient and more robust when the components do fail, as about 4 percent of the smallest transistors are expected to do.
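That mixed digital-analog behavior is easy to see in a toy simulation. The sketch below is a minimal leaky integrate-and-fire neuron in Python: it accumulates input like a dial and emits an all-or-nothing spike only when a threshold is crossed. The parameter values are illustrative and are not taken from Boahen’s actual chip designs.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: analog accumulation of inputs,
# with a discrete, all-or-nothing spike once a threshold is reached.
# All parameters here are illustrative only.

def simulate_lif(inputs, threshold=1.0, leak=0.95):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = leak * potential + x      # analog accumulation with leak
        if potential >= threshold:            # digital, all-or-nothing event
            spikes.append(1)
            potential = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 0.3, size=30)        # noisy sub-threshold input
print(simulate_lif(drive))                    # sparse train of 0s and 1s
```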

From there, Boahen builds on neurons’ hierarchical organization, distributed computation and feedback loops to create a vision of an even more energy efficient, powerful and robust neuromorphic computer.

The future of the future

But it’s not just a vision. Over the last 30 years, Boahen’s lab has actually implemented most of their ideas in physical devices, including Neurogrid, one of the first truly neuromorphic computers. In another two or three years, Boahen said, he expects they will have designed and built computers implementing all of the prospectus’s five points.

Don’t expect those computers to show up in your laptop anytime soon, however. Indeed, that’s not really the point – most personal computers operate nowhere near the limits on conventional chips. Neuromorphic computers would be most useful in embedded systems that have extremely tight energy requirements, such as very low-power neural implants or on-board computers in autonomous drones.

“It’s complementary,” Boahen said. “It’s not going to replace current computers.”

The other challenge: getting others, especially chip manufacturers, on board. Boahen is not the only one thinking about what to do about the end of Moore’s law or looking to the brain for ideas. IBM’s TrueNorth, for example, takes cues from neural networks to produce a radically more efficient computer architecture. On the other hand, it remains fully digital, and, Boahen said, 20 times less efficient than Neurogrid would be had it been built with TrueNorth’s 28-nanometer transistors.

This article was originally published by: https://engineering.stanford.edu/magazine/article/new-generation-brain-computers-comes-age

DNA Robots Target Cancer

May 17, 2018

DNA nanorobots that travel the bloodstream, find tumors, and dispense a protein that causes blood clotting trigger the death of cancer cells in mice, according to a study published today (February 12) in Nature Biotechnology.

The authors have “demonstrated that it’s indeed possible to do site-specific drug delivery using biocompatible, biodegradable, DNA-based bionanorobots for cancer therapeutics,” says Suresh Neethirajan, a bioengineer at the University of Guelph in Ontario, Canada, who did not participate in the study. “It’s a combination of diagnosing the biomarkers on the surface of the cancer itself and also, upon recognizing that, delivering the specific drug to be able to treat it.”

The international team of researchers started with the goal of “finding a path to design nanorobots that can be applied to treatment of cancer in human[s],” writes coauthor Hao Yan of Arizona State University in an email to The Scientist.

Yan and colleagues first generated a self-assembling, rectangular, DNA-origami sheet to which they linked thrombin, an enzyme responsible for blood clotting. Then, they used DNA fasteners to join the long edges of the rectangle, resulting in a tubular nanorobot with thrombin on the inside. The authors designed the fasteners to dissociate when they bind nucleolin—a protein specific to the surface of tumor blood-vessel cells—at which point, the tube opens and exposes its cargo.

Nanorobot design. Thrombin is represented in pink and nucleolin in blue. (Credit: S. Li et al., Nature Biotechnology, 2018)
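As a way of visualizing the trigger logic described above, here is a toy state model in Python: the DNA fastener keeps the tube closed until it encounters nucleolin, at which point the tube opens and exposes its thrombin cargo. This is purely a conceptual sketch of the conditional behavior, not a model of the actual chemistry.

```python
# Toy state model of the nanorobot's trigger logic (illustrative names only):
# the fastener holds the tube shut until it binds nucleolin, then the tube
# opens and exposes its thrombin payload.

class DnaNanorobot:
    def __init__(self, payload="thrombin"):
        self.payload = payload
        self.is_open = False

    def encounter(self, surface_proteins):
        """Open only when nucleolin (the tumor-vessel marker) is present."""
        if "nucleolin" in surface_proteins:
            self.is_open = True
        return self.payload if self.is_open else None

bot = DnaNanorobot()
print(bot.encounter({"albumin"}))                 # healthy vessel -> None
print(bot.encounter({"nucleolin", "integrin"}))   # tumor vessel  -> 'thrombin'
```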

The scientists next injected the nanorobots intravenously into nude mice with human breast cancer tumors. The robots grabbed onto vascular cells at tumor sites and caused extensive blood clots in the tumors’ vessels within 48 hours, but did not cause clotting elsewhere in the animals’ bodies. These blood clots led to tumor-cell necrosis, resulting in smaller tumors and a better chance for survival compared to control mice. Yan’s team also found that nanorobot treatment increased survival and led to smaller tumors in a mouse model of melanoma, and in mice with xenografts of human ovarian cancer cells.

The authors are “looking at specific binding to tumor cells, which is basically the holy grail for . . . cancer therapy,” says the University of Tennessee’s Scott Lenaghan, who was not involved in the work. The next step is to investigate any damage—such as undetected clots or immune-system responses—in the host organism, he says, as well as to determine how much thrombin is actually delivered at the tumor sites.

The authors showed in the study that the nanorobots didn’t cause clotting in major tissues in miniature pigs, which satisfies some safety concerns, but Yan agrees that more work is needed. “We are interested in looking further into the practicalities of this work in mouse models,” he writes.

Going from “a mouse model to humans is a huge step,” says Mauro Ferrari, a biomedical engineer at Houston Methodist Hospital and Weill Cornell Medical College who did not participate in the study. It’s not yet clear whether targeting nucleolin and delivering thrombin will be clinically relevant, he says, “but the breakthrough aspect is [that] this is a platform. They can use a similar approach for other things, which is really exciting. It’s got big implications.”

S. Li et al., “A DNA nanorobot functions as a cancer therapeutic in response to a molecular trigger in vivo,” Nature Biotechnology, doi:10.1038/nbt.4071, 2018.

This article was originally published by: https://www.the-scientist.com/?articles.view/articleNo/51717/title/DNA-Robots-Target-Cancer/

Cancer ‘vaccine’ eliminates tumors in mice

May 17, 2018

Ronald Levy (left) and Idit Sagiv-Barfi led the work on a possible cancer treatment that involves injecting two immune-stimulating agents directly into solid tumors. (Credit: Steve Fisch)

Injecting minute amounts of two immune-stimulating agents directly into solid tumors in mice can eliminate all traces of cancer in the animals, including distant, untreated metastases, according to a study by researchers at the Stanford University School of Medicine.

The approach works for many different types of cancers, including those that arise spontaneously, the study found.

The researchers believe the local application of very small amounts of the agents could serve as a rapid and relatively inexpensive cancer therapy that is unlikely to cause the adverse side effects often seen with bodywide immune stimulation.

“When we use these two agents together, we see the elimination of tumors all over the body,” said Ronald Levy, MD, professor of oncology. “This approach bypasses the need to identify tumor-specific immune targets and doesn’t require wholesale activation of the immune system or customization of a patient’s immune cells.”

One agent is already approved for use in humans; the other has been tested for human use in several unrelated clinical trials. A clinical trial was launched in January to test the effect of the treatment in patients with lymphoma. (Information about the trial is available online.)

Levy, who holds the Robert K. and Helen K. Summy Professorship in the School of Medicine, is the senior author of the study, which was published Jan. 31 in Science Translational Medicine. Instructor of medicine Idit Sagiv-Barfi, PhD, is the lead author.

‘Amazing, bodywide effects’

Levy is a pioneer in the field of cancer immunotherapy, in which researchers try to harness the immune system to combat cancer. Research in his laboratory led to the development of rituximab, one of the first monoclonal antibodies approved for use as an anti-cancer treatment in humans.

Some immunotherapy approaches rely on stimulating the immune system throughout the body. Others target naturally occurring checkpoints that limit the anti-cancer activity of immune cells. Still others, like the CAR T-cell therapy recently approved to treat some types of leukemia and lymphomas, require a patient’s immune cells to be removed from the body and genetically engineered to attack the tumor cells. Many of these approaches have been successful, but they each have downsides — from difficult-to-handle side effects to high-cost and lengthy preparation or treatment times.

“All of these immunotherapy advances are changing medical practice,” Levy said. “Our approach uses a one-time application of very small amounts of two agents to stimulate the immune cells only within the tumor itself. In the mice, we saw amazing, bodywide effects, including the elimination of tumors all over the animal.”

Cancers often exist in a strange kind of limbo with regard to the immune system. Immune cells like T cells recognize the abnormal proteins often present on cancer cells and infiltrate to attack the tumor. However, as the tumor grows, it often devises ways to suppress the activity of the T cells.

Levy’s method works to reactivate the cancer-specific T cells by injecting microgram amounts of two agents directly into the tumor site. (A microgram is one-millionth of a gram). One, a short stretch of DNA called a CpG oligonucleotide, works with other nearby immune cells to amplify the expression of an activating receptor called OX40 on the surface of the T cells. The other, an antibody that binds to OX40, activates the T cells to lead the charge against the cancer cells. Because the two agents are injected directly into the tumor, only T cells that have infiltrated it are activated. In effect, these T cells are “prescreened” by the body to recognize only cancer-specific proteins.

Cancer-destroying rangers

Some of these tumor-specific, activated T cells then leave the original tumor to find and destroy other identical tumors throughout the body.

The approach worked startlingly well in laboratory mice with transplanted mouse lymphoma tumors in two sites on their bodies. Injecting one tumor site with the two agents caused the regression not just of the treated tumor, but also of the second, untreated tumor. In this way, 87 of 90 mice were cured of the cancer. Although the cancer recurred in three of the mice, the tumors again regressed after a second treatment. The researchers saw similar results in mice bearing breast, colon and melanoma tumors.

Mice genetically engineered to spontaneously develop breast cancers in all 10 of their mammary pads also responded to the treatment. Treating the first tumor that arose often prevented the occurrence of future tumors and significantly increased the animals’ life span, the researchers found.

Finally, Sagiv-Barfi explored the specificity of the T cells by transplanting two types of tumors into the mice. She transplanted the same lymphoma cancer cells in two locations, and she transplanted a colon cancer cell line in a third location. Treatment of one of the lymphoma sites caused the regression of both lymphoma tumors but did not affect the growth of the colon cancer cells.

“This is a very targeted approach,” Levy said. “Only the tumor that shares the protein targets displayed by the treated site is affected. We’re attacking specific targets without having to identify exactly what proteins the T cells are recognizing.”

The current clinical trial is expected to recruit about 15 patients with low-grade lymphoma. If successful, Levy believes the treatment could be useful for many tumor types. He envisions a future in which clinicians inject the two agents into solid tumors in humans prior to surgical removal of the cancer as a way to prevent recurrence due to unidentified metastases or lingering cancer cells, or even to head off the development of future tumors that arise due to mutations in genes such as BRCA1 and BRCA2.

“I don’t think there’s a limit to the type of tumor we could potentially treat, as long as it has been infiltrated by the immune system,” Levy said.

The work is an example of Stanford Medicine’s focus on precision health, the goal of which is to anticipate and prevent disease in the healthy and precisely diagnose and treat disease in the ill.

The study’s other Stanford co-authors are senior research assistant and lab manager Debra Czerwinski; professor of medicine Shoshana Levy, PhD; postdoctoral scholar Israt Alam, PhD; graduate student Aaron Mayer; and professor of radiology Sanjiv Gambhir, MD, PhD.

Levy is a member of the Stanford Cancer Institute and Stanford Bio-X.

Gambhir is the founder and equity holder in CellSight Inc., which develops and translates multimodality strategies to image cell trafficking and transplantation.

The research was supported by the National Institutes of Health (grant CA188005), the Leukemia and Lymphoma Society, the Boaz and Varda Dotan Foundation and the Phil N. Allen Foundation.

Stanford’s Department of Medicine also supported the work.

This article was originally published by: http://med.stanford.edu/news/all-news/2018/01/cancer-vaccine-eliminates-tumors-in-mice.html

Longevity industry systematized for first time

April 02, 2018

UK aging research foundation produces roadmap for the emerging longevity industry in a series of reports to be published throughout the year

The Biogerontology Research Foundation has embarked on a year-long mission to summarise in a single document the various emerging technologies and industries which can be brought to bear on aging, healthy longevity, and everything in between, as part of a joint project between The Global Longevity Consortium, consisting of the Biogerontology Research Foundation, Deep Knowledge Life Sciences, Aging Analytics Agency and the Longevity.International platform.

GLOBAL LONGEVITY SCIENCE LANDSCAPE 2017

For scientists, policy makers, regulators, government officials, investors and other stakeholders, a consensus understanding of the field of human longevity remains fragmented: it has yet to be systematized by any coherent framework, and no analytical agency has yet profiled the field and industry as a whole in a comprehensive report. The consortium behind this series of reports hopes that they will come to be used as a sort of Encyclopedia Britannica and specialized Wikipedia of the emerging longevity industry, with the aim of serving as the foundation upon which the first global framework of the industry will be built, given the significant industry growth projected over the coming years.

Experts on the subject of human longevity, who tend to arrive at the subject from disparate fields, have failed even to agree on a likely order of magnitude for future human lifespan. Those who foresee a 100-year average in the near future are considered extreme optimists by some, while others have even mooted the possibility of indefinite life extension through comprehensive repair and maintenance. As such, the longevity industry has often defied real understanding and has proved a complex and abstract topic in the minds of many, investors and governments in particular.

The first of these landmark reports, entitled ‘The Science of Longevity’, standing at almost 800 pages in length, seeks to rectify this.

Part 1 of the report ties together the progress threads of the constituent industries into a coherent narrative, mapping the intersection of biomedical gerontology, regenerative medicine, precision medicine and artificial intelligence, and offering a brief history and snapshot of each. Part 2 lists and individually profiles 650 longevity-focused entities, including research hubs, non-profit organizations, leading scientists, conferences, databases, books and journals. Infographics are used to illustrate where research institutions stand in relation to each other with regard to their disruptive potential: companies and institutions specialising in palliative technologies are placed at the periphery of circular diagrams, whereas those involved with more comprehensive, preventative interventions, such as rejuvenation biotechnologies and gene therapies, are depicted as central.

In this report great care was taken to visualize the complex and interconnected landscape of this field via state-of-the-art infographics so as to distill the many players, scientific subsectors and technologies within the field of geroscience into common understanding. Their hope was to create a comprehensive yet readily understandable view of the entire field and its many players, serving a function similar to the one Mendeleev’s periodic table served for the field of chemistry. While these are static infographics in the reports, their creators plan to create complementary online versions that are interactive and filterable, and to convene a series of experts to analyze these infographics and continually update them as the geroscience landscape shifts. Similar strategies are employed in Volume II to illustrate the many companies and investors within the longevity industry.

These reports currently profile the top 100 entities in each of the categories, but in producing them, analysts found that the majority of these categories have significantly more than 100 entities associated with them. One of their main conclusions upon finishing the report is that the longevity industry is indeed of substantial size, with many industry and academic players, but that it remains relatively fragmented, lacking a sufficient degree of inter-organization collaboration and industry-academic partnerships. The group plans to expand these lists in follow-up volumes so as to give a more comprehensive overview of the individual companies, investors, books, journals, conferences and scientists that serve as the foundation of this emerging industry.

Since these reports are being spearheaded by the UK’s oldest biomedical charity focused on healthspan extension, the Biogerontology Research Foundation is publishing them online, freely available to the public. While the main focus of this series of reports is an analytical report on the emerging longevity industry, the reports still delve deeply into the science of longevity, and Volume I is dedicated exclusively to an overview of the history, present and future state of ageing research from a scientific perspective.

The consortium of organizations behind these reports anticipates that they will be the first comprehensive analytical reports on the emerging longevity industry to date. It hopes to increase awareness and interest from investors, scientists, medical personnel, regulators, policy makers, government officials and the public at large in both the longevity industry and geroscience proper, by providing reports that distill the complex network of knowledge underlying the industry and field into easily and intuitively comprehensible infographics, while at the same time providing a comprehensive backbone of chapters and profiles on the various companies, investors, organizations, labs, institutions, books, journals and conferences for those inclined toward a deeper dive into the vast foundation of the longevity industry and the field of geroscience.

It is hoped that this report will assist others in visualising the present longevity landscape and elucidate the various industry players and components. Volume 2, The Business of Longevity, which at approximately 500 pages aims to be as comprehensive as Volume 1, is set to be published shortly thereafter. It will focus on the companies and investors working in the field of precision preventive medicine with a focus on healthy longevity, which will be necessary to grow the industry fast enough to avert the impending crisis of global aging demographics.

These reports will be followed up throughout the coming year with Volume 3 (“Special Case Studies”), featuring 10 special case studies on specific longevity industry sectors, such as cell therapies, gene therapies, AI for biomarkers of aging, and more, Volume 4 (“Novel Longevity Financial System”), profiling how various corporations, pension funds, investment funds and governments will cooperate within the next decade to avoid the crisis of demographic aging, and Volume 5 (“Region Case Studies”), profiling the longevity industry in specific geographic regions.

These reports are, however, only the beginning, and ultimately will serve as a launching pad for an even more ambitious project: Longevity.International, an online platform that will house these reports and also serve as a virtual ecosystem for uniting and incentivizing the many fragmented stakeholders of the longevity industry, including scientists, entrepreneurs, investors, policy makers, regulators and government officials, in the common goal of healthspan extension and aversion of the looming demographic aging and Silver Tsunami crisis. The platform will use knowledge crowdsourcing of top-tier experts to connect scientists with entrepreneurs, entrepreneurs with investors, and investors with policy-makers and regulators, so that all stakeholders can aggregate and integrate intelligence and expertise from each other using modern IT technologies for these types of knowledge platforms, and all stakeholders can be rewarded for their services.

THE SCIENCE OF PROGRESSIVE MEDICINE 2017 LANDSCAPE

The consortium behind these reports is interested in collaborating with contributors, institutional partners and scientific reviewers to assist with the ongoing production of these reports, to enhance their outreach capabilities and ultimately to enhance their overall impact upon the scientific and business communities operating within the longevity industry. It can be reached at info@longevity.international

This article was originally published by:
http://bg-rf.org.uk/press/longevity-industry-systematized-for-first-time

The world’s largest reforestation effort in history is underway

April 02, 2018

The largest tropical reforestation effort in history aims to restore 73 million trees in the Brazilian Amazon by 2023.

The multimillion-dollar, six-year project, led by Conservation International, spans 30,000 hectares of land—an area the size of 30,000 soccer fields, or nearly 70,000 acres.

The effort will help Brazil move towards its Paris agreement target of reforesting 12 million hectares of land by 2030.

“This is a breathtakingly audacious project,” Dr. M. Sanjayan, CEO of Conservation International, said in a statement. “Together with an alliance of partners, we are undertaking the largest tropical forest restoration project in the world, driving down the cost of restoration in the process. The fate of the Amazon depends on getting this right—as do the region’s 25 million residents, its countless species and the climate of our planet.”

The Amazon is the world’s largest rainforest, home to indigenous communities and an immense variety and richness of biodiversity. The latest survey detailed 381 new species discovered in 2014-2015 alone.

But this precious land has been threatened by decades of commercial exploitation of natural resources, minerals and agribusiness, as Conservation International editorial director Bruno Vander Velde writes, “leading to about 20 percent of original forest cover being replaced by pastures and agricultural crops, without securing the well-being of the local population.”

“The reforestation project fills an urgent need to develop the region’s economy without destroying its forests, while ensuring the well-being of its people,” he notes.

Fast Company reports that instead of planting saplings—which is labor- and resource-intensive—the reforestation effort will involve the “muvuca” strategy, named after a Portuguese word that means many people in a small place. The strategy involves spreading seeds from more than 200 native forest species over every square meter of deforested land and allowing natural selection to weed out the weaker plants. As Fast Company notes, a 2014 study by the Food and Agriculture Organization and Bioversity International found that the muvuca technique allowed more than 90 percent of native tree species planted to germinate. Not only that, the plants grown this way are especially resilient and suited to survive drought conditions for up to six months.

According to Rodrigo Medeiros, vice president of Conservation International’s Brazil office, priority areas for the restoration effort include southern Amazonas, Rondônia, Acre, Pará and the Xingu watershed. Restoration activities will include the enrichment of existing secondary forest areas, sowing of selected native species, and, when necessary, direct planting of native species, Medeiros said.

The Brazilian Ministry of Environment, the Global Environment Facility, the World Bank, the Brazilian Biodiversity Fund, and Rock in Rio’s environmental arm “Amazonia Live” are also partners in this effort.

This article was originally published by:
https://www.weforum.org/agenda/2017/11/the-worlds-largest-tropical-reforestation-project-has-begun-in-the-amazon

Prosthetic memory system successful in humans

April 02, 2018

Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) have demonstrated the successful implementation of a prosthetic system that uses a person’s own memory patterns to facilitate the brain’s ability to encode and recall memory.

In the pilot study, published in today’s Journal of Neural Engineering, participants’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements.

“This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss,” said the study’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.

The study focused on improving episodic memory, which is the most common type of memory loss in people with Alzheimer’s disease, stroke and head injury. Episodic memory is information that is new and useful for a short period of time, such as where you parked your car on any given day. Reference memory is information that is held and used for a long time, such as what is learned in school.

The researchers enrolled epilepsy patients at Wake Forest Baptist who were participating in a diagnostic brain-mapping procedure that used surgically implanted electrodes placed in various parts of the brain to pinpoint the origin of the patients’ seizures. Using the team’s electronic prosthetic system, based on a multi-input multi-output (MIMO) nonlinear mathematical model, the researchers influenced the firing patterns of multiple neurons in the hippocampus, a part of the brain involved in making new memories, in eight of those patients.

First, they recorded the neural patterns or ‘codes’ while the study participants were performing a computerized memory task. The patients were shown a simple image, such as a color block, and after a brief delay during which the screen was blanked, were asked to identify the initial image out of four or five shown on the screen.

The USC team led by biomedical engineers Theodore Berger, Ph.D., and Dong Song, Ph.D., analyzed the recordings from the correct responses and synthesized a MIMO-based code for correct memory performance. The Wake Forest Baptist team played back that code to the patients while they performed the image recall task. In this test, the patients’ episodic memory performance showed a 37 percent improvement over baseline.
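For readers curious what a multi-input multi-output model looks like in code, here is a heavily simplified, synthetic-data sketch in Python. The published system uses a nonlinear MIMO model fitted to real hippocampal recordings; this toy only illustrates the general idea: learn a mapping from “input” spike features to “output” firing patterns observed on correct trials, then use the fitted mapping to generate a target pattern.

```python
import numpy as np

# Simplified, linear stand-in for a MIMO memory model (synthetic data).
# The real system is nonlinear and fitted to hippocampal recordings; this
# ridge-regression toy only shows the input -> output structure of the idea.

rng = np.random.default_rng(1)
n_trials, n_in, n_out = 200, 16, 8

X = rng.poisson(3.0, size=(n_trials, n_in)).astype(float)       # input spike counts
W_true = rng.normal(size=(n_in, n_out))                          # hidden mapping
Y = X @ W_true + rng.normal(scale=0.5, size=(n_trials, n_out))   # output patterns

lam = 1.0                                                        # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(n_in), X.T @ Y)       # fit MIMO weights

new_input = rng.poisson(3.0, size=(1, n_in)).astype(float)
target_pattern = new_input @ W     # predicted "correct memory" output pattern
print(target_pattern.round(2))
```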

In a second test, participants were shown a highly distinctive photographic image, followed by a short delay, and asked to identify the first photo out of four or five others on the screen. The memory trials were repeated with different images while the neural patterns were recorded during the testing process to identify and deliver correct-answer codes.

After another longer delay, Hampson’s team showed the participants sets of three pictures at a time with both an original and new photos included in the sets, and asked the patients to identify the original photos, which had been seen up to 75 minutes earlier. When stimulated with the correct-answer codes, study participants showed a 35 percent improvement in memory over baseline.

“We showed that we could tap into a patient’s own memory content, reinforce it and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.

“To date we’ve been trying to determine whether we can improve the memory skill people still have. In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail.”

The current study is built on more than 20 years of preclinical research on memory codes led by Sam Deadwyler, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, along with Hampson, Berger and Song. The preclinical work applied the same type of stimulation to restore and facilitate memory in animal models using the MIMO system, which was developed at USC.

The research was funded by the U.S. Defense Advanced Research Projects Agency (DARPA).

Story Source:

Materials provided by Wake Forest Baptist Medical Center.

This article was originally published by:
https://www.sciencedaily.com/releases/2018/03/180327194350.htm
