3 myths about the future of work (and why they’re not true)

July 18, 2018

“Will machines replace humans?” This question is on the mind of anyone with a job to lose. Daniel Susskind confronts this question and three misconceptions we have about our automated future, suggesting we ask something else: How will we distribute wealth in a world where there will be less — or even no — work?

Will Science Ever Solve the Mysteries of Consciousness, Free Will and God?

July 18, 2018

In 1967 British biologist and Nobel laureate Sir Peter Medawar famously characterized science as, in book title form, The Art of the Soluble. “Good scientists study the most important problems they think they can solve. It is, after all, their professional business to solve problems, not merely to grapple with them,” he wrote.

For millennia, the greatest minds of our species have grappled to gain purchase on the vertiginous ontological cliffs of three great mysteries—consciousness, free will and God—without ascending anywhere near the thin air of their peaks. Unlike other inscrutable problems, such as the structure of the atom, the molecular basis of replication and the causes of human violence, which have witnessed stunning advancements of enlightenment, these three seem to recede ever further away from understanding, even as we race ever faster to catch them in our scientific nets.

Are these “hard” problems, as philosopher David Chalmers characterized consciousness, or are they truly insoluble “mysterian” problems, as philosopher Owen Flanagan designated them (inspired by the 1960s rock group Question Mark and the Mysterians)? The “old mysterians” were dualists who believed in nonmaterial properties, such as the soul, that cannot be explained by natural processes. The “new mysterians,” Flanagan says, contend that consciousness can never be explained because of the limitations of human cognition. I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language. Call those of us in this camp the “final mysterians.”

Consciousness. The hard problem of consciousness is represented by the qualitative experiences (qualia) of what it is like to be something. It is the first-person subjective experience of the world through the senses and brain of the organism. It is not possible to know what it is like to be a bat (in philosopher Thomas Nagel’s famous thought experiment), because if you altered your brain and body from humanoid to batoid, you would just be a bat, not a human knowing what it feels like to be a bat. You would not be like the traveling salesman in Franz Kafka’s 1915 novella The Metamorphosis, who awakens to discover he has been transformed into a giant insect but still has human thoughts. You would just be an arthropod. By definition, only I can know my first-person experience of being me, and the same is true for you, bats and bugs.

Free will. Few scientists dispute that we live in a deterministic universe in which all effects have causes (except in quantum mechanics, although this just adds an element of randomness to the system, not freedom). And yet we all act as if we have free will—that we make choices among options and retain certain degrees of freedom within constraining systems. Either we are all delusional, or else the problem is framed to be conceptually impenetrable. We are not inert blobs of matter bandied about the pinball machine of life by the paddles of nature’s laws; we are active agents within the causal net of the universe, both determined by it and helping to determine it through our choices. That is the compatibilist position from whence volition and culpability emerge.

God. If the creator of the universe is supernatural—outside of space and time and nature’s laws—then by definition, no natural science can discover God through any measurements made by natural instruments. By definition, this God is an unsolvable mystery. If God is part of the natural world or somehow reaches into our universe from outside of it to stir the particles (to, say, perform miracles like healing the sick), we should be able to quantify such providential acts. This God is scientifically soluble, but so far all claims of such measurements have yet to exceed statistical chance. In any case, God as a natural being who is just a whole lot smarter and more powerful than us is not what most people conceive of as deific.

Although these final mysteries may not be solvable by science, they are compelling concepts nonetheless, well deserving of our scrutiny if for no other reason than it may lead to a deeper understanding of our nature as sentient, volitional, spiritual beings.

This article was originally published with the title “The Final Mysterians”

This article was originally published by: https://www.scientificamerican.com/article/will-science-ever-solve-the-mysteries-of-consciousness-free-will-and-god/

“Traveling” Brain Waves May Be Critical for Cognition

July 18, 2018

The electrical oscillations we call brain waves have intrigued scientists and the public for more than a century. But their function—and even whether they have one, rather than just reflecting brain activity like an engine’s hum—is still debated. Many neuroscientists have assumed that if brain waves do anything, it is by oscillating in synchrony in different locations. Yet a growing body of research suggests many brain waves are actually “traveling waves” that physically move through the brain like waves on the sea.

Now a new study from a team at Columbia University led by neuroscientist Joshua Jacobs suggests traveling waves are widespread in the human cortex—the seat of higher cognitive functions—and that they become more organized depending on how well the brain is performing a task. This shows the waves are relevant to behavior, bolstering previous research suggesting they are an important but overlooked brain mechanism that contributes to memory, perception, attention and even consciousness.

Brain waves were first discovered using electroencephalogram (EEG) techniques, which involve placing electrodes on the scalp. Researchers have noted activity over a range of different frequencies, from delta (0.5 to 4 hertz) through to gamma (25 to 140 Hz) waves. The slowest occur during deep sleep, with increasing frequency associated with increasing levels of consciousness and concentration. Interpreting EEG data is difficult due to its poor ability to pinpoint the location of activity, and the fact that passage through the head blurs the signals. The new study, published earlier this month in Neuron, used a more recent technique called electrocorticography (ECoG). This involves placing electrode arrays directly on the brain’s surface, minimizing distortions and vastly improving spatial resolution.

Scientists have proposed numerous possible roles for brain waves. A leading hypothesis holds that synchronous oscillations serve to “bind” information in different locations together as pertaining to the same “thing,” such as different features of a visual object (shape, color, movement, etcetera). A related idea is they facilitate the transfer of information among regions. But such hypotheses require brain waves to be synchronous, producing “standing” waves (analogous to two people swinging a jump rope up and down) rather than traveling waves (as in a crowd doing “the wave” at a sports event). This is important because traveling waves have different properties that could, for example, represent information about the past states of other brain locations. The fact they physically propagate through the brain like sound through air makes them a potential mechanism for moving information from one place to another.
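
For intuition only, here is a minimal Python sketch (not from the study; the electrode count, frequency and phase gradient are illustrative assumptions) contrasting a synchronous “standing” oscillation, in which every recording site shares the same phase, with a traveling wave, in which the phase shifts systematically across sites so the peak sweeps along the array.

```python
import numpy as np

# Illustrative toy only: electrode count, frequency and phase gradient are assumptions.
n_sites = 8                           # recording sites along one axis of the array
freq_hz = 8.0                         # an alpha-band rhythm, for example
t = np.linspace(0.0, 1.0, 1000)       # one second of simulated signal
positions = np.arange(n_sites)        # electrode index along the array

# Standing (synchronous) oscillation: every site has the same phase.
standing = np.sin(2 * np.pi * freq_hz * t[None, :] + 0.0 * positions[:, None])

# Traveling wave: phase advances with position, so the peak sweeps across sites.
phase_step = 0.5                      # radians of phase lag per electrode (assumed)
traveling = np.sin(2 * np.pi * freq_hz * t[None, :] - phase_step * positions[:, None])

# Signature of a traveling wave: the time of the first peak shifts across electrodes.
print("standing peak times :", np.round(t[np.argmax(standing, axis=1)], 3))
print("traveling peak times:", np.round(t[np.argmax(traveling, axis=1)], 3))
```

In the toy output, the traveling wave’s peak times shift steadily from one electrode to the next, while the standing wave’s do not.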

These ideas have been around for decades, but the majority of neuroscientists have paid little attention. One likely reason is that until recently most reports of traveling waves—although there are exceptions—have merely described the waves without establishing their significance. “If you ask the average systems neuroscientist, they’ll say it’s an epiphenomenon [like an engine’s hum],” says computational neuroscientist Terry Sejnowski of the Salk Institute for Biological Studies, who was not involved in the new study. “And since it has never been directly connected to any behavior or function, it’s not something that’s important.”

More at: https://www.scientificamerican.com/article/traveling-brain-waves-may-be-critical-for-cognition/

Novel antioxidant makes old blood vessels seem young again

May 31, 2018

Older adults who take a novel antioxidant that specifically targets cellular powerhouses, or mitochondria, see age-related vascular changes reverse by the equivalent of 15 to 20 years within six weeks, according to new University of Colorado Boulder research.

The study, published this week in the American Heart Association journal Hypertension, adds to a growing body of evidence suggesting pharmaceutical-grade nutritional supplements, or nutraceuticals, could play an important role in preventing heart disease, the nation’s No. 1 killer. It also resurrects the notion that oral antioxidants, which have been broadly dismissed as ineffective in recent years, could yield measurable health benefits if properly targeted, the authors say.

“This is the first clinical trial to assess the impact of a mitochondrial-specific antioxidant on vascular function in humans,” said lead author Matthew Rossman, a postdoctoral researcher in the department of integrative physiology. “It suggests that therapies like this may hold real promise for reducing the risk of age-related cardiovascular disease.”

For the study, Rossman and senior author Doug Seals, director of the Integrative Physiology of Aging Laboratory, recruited 20 healthy men and women age 60 to 79 from the Boulder area.

Half took 20 milligrams per day of a supplement called MitoQ, made by chemically altering the naturally-occurring antioxidant Coenzyme Q10 to make it cling to mitochondria inside cells.

The other half took a placebo.

After six weeks, researchers assessed how well the lining of blood vessels, or the endothelium, functioned by measuring how much subjects’ arteries dilated with increased blood flow.

Then, after a two-week “wash out” period of taking nothing, the two groups switched, with the placebo group taking the supplement, and vice versa. The tests were repeated.

The researchers found that when taking the supplement, dilation of subjects’ arteries improved by 42 percent, making their blood vessels, at least by that measure, look like those of someone 15 to 20 years younger. An improvement of that magnitude, if sustained, is associated with about a 13 percent reduction in heart disease, Rossman said. The study also showed that the improvement in dilation was due to a reduction in oxidative stress.

In participants who, under placebo conditions, had stiffer arteries, supplementation was associated with reduced stiffness.

Blood vessels grow stiff with age largely as a result of oxidative stress, the excess production of metabolic byproducts called free radicals, which can damage the endothelium and impair its function. During youth, bodies produce enough antioxidants to quench those free radicals. But with age, the balance tips, as mitochondria and other cellular processes produce excess free radicals and the body’s antioxidant defenses can’t keep up, Rossman said.

Oral antioxidant supplements like vitamin C and vitamin E fell out of favor after studies showed them to be ineffective.

“This study breathes new life into the discredited theory that supplementing the diet with antioxidants can improve health,” said Seals. “It suggests that targeting a specific source, mitochondria, may be a better way to reduce oxidative stress and improve cardiovascular health with aging.”

More information: Matthew J. Rossman et al, Chronic Supplementation With a Mitochondrial Antioxidant (MitoQ) Improves Vascular Function in Healthy Older Adults, Hypertension (2018).

This article was originally published by: https://medicalxpress.com/news/2018-04-antioxidant-blood-vessels-young.html

Can the gene and cell therapy revolution scale up?

May 31, 2018

As innovative gene and cell therapies continue to make the transition from the laboratory to the clinic, they are bringing with them the promise of truly personalised medicine. The last few years have seen the regulatory approval of the first gene therapies that take a patient’s own immune cells and genetically engineer them to target cancer cells more effectively.

These chimeric antigen receptor T-cell (CAR-T) therapies now represent a rapidly growing field, with Novartis’s Kymriah, the first CAR-T therapy approved by the US Food and Drug Administration (FDA) in August 2017 for the treatment of a rare blood cancer, seen as the tip of the iceberg for this treatment class’ potential. Approval of Kite Pharma’s Yescarta, a CAR-T treatment for certain forms of non-Hodgkin lymphoma, followed just a few months later.

Transformative potential

“This has been utterly transformative in blood cancers,” Dr Stephan Grupp, director of cancer immunotherapy at the Children’s Hospital of Philadelphia, which collaborated with Novartis on Kymriah’s development, told the New York Times. “If it can start to work in solid tumours, it will be utterly transformative for the whole field.”

CAR-T, as well as other cell and gene therapies – such as Spark Therapeutics’ Luxturna, a gene therapy for inherited vision loss that was approved by the FDA in December – are offering the prospect of step changes in the treatment of genetic diseases and some of the deadliest forms of cancer.

“The cellular immunotherapies tend to be marketed for various types of cancer; these cause fewer side effects than traditional chemotherapies and as a result can be used in combination with other treatments in typically older patients, who can struggle to cope with drug-associated toxicity,” says PharmSource healthcare analyst Adam Bradbury. “Cellular immunotherapies will also be used in refractory cancers, which have become resistant to initial therapies.”

The regulatory landscape is also encouraging for gene and cell therapies; last year the FDA issued new guidelines to accelerate the assessment and approval of cell treatment and gene therapy, and the European Medicines Agency continues to focus on the area, publishing an action plan to foster the development of advanced treatments including gene therapy and somatic cell therapy.

“I believe gene therapy will become a mainstay in treating, and maybe curing, many of our most devastating and intractable illnesses,” said FDA commissioner Dr Scott Gottlieb after Luxturna’s approval.

The viral backlog

While the long-term transformative potential of gene and cell therapies is becoming increasingly clear, it is equally obvious that bringing the cutting-edge of personalised medicine to patients comes with no shortage of roadblocks. While traditional small molecule drugs and even complex biologics can be produced at large scales, cell and gene therapies require a new level of customisability and manufacturing expertise.

Although the cell and gene therapies that have so far been introduced to the market are indicated for rare diseases with small patient populations, and thus only require relatively small-scale manufacturing, the early successes of CAR-T therapies and the exploding pharma and biotech interest in cell and gene therapies stress the need for a rapid capacity expansion to support clinical research and commercial-scale production.

Viral vectors of various kinds – the most common being lentiviral and adenoviral vectors – are used in the production of many cell and gene therapies. These disabled viruses encase the genetic material to be introduced to the target cells in the patient; the harmless viral vectors essentially infect the relevant cells to deliver the therapy. Worryingly, there is already a significant backlog of viral vector availability for gene and cell therapy developers.

“As more related biologics have been approved and researched in recent years, the demand for viral vectors has increased,” says Bradbury. “Particularly following on from the clinical trial success of CAR-T cell therapies, more pharma and biotech companies are seeking to enter the market. The manufacturing process to produce viral vectors is complex, costly and highly regulated. There is a shortage of both related manufacturing facilities and appropriately qualified staff, which has meant that demand has outstripped supply and will continue to do so.”

As contract manufacturing organisations (CMOs) struggle to build capacity and expertise in the viral vector production that forms the basis for many gene and cell therapies, Bradbury notes that there is currently an average wait time of 16 months for CMOs to start new projects, even at the smaller clinical scale. Scaling up capacity is incredibly difficult and costly; the need for Good Manufacturing Practice (GMP) facilities to grow cells while ensuring vector sterility and purity means that the regulatory burden is high.

“The cost of constructing the viral vector manufacturing facilities is prohibitively expensive, in the range of hundreds of millions of dollars,” Bradbury says. “On the regulatory front, there is difficulty establishing all aspects of GMP at early phases of clinical trials. Virus manufacture can be considered as more problematic than that of mAbs [monoclonal antibodies] and requires cryopreservation at a far lower temperature than most biologics.”

Boosting capacity, cutting costs

Almost a year and a half is a long time to wait to kick off production for a clinical trial or research project, let alone commercial-scale manufacturing, and Bradbury says the backlog is likely to increase in the short term. While many large pharma firms will have the financial clout to build or acquire their own production facilities to support gene and cell therapy programmes, those that rely on external contractors will be hit hardest.

“Smaller and medium-sized companies will be affected most by a lack of CMOs involved with cell and gene therapy manufacture,” says Bradbury. “I expect both clinical and commercial manufacture to be affected; the demand is likely to drive up prices for CMO services, which in turn will affect institutions conducting clinical research that may not have the budget to run trials.”

With the capacity crunch in full effect, developers large and small have been scrambling to secure their viral supply chain. For the production of Kymriah, Novartis partnered with UK-based gene and cell therapy specialist Oxford BioMedica back in 2013, giving Novartis access to the company’s LentiVector delivery platform, as well as its facilities and expertise.

Smaller biotechs have been spreading their bets to help ensure a steady flow of viral vectors. Bluebird Bio has adopted this kitchen-sink strategy to fuel its ambitious pipeline of experimental gene therapies for genetic diseases and cancers, led by Lenti-D for cerebral adrenoleukodystrophy and LentiGlobin for the blood disorder beta thalassaemia. On one hand, Bluebird has struck multi-year manufacturing agreements with Brammer Bio, MilliporeSigma and Belgium-based Novasep. On the other, in late 2017 the company announced that it had spent $11.5m to acquire a facility of its own in Durham, North Carolina, which it will convert into a production site for lentiviral vectors.

CMOs are also working to increase capacity, with Brammer Bio doubling its capacity in recent years, investing $50m last year alone. Shanghai-headquartered CMO WuXi AppTec opened a 150,000ft² cell and gene therapy manufacturing centre in Philadelphia, while Swiss biopharma giant Lonza is making a play to lead the space. In April, the company opened a 300,000ft² cell and gene therapy plant near Houston, Texas, the largest facility of its kind in the world. The plant will complement the company’s existing cell and gene therapy hubs in New Hampshire, the Netherlands and Singapore.

As well as increasing capacity, sustained investment in these facilities, and their underlying processes, will help address the manufacturing challenges that make these one-time treatments so expensive. Novartis’s Kymriah currently costs an eye-watering $475,000 per treatment, and as these therapies begin to target organs with larger surface areas, necessitating larger cell batches, costs at the current rate would rise to as much as $3m per patient, Oxford BioMedica chief executive John Dawson told the New York Times last year. As production processes mature and manufacturers start embracing automation, these costs will come down, making treatments affordable for health systems and commercially viable for developers.

“There is substantial scope to improve the manufacturing process,” Bradbury comments. “As a relatively novel treatment and one which is complex and costly to manufacture, there are significant issues to resolve to improve the commercial viability of a cell therapy. Quality control testing still has plenty of scope for optimisation. Cell therapy production must become automated, which should also increase manufacturing scale for commercial production. Viral vectors must also be more readily manufactured and available.”

This article was originally published by: https://www.pharmaceutical-technology.com/features/can-gene-cell-therapy-revolution-scale/

MIT’s AlterEgo headset can read words you say in your head

May 20, 2018

I don’t want to alarm you, but robots can now read your mind. Kind of.

AlterEgo is a new headset developed by MIT Media Lab. You strap it to your face. You talk to it. It talks to you. But no words are said. You say things in your head, like “what street am I on,” and it reads the signals your brain sends to your mouth and jaw, and answers the question for you.

MIT Media Lab made an explainer video that shows some of the potential of AlterEgo.

So yes, according to MIT Media Lab, you may soon be able to control your TV with your mind.

The institution explained in its announcement that AlterEgo communicates with you through bone-conduction headphones, which circumvent the ear canal by transmitting sound vibrations through your face bones. Freaky. This, MIT Media Lab said, makes it easier for AlterEgo to talk to you while you’re talking to someone else.

Plus, in trials involving 15 people, AlterEgo had an accurate transcription rate of 92 percent.

Arnav Kapur, the graduate student who led AlterEgo’s development, describes it as an “intelligence-augmentation device.”

“We basically can’t live without our cellphones, our digital devices,” said Pattie Maes, Kapur’s thesis advisor at MIT Media Lab. “But at the moment, the use of those devices is very disruptive.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

This article was originally published by: https://www.cnet.com/news/mit-alterego-headset-can-read-words-you-say-in-your-head/

Revolutionary 3D nanohybrid lithium-ion battery could allow for charging in just seconds

May 20, 2018

Cornell University engineers have designed a revolutionary 3D lithium-ion battery that could be charged in just seconds.

In a conventional battery, the anode and cathode* (the two sides of a battery connection) are stacked in separate columns. For the new design, the engineers instead used thousands of nanoscale (ultra-tiny) anodes and cathodes.

Putting those thousands of anodes and cathodes just 20 nanometers (billionths of a meter) apart allows extremely fast charging (in seconds or less) and lets the battery hold more power for longer.
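
To see why the tiny electrode spacing matters, here is a rough back-of-the-envelope sketch (my own illustration, not a calculation from the Cornell paper): the time for an ion to diffuse across a gap scales roughly as t ≈ L²/D, so shrinking the gap by a factor of a thousand cuts the crossing time by roughly a factor of a million. The diffusivity used below is only an assumed order of magnitude for lithium ions in an electrolyte.

```python
# Back-of-the-envelope ion-transport times, t ~ L^2 / D.
# D is an assumed order-of-magnitude diffusivity, not a value from the paper.
D = 1e-12                              # m^2/s, rough Li-ion diffusivity in a solid electrolyte

for gap_nm in (20, 1000, 20000):       # 20 nm (this design) vs. micrometer-scale gaps
    L = gap_nm * 1e-9                  # convert nanometers to meters
    t = L**2 / D                       # characteristic diffusion time in seconds
    print(f"gap = {gap_nm:>6} nm  ->  ~{t:.2e} s to cross")
```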

The anode was made of a self-assembling (automatically grown) thin-film carbon material with thousands of regularly spaced pores (openings), each about 40 nanometers wide. The pores were coated with a 10-nanometer-thick electrolyte* material that is electronically insulating but conducts ions (an ion is an atom or molecule with an electrical charge; ions, rather than electrons, are what flow inside a battery). The cathode was made from sulfur. (credit: Cornell University)

In addition, unlike traditional batteries, the new battery’s electrolyte material has no pinholes (tiny holes), which can short-circuit a battery and give rise to fires in mobile devices such as cellphones and laptops.

The engineers are still perfecting the technique, but they have applied for patent protection on the proof-of-concept work, which was funded by the U.S. Department of Energy and in part by the National Science Foundation.

Reference: Energy & Environmental Science (open access with registration), March 9, 2018. Source: Cornell University, May 16, 2018.

* How batteries work

Batteries have three parts: an anode (-) and a cathode (+), the negative and positive sides at either end of a traditional battery, which are hooked up to an electrical circuit; and the electrolyte, which keeps the anode and cathode apart and allows ions (electrically charged atoms or molecules) to flow. (credit: Northwestern University Qualitative Reasoning Group)

This article was originally published by:  http://www.kurzweilai.net/revolutionary-3d-nanohybrid-lithium-ion-battery-could-allow-for-charging-in-just-seconds?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=50d67a312d-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-50d67a312d-282129417

Is nature continuous or discrete? How the atomist error was born

May 20, 2018

The modern idea that nature is discrete originated in Ancient Greek atomism. Leucippus, Democritus and Epicurus all argued that nature was composed of what they called ἄτομος (átomos) or ‘indivisible individuals’. Nature was, for them, the totality of discrete atoms in motion. There was no creator god, no immortality of the soul, and nothing static (except for the immutable internal nature of the atoms themselves). Nature was atomic matter in motion and complex composition – no more, no less.

Despite its historical influence, however, atomism was eventually all but wiped out by Platonism, Aristotelianism and the Christian tradition that followed throughout the Middle Ages. Plato told his followers to destroy Democritus’ books whenever they found them, and later the Christian tradition made good on this demand. Today, nothing but a few short letters from Epicurus remain.

Atomism was not finished, however. It reemerged in 1417, when an Italian book-hunter named Poggio Bracciolini discovered a copy of an ancient poem in a remote monastery: De Rerum Natura (On the Nature of Things), written by Lucretius (c99-55 BCE), a Roman poet heavily influenced by Epicurus. This book-length philosophical poem in epic verse puts forward the most detailed and systematic account of ancient materialism that we’ve been fortunate enough to inherit. In it, Lucretius advances a breathtakingly bold theory on foundational issues in everything from physics to ethics, aesthetics, history, meteorology and religion. Against the wishes and best efforts of the Christian church, Bracciolini managed to get it into print, and it soon circulated across Europe.

This book was one of the most important sources of inspiration for the scientific revolution of the 16th and 17th centuries. Nearly every Renaissance and Enlightenment intellectual read it and became an atomist to some degree (they often made allowances for God and the soul). Indeed, this is the reason why, to make a long and important story very short, science and philosophy even today still tend to look for and assume a fundamental discreteness in nature. Thanks in no small part to Lucretius’ influence, the search for discreteness became part of our historical DNA. The interpretive method and orientation of modern science in the West literally owe their philosophical foundations to ancient atomism via Lucretius’ little book on nature. Lucretius, as Stephen Greenblatt says in his book The Swerve (2011), is ‘how the world became modern’.

There is a problem, however. If this story is true, then modern Western thought is based on a complete misreading of Lucretius’ poem. It was not a wilful misreading, of course, but one in which readers committed the simple error of projecting what little they knew second-hand about Greek atomism (mostly from the testimonia of its enemies) onto Lucretius’ text. They assumed a closer relationship between Lucretius’ work and that of his predecessors than actually exists. Crucially, they inserted the words ‘atom’ and ‘particle’ into the translated text, even though Lucretius never used them. Not even once! A rather odd omission for a so-called ‘atomist’ to make, no? Lucretius could easily have used the Latin words atomus (smallest particle) or particula (particle), but he went out of his way not to. Despite his best efforts, however, the two very different Latin terms he did use, corpora (matters) and rerum (things), were routinely translated and interpreted as synonymous with discrete ‘atoms’.

Further, the moderns either translated out or ignored altogether the nearly ubiquitous language of continuum and folding used throughout his book, in phrases such as ‘solida primordia simplicitate’ (simplex continuum). As a rare breed of scholar interested in both classical texts and quantum physics, the existence of this material continuum in the original Latin struck me quite profoundly. I have tried to show all of this in my recent translation and commentary, Lucretius I: An Ontology of Motion (2018), but here is the punchline: this simple but systematic and ubiquitous interpretive error constitutes what might well be the single biggest mistake in the history of modern science and philosophy.

This mistake sent modern science and philosophy on a 500-year quest for what Sean Carroll in his 2012 book called the ‘particle at the end of the universe’. It gave birth to the laudable virtues of various naturalisms and materialisms, but also to less praiseworthy mechanistic reductionisms, patriarchal rationalisms, and the overt domination of nature by humans, none of which can be found in Lucretius’ original Latin writings. What’s more, even when confronted with apparently continuous phenomena such as gravity, electric and magnetic fields, and eventually space-time, Isaac Newton, James Maxwell and even Albert Einstein fell back on the idea of an atomistic ‘aether’ to explain them. All the way back to the ancients, aether was thought to be a subtle fluid-like substance composed of insensibly tiny particles. Today, we no longer believe in the aether or read Lucretius as an authoritative scientific text. Yet in our own way, we still confront the same problem of continuity vs discreteness originally bequeathed to us by the moderns: in quantum physics.

Theoretical physics today is at a critical turning point. General relativity and quantum field theory are the two biggest parts of what physicists now call ‘the standard model’, which has enjoyed incredible predictive success. The problem, however, is that they have not yet been unified as two aspects of one overarching theory. Most physicists think that such unification is only a matter of time, even though the current theoretical frontrunners (string theory and loop quantum gravity) have yet to produce experimental confirmations.

Quantum gravity is of enormous importance. According to its proponents, it stands poised to show the world that the ultimate fabric of nature (space-time) is not continuous at all, but granular, and fundamentally discrete. The atomist legacy might finally be secured, despite its origins in an interpretive error.

There is just one nagging problem: quantum field theory claims that all discrete quanta of energy (particles) are merely the excitations or fluctuations in completely continuous quantum fields. Fields are not fundamentally granular. For quantum field theory, everything might be made of granules, but all granules are made of folded-up continuous fields that we simply measure as granular. This is what physicists call ‘perturbation theory’: the discrete measure of that which is infinitely continuous and so ‘perturbs one’s complete discrete measurement’, as Frank Close puts it in The Infinity Puzzle (2011). Physicists also have a name for the sub-granular movement of this continuous field: ‘vacuum fluctuations’. Quantum fields are nothing but matter in constant motion (energy and momentum). They are therefore never ‘nothing’, but more like a completely positive void (the flux of the vacuum itself) or an undulating ocean (appropriately called ‘the Dirac sea’) in which all discrete things are its folded-up bubbles washed ashore, as Carlo Rovelli puts it in Reality Is Not What it Seems (2016). Discrete particles, in other words, are folds in continuous fields.

The answer to the central question at the heart of modern science, ‘Is nature continuous or discrete?’ is as radical as it is simple. Space-time is not continuous because it is made of quantum granules, but quantum granules are not discrete because they are folds of infinitely continuous vibrating fields. Nature is thus not simply continuous, but an enfolded continuum.

This brings us right back to Lucretius and our original error. Working at once within and against the atomist tradition, Lucretius put forward the first materialist philosophy of an infinitely continuous nature in constant flux and motion. Things, for Lucretius, are nothing but folds (duplex), pleats (plex), bubbles or pores (foramina) in a single continuous fabric (textum) woven by its own undulations. Nature is infinitely turbulent or perturbing, but it also washes ashore, like the birth of Venus, in meta-stable forms – as Lucretius writes in the opening lines of De Rerum Natura: ‘Without you [Venus] nothing emerges into the sunlit shores of light.’ It has taken 2,000 years, but perhaps Lucretius has finally become our contemporary.

This article was originally published by: https://aeon.co/ideas/is-nature-continuous-or-discrete-how-the-atomist-error-was-born?utm_source=Aeon+Newsletter&utm_campaign=bb63ea6739-EMAIL_CAMPAIGN_2018_05_16&utm_medium=email&utm_term=0_411a82e59d-bb63ea6739-70411565

A new generation of brain-like computers comes of age

May 17, 2018

Conventional computer chips aren’t up to the challenges posed by next-generation autonomous drones and medical implants. Kwabena Boahen has laid out a way forward.

For five decades, Moore’s law held up pretty well: Roughly every two years, the number of transistors one could fit on a chip doubled, all while costs steadily declined.

Today, however, transistors and other electronic components are so small they’re beginning to bump up against fundamental physical limits on their size. Moore’s law has reached its end, and it’s going to take something different to meet the need for computing that is ever faster, cheaper and more efficient.

As it happens, Kwabena Boahen, a professor of bioengineering and of electrical engineering, has a pretty good idea what that something more is: brain-like, or neuromorphic, computers that are vastly more efficient than the conventional digital computers we’ve grown accustomed to.

This is not a vision of the future, Boahen said. As he lays out in the latest issue of Computing in Science and Engineering, the future is now.

30 years in the making

It’s a moment Boahen has been working toward his entire adult life, and then some. He first got interested in computers as a teenager growing up in Ghana. But the more he learned, the more traditional computers looked like a giant, inelegant mess of memory chips and processors connected by weirdly complicated wiring.

Both the need for something new and the first ideas for what that would look like crystalized in the mid-1980s. Even then, Boahen said, some researchers could see the end of Moore’s law on the horizon. As transistors continued to shrink, they would bump up against fundamental physical limits on their size. Eventually, they’d get so small that only a single lane of electron traffic could get through under the best circumstances. What had once been electron superfreeways would soon be tiny mountain roads, and while that meant engineers could fit more components on a chip, those chips would become more and more unreliable.

At around the same time, Boahen and others came to understand that the brain had enormous computing power – orders of magnitude more than what people have built, even today – even though it used vastly less energy and was built from remarkably unreliable components: neurons.

How does the brain do it?

While others have built brain-inspired computers, Boahen said, he and his collaborators have developed a five-point prospectus – manifesto might be the better word – for how to build neuromorphic computers that directly mimic in silicon what the brain does in flesh and blood.

The first two points of the prospectus concern neurons themselves, which unlike computers operate in a mix of digital and analog mode. In their digital mode, neurons send discrete, all-or-nothing signals in the form of electrical spikes, akin to the ones and zeros of digital computers. But they process incoming signals by adding them all up and firing only once a threshold is reached – more akin to a dial than a switch.
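
That threshold-and-spike behavior is commonly captured by the textbook leaky integrate-and-fire model. The sketch below is that generic model with arbitrary parameters, not Boahen’s silicon circuit: the membrane variable accumulates its inputs continuously (the dial), and only a threshold crossing produces a discrete, all-or-nothing spike (the switch).

```python
import numpy as np

# Leaky integrate-and-fire neuron: a standard textbook model, not Boahen's chip design.
# All parameters below are arbitrary illustrative choices.
dt, tau = 1e-3, 20e-3          # time step and membrane time constant (seconds)
threshold, v_reset = 1.0, 0.0  # firing threshold and post-spike reset (arbitrary units)

rng = np.random.default_rng(0)
inputs = rng.uniform(0.0, 2.5, size=1000)   # noisy incoming drive at each time step

v, spikes = 0.0, []
for i, drive in enumerate(inputs):
    # Analog part: the membrane potential leaks toward zero and integrates its input.
    v += dt * (-v / tau + drive / tau)
    # Digital part: an all-or-nothing spike only when the threshold is crossed.
    if v >= threshold:
        spikes.append(i * dt)
        v = v_reset

print(f"{len(spikes)} spikes in 1 s of simulated input")
```

Boahen’s argument is that transistors operated in a similar mixed analog-digital regime can perform this kind of computation far more efficiently than simulating it in purely digital hardware.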

That observation led Boahen to try using transistors in a mixed digital-analog mode. Doing so, it turns out, makes chips both more energy efficient and more robust when the components do fail, as about 4 percent of the smallest transistors are expected to do.

From there, Boahen builds on neurons’ hierarchical organization, distributed computation and feedback loops to create a vision of an even more energy efficient, powerful and robust neuromorphic computer.

The future of the future

But it’s not just a vision. Over the last 30 years, Boahen’s lab has actually implemented most of their ideas in physical devices, including Neurogrid, one of the first truly neuromorphic computers. In another two or three years, Boahen said, he expects they will have designed and built computers implementing all of the prospectus’s five points.

Don’t expect those computers to show up in your laptop anytime soon, however. Indeed, that’s not really the point – most personal computers operate nowhere near the limits on conventional chips. Neuromorphic computers would be most useful in embedded systems that have extremely tight energy requirements, such as very low-power neural implants or on-board computers in autonomous drones.

“It’s complementary,” Boahen said. “It’s not going to replace current computers.”

The other challenge: getting others, especially chip manufacturers, on board. Boahen is not the only one thinking about what to do about the end of Moore’s law or looking to the brain for ideas. IBM’s TrueNorth, for example, takes cues from neural networks to produce a radically more efficient computer architecture. On the other hand, it remains fully digital, and, Boahen said, 20 times less efficient than Neurogrid would be had it been built with TrueNorth’s 28-nanometer transistors.

This article was originally published by: https://engineering.stanford.edu/magazine/article/new-generation-brain-computers-comes-age

DNA Robots Target Cancer

May 17, 2018

DNA nanorobots that travel the bloodstream, find tumors, and dispense a protein that causes blood clotting trigger the death of cancer cells in mice, according to a study published today (February 12) in Nature Biotechnology.

The authors have “demonstrated that it’s indeed possible to do site-specific drug delivery using biocompatible, biodegradable, DNA-based bionanorobots for cancer therapeutics,” says Suresh Neethirajan, a bioengineer at the University of Guelph in Ontario, Canada, who did not participate in the study. “It’s a combination of diagnosing the biomarkers on the surface of the cancer itself and also, upon recognizing that, delivering the specific drug to be able to treat it.”

The international team of researchers started with the goal of “finding a path to design nanorobots that can be applied to treatment of cancer in human[s],” writes coauthor Hao Yan of Arizona State University in an email to The Scientist.

Yan and colleagues first generated a self-assembling, rectangular, DNA-origami sheet to which they linked thrombin, an enzyme responsible for blood clotting. Then, they used DNA fasteners to join the long edges of the rectangle, resulting in a tubular nanorobot with thrombin on the inside. The authors designed the fasteners to dissociate when they bind nucleolin—a protein specific to the surface of tumor blood-vessel cells—at which point, the tube opens and exposes its cargo.
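
Conceptually, the design is a molecular if-then: the DNA fasteners keep the tube closed, and binding nucleolin is the condition that opens it and exposes the thrombin. As a loose software analogy only (the class and field names are invented for illustration and do not come from the paper):

```python
# Toy analogy of the nanorobot's trigger logic; all names here are illustrative only.
from dataclasses import dataclass

@dataclass
class Cell:
    has_nucleolin: bool   # nucleolin marks tumor blood-vessel cells

class DnaNanorobot:
    def __init__(self):
        self.closed = True          # DNA fasteners hold the tube shut
        self.payload = "thrombin"   # clotting enzyme carried inside

    def encounter(self, cell: Cell):
        # Fasteners dissociate only when they bind nucleolin,
        # so the payload is exposed only at tumor vasculature.
        if self.closed and cell.has_nucleolin:
            self.closed = False
            return self.payload     # thrombin exposed -> local clotting
        return None                 # healthy vessels: tube stays closed

robot = DnaNanorobot()
print(robot.encounter(Cell(has_nucleolin=False)))  # None: no release in normal vessels
print(robot.encounter(Cell(has_nucleolin=True)))   # 'thrombin': released at the tumor
```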

Nanorobot design: thrombin is represented in pink and nucleolin in blue. (S. Li et al., Nature Biotechnology, 2018)

The scientists next injected the nanorobots intravenously into nude mice with human breast cancer tumors. The robots grabbed onto vascular cells at tumor sites and caused extensive blood clots in the tumors’ vessels within 48 hours, but did not cause clotting elsewhere in the animals’ bodies. These blood clots led to tumor-cell necrosis, resulting in smaller tumors and a better chance for survival compared to control mice. Yan’s team also found that nanorobot treatment increased survival and led to smaller tumors in a mouse model of melanoma, and in mice with xenografts of human ovarian cancer cells.

The authors are “looking at specific binding to tumor cells, which is basically the holy grail for . . . cancer therapy,” says the University of Tennessee’s Scott Lenaghan, who was not involved in the work. The next step is to investigate any damage—such as undetected clots or immune-system responses—in the host organism, he says, as well as to determine how much thrombin is actually delivered at the tumor sites.

The authors showed in the study that the nanorobots didn’t cause clotting in major tissues in miniature pigs, which satisfies some safety concerns, but Yan agrees that more work is needed. “We are interested in looking further into the practicalities of this work in mouse models,” he writes.

Going from “a mouse model to humans is a huge step,” says Mauro Ferrari, a biomedical engineer at Houston Methodist Hospital and Weill Cornell Medical College who did not participate in the study. It’s not yet clear whether targeting nucleolin and delivering thrombin will be clinically relevant, he says, “but the breakthrough aspect is [that] this is a platform. They can use a similar approach for other things, which is really exciting. It’s got big implications.”

S. Li et al., “A DNA nanorobot functions as a cancer therapeutic in response to a molecular trigger in vivo,” Nature Biotechnology, doi:10.1038/nbt.4071, 2018.

This article was originally published by: https://www.the-scientist.com/?articles.view/articleNo/51717/title/DNA-Robots-Target-Cancer/