Singularity Hub Daily


Author: Singularity Hub

Subscribed: 76
Played: 11,091

Description

A constant stream of SingularityHub's high-quality articles, read to you via an AI system.
50 Episodes
AI is continuously taking on new challenges, from detecting deepfakes (which, incidentally, are also made using AI) to winning at poker to giving synthetic biology experiments a boost. These impressive feats result partly from the huge datasets the systems are trained on. That training is costly and time-consuming, and it yields AIs that can really only do one thing well. For example, to train an AI to differentiate between a picture of a dog and one of a cat, it’s fed thousands—if not millions—of labeled images of dogs and cats. A child, on the other hand, can see a dog or cat just once or twice and remember which is which. How can we make AIs learn more like children do? A team at the University of Waterloo in Ontario has an answer: change the way AIs are trained. Here’s the thing about the datasets normally used to train AI—besides being huge, they’re highly specific. A picture of a dog can only be a picture of a dog, right? But what about a really small dog with a long-ish tail? That sort of dog, while still being a dog, looks more like a cat than, say, a fully-grown Golden Retriever. It’s this concept that the Waterloo team’s methodology is based on. They described their work in a paper published on the pre-print (or non-peer-reviewed) server arXiv last month. Teaching an AI system to identify a new class of objects using just one example is what they call “one-shot learning.” But they take it a step further, focusing on “less than one shot learning,” or LO-shot learning for short. LO-shot learning consists of a system learning to classify various categories based on a number of examples that’s smaller than the number of categories. That’s not the most straightforward concept to wrap your head around, so let’s go back to the dogs and cats example. Say you want to teach an AI to identify dogs, cats, and kangaroos. How could that possibly be done without several clear examples of each animal? The key, the Waterloo team says, is in what they call soft labels. Unlike hard labels, which label a data point as belonging to one specific class, soft labels tease out the relationship or degree of similarity between that data point and multiple classes. In the case of an AI trained on only dogs and cats, a third class of objects, say, kangaroos, might be described as 60 percent like a dog and 40 percent like a cat (I know—kangaroos probably aren’t the best animal to have thrown in as a third category). “Soft labels can be used to represent training sets using fewer prototypes than there are classes, achieving large increases in sample efficiency over regular (hard-label) prototypes,” the paper says. Translation? Tell an AI a kangaroo is some fraction cat and some fraction dog—both of which it’s seen and knows well—and it’ll be able to identify a kangaroo without ever having seen one. If the soft labels are nuanced enough, you could theoretically teach an AI to identify a large number of categories based on a much smaller number of training examples. The paper’s authors use a simple machine learning algorithm called k-nearest neighbors (kNN) to explore this idea more in depth. The algorithm operates under the assumption that similar things are most likely to exist near each other; if you go to a dog park, there will be lots of dogs but no cats or kangaroos. Go to the Australian grasslands and there’ll be kangaroos but no cats or dogs. And so on. To train a kNN algorithm to differentiate between categories, you choose specific features to represent each category (i.e. 
for animals you could use weight or size as a feature). With one feature on the x-axis and the other on the y-axis, the algorithm creates a graph where data points that are similar to each other are clustered near each other. A line down the center divides the categories, and it’s pretty straightforward for the algorithm to discern which side of the line new data points should fall on. The Waterloo team kept it simple and used plots of color on a...
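To make the soft-label idea concrete, here is a minimal sketch of a distance-weighted kNN classifier in Python. This is not the Waterloo team's code: the feature values, animals, and label fractions below are invented purely for illustration. The point is that two prototypes carrying soft labels over three classes can carve out a region of feature space where the never-seen "kangaroo" class wins.

```python
# A toy illustration of "less than one shot" learning with soft labels.
# Two prototypes (a dog-like point and a cat-like point) carry soft labels over
# THREE classes, so the classifier can name a class it has no dedicated example of.
import numpy as np

# Prototype features: [weight_kg, tail_length_cm] -- invented values.
prototypes = np.array([
    [30.0, 30.0],   # a typical dog
    [4.0,  25.0],   # a typical cat
])

# Soft labels over (dog, cat, kangaroo); each row sums to 1.
soft_labels = np.array([
    [0.6, 0.0, 0.4],   # the dog prototype is mostly dog, partly kangaroo
    [0.0, 0.6, 0.4],   # the cat prototype is mostly cat, partly kangaroo
])
classes = ["dog", "cat", "kangaroo"]

def soft_knn_predict(x, prototypes, soft_labels, eps=1e-9):
    """Distance-weighted vote over soft labels (a 'soft' variant of kNN)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    weights = 1.0 / (dists + eps)      # closer prototypes count more
    weights /= weights.sum()
    scores = weights @ soft_labels     # blend the prototypes' soft labels
    return classes[int(np.argmax(scores))], scores

print(soft_knn_predict(np.array([29.0, 31.0]), prototypes, soft_labels))  # near the dog prototype -> "dog"
print(soft_knn_predict(np.array([17.0, 27.5]), prototypes, soft_labels))  # midway between them -> "kangaroo"
```

A query point close to the dog prototype comes out "dog," but a point sitting between the two prototypes is labeled "kangaroo" even though the classifier was never given a kangaroo example—two prototypes, three classes.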
Have you ever used Google Assistant, Apple’s Siri, or Amazon Alexa to make decisions for you? Perhaps you asked it what new movies have good reviews, or to recommend a cool restaurant in your neighborhood. Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine. Although AI technology has miles to go to develop social skills on par with ours, some AI has shown impressive language understanding and can complete relatively complex interactive tasks. In several 2018 demonstrations, Google’s AI made haircut and restaurant reservations without receptionists realizing they were talking with a non-human. It’s likely the AI capabilities developed by tech giants such as Amazon and Google will only grow more capable of influencing us in the future. But What Do We Actually Find Persuasive? My colleague Adam Duhachek and I found AI messages are more persuasive when they highlight “how” an action should be performed, rather than “why.” For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen. We found people generally don’t believe a machine can understand human goals and desires. Take Google’s AlphaGo, an algorithm designed to play the board game Go. Few people would say the algorithm can understand why playing Go is fun, or why it’s meaningful to become a Go champion. Rather, it just follows a pre-programmed algorithm telling it how to move on the game board. Our research suggests people find AI’s recommendations more persuasive in situations where AI shows easy steps on how to build personalized health insurance, how to avoid a lemon car, or how to choose the right tennis racket for you, rather than why any of these are important to do in a human sense. Does AI Have Free Will? Most of us believe humans have free will. We compliment someone who helps others because we think they do it freely, and we penalize those who harm others. What’s more, we are willing to lessen the criminal penalty if the person was deprived of free will, for instance if they were in the grip of a schizophrenic delusion. But do people think AI has free will? We did an experiment to find out. Someone is given $100 and offers to split it with you. They’ll get $80 and you’ll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair. Surely we should get $50, right? But what if the proposer is an AI? In a research project yet to be published, my colleagues and I found the rejection ratio drops significantly. In other words, people are much more likely to accept this “unfair” offer if proposed by an AI. This is because we don’t think an AI developed to serve humans has a malicious intent to exploit us—it’s just an algorithm, it doesn’t have free will, so we might as well just accept the $20. The fact that people could accept unfair offers from AI concerns me, because it might mean this phenomenon could be used maliciously. For example, a mortgage loan company might try to charge unfairly high interest rates by framing the decision as being calculated by an algorithm. Or a manufacturing company might manipulate workers into accepting unfair wages by saying it was a decision made by a computer. 
To protect consumers, we need to understand when people are vulnerable to manipulation by AI. Governments should take this into account when considering regulation of AI. We’re Surprisingly Willing to Divulge to AI In other work yet to be published, my colleagues and I found people tend to disclose their personal information and embarrassing experiences more willingly to an AI than a human. We told participants to imagine they’re at the doctor for a ...
Machine learning is taking medical diagnosis by storm. From eye disease, breast and other cancers, to more amorphous neurological disorders, AI is routinely matching physicians’ performance, if not beating them outright. Yet how much can we take those results at face value? When it comes to life and death decisions, when can we put our full trust in enigmatic algorithms—“black boxes” that even their creators cannot fully explain or understand? The problem gets more complex as medical AI crosses multiple disciplines and developers, including both academic and industry powerhouses such as Google, Amazon, or Apple, with disparate incentives. This week, the two sides battled it out in a heated duel in one of the most prestigious science journals, Nature. On one side are prominent AI researchers at the Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard, MIT, and others. On the other side is the titan Google Health. The trigger was an explosive study from Google Health on breast cancer screening, published in January this year. The study claimed to have developed an AI system that vastly outperformed radiologists at diagnosing breast cancer and could be generalized to populations beyond those used for training—a holy grail of sorts that’s incredibly difficult due to the lack of large medical imaging datasets. The study made waves across the media landscape, and created a buzz in the public sphere for medical AI’s “coming of age.” The problem, the academics argued, is that the study lacked sufficient descriptions of the code and model for others to replicate it. In other words, we can only take the study at its word—something that’s just not done in scientific research. Google Health, in turn, penned a polite, nuanced, but assertive rebuttal arguing for its need to protect patient information and shield the AI from malicious attacks. Academic exchanges like these form the bedrock of science, and may seem incredibly nerdy and outdated—especially because rather than online channels, the two sides resorted to a centuries-old pen-and-paper discussion. By doing so, however, they elevated a necessary debate to a broad worldwide audience, each side landing solid punches that, in turn, could lay the basis of a framework for trust and transparency in medical AI—to the benefit of all. Now if they could only rap their arguments in the vein of Hamilton and Jefferson’s Cabinet Battles in Hamilton. Academics, You Have the Floor It’s easy to see where the academics’ arguments come from. Science is often painted as a holy endeavor embodying objectivity and truth. But like any discipline touched by people, it’s prone to errors, poor designs, unintentional biases, or—in very small numbers—conscious manipulation to skew the results. Because of this, when publishing results, scientists carefully describe their methodology so others can replicate the findings. If a conclusion, say that a vaccine protects against Covid-19, holds up in nearly every lab regardless of the scientist, the material, or the subjects, then we have stronger proof that the vaccine actually works. If not, it means the initial study may be wrong—and scientists can then delineate why and move on. Replication is critical to healthy scientific progress. But AI research is shredding the dogma. “In computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress,” said author Dr.
Benjamin Haibe-Kains at Princess Margaret Cancer Centre. For example, nuances in computer code or training samples and parameters could dramatically change training and evaluation of results—aspects that can’t be easily described using text alone, as is the norm. The consequence, said the team, is that it makes trying to verify the complex computational pipeline “not possible.” (For academics, that’s the equivalent of gloves off.) Although the academics took Goo...
Superconductivity could be the key to groundbreaking new technologies in energy, computing, and transportation, but so far it only occurs in materials chilled close to absolute zero. Now researchers have created the first ever room-temperature superconductor. As a current passes through a conductor, it experiences resistance, which saps away useful energy into waste heat and limits the efficiency of all of the modern world’s electronics. But in 1911, Dutch physicist Heike Kamerlingh Onnes discovered that this doesn’t have to be the case. When he cooled mercury wire to just above absolute zero, the resistance abruptly disappeared. Over the next few decades superconductivity was found in other super-cooled materials, and in 1933 researchers discovered that superconductors also expel magnetic fields. That means that external magnetic fields, which normally pass through just about anything, can’t penetrate the superconductor’s interior and remain at its surface. These two qualities open up a whole host of possibilities, including lossless power lines and electronic circuits, ultra-sensitive sensors, and incredibly powerful magnets that could be used to levitate trains or make super-efficient turbines. Superconductors are at the heart of some of today’s most cutting-edge technologies, from quantum computers to MRI scanners and the Large Hadron Collider. The only problem is that they require bulky, costly, and energy-sapping cooling equipment that severely limits where they can be used. But now researchers from the University of Rochester have demonstrated superconductivity at the comparatively balmy temperature of 15 degrees Celsius. “Because of the limits of low temperature, materials with such extraordinary properties have not quite transformed the world in the way that many might have imagined,” said lead researcher Ranga Dias in a press release. “Our discovery will break down these barriers and open the door to many potential applications.” The breakthrough, described in a paper in Nature, comes with some substantial caveats, though. The team was only able to create a tiny amount of the material, roughly the same volume as a single droplet from an inkjet printer. And to get it to superconduct they had to squeeze it between two diamonds to create pressures equivalent to three-quarters of those found at the center of the Earth. The researchers are also still unclear about the exact nature of the material they have made. They combined a mixture of hydrogen, carbon, and sulfur, then fired a laser at it to trigger a chemical reaction and create a crystal. But because all these elements have very small atoms, it’s not been possible to work out how they are arranged or what the material’s chemical formula might be. Nonetheless, the result is a major leap forward for high-temperature superconductors. It follows a string of advances built on the back of Cornell University physicist Neil Ashcroft’s predictions that hydrogen-rich materials are a promising route to room-temperature superconductivity, and it has blown the previous record of -13°C out of the water. For the discovery to ever have practical applications, though, the researchers will have to find a way to reduce the pressure required to achieve superconductivity. That will require a better understanding of the properties of the material they’ve created, but they suggest there is lots of scope for tuning their recipe to get closer to ambient pressures.
How soon that could happen is anyone’s guess, but the researchers seem confident and have created a startup called Unearthly Materials to commercialize their work. If they get their way, electrical resistance may soon be a thing of the past. Image Credit: Gerd Altmann from Pixabay
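For readers who want the physics spelled out, the “waste heat” the article mentions is ordinary Joule heating; the relation below is textbook material, not something from the Nature paper:

```latex
P_{\text{loss}} = I^{2} R
\qquad\Longrightarrow\qquad
R \to 0 \ \text{(superconducting state)} \ \Rightarrow\ P_{\text{loss}} \to 0
```

With resistance gone, a current of any size dissipates no power, which is why lossless power lines and persistent magnet currents become possible.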
When did something like us first appear on the planet? It turns out there’s remarkably little agreement on this question. Fossils and DNA suggest people looking like us, anatomically modern Homo sapiens, evolved around 300,000 years ago. Surprisingly, archaeology—tools, artifacts, cave art—suggests that complex technology and cultures, “behavioral modernity,” evolved more recently: 50,000 to 65,000 years ago. Some scientists interpret this as suggesting the earliest Homo sapiens weren’t entirely modern. Yet the different data track different things. Skulls and genes tell us about brains, artifacts about culture. Our brains probably became modern before our cultures. The “Great Leap” For 200,000 to 300,000 years after Homo sapiens first appeared, tools and artifacts remained surprisingly simple, little better than Neanderthal technology, and simpler than those of modern hunter-gatherers such as certain indigenous Americans. Starting about 65,000 to 50,000 years ago, more advanced technology started appearing: complex projectile weapons such as bows and spear-throwers, fishhooks, ceramics, sewing needles. People made representational art—cave paintings of horses, ivory goddesses, lion-headed idols, showing artistic flair and imagination. A bird-bone flute hints at music. Meanwhile, the arrival of humans in Australia 65,000 years ago shows we’d mastered seafaring. This sudden flourishing of technology is called the “great leap forward,” supposedly reflecting the evolution of a fully modern human brain. But fossils and DNA suggest that human intelligence became modern far earlier. Anatomical Modernity Bones of primitive Homo sapiens first appear 300,000 years ago in Africa, with brains as large as or larger than ours. They’re followed by anatomically modern Homo sapiens at least 200,000 years ago, and brain shape became essentially modern by at least 100,000 years ago. At this point, humans had braincases similar in size and shape to ours. Assuming the brain was as modern as the box that held it, our African ancestors theoretically could have discovered relativity, built space telescopes, written novels and love songs. Their bones say they were just as human as we are. Because the fossil record is so patchy, fossils provide only minimum dates. Human DNA suggests even earlier origins for modernity. Comparing genetic differences between DNA in modern people and ancient Africans, it’s estimated that our ancestors lived 260,000 to 350,000 years ago. All living humans descend from those people, suggesting that we inherited the fundamental commonalities of our species, our humanity, from them. All their descendants—Bantu, Berber, Aztec, Aboriginal, Tamil, San, Han, Maori, Inuit, Irish—share certain peculiar behaviors absent in other great apes. All human cultures form long-term pair bonds between men and women to care for children. We sing and dance. We make art. We preen our hair, adorn our bodies with ornaments, tattoos and makeup. We craft shelters. We wield fire and complex tools. We form large, multigenerational social groups with dozens to thousands of people. We cooperate to wage war and help each other. We teach, tell stories, trade. We have morals, laws. We contemplate the stars, our place in the cosmos, life’s meaning, what follows death. The details of our tools, fashions, families, morals and mythologies vary from tribe to tribe and culture to culture, but all living humans show these behaviors. That suggests these behaviors—or at least, the capacity for them—are innate.
These shared behaviors unite all people. They’re the human condition, what it means to be human, and they result from shared ancestry. We inherited our humanity from peoples in southern Africa 300,000 years ago. The alternative—that everyone, everywhere coincidentally became fully human in the same way at the same time, starting 65,000 years ago—isn’t impossible, but a single origin is more likely. The Network Effect Archaeology and...
If you’ve seen the movie The Martian, you no doubt remember the rescue scene, in which (spoiler alert!) Matt Damon launches himself off Mars in a stripped-down rocket in hopes of his carefully-calculated trajectory taking him just close enough to his crew for them to pluck him from the void of outer space and bring him safely home to Earth. There’s a multitude of complex physics involved, and who knows how true-to-science the scene is, but getting the details right to successfully grab something in space certainly isn’t easy. So it will be fascinating to watch NASA attempt to do just that, as its OSIRIS-REx spacecraft attempts to pocket a fistful of rock and dust from an asteroid called Bennu, then ferry it back to Earth—with the whole endeavor broadcast live on NASA’s website starting Tuesday, October 20 at 5pm Eastern time. Here are some details to know in advance. The Asteroid Bennu’s full name is 101955 Bennu, and it’s close enough to Earth to be classified as a near-Earth object, or NEO—that means it orbits within 1.3 AU of the sun. An AU is equivalent to the distance between Earth and the sun, which is about 93 million miles. The asteroid orbits the sun at an average distance of 105 million miles, which is just (“just” being a relative term here!) 12 million miles farther than Earth’s average orbital distance from the sun. Every six years, Bennu comes closer to Earth, getting to within 0.002 AU. Scientists say this means there’s a high likelihood the asteroid could impact Earth sometime in the late 22nd century. Luckily, an international team is already on the case (plus, due to Bennu’s size and composition, it likely wouldn’t do any harm). Bennu isn’t solid, but rather a loose clump of rock and dust whose density varies across its interior (in fact, up to 40 percent of it might just be empty space!). Its shape is more similar to a spinning top than a basketball or other orb, and it’s not very big—about a third of a mile wide at its widest point. Since it’s small, it spins pretty fast, doing a full rotation on its axis in less than four and a half hours. That fast spinning also means it’s likely to eject material once in a while, with chunks of rock and other regolith dislodging and being flung into space. The Spacecraft OSIRIS-REx stands for Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer. Yeah—that’s a lot. It’s the size of a large van (bigger than a minivan, smaller than a bus), and looks sort of like a box with wings and one long arm. It’s been orbiting Bennu for about two years (since 2018) after taking two years to get there (it was launched in 2016). The spacecraft’s “arm” is called TAGSAM, which stands for Touch-And-Go Sample Acquisition Mechanism. It’s 11 feet long and has a round collection chamber attached to its end. OSIRIS-REx doesn’t have any legs to land on, but that’s for a good reason: landing isn’t part of the plan. Which brings us to... The Plan As far as plans go, this one is pretty cool. The spacecraft will approach the asteroid, and its arm will reach out to tap the surface. A pressurized canister will shoot out some nitrogen gas to try to dislodge some dust and rock from Bennu, and the collection chamber on the spacecraft’s arm will open up to grab whatever it can; scientists are hoping to get at least 60 grams’ worth of material (that’s only 4 tablespoons! It’s less than the cup of yogurt you eat in the morning!).
And that’s not even the wildest detail; if the mission goes as planned and OSIRIS-REx scoops up those four tablespoons of precious cargo, scientists on Earth still won’t see them for almost three more years; the spacecraft is scheduled for a parachute landing in the Utah desert on September 24, 2023. The NASA team working on this project thinks it’s likely they’ll find organic material in the sample collection, and it may even give them clues to the origins of life on Earth. Does the mission have better odds of success than ...
If I had to place money on a neurotech that will win the Nobel Prize, it’s optogenetics. The technology uses light of different frequencies to control the brain. It’s a brilliant mind-meld of basic neurobiology and engineering that hijacks the mechanism behind how neurons naturally activate—or are silenced—in the brain. Thanks to optogenetics, in just ten years we’ve been able to artificially incept memories in mice, decipher brain signals that lead to pain, untangle the neural code for addiction, reverse depression, restore rudimentary sight in blinded mice, and overwrite terrible memories with happy ones. Optogenetics is akin to a universal programming language for the brain. But it’s got two serious downfalls: it requires gene therapy, and it needs brain surgery to implant optical fibers into the brain. This week, the original mind behind optogenetics is back with an update that cuts the cord. Dr. Karl Deisseroth’s team at Stanford University, in collaboration with the University of Minnesota, unveiled an upgraded version of optogenetics that controls behavior without the need for surgery. Rather, the system shines light through the skulls of mice, and it penetrates deep into the brain. With light pulses, the team was able to change how likely a mouse was to have seizures, or reprogram its brain so it preferred social company. To be clear: we’re far off from scientists controlling your brain with flashlights. The key to optogenetics is genetic engineering—without it, neurons (including yours) don’t naturally respond to light. However, looking ahead, the study is a sure-footed step towards transforming a powerful research technology into a clinical therapy that could potentially help people with neurological problems, such as depression or epilepsy. We are still far from that vision—but the study suggests it’s science fiction potentially within reach. Opto-What? To understand optogenetics, we need to dig a little deeper into how brains work. Essentially, neurons operate on electricity with an additional dash of chemistry. A brain cell is like a living storage container with doors—called ion channels—that separate its internal environment from the outside. When a neuron receives input and that input is sufficiently strong, the cells open their doors. This process generates an electrical current, which then gallops down a neuron’s output branch—a biological highway of sorts. At the terminal, the electrical data transforms into dozens of chemical “ships,” which float across a gap between neurons to deliver the message to its neighbors. This is how neurons in a network communicate, and how that network in turn produces memories, emotions, and behaviors. Optogenetics hijacks this process. Using viruses, scientists can add a gene for opsins, a special family of proteins from algae, into living neurons. Opsins are specialized “doors” that open under certain frequencies of light pulses, something mammalian brain cells can’t do. Adding opsins into mouse neurons (or ours) essentially gives them the superpower to respond to light. In classic optogenetics, scientists implant optical fibers near opsin-dotted neurons to deliver the light stimulation. Computer-programmed light pulses can then target these newly light-sensitive neurons in a particular region of the brain and control their activity like puppets on a string. It gets cooler. 
Using genetic engineering, scientists can also fine-tune which populations of neurons get that extra power—for example, only those that encode a recent memory, or those involved in depression or epilepsy. This makes it possible to play with those neural circuits using light, while the rest of the brain hums along. This selectivity is partially why optogenetics is so powerful. But it’s not all ponies and rainbows. As you can imagine, mice don’t particularly enjoy being tethered by optical fibers sprouting from their brains. Humans don’t either, hence the hiccup in adopting the t...
Exploiting the resources of outer space might be key to the future expansion of the human species. But researchers argue that the US is trying to skew the game in its favor, with potentially disastrous consequences. The enormous cost of lifting material into space means that any serious effort to colonize the solar system will require us to rely on resources beyond our atmosphere. Water will be the new gold thanks to its crucial role in sustaining life, as well as the fact it can be split into hydrogen fuel and oxygen for breathing. Regolith found on the surface of rocky bodies like the moon and Mars will be a crucial building material, while some companies think it will eventually be profitable to extract precious metals and rare earth elements from asteroids and return them to Earth. But so far, there’s little in the way of regulation designed to govern how these activities should be managed. Now two Canadian researchers argue in a paper in Science that recent policy moves by the US are part of a concerted effort to refocus international space cooperation towards short-term commercial interests, which could precipitate a “race to the bottom” that sabotages efforts to safely manage the development of space. Aaron Boley and Michael Byers at the University of British Columbia trace back the start of this push to the 2015 Commercial Space Launch Competitiveness Act, which gave US citizens and companies the right to own and sell space resources under US law. In April this year, President Trump doubled down with an executive order affirming the right to commercial space mining and explicitly rejecting the idea that space is a “global commons,” flying in the face of established international norms. Since then, NASA has announced that any countries wishing to partner on its forthcoming Artemis missions designed to establish a permanent human presence on the moon will have to sign bilateral agreements known as Artemis Accords. These agreements will enshrine the idea that commercial space mining will be governed by national laws rather than international ones, the authors write, and that companies can declare “safety zones” around their operations to exclude others. Speaking to Space.com Mike Gold, the acting associate administrator for NASA’s Office of International and Interagency Relations, disputes the authors’ characterization of the accords and says they are based on the internationally-recognized Outer Space Treaty. He says they don’t include agreement on national regulation of mining or companies’ rights to establish safety zones, though they do assert the right to extract and use space resources. But given that they’ve yet to be released or even finalized, it’s not clear how far these rights extend or how they are enshrined in the agreements. And the authors point out that the fact that they are being negotiated bilaterally means the US will be able to use its dominant position to push its interpretation of international law and its overtly commercial goals for space development. Space policy designed around the exploitation of resources holds many dangers, say the paper authors. For a start, loosely-regulated space mining could result in the destruction of deposits that could hold invaluable scientific information. It could also kick up dangerous amounts of lunar dust that can cause serious damage to space vehicles, increase the amount of space debris, or in a worst-case scenario, create meteorites that could threaten satellites or even impact Earth. 
By eschewing a multilateral approach to setting space policy, the US also opens the door to a free-for-all where every country makes up its own rules. Russia is highly critical of the Artemis Accords process and China appears to be frozen out of it, suggesting that two major space powers will not be bound by the new rules. That potentially sets the scene for a race to the bottom, where countries compete to set the laxest rules for space mining to attract inv...
A few years ago, I saw a guy in a jet suit take off in San Francisco’s Golden Gate Park. The roar was deafening, the smell of fuel overwhelming. Over the span of a few minutes, he hovered above the ground and moved about a bit. The jet suit’s inventor, Richard Browning, had left a career in the energy industry and a stint in the Royal Marines, to go after a childhood dream. Amazingly, he’d succeeded. But the jet suit seemed a bespoke, one-off kind of thing. It didn’t appear poised to revolutionize office commutes (remember those?) or even to divert oneself on the weekends. Not yet. Since then, however, Browning’s dialed in his invention, and in addition to a barnstorming tour, his company, Gravity Industries, has begun exploring ways his jet suit could help people. Which explains why, not too long ago, you’d have found Browning gliding up a mountainside to the aid of an “injured” hiker in England’s Lake District. It was a trial, in partnership with the Great North Air Ambulance Service (GNAAS), to see if a personal jet suit might be a new tool for emergency responders in wilderness areas. The idea isn’t to replace emergency personnel on foot or helicopters to airlift serious cases. Rather, the main motivation is getting a first responder on site as fast as possible. Whereas it would have taken emergency responders 25 minutes to get to the hikers on foot, Browning and his jet suit were on location in a mere 90 seconds. A clear advantage. The Lake District has dozens of patients in need of support every month, according to GNAAS director of operations and paramedic Andy Mawson. The first paramedic to reach a patient can assess the situation, communicate what’s needed to the team, and stabilize the patient. “We think this technology could enable our team to reach some patients much quicker than ever before. In many cases, this would ease the patient’s suffering. In some cases, it would save lives,” Mawson said in a press release. The test certainly demonstrated the jet suit’s speed. That said, it may not be useful in every situation. For example, the suit is limited to locations within 5 to 10 minutes flight time (one way). This is one reason the GNAAS chose the Lake District, which has a high volume of calls in a fairly compact geographic footprint. Also, Browning says he typically operates the jet suit near the ground for safety reasons. Extra steep or cliffy terrain might prove an impediment (though you could still fly to the base of any such features). And what about training, you may ask? Browning makes it look easy, but he invented the thing and has logged many hours flying it. According to a Red Bull interview from last year, it was no walk in the park to fly early on, requiring great balance and strength. But Browning has since refined the suit, including the addition of a rear jet for stability, goggles with a head-up display, and computer-automated thrust to compensate for the suit losing weight as it burns through fuel. These days, according to Browning, it’s a much more intuitive experience. “It’s a bit like riding a bicycle or skiing or one of those things where it’s just about you thinking about where you want to go and your body intuitively going there,” Browning told Digital Trends in a recent profile. “You’re not steering some joystick or a steering wheel. We’ve had people learn to do this in four or five goes—with each go just lasting around 90 seconds. 
All credit to the human subconscious—it’s just this floating, dreamlike state.” The biggest near-term barrier may actually be cost. According to the Red Bull article, at least one suit has sold for over $400,000. Depending on the customer and use case, then, that price tag might be a bit steep. But it needn’t stay that high forever. With enough demand, one could imagine a standardized manufacturing process bringing the cost down. Still, the test certainly impressed all participants, and GNAAS and Gravity Industries plan to continue ex...
Though the world’s population is no longer predicted to grow as much as we thought by the end of this century, there are still going to be a lot more people on Earth in 30, 50, and 80 years than there are now. And those people are going to need healthy food that comes from a sustainable source. Technologies like cultured meat and fish, vertical farming, and genetic engineering of crops are all working to feed more people while leaving a smaller environmental footprint. A new facility in northern France aims to help solve the future-of-food problem in a new, unexpected, and kind of cringe-inducing way: by manufacturing a huge volume of bugs—for eating. Before you gag and close the page, though, wait; these particular bugs aren’t intended for human consumption, at least not directly. Our food system and consumption patterns are problematic not just because of the food we eat, but because of the food our food eats. Factory farming uses up a ton of land and resources; a 2018 study found that while meat and dairy provide just 18 percent of the calories people consume, they use 83 percent of our total farmland and produce 60 percent of agriculture’s greenhouse gas emissions. That farmland is partly taken up by the animals themselves, but it’s also used to grow crops like corn and soy exclusively for animal consumption. And we’re not just talking cows and pigs. Seafood is part of the problem, too. Farm-raised salmon, for example, are fed not just smaller fish (which depletes ecosystems), but also soy that’s grown on land. Enter the insects. Or, more appropriately in this case, enter Ÿnsect, the French company with big ambitions to help change the way the world eats. Ÿnsect raised $125 million in Series C funding in early 2019, and at the time already had $70 million worth of aggregated orders to fill. Now they’re building a bug-farming plant to churn out tiny critters in record numbers. You’ve probably heard of vertical farms in the context of plants; most existing vertical farms use LED lights and a precise mixture of nutrients and water to grow leafy greens or other produce indoors. They maximize the surface area used for growing by stacking several layers of plants on top of one another; the method may not offer as much space as outdoor fields have, but can yield a lot more than you might think. Ÿnsect’s new plant will use layered trays too, except they’ll be cultivating beetle larvae instead of plants. The ceilings of the facility are 130 feet high—that’s a lot of vertical space to grow bugs in. Those of us who are grossed out by the thought will be glad to know that the whole operation will be highly automated; robots will tend to and harvest the beetles, and AI will be employed to keep tabs on important growing conditions like temperature and humidity. The plant will initially be able to produce 20,000 tons of insect protein a year, and Ÿnsect is already working with the biggest fish feed company in the world, though production at the new facility isn’t slated to start until 2022. Besides fish feed, Ÿnsect is also marketing its product for use in fertilizer and pet food. It’s uncertain how realistic the pet food angle is, as I’d imagine most of us love our pets too much to feed them bugs. But who knows—there’s plenty of hypothesizing that insects will be a central source of protein for people in the future, as they’re not only more sustainable than meat, but in some cases more nutritious too. We’ll just have to come up with some really creative recipes. Image Credit: Ÿnsect
Black holes are perhaps the most mysterious objects in nature. They warp space and time in extreme ways and contain a mathematical impossibility, a singularity—an infinitely hot and dense object within. But if black holes exist and are truly black, how exactly would we ever be able to make an observation? Roger Penrose is a theoretical physicist who works on black holes, and his work has influenced not just me but my entire generation through his series of popular books that are loaded with his exquisite hand-drawn illustrations of deep physical concepts. Yesterday the Nobel Committee announced that the 2020 Nobel Prize in physics will be awarded to three scientists—Sir Roger Penrose, Reinhard Genzel, and Andrea Ghez—who helped discover the answers to such profound questions. Andrea Ghez is only the fourth woman to win the Nobel Prize in physics. As a graduate student in the 1990s at Penn State, where Penrose holds a visiting position, I had many opportunities to interact with him. For many years I was intimidated by this giant in my field, only stealing glimpses of him working in his office, sketching strange-looking scientific drawings on his blackboard. Later, when I finally got the courage to speak with him, I quickly realized that he is among the most approachable people around. Dying Stars Form Black Holes Sir Roger Penrose won half the prize for his seminal work in 1965, which proved, using a series of mathematical arguments, that under very general conditions, collapsing matter would trigger the formation of a black hole. This rigorous result opened up the possibility that the astrophysical process of gravitational collapse, which occurs when a star runs out of its nuclear fuel, would lead to the formation of black holes in nature. He was also able to show that at the heart of a black hole must lie a physical singularity—an object with infinite density, where the laws of physics simply break down. At the singularity, our very conceptions of space, time, and matter fall apart, and resolving this issue is perhaps the biggest open problem in theoretical physics today. Penrose invented new mathematical concepts and techniques while developing this proof. The equations that Penrose derived in 1965 have been used by physicists studying black holes ever since. In fact, just a few years later, Stephen Hawking, alongside Penrose, used the same mathematical tools to prove that the Big Bang cosmological model—our current best model for how the entire universe came into existence—had a singularity at the very initial moment. These results come from the celebrated Penrose-Hawking singularity theorems. The fact that mathematics demonstrated that astrophysical black holes may actually exist in nature is exactly what has energized the quest to search for them using astronomical techniques. Indeed, since Penrose’s work in the 1960s, numerous black holes have been identified. Black Holes Play Yo-Yo With Stars The remaining half of the prize was shared between astronomers Reinhard Genzel and Andrea Ghez, who each led a team that discovered the presence of a supermassive black hole, four million times more massive than the sun, at the center of our Milky Way galaxy. Genzel is an astrophysicist at the Max Planck Institute for Extraterrestrial Physics, Germany, and the University of California, Berkeley. Ghez is an astronomer at the University of California, Los Angeles.
Genzel and Ghez used the world’s largest telescopes (Keck Observatory and the Very Large Telescope) and studied the movement of stars in a region called Sagittarius A* at the center of our galaxy. They both independently discovered that an extremely massive invisible object—four million times more massive than our sun—is pulling on these stars, making them move in very unusual ways. This is considered the most convincing evidence of a black hole at the center of our galaxy. This 2020 Nobel Prize, which follows on the heels of the 2017 Nobel Prize fo...
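As a rough illustration of how stellar orbits reveal that mass, Kepler's third law in solar units gives the result directly. The orbital numbers below (roughly a 16-year period and a 1,000 au semi-major axis, close to the published orbit of the star S2) are approximations for illustration, not figures taken from this article:

```latex
\frac{M}{M_{\odot}} \;\approx\; \frac{(a/\mathrm{au})^{3}}{(T/\mathrm{yr})^{2}}
\;\approx\; \frac{1000^{3}}{16^{2}}
\;\approx\; 3.9 \times 10^{6}
```

A star whipping around an unseen point on such a tight, fast orbit can only be explained by roughly four million solar masses packed into a very small volume.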
Synthetic biology is like a reality-altering version of Minecraft. Rather than digital blocks, synthetic biology rejiggers the basic building blocks of life—DNA, proteins, biochemical circuits—to rewire living organisms or even build entirely new ones. In theory, the sky’s the limit on rewriting life: lab-grown meat that tastes like the real thing with far less impact on our environment. Yeast cells that pump out life-saving drugs. Recyclable biofuel. But there’s a catch: to get there, we first need to be able to predict how changing a gene or a protein ultimately changes a cell. It’s a tough problem. A human cell carries over 20,000 genes, each of which can be turned on, shut off, or changed in expression levels. So far, synthetic biologists have taken the trial-and-error approach. Part of the reason is that life’s biological circuits are incredibly difficult to decipher. Changes to one gene or protein may trigger a “butterfly effect” type of repercussion that propagates unpredictably through the cell. Rather than getting yeast to pump out insulin, for example, the cell could produce a bastardized, non-working version, or just die off. Designing new biological circuits takes time—lots of it. But maybe there’s another way. This month, a team at the Department of Energy’s Lawrence Berkeley National Laboratory, led by Dr. Hector Garcia Martin, suggested it might not be necessary to meticulously tease apart the molecular dance inside a cell to be able to manipulate it. Instead, the team tapped into the power of machine learning and showed that even with a limited dataset, the AI was able to predict how changes to a cell’s genes can affect its biochemistry and behavior. What’s more, the algorithm could also make recommendations on how to further improve the next bioengineering cycle using simulations. The program provides predictions on how likely an additional genetic change is to lead to a syn-bio project goal—for example, making hoppy India Pale Ales (IPAs) but without actual hops in the mix. “The possibilities are revolutionary,” said Martin. “Right now, bioengineering is a very slow process. It took 150 person-years to create the anti-malarial drug artemisinin. If you’re able to create new cells to specification in a couple weeks or months instead of years, you could really revolutionize what you can do with bioengineering.” Limits to Power Similar to germline genome editing in humans and AI, synthetic biology has the power to change the world. Considered one of the “Top Ten Emerging Technologies” by the World Economic Forum in 2016, syn-bio includes many branches of research—wiping out all mosquitoes with gene drives, or designing microbiomes for agriculture to replace environment-damaging fertilizers. However, metabolic engineering is its current golden child. Everything alive requires metabolism. The concept in science is a bit different from the everyday vernacular. If you think of the cell as a car manufacturing facility, and every cellular component as raw material, then “metabolism” is the process of making a car out of these raw materials, but at a cellular scale. Tweaking the manufacturing process, as happened during Covid-19, can change a car manufacturer into one that makes ventilators without fundamentally altering the factory. In essence, synthetic biology does the same thing. It tweaks a cell so that its normal production is now directed to something else—a yeast that has no concept of blood sugar can now pump out insulin.
Yet due to its complexity, reprogramming a cell is far harder than rewriting software code. Here’s where AI can help. “Machine learning arises as an effective tool to predict biological system behavior,” said the team. Rather than fully characterizing how molecular circuits work together, machine learning can extract trends from experimental data, and in turn provide predictions on how a synthetic biology tweak changes a cell. Better yet, it can do so even without ...
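As a rough sketch of the workflow described above (and only that: the data, feature choices, and model below are invented, and this is not the Berkeley Lab team's actual software), a standard regression model can be trained on a handful of engineered strains and then used to rank candidate designs for the next build-and-test cycle:

```python
# Toy "design-build-test-learn" loop: learn a mapping from gene-expression tweaks
# to a measured product titer, then rank new candidate designs.
# All data here are synthetic stand-ins, not real strain measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# 20 previously built strains, each described by expression levels of 5 pathway genes.
X_built = rng.uniform(0.0, 2.0, size=(20, 5))
# Measured output (e.g., grams per liter of a target molecule) with some noise.
y_measured = 3.0 * X_built[:, 0] - 1.5 * X_built[:, 1] ** 2 + rng.normal(0, 0.2, 20)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_built, y_measured)

# Propose 1,000 candidate designs and ask the model which look most promising.
candidates = rng.uniform(0.0, 2.0, size=(1000, 5))
predicted = model.predict(candidates)
best = np.argsort(predicted)[::-1][:5]

print("Top candidate designs (gene expression levels) and predicted output:")
for i in best:
    print(np.round(candidates[i], 2), round(float(predicted[i]), 2))
```

The model never needs to know the underlying biochemistry; it simply extracts trends from the strains already built and points the engineers toward the tweaks most likely to pay off next.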
Nuclear fusion has gone from a scientist’s pipe dream to a technology attracting serious investment. Now one of the startups chasing this holy grail of energy production has published a series of peer-reviewed scientific papers that validate the underlying physics of their approach. For decades, the leading hope for achieving fusion power has been the International Thermonuclear Experimental Reactor (ITER) being built in France. News earlier this year that construction is now underway has provided hope that the goal might finally be within reach. But the project isn’t expected to be fully operational until 2035, and with a price tag of at least $22 billion, it seems there’s still some way to go before the technology can go mainstream. A growing number of startups seem to think they can do things faster and cheaper, but judging the feasibility of these private endeavors has proven challenging. Now researchers from Commonwealth Fusion Systems, one of the leaders of the pack, and their collaborators at MIT have published seven papers describing their progress in a special issue of the Journal of Plasma Physics. The results are promising, suggesting their reactor design should work and could even exceed their expectations. Like the ITER plant, the company’s SPARC reactor is a tokamak, the name for a specific design of fusion reactor. The machine consists of a doughnut-shaped chamber used to contain an incredibly hot plasma made up of two different isotopes of hydrogen fusing together to create helium and a huge amount of energy as a byproduct. Containing this roiling sea of high-energy particles requires powerful magnetic fields. In conventional tokamaks they are provided by enormous electromagnets made from superconducting wires that need to be cryogenically cooled. The secret to the SPARC reactor is that its magnets will be built from new high-temperature superconductors that require much less cooling and can produce far more powerful magnetic fields. That means the reactor can be ten times more compact than ITER while achieving similar performance. As with any cutting-edge technology, converting principles into practice is no simple matter. But the analysis detailed in the papers suggests that the reactor will achieve its goal of producing more energy than it sucks up. So far, all fusion experiments have required more energy to heat the plasma and sustain it than has been generated by the reaction itself. The SPARC reactor is designed to achieve a Q factor of at least two, which means it will produce twice as much energy as it uses, but the analysis suggests that figure might actually rise to ten or more. The papers used the same physics and simulations as the ITER design team and other previous fusion experiments. Martin Greenwald, deputy director of MIT’s Plasma Science and Fusion Center, said in a press release that there are still many details to work out, particularly when it comes to actually designing and building the machine. But the results suggest there are no major obstacles and that they should be able to meet their goal of starting construction midway through next year. The next major milestone for the group will be the successful demonstration of the magnet technology at the heart of their design. Commonwealth said in a press release that they hope to demonstrate a 20 Tesla large-bore magnet in 2021.
If everything remains on track they expect SPARC to demonstrate the first ever energy-positive fusion reaction by 2025, paving the way for a commercial fusion power plant the company calls ARC. Cary Forest, a physicist at the University of Wisconsin, told the New York Times that the group’s timelines might be a little ambitious, but the results suggest that the reactor will work as they hope. It seems like the hope of near-limitless clean energy may not be as far off as we thought. Image Credit: Commonwealth Fusion Systems
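For reference, the Q factor the article mentions is simply the ratio of fusion power released to the external heating power poured in, as quoted from the papers above:

```latex
Q \;=\; \frac{P_{\text{fusion}}}{P_{\text{heating}}},
\qquad Q = 1 \ \text{(breakeven)},
\qquad Q_{\text{SPARC design goal}} \geq 2,
\qquad Q_{\text{SPARC predicted}} \sim 10
```

Every fusion experiment to date has sat below Q = 1, which is why a machine designed to clear 2, and possibly reach 10, would be such a milestone.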
Are we alone in the universe? It’s a question whose answer—whether it’s yes or no—would philosophically and scientifically rock our world to the core. To find out, scientists have long been turning powerful radio telescopes to the cosmos. The theory is that, like us, other intelligent species are perhaps broadcasting radio signals with the distinctly “unnatural” signature of a technological civilization. But despite decades of intent listening, we’ve yet to pick anything up. Even in a recent survey of 10 million stars by the Murchison Widefield Array radio telescope in Western Australia—one of the most extensive to date—scientists found nothing of note. Where is everyone? Theories abound, but one possibility is we simply haven’t looked enough. Our galaxy, with its hundreds of billions of stars and countless planets, is a very big place. The scientists conducting the Murchison survey said it was like searching an area the size of a swimming pool in the ocean. In our search for a needle, maybe we just need to sift more straw. But there’s a problem. Our own civilization’s ceaseless radio chatter—which, in theory at least, would be similar to the signals SETI’s searching for—is growing louder, making it much harder for scientists to filter out local noise. While researchers have techniques and software to remove human signals, some are suggesting a more radical solution. Might we escape the noise entirely? The further from civilization you go—Australia’s outback or Chile’s Atacama desert—the more the chatter fades. And if you keep following this line of reasoning to its end, you’ll land in a place with the most profound silence of all: the far side of the moon. It’s no surprise, then, that scientists have been dreaming of an observatory on the moon for years. Equally unsurprising is the fact no such observatory yet exists. But a recent paper—written by Breakthrough Listen sponsored researchers Eric Michaud, Andrew Siemion, Jamie Drew, and Pete Worden—makes the case for a SETI (search for extraterrestrial intelligence) observatory on the moon or in lunar orbit. And notably, they suggest that such a project is, perhaps, for the first time approaching feasibility. SETI on the Far Side The far side of the moon is an ideal place to search for radio signals from other civilizations for a few reasons. The first, as noted, is its exquisite radio silence. Astronomer Phillipe Zarka, quoted by the authors, says, “The far side of the moon during the lunar night is the most radio-quiet place in our local universe.” And for radio signals of human origin, this is a permanent condition. The moon is tidally locked, so the far side always faces away from Earth. How quiet is quiet? According to the authors, an early-1970s NASA orbiter found radio noise from Earth declined by one to three orders of magnitude as the satellite passed behind the moon. Simulations suggest this effect would be even greater on the lunar surface. One study found that near the crater Daedalus some radio signals from Earth would be reduced by as much as 10 orders of magnitude (10 billion-fold). The only remaining radio interference of human origin would be from rovers and probes elsewhere in the Solar System—of which there are, of course, far fewer than in Earth orbit. Added to an environment largely devoid of human radio interference, lunar nights last two weeks, allowing for extended viewing parties. 
And the cherry on top: An observatory on the moon could detect wavelengths in parts of the radio spectrum that are blocked by Earth’s ionosphere. Together, these attributes make the moon a uniquely desirable destination for SETI—if you can, in fact, figure out how to fund and build an observatory that takes advantage of them. The paper outlines two options. A Lunar Arecibo The easiest route would be to place a telescope in lunar orbit. The orbiter would scan the sky for signals from behind the moon—taking advantage of the far side’s ra...
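To put those "orders of magnitude" into the decibel figures radio engineers usually quote, here is a quick conversion using the standard power-ratio-to-decibels relation. This is just arithmetic on the numbers cited above, not a calculation from the paper:

```python
# Convert "N orders of magnitude quieter" into decibels of attenuation.
# Standard relation: dB = 10 * log10(power ratio).
import math

def orders_of_magnitude_to_db(n_orders: float) -> float:
    """An attenuation of 10**n_orders in power equals 10 * n_orders dB."""
    power_ratio = 10.0 ** n_orders
    return 10.0 * math.log10(power_ratio)

for n in (1, 3, 10):  # 1-3 orders seen from lunar orbit; ~10 predicted near crater Daedalus
    print(f"{n} orders of magnitude -> {orders_of_magnitude_to_db(n):.0f} dB quieter")
```

In other words, the shielding near Daedalus would amount to roughly 100 dB of attenuation of Earth's radio chatter, compared with 10 to 30 dB measured from lunar orbit.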
Last year I did a VR experience meant to simulate what it’s like to be at the US-Mexico border wall. The tall, foreboding wall towered above me, and as I turned from side to side there were fields of grass with some wildlife and a deceivingly harmless-looking border patrol station. I wanted to explore more, so I took a few steps toward the wall, hoping to catch a glimpse of the Mexico side through its tall metal slats. “Oops!” a voice called out. A hand landed lightly on my arm. “Look out, you’re about to run into the wall.” The “wall” was in fact a curtain—the experience took place in a six-foot-by-eight-foot booth alongside dozens of similar VR booths—and I had, in fact, just about walked through it. Virtual reality is slowly getting better, but there are all kinds of improvements that could make it feel more lifelike. More detailed graphics and higher screen resolution can make the visual aspect (which is, of course, most important) more captivating, and haptic gloves or full haptic suits can lend a sense of touch to the experience. Now there’s a solution to the motion problem, too: a pair of robotic VR boots lets users walk around within an experience or game without actually leaving the enclosed space they’re in. Dubbed Ekto One, the boots are the first product to come out of a Pittsburgh-based startup called Ekto VR. They’re made of lightweight carbon fiber and are sort of like a pair of skates with automatically-engaging brakes. They’re a bit clunky, because they have to accommodate the rotating discs built into the bottom, which turn in the direction the wearer moves. They’re basically little treadmills in boot form; your foot touches the floor and a set of wheels pull your leg back while you walk forwards, matching the motion it looks like you’re taking in your headset. For the boots to make a VR experience more realistic, users will have to forget that the roller-skate-like contraptions are strapped to their feet. But this is just the first iteration, and as most flashy new tech tends to do, the boots (and other VR accessories) will get smaller, sleeker, and cheaper over time. Pretty soon we may be able to walk all the way to Mexico without ever leaving the room. Image Credit: Ekto VR
Though far too many sectors of the economy have suffered enormous losses during the coronavirus pandemic, a few are doing alright. One of those is car sales. It’s sort of counterintuitive—because with everything closed, where can we even go in our cars?—but digging a bit deeper, there are some logical reasons why lots of folks might be dropping cash on a new set of wheels. With everyone cooped up at home, people have been saving more disposable income than probably ever before, so the money’s there and ready to go. For those who still need a loan, interest rates are at all-time lows. Road trips and domestic travel have largely superseded flying and international travel. And finally, buying a shiny new car just feels good—and we need all the feel-goods we can get right now. Combustion-engine cars still dominate the market, making up 97 percent of global car sales through 2019. In China, though, electric cars are giving gas cars a run for their money—one in particular, the Wuling Hong Guang Mini EV. Launched in late July, the Hong Guang generated over 15,000 orders within 20 days of its release, and racked up another 35,000 in the subsequent month. That put it at a total of 50,000 orders in under two months, briskly surpassing Chinese orders for Tesla Model 3s over the same period. So why are Chinese drivers so eager to drop their hard-earned cash on this teeny tiny car? For starters, it’s not all that much cash, relatively speaking. The no-frills model of the Hong Guang goes for 28,800 yuan (about $4,200 at current exchange rates). That’s less than a tenth of the cost of a Tesla Model 3 (291,800 yuan). Of course, you’re getting a very different car for your money, but Chinese consumers seem to be a-ok with that, especially because the mini electric vehicle meets a lot of their practical needs for getting around big cities. It doesn’t quite have the sleek, futuristic look you might expect of a new car; its designers instead opted for a boxy shape that maximizes interior space while minimizing the car’s footprint. GM markets the car as “small on the outside, big on the inside.” It’s 9.5 feet long by 4.9 feet wide, and 5.3 feet tall. That’s comparable to Mercedes’ Smart Fortwo, and ideal for squeezing into cramped parking spots on bustling city streets. The car has 12 different storage compartments in the cabin, including a smartphone tray on the dashboard. Its max speed is 62 miles per hour (100 kilometers per hour), which isn’t quite fast enough for long trips on highways, but works great for moving around a city and its environs. Drivers can go about 105 miles on a single charge, and can monitor and control the car’s battery functions on a smartphone app. If sales are off to a strong start, they’re likely going to get even stronger; GM’s Chinese joint venture is planning to open around 100 “experience stores” in China to continue to promote the vehicle. Though the pandemic has thrown a wrench in the growth of China’s middle class, it is nonetheless growing, meaning millions more people per year have the means to acquire possessions like cars. It’s important that electric cars, whether Tesla Model 3s or Wuling Hong Guangs, continue to take over more of this market share; given the state of the climate crisis, putting millions more combustion-engine cars on the road in an already very polluted country would be counterproductive in a big way. Hopefully once the pandemic has subsided, people will go back to riding public transit.
But in the meantime, if we’ve got to buy cars, they may as well be small, practical, and electric. Image Credit: General Motors
Amid the horrific public health and economic fallout from a fast-moving pandemic, a more positive phenomenon is playing out: Covid-19 has provided opportunities to businesses, universities, and communities to become hothouses of innovation. Around the world, digital technologies are driving high-impact interventions. Community and public health leaders are handling time-sensitive tasks and meeting pressing needs with technologies that are affordable and inclusive, and don’t require much technical knowledge. Our research reveals the outsized impact of inexpensive, readily available digital technologies. In the midst of a maelstrom, these technologies—among them social media, mobile apps, analytics, and cloud computing—help communities cope with the pandemic and learn crucial lessons. To gauge how this potential is playing out, our research team looked at how communities incorporate readily available digital technologies in their responses to disasters.

Community Potential

As a starting point, we used a model of crisis management developed in 1988 by organizational theorist Ian Mitroff. The model has five phases:

1. Signal detection to identify warning signs
2. Probing and prevention to actively search for and reduce risk factors
3. Damage containment to limit the crisis’s spread
4. Recovery to normal operations
5. Learning to glean actionable insights to apply to the next incident

Although this model was developed for organizations dealing with crises, it is applicable to communities under duress and has been used to analyze organizational responses to the current pandemic. Our research showed that readily available digital technologies can be deployed effectively during each phase of a crisis.

Phase 1: Signal Detection

Being able to identify potential threats from rivers of data is no easy task. Readily available digital technologies such as social media and mobile apps are useful for signal detection. They offer connectivity any time and anywhere, and allow for rapid sharing and transmission of information. New Zealand, for example, has been exploring an early warning system for landslides based on both internet-of-things sensors and digital transmission through social media channels such as Twitter.

Phase 2: Prevention and Preparation

Readily available digital technologies such as cloud computing and analytics enable remote and decentralized activities to support training and simulations that heighten community preparedness. Canada’s federal government, for example, has developed the COVID Alert app for mobile devices that will tell users whether they have been near someone who has tested positive for Covid-19 during the previous two weeks.

Phase 3: Containment

Although crises cannot always be averted, they can be contained. Big data analytics can isolate hot spots and “superspreaders,” limiting exposure of larger populations to the virus. Taiwan implemented active surveillance and screening systems to quickly react to Covid-19 cases and implement measures to control the virus’s spread.

Phase 4: Recovery

Social capital, personal and community networks, and shared post-crisis communication are essential factors for the recovery process. Readily available digital technologies can help a community get back on its feet by enabling people to share experiences and resource information. For example, residents of Fort McMurray, Alta., have experienced the pandemic, flooding, and the threat of wildfires.
As part of the response, the provincial government offers northern Alberta residents virtual addiction treatment support via Zoom videoconferencing. During recovery, it is also important to foster equity to avoid a privileged set of community members receiving preferential services. To address this need, anti-hoarding apps for personal protective equipment and apps that promote volunteerism can prove useful.

Phase 5: Learning

It is usually difficult for communities to gather knowledge on recovery and renewal from multiple...
Is social media ruining the world? Dramatic political polarization. Rising anxiety and depression. An uptick in teen suicide rates. Misinformation that spreads like wildfire. The common denominator of all these phenomena is that they’re fueled in part by our seemingly innocuous participation in digital social networking. But how can simple acts like sharing photos and articles, reading the news, and connecting with friends have such destructive consequences? These are the questions explored in the new Netflix docu-drama The Social Dilemma. Directed by Jeff Orlowski, it features several former Big Tech employees speaking out against the products they once upon a time helped build. Their reflections are interspersed with scenes from a family whose two youngest children are struggling with social media addiction and its side effects. There are also news clips from the last several years where reporters decry the technology and report on some of its nefarious impacts. Tristan Harris, a former Google design ethicist who co-founded the Center for Humane Technology (CHT) and has become a crusader for ethical tech, is a central figure in the movie. “When you look around you it feels like the world is going crazy,” he says near the beginning. “You have to ask yourself, is this normal? Or have we all fallen under some kind of spell?” Also featured are Aza Raskin, who co-founded CHT with Harris, Justin Rosenstein, who co-founded Asana and is credited with having created Facebook’s “like” button, former Pinterest president Tim Kendall, and writer and virtual reality pioneer Jaron Lanier. They and other experts talk about the way social media gets people “hooked” by exploiting the brain’s dopamine response and using machine learning algorithms to serve up the customized content most likely to keep each person scrolling/watching/clicking. The movie veers into territory explored by its 2019 predecessor The Great Hack—which dove into the Cambridge Analytica scandal and detailed how psychometric profiles of Facebook users helped manipulate their political leanings—by having its experts talk about the billions of data points that tech companies are constantly collecting about us. “Every single action you take is carefully monitored and recorded,” says Jeff Seibert, a former exec at Twitter. The intelligence gleaned from those actions is then used in conjunction with our own psychological weaknesses to get us to watch more videos, share more content, see more ads, and continue driving Big Tech’s money-making engine. “It’s the gradual, slight, imperceptible change in your own behavior and perception that is the product,” says Lanier. “That’s the only thing there is for them to make money from: changing what you do, how you think, who you are.” The elusive “they” that Lanier and other ex-techies refer to is personified in the film by three t-shirt-clad engineers working tirelessly in a control room to keep people’s attention on their phones at all costs. Computer processing power, a former Nvidia product manager points out, has increased exponentially just in the last 20 years; but meanwhile, the human brain hasn’t evolved beyond the same capacity it’s had for hundreds of years. The point of this comparison seems to be that if we’re in a humans vs. computers showdown, we humans haven’t got a fighting chance. But are we in a humans vs. computers showdown?
Are the companies behind our screens really so insidious as the evil control room engineers imply, aiming to turn us all into mindless robots who are slaves to our lizard-brain impulses? Even if our brain chemistry is being exploited by the design of tools like Facebook and YouTube, doesn’t personal responsibility kick in at some point? The Social Dilemma is a powerful, well-made film that exposes social media’s ills in a raw and immediate way. It’s a much-needed call for government regulation and for an actionable ethical reckoning within the tech industry itself. But it ...
Even as toddlers we’re good at inferences. Take a two-year-old who first learns to recognize a dog and a cat at home, then a horse and a sheep in a petting zoo. The kid will then also be able to tell apart a dog and a sheep, even if he can’t yet articulate their differences. This ability comes so naturally to us it belies the complexity of the brain’s data-crunching processes under the hood. To make the logical leap, the child first needs to remember distinctions between his family pets. When confronted with new categories—farm animals—his neural circuits call upon those past remembrances, and seamlessly incorporate those memories with new learnings to update his mental model of the world. Not so simple, eh? It’s perhaps not surprising that even state-of-the-art machine learning algorithms struggle with this type of continuous learning. Part of the reason is how these algorithms are set up and trained. An artificial neural network learns by adjusting synaptic weights—how strongly one artificial neuron connects to another—which in turn leads to a sort of “memory” of its learnings that’s embedded into the weights. Because retraining the neural network on another task disrupts those weights, the AI is essentially forced to “forget” its previous knowledge as a prerequisite to learn something new. Imagine gluing together a bridge out of toothpicks, only to have to rip the glue apart to build a skyscraper with the same material. The hardware is the same, but the memory of the bridge is now lost. This Achilles’ heel is so detrimental it’s dubbed “catastrophic forgetting.” An algorithm that isn’t capable of retaining its previous memories is severely kneecapped in its ability to infer or generalize. It’s hardly what we consider intelligent. But here’s the thing: if the human brain can do it, nature has already figured out a solution. Why not try it on AI? A recent study by researchers at the University of Massachusetts Amherst and the Baylor College of Medicine did just that. Drawing inspiration from the mechanics of human memory, the team turbo-charged their algorithm with a powerful capability called “memory replay”—a sort of “rehearsal” of experiences in the brain that cements new learnings into long-lived memories. What came as a surprise to the authors wasn’t that adding replay to an algorithm boosted its ability to retain its previous training. Rather, it was that replay didn’t require exact memories to be stored and revisited. A bastardized version of the memory, generated by the network itself based on past experiences, was sufficient to give the algorithm a hefty memory boost.

Playing With Replay

In the 1990s, while listening in on the brain’s electrical chatter in sleeping mice, memory researchers stumbled across a perplexing finding. The region of the brain called the hippocampus, which is critical for spatial navigation and memory, sparked with ripples of electrical waves in sleep. The ripples weren’t random—rather, they recapitulated in time and space the same neural activity the team observed earlier, while the mice were learning to navigate a new maze. Somehow, the brain was revisiting the electrical pattern encoding the mice’s new experiences during sleep—but compressed and distorted, as if rewinding and playing a fraying tape in fast-forward. Scientists subsequently found that memory replay is fundamental to strengthening memories in mice and men.
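In machine learning terms, the “generated replay” idea described above, where the network itself conjures up rough samples of earlier tasks and interleaves them with new data, might look something like the sketch below. The classifier, the crude generator, and the task setup are all hypothetical stand-ins for illustration, not the authors’ code.

```python
import torch
from torch import nn

# Tiny stand-ins for the real components (hypothetical, for illustration only).
classifier = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 4))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def crude_generator(n: int):
    """Stand-in for a learned generative model of earlier tasks: it just samples
    noisy points around remembered class means for old classes 0 and 1."""
    labels = torch.randint(0, 2, (n,))
    means = torch.tensor([[0.0, 0.0, 0.0, 0.0],
                          [3.0, 3.0, 3.0, 3.0]])
    return means[labels] + 0.1 * torch.randn(n, 4), labels

def train_step(new_x, new_y, replay_ratio=0.5):
    """One update that mixes new-task data with generated 'replay' of old tasks,
    so learning classes 2 and 3 doesn't simply overwrite classes 0 and 1."""
    old_x, old_y = crude_generator(int(replay_ratio * len(new_x)))
    x, y = torch.cat([new_x, old_x]), torch.cat([new_y, old_y])
    optimizer.zero_grad()
    loss = loss_fn(classifier(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: a batch from a hypothetical second task containing classes 2 and 3.
batch_x = torch.randn(32, 4) + 6.0
batch_y = torch.randint(2, 4, (32,))
print(train_step(batch_x, batch_y))
```

In the study itself, the replayed samples come from a generative model integrated with the network, rather than a hand-coded sampler like the one above; the point of the sketch is only the interleaving of generated old-task data with new-task data during each update.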
In a way, replay provides us with additional simulated learning trials to practice what we’ve learned and stabilize it into a library of memories that new experiences can build upon rather than destroy. It’s perhaps not surprising that deep neural networks equipped with replay stabilize their memories—with the caveat that the algorithm needs to perfectly “remember” all previous memories as input for replay. The problem with this approach, the team said, is that it’s not scalable. The need to access prior experiences rapidly skyrockets data...
People have a bottomless appetite for all things space these days. Some space news is truly mind-blowing, like the first image of a black hole last year or this year’s time lapse of said black hole’s dancing shadow. Then there’s news of the less mind-blowing variety. Second only to full coverage of every supermoon are headlines of near (but harmless) misses by asteroids. However, while a supermoon isn’t too much more breathtaking than a run-of-the-mill full moon, those harmless near misses actually do hint at something more significant. As our planet plows through space, its orbit inevitably crosses the orbits of other inhabitants of the solar system. Among this group are asteroids of all sizes. Most of these are so small they’d be vaporized by the atmosphere, but others are big enough to impact the surface and do serious damage. Infamously, an asteroid the size of a city crashed into Mexico’s Yucatán Peninsula some 66 million years ago. The blast was enormous. It sent mega-tsunamis racing across the ocean and dug a roughly 93-mile-wide crater, throwing much of that material into the atmosphere, where it blocked and dimmed the sun for years. Many scientists believe the Chicxulub impact was the prime culprit behind the extinction of the dinosaurs and 75% of life on the planet at the time. There’s been no repeat performance in the last 66 million years, and the day-to-day risk of a major impact is very, very low. But it’s a near certainty that another large object will at some point in the future collide with our planet—unless we do something about it. Luckily, while the last killer space rock dropped out of the sky with no warning, we have a few tools the dinosaurs didn’t. In addition to telescopes to chart potentially hazardous asteroids, we can visit and, theoretically, divert an asteroid’s course before it reaches us. Now, the world’s space agencies are teaming up to take planetary defense beyond theory. This month, the European Space Agency (ESA) approved and funded its part of the Asteroid Impact Deflection Assessment (AIDA), a joint mission with NASA and other space agencies to, for the first time, attempt to alter the orbit of a sizable asteroid in deep space. The asteroid in question poses no threat to Earth—rather, it’s a test case for how we may deflect a hypothetical future asteroid that does pose a risk. The missions aim to yield the first hard data in humanity’s quest to avoid the fate of the dinosaurs.

Desperately Seeking NEOs

Of course, to divert an asteroid, you have to find it first. In 1998, NASA launched a program to chart 90% of all the asteroids and comets in our neighborhood—known as near-Earth objects (NEOs)—larger than a kilometer in diameter. NASA hit this mark in 2010, but by then, the space agency had already been re-tasked to find 90% of all NEOs larger than 140 meters by the end of 2020. The new mandate would include objects that can wreak significant global havoc—as in the case of Chicxulub—but also smaller objects that would nonetheless do serious damage to the regions they hit. To date, we’ve charted 9,334 NEOs larger than 140 meters (including comets). But that number is only a little more than a third of the total estimated population of 25,000. While current telescopes have done a great job, a space-based, infrared telescope would speed progress. So, it’s good news that, after years awaiting approval, the NEO Surveillance Mission (previously NEOCam) won its first funding to kick the search into high gear with such a telescope.
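As a quick sanity check on those numbers (plain arithmetic using only the figures quoted above; the variable names are just for illustration):

```python
cataloged = 9_334          # NEOs larger than 140 meters charted so far
estimated_total = 25_000   # estimated population of such objects
goal = 0.90                # the target: chart 90% of them

print(f"{cataloged / estimated_total:.0%} charted")                   # 37%, a little more than a third
print(f"{int(goal * estimated_total) - cataloged} left to reach 90%")  # 13166 more to find
```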
The mission aims to round out our list of 90% of NEOs bigger than 140 meters within a decade of launch. Of course, it doesn’t end there. Once we’ve found a killer asteroid or comet, it’d be nice to be able to do something about it—like, you know, gently (or not so gently) take it by the elbow and usher it from our planet’s path. So, how does one move a mountain in space?

All It Takes Is a (Maybe Nuclear) Nudge

There are many ideas about how t...