A team scoured the human proteome for antimicrobial molecules and found thousands, plus a surprise about how animals evolved to fight infections.
Marcelo Der Torossian Torres lifted the clear plastic cover off a petri dish one morning last June. The dish, still warm from its sleepover in the incubator, smelled of rancid broth. Inside it sat a rubbery bed of amber-colored agar, and on that bed lay neat rows of pinpricks—dozens of colonies of drug-resistant bacteria sampled from the skin of a lab mouse.
Torres counted each pinprick softly to himself, then did some quick calculations. The samples, taken from an abscess on a mouse that had gone untreated for the infection, had yielded billions of superbugs, or antibiotic-resistant bacteria. But to his surprise, some of the other rows on the petri dish seemed empty. These were the ones corresponding to samples from mice that had received an experimental treatment—a novel antibiotic.
Torres dug up other dishes cultured from more concentrated samples taken from the same mice that had gotten the antibiotic. These didn’t look empty. When he counted them up, he found that the antibiotic had nuked the bacterial load, leaving it up to a million times sparser than in the untreated mouse. “I got very excited,” says Torres, a postdoc specializing in chemistry at the University of Pennsylvania. But this custom antibiotic wasn’t entirely his own recipe. It took an artificial intelligence algorithm scouring a database of human proteins to help Torres and his team find it.
Read the full story in WIRED.
AI researcher Timnit Gebru, who left Google last year, says the incentives around AI research are all wrong.
Artificial intelligence researchers are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.
Organizations rely on AI to approve a loan or shape a defendant’s sentence. But the foundations upon which these intelligent systems are built are susceptible to bias. Bias from the data, from the programmer, and from a powerful company’s bottom line can snowball into unintended consequences. This is the reality AI researcher Timnit Gebru cautioned against at a RE:WIRED talk on Tuesday.
“There were companies purporting [to assess] someone’s likelihood of determining a crime again,” Gebru said. “That was terrifying for me.”
Read the full story in WIRED.
Two teams found different ways for quantum computers to process nonlinear systems by first disguising them as linear ones.
Sometimes, it’s easy for a computer to predict the future. Simple phenomena, such as how sap flows down a tree trunk, are straightforward and can be captured in a few lines of code using what mathematicians call linear differential equations. But in nonlinear systems, interactions can affect themselves: When air streams past a jet’s wings, the air flow alters molecular interactions, which alter the air flow, and so on. This feedback loop breeds chaos, where small changes in initial conditions lead to wildly different behavior later, making predictions nearly impossible — no matter how powerful the computer.
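That feedback loop can be seen in a few lines of Python using the logistic map, a standard toy model of chaos (it is not drawn from either paper, just an illustration of how a nonlinear system amplifies tiny differences in initial conditions):

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x) from initial condition x0."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        # Nonlinear: the state x appears twice, so it feeds back into its own update,
        # much as air flow around a wing alters the interactions that alter the flow.
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

# Two starting points that differ by one part in a billion...
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

# ...track each other closely at first, then diverge completely.
print(abs(a[5] - b[5]))    # still tiny after a few steps
print(max(abs(x - y) for x, y in zip(a, b)))  # eventually grows to order 1
```

A linear equation shows no such behavior: nearby starting points stay proportionally close forever, which is why linear systems are easy to predict and nonlinear ones can be hopeless.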
“This is part of why it’s difficult to predict the weather or understand complicated fluid flow,” said Andrew Childs, a quantum information researcher at the University of Maryland. “There are hard computational problems that you could solve, if you could [figure out] these nonlinear dynamics.”
That may soon be possible. In separate studies posted in November, two teams — one led by Childs, the other based at the Massachusetts Institute of Technology — described powerful tools that would allow quantum computers to better model nonlinear dynamics.
Read the full story in Quanta Magazine.