Deepfakes, synthetic media often indistinguishable from the real thing, are arguably one of the scariest technologies available today. By making it possible to create realistic videos of politicians, mimicking their voices and appearances, deepfakes have the potential to spread massive amounts of disinformation. While the cat has long been out of the bag on deepfakes, there may be some small redemption in finding positive uses for the technology. A new HBO documentary, Welcome to Chechnya, provides such an example. Chechnya’s LGBTQ population has faced “significant persecution, including unlawful detentions, torture, and other forms of abuse.” Because survivors cannot safely reveal their identities, the filmmakers used deepfake-like technology to overlay volunteers’ faces onto survivors’ faces, allowing those survivors to speak out about their experiences. Just as deepfakes allow the film to “shine light on human rights abuses while minimizing the risk for victims involved in the production,” they may have other uses that bring attention to the unfortunate, and sometimes horrid, issues that many of us remain unaware of.
Among the adverse side-effects brought on by the era of deep learning has been an immense carbon footprint. The massive amount of compute it takes to train deep learning models, compounded by the need to iterate on models and reproduce experiments, does not come without cost: a 2019 MIT Technology Review article notes that training a single AI model can emit as much carbon as five cars over their lifetimes. With so much attention being paid to the climate crisis, AI’s environmental impact has come under scrutiny as well. In an effort to stimulate change, a movement called “Green AI” seeks to prioritize leaner, less power-hungry AI models that can achieve performance comparable to their power-hungry counterparts. But progress requires measurement: by how much can we reduce AI’s carbon footprint? To that end, a team of researchers from Stanford, Facebook AI Research, and McGill University has developed a tool that measures the carbon emissions of a machine learning project. As machine learning systems become more ubiquitous, they will inevitably represent a greater share of carbon emissions. Tools such as this one, paired with a commitment to including low carbon emissions as an objective for machine learning systems, will become more and more important in the future.
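The basic arithmetic behind such emissions trackers is simple: energy drawn by the hardware over the length of the run, multiplied by the carbon intensity of the local electricity grid. The sketch below illustrates that calculation with made-up numbers; it is not the researchers' actual tool, and the wattage, duration, and grid-intensity figures are assumptions for illustration only.

```python
# Minimal sketch of how a carbon-emissions estimate for an ML training run
# can be computed: hardware power draw x time -> energy, then energy x the
# grid's carbon intensity -> emissions. All numbers below are illustrative
# assumptions, not measurements from the tool described above.

def estimate_emissions_kg(avg_power_watts: float, hours: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Return an estimate of CO2 emissions in kilograms."""
    energy_kwh = avg_power_watts * hours / 1000.0  # watts -> kilowatt-hours
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 8 GPUs at ~300 W each, training for 72 hours, on a grid
# averaging 0.4 kg CO2 per kWh (all assumed values).
emissions = estimate_emissions_kg(avg_power_watts=8 * 300, hours=72,
                                  grid_kg_co2_per_kwh=0.4)
print(f"Estimated emissions: {emissions:.1f} kg CO2")
```

Real trackers refine each factor: they sample actual GPU/CPU power draw during training rather than assuming a constant wattage, and they look up region-specific (sometimes real-time) grid carbon intensity.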
Advances & Business
Entering a Building May Soon Involve a Thermal Scan and Facial Recognition - Businesses install facial recognition that simultaneously screens for fevers in what could become the new normal.
Elementary Robotics is making its quality assurance robots commercially available - Two years and over $17 million after it first began working on its robots for quality assurance, the Los Angeles-based Elementary Robotics has finally made its products commercially available.
Self-driving forklifts are here to revolutionize warehouses, for better or worse - Imagine a future Blade Runner-esque workplace in which human and robot co-workers work side by side without it seeming in the least bit remarkable.
Why China’s Race For AI Dominance Depends On Math - The world first took notice of Beijing’s prowess in artificial intelligence (AI) in late 2017, when BBC reporter John Sudworth, hiding in a remote southwestern city, was located by China’s CCTV system in just seven minutes. At the time, it was a shocking demonstration of power.
Concerns & Hype
Detroit Police Chief: Facial Recognition Software Misidentifies 96% of the Time - Detroit regulated facial recognition software. It’s still used only on Black people.
ACM calls for governments and businesses to stop using facial recognition - An Association for Computing Machinery (ACM) tech policy group today urged lawmakers to immediately suspend use of facial recognition by businesses and governments, citing documented ethnic, racial, and gender bias.
MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs - MIT has taken offline its highly cited dataset that trained AI systems to potentially describe people using racist, misogynistic, and other problematic terms. The database was removed this week after The Register alerted the American super-college.
We need a new field of AI to combat racial bias - Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement.
Machine learning systems, fair or biased, reflect our moral standards - While machine learning systems and algorithms are theoretically more objective than humans, this does not necessarily result in different or fairer results.
Analysis & Policy
AWS, Google, and Mozilla back national AI research cloud bill in Congress - A group of more than 20 organizations including tech giants like AWS, Google, IBM, and Nvidia joined schools like Stanford University and The Ohio State University today in backing the idea of a national AI research cloud.
Artificial Intelligence: The time for ethics is over - Organising ethical debates has long been an efficient way for industry to delay and avoid hard regulation. Europe now needs strong, enforceable rights for its citizens, writes Green MEP Alexandra Geese.
US installing AI-based border monitoring system - The US border control agency said Thursday it was expanding an unmanned monitoring system based on artificial intelligence on the US-Mexico frontier, granting a key contract to Silicon Valley startup Anduril Industries.
Expert Opinions & Discussion within the field
Reflecting on a year of making machine learning actually useful - For those of you who don’t know my story, I’ll give you the short version: I did machine learning research for two years, decided not to get a PhD at the time, and became the first machine learning engineer at Viaduct, a startup that provides an end-to-end machine learning platform for automakers.
Yann LeCun Quits Twitter Amid Acrimonious Exchanges on AI Bias - Turing Award winner and Facebook Chief AI Scientist Yann LeCun has announced his exit from the popular social networking platform Twitter after getting involved in a long and often acrimonious dispute regarding racial biases in AI.