Last Week in AI #145: DeepMind AI helps discover new math theorems, Timnit Gebru's new research center, Clearview AI warned to stop using UK data, and more!
DeepMind AI proposes new knot theory conjectures, Ex-Googler Timnit Gebru starts her own AI research center, UK regulators warn Clearview AI to stop using illegally obtained face data, and more!
In a recent Nature paper, DeepMind and collaborators demonstrated how to use AI to discover new mathematical theorems. The AI in this case helped to propose potential correlations between the algebraic and geometric theory of knots from a large dataset of knot patterns. Mathematicians took these suggested correlations and tried to prove them. Some suggestions were quickly shown to be false, but eventually, the mathematicians proved one new theorem that is "complex, subtle, and unintuitive."
While computers have aided mathematical proofs for decades, they've mostly done so through brute-force searches for counterexamples that can disprove a conjecture. This marks the first time a computer program has proposed potential conjectures on its own and helped make a new mathematical discovery.
Our take: I really enjoyed Numberphile's podcast on this development with interviews of some of the researchers involved. This type of automated conjecture proposal will be especially helpful for branches of math that can build a large dataset of existing patterns, and we may soon see more mathematicians adopt this tool to make new discoveries. For types of math where such a dataset cannot be built, the approach here will be less useful.
When asked about the future potential of AI in math, one researcher in the podcast gave a very positive outlook, arguing that humans' ability to do math is much less developed than our other, more innate abilities, like vision, so there may be a lot of low-hanging fruit that AI-assisted math can help identify soon.
Almost exactly a year ago, prominent AI ethics researcher Timnit Gebru announced that she had been fired from Google in a dispute over a paper she had co-written. Now we know her current project: the Distributed Artificial Intelligence Research (DAIR) Institute, of which she is founder and executive director. The institute's website provides the following description of what DAIR is:
“We are an interdisciplinary and globally distributed AI research institute rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial. Our research reflects our lived experiences and centers our communities.”
A press release announcing the venture came out last week, explaining it as follows:
“DAIR [is] an independent, community-rooted institute set to counter Big Tech’s pervasive influence on the research, development and deployment of AI ... and is a response to the need [Timnit Gebru] sees for independent spaces where researchers across the globe can set the agenda and conduct AI research rooted in their communities and lived experiences … With DAIR, Gebru aims to create an environment that is independent from the structures and systems that incentivize profit over ethics and individual well-being.”
The press release also stresses that the institute is funded by nonprofit foundations, and is therefore free of the influence of Big Tech. The announcement arrives just as NeurIPS, the major AI conference, begins, where DAIR's published research ("Constructing a Visual Dataset to Study the Effects of Spatial Apartheid in South Africa") will be presented. As of now, DAIR's website lists four people on its team, with the lead author of the paper being its first research fellow.
Our take: Dr. Gebru has accomplished much as a researcher of AI ethics, so this announcement is certainly exciting. There’s no doubt that working on that subject within big tech companies such as Google and Facebook can result in conflicts of interest and restrictions on what may be published (as was the case with the paper that resulted in Dr. Gebru being fired). So, it feels right for this venture to be deliberate about not being funded by these companies and free to scrutinize their use of AI.
The idea of a distributed institute is a novel one and may make scaling harder, though COVID-19 has certainly shown that distributed work is possible. Overall, I’m excited to see what comes next from DAIR!
Clearview AI is again on the defensive. The UK's Information Commissioner's Office (ICO), citing a number of suspected breaches of national data protection law, declared its provisional intention to fine Clearview over £17 million (~$22.6M). This follows similar actions by regulators in Australia and Canada. The ICO claimed Clearview's database likely includes pictures of many UK citizens taken without their knowledge. Clearview had been offered on a free trial basis to law enforcement in the UK, but that trial has been discontinued. CEO Hoan Ton-That responded that he was disappointed the UK "misinterpreted" the goal of his company's technology, stating that Clearview AI might have been useful for UK law enforcement investigations into child sexual abuse.
Our take: I find debates over privacy in general quite difficult because while it feels sensible to value our digital privacy, the fact is that we often willingly give away our data in exchange for something of greater value. A person's data is worth little to nothing on its own, but the aggregated data of a large number of people is very valuable to a company like Google, which can then provide a valuable service in return without charging money. I think Clearview is a much trickier case given how it obtained its data without consent, and Hoan Ton-That's previous endeavors also feel incredibly sketchy. Clearview aside, the development of facial recognition technology has fueled a loss of privacy.
Even if I have misgivings about Ton-That's previous companies and Clearview in particular, I think his point that facial recognition could help in cases like catching child predators deserves to be taken into account when considering the technology more generally. I don't mean to make a normative claim about facial recognition either way here, just to note that its use involves tradeoffs: it has both positive and negative consequences, and the discussion has focused far more on the negative. We shouldn't take a distaste for Clearview AI as justification to dismiss the technology entirely, but should instead consider its effects more generally when deciding how we feel about it.
Check out our newest editorial! If you are a paid subscriber (thanks!) you can read all of it, and if you are a regular subscriber (also thanks!) you can read about 1/4th of it as a free preview — and you can also check out our last free-for-all editorial. Enjoy!
Ericsson And Uppsala University Team Up To Research Air Quality Prediction Using Machine Learning And Federated learning - "The researchers aspire to create prediction tools that can help figure out what steps can be taken ahead of time to enhance air quality and protect vulnerable groups from its consequences. "
The Development of Machine Learning Model that Understands Object Relationships - "In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene"
Yale researchers combat biases in machine learning algorithms - "Three Yale scientists are on a mission to produce objective machine learning algorithms from inherently biased training data."
Machine learning aids studies of quantum magnets – Physics World - "Machine-learning techniques can help us understand how quantum spin liquids behave, and thereby support experimentalists in their study of “candidate” materials that may (or may not) be quantum spin liquids."
Machine Learning Reduces Uncertainty in Breast Cancer Diagnoses - "A Michigan Tech-developed machine learning model uses probability to more accurately classify breast cancer shown in histopathology images and evaluate the uncertainty of its predictions."
How AI Is Aiming at the Bad Math of Drug Development - "A new drug typically takes more than a decade to develop, at a cost of almost $3 billion. That’s because about 90% of experimental medicines fail during the various stages of chemical engineering, or during animal or human trials. So drugmakers and investors are spending billions of dollars to turbocharge the search for new treatments using artificial intelligence. "
Robotic Kitchen Automation Levels - "What does a level 5 automated kitchen look like?"
Procedural storytelling is exploding the possibilities of video game narratives - "Procedural stories in video games often induce a specific kind of delight."
Botto, the decentralized AI/human artist, makes its first million - "An AI algorithm called Botto has made somewhere around US$1.3 million at auction for its first six NFT artworks. Botto generates thousands of images, and a community of humans vote to influence its direction and decide which pieces go to auction."
Baidu finds strength in AI and autonomous vehicles - "Baidu’s fully consolidated non-GAAP operating income dropped by 38% year-on-year in the third quarter, but analysts point to strength for the company in its shift to automotive applications for artificial intelligence (AI)."
Standard AI Completes Computer Vision Acquisition, Bolsters ML Leadership to Drive the Future of Autonomous Retail - "Addition of ThirdEye Labs’ ML team and Sameer Qureshi as Vice-President of Machine Learning brings wave of new AI and machine learning talent to Standard AI"
New list highlights 40 top startups solving business problems with artificial intelligence - "Four Seattle-area companies made a list of top private companies building business applications with artificial intelligence and machine learning."
Waymo's co-CEO on the next stop for driverless cars: curbside grocery delivery - "Autonomously driven cars from Waymo, a subsidiary of Google parent Alphabet, will start delivering groceries curbside in San Francisco in early 2022."
Twitch Introduces Machine Learning Feature to Detect Suspicious Users - "Twitch introduces new technology that will combat people who create new accounts in order to evade bans."
Softbank led funding round for sustainability start-up Clarity AI - "Softbank's Vision Fund 2 has led the latest funding round for sustainability data technology platform Clarity AI, alongside existing investors including BlackRock, valuing the start-up at $450 million, its founder told Reuters."
Why Adversarial Image Attacks Are No Joke - "Attacking image recognition systems with carefully-crafted adversarial images has been considered an amusing but trivial proof-of-concept over the last five years."
AI Is Learning to Manipulate Us, and We Don’t Know Exactly How - "Trying to decide what to give your brother for Christmas? Where to invest your savings? Whether to paint the kitchen white or yellow? Don’t worry! AI is here to help. And that’s scary."
Artists criticise Spotify CEO Daniel Ek’s investment in AI defence tech - "Spotify‘s CEO Daniel Ek’s investment in artificial intelligence defence technology has been met with backlash by artists and users on the streaming platform. "
Europe’s AI Act falls far short on protecting fundamental rights, civil society groups warn - "Civil society has been poring over the detail of the European Commission’s proposal for a risk-based framework for regulating applications of artificial intelligence which was proposed by the EU’s executive back in April."
San Francisco agency opposes Cruise robotaxi application, citing safety - "San Francisco's public transit operator has challenged an application by Cruise to charge for robotaxi rides, saying on Wednesday promotional videos from the General Motors Co (GM.N) unit show Cruise passengers illegally hopping in and out of vehicles in the middle of the street instead of at the curb."
US rejects calls for regulating or banning ‘killer robots’ - "The US has rejected calls for a binding agreement regulating or banning the use of “killer robots”, instead proposing a “code of conduct” at the United Nations."
Who, exactly, authored this AI-generated spin on Alfred Hitchcock’s Vertigo? - "Generated from running the Alfred Hitchcock classic Vertigo (1958) through an artificial intelligence computer 20 times, the resulting film offers a glimpse into the technology’s current capabilities and limitations."
Pentagon weighing reorganization of AI, data offices - "In an effort to streamline processes and create a cohesive approach to the use of artificial intelligence and data, the Pentagon is considering a reorganization of three key technological offices"
AI Robotics alongside us today - "Autonomous robots have long been part of an aspirational future. My guests are proving how recent advances in Artificial Intelligence (AI) are turning this into a reality."
Reith Lectures: AI and why people should be scared - "Prof Stuart Russell's four lectures, Living With Artificial Intelligence, address the existential threat from machines more powerful than humans - and offer a way forward."
Check out our audio discussions of last week’s AI news stories!
Copyright © 2021 Skynet Today, All rights reserved.