It is widely believed that machine learning algorithms give political bots the ability to learn from their surroundings and interact with people in sophisticated ways. In events such as Brexit, where researchers believe political bots and disinformation played a key role, there is also a belief that AI allowed computers to pose as humans and manipulate the public conversation. In reality, however, AI has played little role in computational propaganda campaigns thus far; it is only beginning to be used.
Recently, tech leaders such as Mark Zuckerberg have claimed that AI will solve the problem of digital disinformation. In certain cases, machine learning can help social-media firms surface and verify fact-checks, but there is debate over whether flagging potentially false information is actually effective. These efforts also come after the fact, once false articles have already gone viral. Moreover, Facebook, Google, and others like them have not been especially zealous in their efforts to root out disinformation.
It will take a combination of human labor and AI to combat computational propaganda successfully, but it is not yet clear how exactly this will happen. While AI has accomplished a great deal, it still lacks common sense and general intelligence; the propaganda and disinformation we attribute to AI are therefore fundamentally human-driven. To address the problem of computational propaganda, we will need to focus on the people behind the tools.
Worries about risks to humanity from “strong” AI have been expressed by many, including Stephen Hawking and Elon Musk. While these are legitimate long-term worries, prioritizing them distracts from the ethical questions “weak” AI is already raising. AI is already working behind the scenes of many of our social systems, influencing high-stakes domains from criminal justice to healthcare. Deploying today’s “weak” AI will require making consequential choices that demand greater democratic oversight not just from AI developers, but from all members of society.
While there has been discussion of designing better algorithms to deal with bias, the data those algorithms are trained on are themselves biased; even if an algorithm achieves neutrality in itself, it is unlikely to be neutral once trained and deployed. For example, predictive policing systems are fed data that we know is biased: using these data risks driving overpolicing of marginalized communities, because those communities are already overrepresented in policing records.
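The self-reinforcing nature of this feedback loop can be sketched with invented numbers: a hypothetical toy model that allocates patrols proportionally to historical arrest counts, where more patrols in an area produce more recorded arrests there. All names, counts, and rates below are fabricated for illustration only.

```python
# Toy illustration (invented data): how biased historical records
# can reinforce themselves when used to allocate future policing.

def allocate_patrols(arrest_counts, total_patrols=100):
    """Allocate patrols proportionally to historical arrest counts."""
    total = sum(arrest_counts.values())
    return {area: round(total_patrols * c / total)
            for area, c in arrest_counts.items()}

def simulate_round(arrest_counts, patrols, detection_rate=0.1):
    """More patrols in an area -> more recorded arrests there,
    regardless of the true underlying crime rate."""
    return {area: count + int(patrols[area] * detection_rate)
            for area, count in arrest_counts.items()}

# Two areas with the SAME true crime rate, but area A is
# overrepresented in the historical data.
history = {"A": 80, "B": 20}
for _ in range(5):
    patrols = allocate_patrols(history)
    history = simulate_round(history, patrols)

print(history)  # -> {'A': 120, 'B': 30}
```

Even though both areas have identical true crime rates, the initial 4:1 disparity in the records never corrects itself: each round of "data-driven" allocation reproduces it exactly.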
While countermeasures that correct for bias in data may be a step in the right direction, they cannot correct for the deeper problem of which data were collected in the first place. Any approach that optimizes for procedural fairness without attention to the social context in which these systems operate is going to be insufficient. If our society were just, neutral algorithms might secure just outcomes; in the unjust society we live in, they will serve only to reinforce the status quo. Even with the “weak” AI we have today, the road to solving algorithmic bias is not a purely technical one. We will have to treat algorithmic bias as a moral and political problem in which all of us have a stake.
Advances & Business
The Tools of Generative Art, from Flash to Neural Networks - If the history of generative art is our generation’s history, let it be told through the narrative of the talented artists who embraced the new ideas and tools of our time.
What Happens When Machines Learn to Write Poetry - If kids can learn to write poetry, what about machines? A number of researchers, such as Kevin Knight and his students at USC, have been tackling the challenge of using AI to generate sonnets and other styles of poetry.
Can an AI be an inventor? Not yet. - Most of the time, artificial intelligence is simply a tool that helps inventors - for example, by synthesizing enormous data sets to find promising drugs or discover new materials. But what would happen if it were fully responsible for the act of invention itself? Ryan Abbott, a lawyer working on the Artificial Inventor Project, says that there are times when a piece of software or an algorithm should be considered the inventor of something.
Using Machine Learning to “Nowcast” Precipitation in High Resolution - The weather can affect a person’s daily routine in both mundane and serious ways, and the precision of forecasting can strongly influence how they deal with it.
‘Smile with your eyes’: How to beat South Korea’s AI hiring bots and land a job - In cram school-obsessed South Korea, students fork out for classes in everything from K-pop auditions to real estate deals. Now, top Korean firms are rolling out artificial intelligence in hiring - and jobseekers want to learn how to beat the bots.
An emotionally intelligent AI could support astronauts on a trip to Mars - A digital assistant that recognized emotions and responded to astronauts with empathy might vastly improve space missions that take months or years.
The Military Is Building Long-Range Facial Recognition That Works in the Dark - The U.S. military is spending more than $4.5 million to develop facial recognition technology that reads the pattern of heat emitted by faces in order to identify specific people.
Cool AI Highlights At CES - AI was definitely the dominant theme at CES this week. According to a keynote from LG’s president and CTO, Dr. I.P. Park, this technology is “an opportunity of our lifetime to open the next chapter in … human progress.”
An algorithm that learns through rewards may show how our brain does too - By optimizing reinforcement-learning algorithms, DeepMind uncovered new details about how dopamine helps the brain learn.
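The DeepMind work concerns reward prediction errors of the kind computed by temporal-difference learning. A minimal sketch of the classic TD(0) value update (the standard textbook algorithm, not DeepMind's distributional variant; the tiny two-state chain is an invented example):

```python
# TD(0) value learning on a trivial chain: s0 -> s1 -> terminal.
# States are valued via the reward prediction error (delta),
# the quantity dopamine neurons are thought to signal.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: nudge V[s] toward r + gamma * V[s_next]."""
    delta = r + gamma * V.get(s_next, 0.0) - V[s]  # prediction error
    V[s] += alpha * delta
    return delta

V = {"s0": 0.0, "s1": 0.0}
for episode in range(500):
    td0_update(V, "s0", 0.0, "s1")   # no reward on the first step
    td0_update(V, "s1", 1.0, None)   # reward arrives at the end

# V["s1"] approaches 1.0 and V["s0"] approaches gamma * V["s1"] = 0.9:
# the reward prediction shifts backward in time, mirroring how dopamine
# responses move from the reward itself to the cue that predicts it.
print(round(V["s0"], 2), round(V["s1"], 2))  # -> 0.9 1.0
```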
Using neural networks to solve advanced mathematics equations - Facebook AI Research has developed the first AI system that can solve advanced mathematics equations using symbolic reasoning. François Charton and Guillaume Lample’s system is capable of solving integration problems and both first- and second-order differential equations, outperforming traditional computation systems.
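For context on the kinds of problems involved, traditional computer-algebra systems solve integration and differential equations with hand-built symbolic rules. A quick sketch using SymPy, a standard open-source system (as a baseline illustration, not the Facebook model; the specific equations are chosen for the example):

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")

# Symbolic integration: integrate x*cos(x) with respect to x
print(sp.integrate(x * sp.cos(x), x))  # x*sin(x) + cos(x)

# First-order ODE: f'(x) = f(x); general solution C1*exp(x)
print(sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x)))

# Second-order ODE: f''(x) + f(x) = 0; solution in sin and cos
print(sp.dsolve(sp.Eq(f(x).diff(x, 2) + f(x), 0), f(x)))
```

The neural approach instead treats the equation as a sequence-to-sequence translation problem, which is what makes the reported results notable.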
Neuro-symbolic A.I. is the future of artificial intelligence. Here’s how it works - Neuro-symbolic A.I. attempts to combine learning and logic, allowing A.I. systems to break the world into symbols rather than relying on humans to do it for them and incorporating common sense reasoning and domain knowledge into Big Ol' Neural Nets. The results could lead to significant advances in A.I. systems tackling complex tasks from self-driving to natural language processing.
Walmart Expands Its Robotic Workforce to 650 Additional Stores - Walmart Inc.’s robot army is growing. The world’s largest retailer will add shelf-scanning robots to 650 more U.S. stores by the end of the summer, bringing its fleet to 1,000.
Remembering robotics companies we lost in 2019 - There are many reasons robotics companies fail. From an ill-conceived idea to burn rate and poor execution, building and running a sustainable robotics company is challenging. Robotics development requires a combination of technology expertise, team building and business acumen.
Robots on the move: Professional service robots set for double-digit growth - The idea of robots picking items from warehouse shelves may still seem futuristic today. But the future may be closer than many people think.
Concerns & Hype
Researchers: Are we on the cusp of an “AI winter”? - The last decade was a big one for artificial intelligence, but researchers in the field believe the industry is about to enter a new phase. Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated.
AOC is sounding the alarm about the rise of facial recognition: ‘This is some real life Black Mirror stuff’ - Rep. Alexandria Ocasio-Cortez is raising concerns about the spread of facial recognition, arguing that the technology will quickly become dystopic without regulation.
What the Struggles of Pizza and Coffee-Making Robots Mean for Investors - In recent weeks, two high-profile robotics startups specializing in food preparation have both significantly slashed costs. Automated coffee shop Café X shut down stores and laid off workers while Zume Pizza fired workers and pivoted from pizza-making robots to creating sustainable food packaging.
Analysis & Policy
The US just released 10 principles that it hopes will make AI safer - The White House has released 10 principles for government agencies to adhere to when proposing new AI regulations for the private sector.
Expert Opinions & Discussion within the field
Google Research: Looking Back at 2019, and Forward to 2020 and Beyond - There is enormous potential for machine learning to help with many important societal issues. We have been doing work in several such areas, as well as working to enable others to apply their creativity and skills to solving such problems.
The simplest explanation of machine learning you’ll ever read - You’ve probably heard of machine learning and artificial intelligence, but are you sure you know what they are? If you’re struggling to make sense of them, you’re not alone.
10 ML & NLP Research Highlights of 2019 - This post gathers ten ML and NLP research directions that I found exciting and impactful in 2019. For each highlight, I summarise the main advances that took place this year, briefly state why I think it is important, and provide a short outlook to the future.
Challenges of real-world reinforcement learning - Last week we looked at some of the challenges inherent in automation and in building systems where humans and software agents collaborate. When we start talking about agents, policies, and modelling the environment, my thoughts naturally turn to reinforcement learning (RL). Today’s paper choice sets out some of the current (additional) challenges we face getting reinforcement learning to work well in many real-world systems.
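The vocabulary of the paper (agents, policies, environments) maps onto a standard interaction loop. A minimal tabular Q-learning sketch on a toy corridor environment (invented for illustration; real-world RL is hard precisely because environments are not this clean):

```python
import random

random.seed(0)

# Toy corridor: states 0..4, start at state 0, reward for reaching 4.
def step(state, action):                  # action: 0 = left, 1 = right
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.1

def choose(s):
    if random.random() < eps:             # explore occasionally
        return random.choice((0, 1))
    # greedy action with random tie-breaking
    return max((0, 1), key=lambda a: (Q[(s, a)], random.random()))

for _ in range(200):                      # episodes
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned policy moves right toward the reward in every state.
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(4)}
print(policy)
```

Every simplification here (known state space, cheap resets, instant rewards, a simulator to explore in) is one of the assumptions the paper examines, since real systems rarely grant them.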