Last Week in AI #203: ChatGPT's new rival, generative AI continues to make waves, new concerns about Deepfakes and more!
Anthropic’s Claude improves on ChatGPT but still suffers from limitations, Conservatives Think ChatGPT Has Gone ‘Woke’, Google is freaking out about ChatGPT
You’ve obviously heard of ChatGPT. Now, meet Claude. Anthropic, the AI alignment startup founded by ex-OpenAI employees, has developed a ChatGPT-like system that appears to improve upon OpenAI’s. Using a technique called “constitutional AI,” Anthropic’s researchers had a model write responses to a variety of prompts, then revise those responses according to a set of “constitutional” principles. The model is available in closed beta, and after a media embargo lifted researchers have posted their interactions with the system on Twitter, observing improvements in robustness and creativity. But, of course, Claude is not without its own limitations: for example, asking the system questions in base 64 bypasses its built-in filters for harmful content. Users report it is worse at math than ChatGPT, and doesn’t solve the problem of writing factually wrong statements.
Our Take: As of this writing, OpenAI has moved to monetize ChatGPT. Unless things change drastically, I do not expect Anthropic to do anything of the sort. I think a more interesting point, though, follows from the impact of OpenAI’s earlier InstructGPT, the first GPT model offered through OpenAI’s API that was trained using Reinforcement Learning form Human Feedback (RLHF), the same training scheme that has made ChatGPT so nice to work with. RLHF was, to our knowledge, the first product of AI alignment research that made its way into a product that had significant usage. ChatGPT is important for similar reasons, while Claude represents further work by alignment researchers that may end up in systems used by many. This may portend an even greater role for AI alignment in mainstream ML research going forward.
How should automated systems like ChatGPT handle complex issues of truth and fairness, and is there bias embedded within them? Following a recent article published in the National Review, concerns have been rising from right-wing commentators over AI “going woke”. Yet these criticisms ignore the overwhelming history of AI’s biased impacts: which have largely targeted the marginalized groups that these newer systems aim to protect. A rush to write off ChatGPT and other technologies as “left-wing” may distract from more pressing conversations regarding the safety and equity of AI, such as the racial discrepancies in facial recognition systems. Political considerations will always be present in determining which inputs and outputs these systems have access to. As new algorithms begin to move in a more progressive direction, are they “adding bias” or merely working against the biases that have existed since their creation?
Our Take: The article makes a crucial point that has been forgotten in recent discussions on AI ethics: those who fear AI gaining a left-leaning bias cannot ignore the widespread history of other biases that these technologies have proliferated. If past algorithms hadn’t displayed as much potential to target marginalized groups, we wouldn’t be needing to wire stricter guidelines into the systems to prevent them from following that tendency. Even today, the impact of something like ChatGPT is far from uniform: left-leaning biases may sometimes be present, but discriminatory ones remain. While many people are selective in which biases they draw attention to, these conversations don’t have to be fully unproductive. I checked out some of the comments on the post that started this debate, and found many issues where consensus may be closer than it seems. For example, some people fear that integrating AI into the justice system would “criminalize conservative opinions”. Yet those on the other side have also raised concerns of legal AI perpetuating mass incarceration and discrimination. Rather than disparaging AI ethics as “woke propaganda”, the path forward will come through offering concrete solutions on how to ensure the fairness of all AI systems: acknowledging the past to move into the future.
Recent advancements in deepfake technology have sparked concern among AI experts and foreign policy analysts from Northwestern University and the Brookings Institute, who have released a report outlining the growing challenges posed by deepfakes in the near future. With the advent of newer generative AI methods such as stable diffusion, the ease of developing deepfakes has significantly increased, making them a potential threat for targeted military and intelligence operations. The report, released last week, calls for a collaborative effort to address the issue, including the development of a code of conduct for government use of deepfakes, as well as increased public awareness and the development of detection methods.
Our Take: Deepfakes have been a cause for concern for some time now, and with the advancements in AI technology, their quality and ease of production have become even more sophisticated. The report released by Northwestern University and the Brookings Institute highlights the need for a shared responsibility in addressing this issue. As citizens, it is important to exercise caution and think twice before generating and sharing deepfakes. Additionally, government and policy groups must establish regulations and guidelines surrounding the technology. From our perspective, it is crucial to develop effective methods for detecting and combating deepfakes, utilizing the wealth of data available from AI-synthesized sources. However, it is also important to anticipate and prepare for more advanced and creative deepfake generation methods in the future. Overall, addressing the challenges posed by deepfakes requires a comprehensive and collaborative effort across various sectors.
Researchers use AI to triage patients with chest pain - “Artificial intelligence (AI) may help improve care for patients who show up at the hospital with acute chest pain, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).”
AI21 Labs intros an AI writing assistant that cites its sources - “ChatGPT, the AI that can write poems, emails, spreadsheet formulas and more, has attracted a lot of negative publicity lately.”
Giant Chinese drone-carrying AI ship enters service as research vessel - “The Zhuhaiyun carries dozens of unmanned vessels that can monitor the sea and air.”
Do Users Write More Insecure Code with AI Assistants? - “Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.”
Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk - “OpenAI researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes.”
CNET Is Experimenting With an AI Assist. Here’s Why - “There’s been a lot of talk about AI engines and how they may or may not be used in newsrooms, newsletters, marketing and other information-based services in the coming months and years.”
Infinite AI Interns for Everybody - “Robert Solow, the Nobel Prize–winning economist, famously said in 1987 that you can see the computer revolution everywhere but in the productivity statistics. I predict 2023 is the year that finally changes, thanks to artificial intelligence.”
Inside CNET’s AI-powered SEO money machine - “Every morning around 9AM ET, CNET publishes two stories listing the day’s mortgage rates and refinance rates. The story templates are the same every day. Affiliate links for loans pepper the page.”
Therapy by chatbot? The promise and challenges in using AI for mental health - “Just a year ago, Chukurah Ali had fulfilled a dream of owning her own bakery — Coco’s Desserts in St. Louis, Mo. — which specialized in the sort of custom-made ornate wedding cakes often featured in baking show competitions.”
AI Passes U.S. Medical Licensing Exam - “Two artificial intelligence (AI) programs – including ChatGPT – have passed the U.S. Medical Licensing Examination (USMLE), according to two recent papers.”
Detroit’s Atwater Brewery releases AI-made beer and we’re not sure how to feel about it - “Atwater Brewery sees your AI-generated Instagram portraits and raises you an AI-designed beer. That’s right. The Detroit-based brewery is releasing a beer conceived by artificial intelligence, appropriately called Artificial Intelligence IPA.”
Tesla video promoting self-driving was staged, engineer testifies - “A 2016 video that Tesla used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer.”
Company creates 2 artificial intelligence interns: ‘They are hustling and grinding’ - “Artificial intelligence isn’t just making inroads in technology. Soon, AI may replace human beings in jobs as evidenced by one company that has created two AI interns.”
Microsoft launches Azure OpenAI service with ChatGPT coming soon - “Microsoft is rolling out its Azure OpenAI service this week, allowing businesses to integrate tools like DALL-E into their own cloud apps.”
That Microsoft deal isn’t exclusive, video is coming, and more from OpenAI CEO Sam Altman - “OpenAI co-founder and CEO Sam Altman sat down for a wide-ranging interview with this editor late last week, answering questions about some of his most ambitious personal investments, as well as about the future of OpenAI. There was much to discuss.”
Davos 2023: CEOs buzz about ChatGPT-style AI at World Economic Forum - “DAVOS, Switzerland, Jan 17 (Reuters) - Business titans trudging through Alpine snow can’t stop talking about a chatbot from San Francisco.”
Google’s Treasured AI Unit Gets Swept Up in 12,000 Job Cuts - “(Bloomberg) – Alphabet Inc. is reorganizing its treasured artificial intelligence unit as part of the company’s sweeping job cuts announced on Friday, according to an internal memo.”
Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight - “Last month, Larry Page and Sergey Brin, Google’s founders, held several meetings with company executives. The topic: a rival’s new chatbot, a clever A.I. product that looked as if it could be the first notable threat in decades to Google’s $149 billion search business.”
Adobe, Facing Blowback, Says Customer Data Not Used to Train AI - “Adobe Inc. Chief Product Officer Scott Belsky said the company has never trained its generative artificial-intelligence services on customer projects, responding to a wave of user criticism.”
Google is freaking out about ChatGPT - “The recent launch of OpenAI’s AI chatbot ChatGPT has raised alarms within Google, according to reports from The New York Times.”
Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content - “Getty Images is suing Stability AI, creators of popular AI art tool Stable Diffusion, over alleged copyright violation.”
This AI expert has 90 days to find a job — or leave the U.S. - “Huy Tu still remembers their first day of work at Instagram. Tu grew up in Ho Chi Minh City, Vietnam, in a working class family. The idea of getting a job at a world-famous company like Instagram seemed like a fantasy.”
Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach - “While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily “the best paper in the class.”
CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors - “The news sparked outrage. Critics pointed out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of current-generation AI text generators is notoriously poor.”
Tesla-induced pileup involved driver-assist tech, government data reveals - “The Tesla Model S that braked sharply and triggered an eight-car crash in San Francisco in November had the automaker’s controversial driver-assist software engaged at the time, according to data the federal government released Tuesday.”
AI Art Generators Hit With Copyright Suit Over Artists’ Images - “A group of artists is taking on AI generators Stability AI Ltd., Midjourney Inc., and DeviantArt Inc. in what would be a first-of-its-kind copyright infringement class action over using copyrighted images to train AI tools.”
OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic - “ChatGPT was hailed as one 2022’s most impressive technological innovations upon its release last November.”
FBI chief says he’s ‘deeply concerned’ by China’s AI program - “FBI Director Christopher Wray says he is “deeply concerned” about the Chinese government’s artificial intelligence program, asserting that it was “not constrained by the rule of law.”
Cheaters Hacked an AI Bot—and Beat the Rocket League Elite - “Last week, Reed Wilen, an elite gamer who uses the handle “Chicago” in Rocket League, a popular vehicular-soccer game, encountered a strange and troubling new opponent. The player seemed like a novice at first, moving their rocket-powered vehicle in a hesitant and awkward way.”
A rocky past haunts the mysterious company behind the Lensa AI photo app - “A Belarussian millionaire living in Cyprus. A dinner with the CEO of Snap. A six-figure patent troll case. They are all part of the history of Prisma Labs, a largely obscure artificial intelligence startup that spent years under the radar until November, when the company introduced “Magic Avatars.”
Why Artificial Intelligence Often Feels Like Magic - “In 2022, artificial-intelligence firms produced an overwhelming spectacle, a rolling carnival of new demonstrations. Curious people outside the tech industry could line up to interact with a variety of alluring and mysterious machine interfaces, and what they saw was dazzling.”
How Smart Are the Robots Getting? - “Franz Broseph seemed like any other Diplomacy player to Claes de Graaff. The handle was a joke — the Austrian emperor Franz Joseph I reborn as an online bro — but that was the kind of humor that people who play Diplomacy tend to enjoy”
Nick Cave criticises AI attempt to write Nick Cave lyrics: ‘This song sucks’ - “Nick Cave has written a scathing review of an artificial intelligence system that tried to write a song “in the style of Nick Cave”. The Bad Seeds frontman responded after a fan sent him lyrics written by ChatGPT, a chatbot that can be directed to imitate other people’s styles.”
New report outlines recommendations for defending against deepfakes - “Although most public attention surrounding deepfakes has focused on large propaganda campaigns, the problematic new technology is much more insidious, according to a new report by artificial intelligence (AI) and foreign policy experts at Northwestern University and the Brookings Institution.”
AI Research Task Force Votes to Send Final Report to Congress, President - “A majority of the National Artificial Intelligence Research Resource Task Force voted on Friday to approve the group’s report—an implementation plan and roadmap for a resource infrastructure supporting AI research—that will be sent to Congress and President Joe Biden in the next couple of week”