Last Week in AI #243: CA revokes Cruise's self-driving license, Jina v2 embedding model outperforms OpenAI, White House's new AI rules, Artists fight back against Gen AI with Nightshade, and more!
Cruise suspended after withholding injury info from DMV, Jina's open-source SOTA embedding model, White House's new AI rules for federal agencies, artists use data-poisoning to avoid data scraping
Cruise, one of the two autonomous vehicle companies offering fully driverless taxi rides in the U.S., has had its driverless permits suspended by the California Department of Motor Vehicles (DMV). The suspension came after Cruise withheld video footage from an investigation into an incident in which a Cruise vehicle ran over a pedestrian and then attempted a "pullover maneuver" while the pedestrian was still underneath it. The DMV only learned the full extent of the incident from another government agency. Although a Cruise spokesperson claims the company showed the full video to the DMV, the suspension took effect immediately. Cruise can still operate its vehicles with a safety driver present, just not in fully driverless mode, and it has paused driverless operations in San Francisco as a result.
Berlin-based firm Jina AI has launched its second-generation text embedding model, jina-embeddings-v2, the first fully open-source model to support an 8K (8,192-token) context length. This is significant because Jina v2 matches the performance of OpenAI's proprietary model, text-embedding-ada-002, on the Massive Text Embedding Benchmark (MTEB) leaderboard. Notably, Jina's model outperformed OpenAI's in the Classification, Reranking, Retrieval, and Summarization average scores. The extended 8K context length opens up use cases in fields such as law, medical research, financial forecasting, and conversational AI. The models are free to download on Hugging Face, and Jina AI provides two sizes: a base model for heavy-duty tasks and a smaller model for lightweight applications. Going forward, Jina AI plans to publish an academic paper on the model, develop an OpenAI-style embeddings API platform, and explore multilingual embeddings.
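As a rough illustration of how an embedding model like this gets used for retrieval, here is a minimal cosine-similarity sketch. The vectors are toy stand-ins for real embeddings, and the commented-out loading code (including the model id) is an assumption about the Hugging Face hub usage, not taken from the article:

```python
import numpy as np

# Loading the real model (requires `transformers` and a download) would
# look roughly like this -- model id assumed, not confirmed by the article:
#   from transformers import AutoModel
#   model = AutoModel.from_pretrained(
#       "jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
#   vecs = model.encode(["query text", "doc one", "doc two"])

def cosine_similarity(a, b):
    """Standard similarity measure used to compare text embeddings."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dim stand-ins in place of real 768-dim embedding vectors:
query = [0.9, 0.1, 0.0]
docs = {"doc one": [0.8, 0.2, 0.1], "doc two": [0.0, 0.1, 0.9]}

# Retrieval = return the document whose embedding is closest to the query.
best = max(docs, key=lambda d: cosine_similarity(query, docs[d]))
```

The long 8K context matters here because each vector can summarize an entire document rather than a short chunk, so fewer chunks need to be stored and compared.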
The Biden administration is set to announce a comprehensive executive order on artificial intelligence (AI), marking the U.S. government's most significant attempt to regulate this rapidly evolving technology. The order will require advanced AI models to undergo assessments before federal workers can use them, and it will also ease immigration barriers for highly skilled workers to boost the U.S.'s technological edge. The order comes as the European Union and other governments are working to regulate the riskiest uses of AI. The executive order is expected to build on voluntary commitments signed by 15 companies, including OpenAI, Google, Adobe, and Nvidia, to develop technology to identify AI-generated images and share safety data with the government and academics. The assessments, known as "red teaming," will likely be led by the National Institute of Standards and Technology (NIST). The order also includes provisions to change the immigration process and requires federal agencies to assess the current size of the AI workforce.
The article discusses a new data poisoning tool, Nightshade, designed to help artists protect their work from being used without consent by generative AI models. The tool, while potentially open to misuse, would require thousands of poisoned samples to impact larger models significantly. Experts, such as Vitaly Shmatikov from Cornell University and Gautam Kamath from the University of Waterloo, highlight the need for robust defenses against such attacks and the importance of addressing vulnerabilities in AI models. The tool is seen as a potential deterrent for AI companies, encouraging them to respect artists' rights and possibly prompting them to pay royalties. Artists, such as Eva Toorenent and Autumn Beverly, express hope that Nightshade will shift the power balance back to the creators, protecting their work from unauthorized use.
Twelve Labs is building models that can understand videos at a deep level - Twelve Labs is building AI models that can understand videos at a deep level, allowing developers to create apps that can search through videos, classify scenes, extract topics, and more, with potential applications in ad insertion, content moderation, media analytics, and automatic highlight reel generation.
Microsoft opens early access to AI assistant for infosec, Security Copilot - Microsoft is opening up the early access program for its flagship cybersecurity AI product, Security Copilot, which aims to save time and upskill security teams by providing step-by-step instructions on managing incidents and generating natural language reports.
Google Maps is becoming more like Search — thanks to AI - Google is adding new AI-powered features to Maps, including immersive navigation, easier-to-follow driving directions, and better organized search results, in an effort to make Maps more like Search and maintain a competitive edge over rivals like Apple and Microsoft.
Amazon now lets advertisers use generative AI to pretty up their product shots - Amazon is beta testing AI image generation tools for advertisers, allowing them to easily create lifestyle imagery for their product ads, potentially leading to higher click-through rates.
Canva launches free AI 'Classroom Magic' tools for educators - Canva has launched a suite of AI tools called Classroom Magic, designed to assist educators with lesson planning, content editing, document reformatting, image and text editing, multilingual lesson support, and accessibility checking.
Sick of meetings? Microsoft’s new AI assistant will go in your place - Microsoft has unveiled Copilot, an AI assistant that can attend meetings on behalf of employees, generating transcripts, summaries, and notes once the meeting is over, but some managers and workers are skeptical about the use of AI in meetings, citing the lack of nuanced human judgment and social skills needed for effective participation.
AI-based data center optimization startup MangoBoost raises $55M Series A - AI-based data center optimization startup MangoBoost has raised $55 million in a Series A funding round to develop its DPU hardware and software solutions that help enterprises and data centers manage massive amounts of data to optimize workloads, with the goal of reducing power consumption and improving performance with cost efficiency and security.
Qualcomm’s next big Snapdragon chip has leaked, and it’s full of AI features - Qualcomm's upcoming Snapdragon 8 Gen 3 chip for Android phones will have a heavy focus on AI, with features such as AI camera tools and the ability to run various AI models, making it a potential competitor for Google's Tensor processors.
Lenovo and NVIDIA Announce Hybrid AI Solutions to Help Enterprises Quickly Adopt GenAI - Lenovo and NVIDIA have announced an expansion of their partnership to bring generative AI to every enterprise, offering fully integrated systems that enable businesses to deploy tailored AI applications across various industries.
Wall Street is demanding financial results that support all the AI hype. Microsoft’s latest earnings finally delivered - Wall Street is eager to see financial results that support the hype around AI, and Microsoft's latest earnings report, which showcased the business implications of AI adoption, delivered impressive numbers that exceeded expectations.
AI Chip Startup Rebellions Is in Talks to Raise $100 Million - AI chip startup Rebellions is in discussions to secure $100 million in funding to accelerate the advancement of their next-generation AI chip.
Intel’s in trouble as Nvidia and AMD reportedly prepare Arm-based desktop CPUs - Nvidia and AMD are reportedly working on Arm-based desktop CPUs, potentially posing a threat to Intel's dominance in the market.
Google, Microsoft, OpenAI, Anthropic Drive Frontier Model Forum's Director Appointment and $10M AI Fund - The Frontier Model Forum has appointed Chris Meserole as its Executive Director and launched a $10 million AI Safety Fund, backed by tech giants like Google and Microsoft, to support independent researchers in innovating evaluation methods and "red teaming" tactics to ensure the safe development of advanced AI systems.
AI companies drive demand for office space in tech hubs, new study finds - The boom in artificial intelligence is driving demand for office space in tech hubs, with AI companies seizing more office space, especially in the San Francisco Bay Area, as they look to grow quickly.
Meet The New AI Unicorns Of 2023 - One in five of the new billion-dollar startups to join The Crunchbase Unicorn Board in 2023 were AI companies, collectively adding $21 billion in value and dominated by generative AI companies in various sectors.
Luzia lands $10 million in funding to expand its WhatsApp-based chatbot - Spain-based startup Luzia has raised $10 million in funding to expand its WhatsApp-based chatbot, which aims to bring AI chatbot technology to mainstream, non-technical audiences in the Spanish- and Portuguese-speaking markets.
Stability AI General Counsel, HR Chief Depart From Startup - The departures of Stability AI's general counsel and head of HR add to a string of executive exits, suggesting continued instability at the startup.
Large language models propagate race-based medicine - Large language models (LLMs) integrated into healthcare systems may perpetuate harmful, race-based medicine by providing inaccurate and biased responses to medical questions related to race, according to a study assessing four commercially available LLMs.
AI ‘breakthrough’: neural net has human-like ability to generalize language - Scientists have created a neural network with human-like ability to make generalizations about language, which could lead to machines that interact with people more naturally and address the gaps and inconsistencies in current AI systems.
Researchers develop ‘Woodpecker’: A groundbreaking solution to AI’s hallucination problem - Researchers have developed a framework called "Woodpecker" that corrects hallucinations in multimodal large language models (MLLMs) without the need for retraining, offering a promising solution to a significant problem in AI.
Proximal Policy Optimization (PPO): The Key to LLM Alignment - Recent AI research has shown that reinforcement learning (RL), specifically reinforcement learning from human feedback (RLHF), is crucial for training large language models (LLMs), and Proximal Policy Optimization (PPO) is a popular and effective RL algorithm used in the alignment of LLMs.
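The heart of PPO is its clipped surrogate objective, which keeps each policy update close to the behavior policy. A minimal numpy sketch (a generic illustration of the standard algorithm, not code from the article):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    logp_new / logp_old: log-probabilities of the sampled actions under
    the current and behavior policies; advantages: advantage estimates.
    """
    ratio = np.exp(logp_new - logp_old)            # importance ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # PPO maximizes the elementwise minimum, which removes any incentive
    # to push the ratio outside [1 - eps, 1 + eps] in a single update.
    return np.mean(np.minimum(unclipped, clipped))
```

In RLHF, the advantages come from a reward model trained on human preference data, and this objective is what nudges the LLM toward responses that humans rate highly without drifting too far from the reference policy.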
Contrastive Preference Learning: Learning from Human Feedback without RL - The article presents Contrastive Preference Learning, a method for learning from human feedback without the use of reinforcement learning.
Branch-Solve-Merge Improves Large Language Model Evaluation and Generation - Branch-Solve-Merge (BSM) is a Large Language Model program that improves the evaluation and generation of text by decomposing tasks into sub-tasks and fusing their solutions, resulting in enhanced correctness, consistency, and coherence.
Exploring the Boundaries of GPT-4 in Radiology - The paper evaluates GPT-4's performance and limitations across a range of radiology text tasks.
SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding - The article discusses the merging of vision foundation models for semantic and spatial understanding in AI.
Towards Understanding Sycophancy in Language Models - The article discusses the issue of sycophancy in language models and presents recommendations for addressing it.
MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models - MusicAgent is an AI agent that utilizes large language models to understand and generate music.
SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents - SOTOPIA is an open-ended environment that simulates social interactions between artificial agents and evaluates their social intelligence, revealing significant differences between models and identifying challenging scenarios for AI systems.
Improving Wikipedia verifiability with AI - AI is being used to improve the verifiability of information on Wikipedia by developing tools that can retrieve and verify citations more accurately.
DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation - DreamSpace generates textures for room-scale indoor scenes from text prompts, using panoramic texture propagation to restyle a captured room space.
FreeNoise: Tuning-Free Longer Video Diffusion Via Noise Rescheduling - A study proposes FreeNoise, a tuning-free and time-efficient paradigm to enhance the generative capabilities of pretrained video diffusion models, allowing for the generation of longer videos conditioned on multiple text prompts.
AI risk must be treated as seriously as climate crisis, says Google DeepMind chief - The risks of artificial intelligence must be treated as seriously as the climate crisis, according to Demis Hassabis, the CEO of Google's AI unit, who called for the creation of an oversight body similar to the Intergovernmental Panel on Climate Change (IPCC) to address the dangers posed by AI, including the creation of bioweapons and the existential threat of super-intelligent systems.
Health providers say AI chatbots could improve care. But research says some are perpetuating racism - Popular AI chatbots used in healthcare are perpetuating racist and debunked medical ideas, potentially worsening health disparities for Black patients, according to a study by Stanford School of Medicine researchers.
AI-created child sexual abuse images ‘threaten to overwhelm internet’ - The Internet Watch Foundation has warned that artificial intelligence-generated child sexual abuse images are becoming a reality and pose a significant threat to the internet, with nearly 3,000 AI-made abuse images that broke UK law already identified.
Top AI Shops Fail Transparency Test - Fifteen major AI companies have failed a transparency test despite signing on to the White House's commitments to manage AI risks.
AI researchers uncover ethical, legal risks to using popular data sets - AI researchers have found that many popular data sets used to train generative AI systems are improperly licensed, with about 70% not specifying the correct license or being mislabeled, leading to confusion and potential copyright issues for developers.
Google Pixel’s face-altering photo tool sparks AI manipulation debate - Google Pixel's new face-altering photo tool, Best Take, uses machine learning to mix and match expressions from past photos, sparking a debate about AI manipulation.
Open-source AI firm Hugging Face confirms ‘regrettable accessibility issues’ in China - Hugging Face, an open-source AI firm, acknowledges accessibility issues in China after the country blocked access to its platform, but it remains unclear why the censorship occurred.
Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown - The article argues that fears about generative AI's impact on the misinformation landscape are overblown: claimed increases in the quantity, quality, and personalization of misinformation remain speculative, and such changes would likely have limited effects on its spread.
Is my co-worker AI? Bizarre product reviews leave Gannett staff wondering - Gannett staff are questioning whether some product reviews on their site were written by AI or humans, with the company claiming they were created by third-party freelancers hired by a marketing agency partner.
AI Has a Hotness Problem - AI image-generation tools tend to produce attractive faces because they are trained on photo databases biased toward attractive people, and because the tools tend to generate averaged faces, which humans rate as more attractive, resulting in AI-generated faces that are often better-looking than those of real people.
How the Foundation Model Transparency Index Distorts Transparency - The Foundation Model Transparency Index (FMTI) released by Stanford's Center for Research on Foundation Models (CRFM) is criticized for misleadingly measuring transparency for foundation models and instead measuring how well-documented commercial products are, as well as making critical factual errors and being biased against openly released models.
Ideologies of Awe & AI Art at the MoMA - Refik Anadol's "Unsupervised - Machine Hallucinations" at the MoMA uses AI to create visualizations of the museum's art archive, but the work unintentionally reinforces the idea that AI is complex and unknowable, leaving viewers passive and in awe of its power.
AI Godfathers Bengio and Hinton: Major tech companies should devote a third of AI budget to managing AI risk - Yoshua Bengio and Geoffrey Hinton, along with other AI experts, propose that major tech companies and governments allocate a third of their AI research and development budgets to AI safety and ethical use, in order to address the growing risks associated with artificial intelligence.
What the U.N.’s AI Advisory Group Will Do - The U.N. has unveiled a new advisory body dedicated to developing consensus around the risks posed by artificial intelligence and how international cooperation can help meet those challenges, with the body's recommendations potentially deciding the form and function of a U.N. agency for the governance of AI.
UK to set up world's first AI safety institute, Sunak says - UK Prime Minister Rishi Sunak announces plans to establish the world's first AI safety institute, which will focus on advancing knowledge of AI safety and evaluating the risks associated with new AI models.
OpenAI forms new team to assess ‘catastrophic risks’ of AI - OpenAI is forming a new team to assess and mitigate the potential catastrophic risks associated with AI, including nuclear threats, chemical and biological threats, autonomous replication, AI tricking humans, and cybersecurity threats.
We don’t want to set up global AI regulator, says UK tech secretary - The UK government does not plan to establish a global regulator for artificial intelligence, but instead aims to set up international networks and frameworks to manage risks through its AI safety summit and Frontier AI Taskforce.
The Beatles: ‘final’ song Now and Then to be released thanks to AI technology - The long-awaited "final" Beatles song, Now and Then, featuring all four members, is set to be released next week thanks to the AI audio-separation technology developed for Peter Jackson's documentary Get Back, which was used to isolate John Lennon's vocals from an old demo recording.
Copyright © 2023 Skynet Today, All rights reserved.