
AI Weekly Digest: Latest Insights on Artificial Intelligence - July 2 to July 8, 2023

Dear AI Enthusiasts,

We have an exciting update to share! Starting with this edition, we're taking the newsletter in a new direction, exclusively curating the most captivating AI-related content for you. Get ready to immerse yourself in the cutting-edge world of artificial intelligence as we bring you a selection of the most fascinating articles, insightful videos, thought-provoking podcasts, and engaging discussions, all centered on AI and its ever-evolving landscape.

  • GPT-4 API is now generally available

    GPT-4 is now generally available to all paying API customers, offering broader capabilities than previous models. The company is focusing its development effort on the Chat Completions API, which already handles 97% of API usage. Over the next six months, older models served through the legacy Completions API will be deprecated and replaced with newer models; developers will need to migrate to the Chat Completions API, or fine-tune newer base models, to continue beyond January 2024. Older embeddings and edits models will also be deprecated, and users will need to migrate to newer alternatives. The company will provide support for the transition and cover the costs of re-embedding content with the new models. The shift will let the company optimize compute capacity and invest further in the chat-based API. (Source)
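    The migration described above amounts to reshaping a plain prompt string into role-tagged messages. The sketch below assumes the `openai` Python SDK of that era; `to_chat_messages` is a hypothetical helper written here for illustration, not part of the SDK.

    ```python
    # Sketch of moving from the legacy Completions API to the Chat
    # Completions API: the prompt string becomes a list of messages.

    def to_chat_messages(prompt, system="You are a helpful assistant."):
        """Convert a legacy completion prompt into chat-format messages."""
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ]

    # Legacy call (deprecated after January 2024 per the announcement):
    #   openai.Completion.create(model="text-davinci-003", prompt=prompt)
    # Chat-based replacement:
    #   openai.ChatCompletion.create(model="gpt-4",
    #                                messages=to_chat_messages(prompt))
    ```

    The system message is optional but gives the chat models the steering context that the old single-string prompts had to encode implicitly.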

  • Stanford Researchers Develop AI Agents that "Self-Reflect" for Better Adaptation

    Stanford researchers have developed a new training method called "curious replay" that enables AI agents to self-reflect on the most novel and interesting things they've recently encountered, thereby improving their performance in changing environments. The method was inspired by animal behaviors and was tested in a simple task where an AI agent and a mouse were placed in an environment with a new object. The AI agent, equipped with curious replay, was able to engage with the new object much faster. The method also significantly improved AI performance in a game based on Minecraft, called Crafter. The researchers believe that this approach will lead to more adaptive, flexible technologies, from household robotics to personalized learning tools. (Source)
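    As a rough intuition for why replaying novel experiences helps, here is a toy count-based replay buffer that always replays the experience the agent has seen least often. This is a deliberate simplification for illustration, not the Stanford team's implementation, and `CuriousReplayBuffer` is a name invented here.

    ```python
    # Toy count-based "curious replay": prioritize replaying the
    # experiences that have been encountered the fewest times.
    from collections import Counter

    class CuriousReplayBuffer:
        def __init__(self):
            self.buffer = []           # stored experiences
            self.visits = Counter()    # how often each state was seen

        def add(self, state):
            self.buffer.append(state)
            self.visits[state] += 1

        def sample(self):
            # Replay the rarest (most novel) experience first.
            return min(self.buffer, key=lambda s: self.visits[s])

    buf = CuriousReplayBuffer()
    for s in ["wall", "wall", "wall", "red_ball"]:
        buf.add(s)
    buf.sample()  # the novel "red_ball" is replayed before familiar states
    ```

    The actual method scores novelty with learned signals rather than raw visit counts, but the prioritization idea is the same: the model's training time is spent on what it understands least.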

  • The Potential Impact of Artificial Intelligence on Mathematics

    Artificial intelligence is poised to transform the field of mathematics. Tools like proof assistants and automated reasoning are helping mathematicians verify proofs and solve complex problems. However, some mathematicians are concerned about the "black box" nature of AI and whether AI-assisted solutions truly count as mathematical understanding. Mathematics may serve as a litmus test for the capabilities and limitations of machine learning, as mathematical reasoning remains an unsolved problem for AI. If AI could combine intuitive leaps with logical reasoning, it would fundamentally change the field of mathematics. (Source)

  • Open-Source AI Boom: A Threat to Big Tech's AI Dominance or a Precarious Bubble?

    The open-source AI boom, which has seen the release of numerous large language models that rival those of tech giants like Google and OpenAI, is causing a stir in Silicon Valley. These models, which are free and can be modified by researchers and developers, have driven innovation and democratized access to AI technology. However, this boom relies heavily on the large models released by big firms. If companies like OpenAI and Meta decide to close their doors due to competition fears, the open-source AI boom could quickly deflate. The future of AI development is at a crossroads, with the potential for the next generation of AI breakthroughs to be monopolized by the world's richest AI labs. (Source)

  • AI and the Automation of Work: A New Wave of Change

    Benedict Evans, in his blog post, discusses the impact of generative AI, Large Language Models (LLMs), and ChatGPT on the future of work. He emphasizes that these technologies represent a generational shift in automation, with potential to transform job roles and create new kinds of automation. Despite the rapid adoption of these technologies, Evans argues that their impact on job displacement and creation will be similar to previous waves of automation, such as typewriters, mainframes, and PCs. He also addresses concerns about the speed of this change and the error rate of LLMs, suggesting that while these technologies can perform tasks at an unprecedented scale, their outputs still require human verification. Evans concludes by stating that without AGI, this is just another wave of automation, and there's no clear reason why this should be more or less disruptive than previous ones. (Source)

  • Douglas Hofstadter Revises His Stance on Deep Learning and AI Risk

    In a recent podcast interview, cognitive scientist Douglas Hofstadter revisited his previous criticisms of GPT-2/3 models, deep learning, and compute-heavy GOFAI. He expressed concern about the rapid advancements in AI, suggesting that these technologies are not only becoming more like human consciousness but are also surpassing human capabilities in terms of knowledge and speed. Hofstadter also voiced his fears about the potential risks associated with AI, including the possibility of humanity being eclipsed by AI entities. His comments reflect a significant shift in his views on deep learning and AI, which he has been discussing privately since at least 2014. (Source)

  • Swedish Firms Fined €1M for Using Google Analytics: A First in GDPR Violations

    The Swedish data protection authority (IMY) has issued its first major fine of €1 million against Tele2, a telecom provider, and online retailer CDON for using Google Analytics on their websites. This action follows 101 complaints by noyb, a non-profit organization, about unlawful EU-US data transfers. While other European authorities have previously determined that using Google Analytics violates the General Data Protection Regulation (GDPR), this is the first time financial penalties have been imposed for such violations. The ruling underscores the ongoing tension between EU data protection laws and US tech companies, and could set a precedent for future enforcement actions. (Source)

  • The Hidden Human Labor Behind AI: A Deep Dive into Annotation Work

    The Verge recently highlighted the often overlooked human labor involved in artificial intelligence (AI), focusing on a Remotasks office in Nairobi, a subsidiary of Scale AI. The office specializes in annotation, a task that uses human labor to parse images that confuse algorithms, with the aim of improving these systems. Despite the rise of generative AI systems, they still heavily rely on human labor, and most of these workers do not share in the benefits. The industry, known as business process outsourcing (BPO), is ready to take on any work that companies want to cut labor costs on. While annotation work is not as traumatic as content moderation, the pay is not better, and the work quickly becomes invisible. The gap between those hyping the product and those doing the work to prop it up is becoming increasingly apparent. (Source)

  • Human Outsmarts AI in Go: A Lesson in AI Vulnerability

    In a surprising turn of events, an amateur Go player, Kellin Pelrine, managed to defeat KataGo, the current best AI player of Go, 14 out of 15 times. Pelrine, who is also one of the authors of a study on KataGo's vulnerabilities, exploited these weaknesses to trick the AI into making serious blunders. This victory is not about the game itself but highlights the fact that high performance in AI does not always equate to robustness. The AI's failure in this context is akin to a self-driving car crashing due to an unexpected variable. This event underscores the need for caution when deploying AI systems in real-world situations, as they may not be prepared for all scenarios. (Source)

  • Cognitive Scientists Challenge Grandiose AI Claims

    The article discusses the ongoing debate about the potential of Artificial General Intelligence (AGI) and the claims made by proponents of large language models (LLMs) like OpenAI's GPT-4. Critics argue that while LLMs can generate impressive results, they do not truly understand language or think like humans. They also express concerns about the lack of transparency in the development of these models. The article highlights that the concept of AGI, which has been around since the 1980s, is still poorly understood and difficult to define. It also mentions the legal and ethical challenges faced by companies like OpenAI, including lawsuits for scraping copyrighted data and the potential for LLMs to harbor racial and societal biases. (Source)

  • AI to Replace Human Programmers in Five Years, Predicts Stability AI CEO

    Emad Mostaque, CEO of Stability AI, predicts that artificial intelligence (AI) will replace human programmers within five years. He bases his prediction on data from GitHub, which shows that 41% of all code is AI-generated. Stability AI, known for Stable Diffusion, the world's most popular open-source image generator, aims to create the building blocks for a "society OS." Mostaque envisions AI models fully resident on mobile phones by the end of next year, revolutionizing our conversational interactions. Despite concerns about job security, Mostaque views AI as a tool that enhances human potential rather than a threat. He emphasizes the importance of decentralizing AI and democratizing access to AI technology. (Source)

  • AI: A Recurrence of Overconfidence and Hubris

    In the late 1800s, during the Victorian era, there was a belief that human knowledge was on the brink of perfection, thanks to significant advances in science, medicine, industry, and transportation. The 20th century, however, brought world wars, pandemics, and economic depressions, challenging that notion of progress. Today, a similar pattern is emerging with artificial intelligence: a growing belief that we are witnessing an AI renaissance, with systems like OpenAI's GPT-3 and GPT-4, Google's Bard, and Microsoft's Bing. Critics argue that these systems, while impressive, are fundamentally different from human minds: they are built on machine learning and neural networks, whereas the human mind operates on far smaller amounts of information and is rule-bound. The question remains whether anything but an organic brain can think the way an organic brain does. (Source)

  • Human Translators Still Relevant Despite AI Advancements

    Despite the rise of AI translation tools like ChatGPT, human translators remain in demand, particularly in specialized fields such as law, medicine, and video game localization. These areas require a nuanced understanding of context, cultural nuances, and specialized terminology that AI tools currently struggle to grasp. However, the advent of hybrid translation services, where AI produces a first draft and a human checks for errors, has led to a decrease in costs and an increase in the volume of translations. While this has not led to widespread job losses, it has put downward pressure on wages and increased competition in the field. The article suggests that automation is a gradual process and that there are many ways for human workers to adapt. (Source)

  • OpenAI Introduces Code Interpreter for ChatGPT Plus Users

    OpenAI has announced that a new feature, Code Interpreter, will be made available to all ChatGPT Plus users over the next week. This feature allows ChatGPT to execute code, with optional access to user-uploaded files. Users can request ChatGPT to perform tasks such as data analysis, chart creation, file editing, and mathematical operations. ChatGPT Plus users can opt into this feature via their settings.

  • Decentralized AI: The Future of Tech Investment?

    The article discusses the concept of decentralized AI, its potential benefits, and its viability as an investment. The author suggests that the time for decentralized AI may have arrived due to factors such as GPU shortages, the maturation of privacy-enhancing technologies, and concerns about AI monopolies. Despite the technical and commercial challenges, the author believes that decentralized AI offers a unique approach to building AI systems. The article also explores the potential of blockchain-based data marketplaces and native payment rails for resources, which could give decentralized AI a data advantage. However, the author acknowledges that the success of decentralized AI will depend on how these factors play out in the future. (Source)

  • The Dangers of Overreliance on Large Language Models

    The article discusses the potential risks associated with the increasing use of large language models (LLMs) like GPT-3 and GPT-4. As more and more online text is generated by these models, the quality of the language found online could degrade, a phenomenon the authors call "model collapse". This is because the models are trained on existing online text, and as they generate more content, they are essentially training on their own output, leading to a loss of originality and richness in the language. The authors compare this to the environmental impact of plastic waste and carbon dioxide emissions. They suggest that this could give an advantage to firms that have already scraped the web for training data or control access to human interfaces at scale. The authors conclude by noting that while LLMs are useful tools, they also have the potential to pollute the digital environment. (Source)
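    The feedback loop the authors describe can be illustrated with a toy simulation: each "generation" retrains on its own output and under-samples the rare tail of the distribution, so uncommon vocabulary disappears. The frequency-threshold rule below is a deliberate simplification invented here, not the authors' model.

    ```python
    # Toy illustration of "model collapse": a corpus repeatedly
    # regenerated from itself loses its rare words.
    from collections import Counter

    def next_generation(corpus, min_count=2):
        counts = Counter(corpus)
        # Words below the threshold vanish from the next training set,
        # mimicking a model that under-samples the distribution's tail.
        return [w for w in corpus if counts[w] >= min_count]

    corpus = ["the"] * 50 + ["cat"] * 5 + ["ontology"] * 1
    for _ in range(3):
        corpus = next_generation(corpus)

    sorted(set(corpus))  # the rare word is gone after a few generations
    ```

    Common words survive every round while the one-off word is lost immediately, which is the "loss of originality and richness" the authors warn about.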

  • Declining Interest in ChatGPT Signals Potential Slowdown in AI Revolution

    The article reports a drop in the number of people visiting the AI chatbot ChatGPT's website and downloading its app for the first time since its launch. According to internet data firm Similarweb, worldwide traffic to ChatGPT’s website fell by 9.7% in June from the previous month. The bot's iPhone app downloads have also been declining since peaking in early June. The decline suggests that the limitations of the technology are becoming apparent and that some of the initial hype surrounding chatbots may have been exaggerated. The bot, developed by OpenAI, has been criticized for making up false information. Some companies have even banned their employees from using ChatGPT at work due to concerns about potential data leaks. The drop in usage could also be attributed to the end of the school year in the U.S. and Europe, as well as concerns about upcoming regulations. (Source)

  • The Emergence of AI Engineering: A New Frontier in Software Development

    The article discusses the rise of AI engineering as a new subdiscipline in software development. The author identifies the role of AI engineers as those who apply AI and effectively use emerging technologies. The article suggests that AI engineering is more akin to web development than machine learning, with a focus on Python and JS scripting. The author outlines foundational concepts such as large language models, embeddings, RLHF, and prompt engineering. The article also discusses the importance of understanding different models like GPT-4, Claude/Bard, and LLaMa. The author emphasizes the importance of building on existing models rather than training new ones from scratch and the need for AI engineers to stay agile due to the rapidly evolving nature of the field. (Source)
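    One of the foundational concepts listed above, embeddings, can be shown with a minimal similarity search. The vectors here are made up for illustration; in a real AI-engineering stack they would come from an embedding model's API.

    ```python
    # Minimal sketch of embedding-based retrieval: rank documents by
    # the cosine similarity of their vectors to a query vector.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    query = [0.9, 0.1, 0.0]                       # made-up query embedding
    docs = {"doc_a": [0.8, 0.2, 0.1],             # made-up document embeddings
            "doc_b": [0.0, 0.1, 0.9]}
    best = max(docs, key=lambda d: cosine_similarity(query, docs[d]))
    # `best` is the document whose embedding points the same way as the query
    ```

    This glue-code flavor of work, calling a model for vectors and ranking with a few lines of scripting, is why the author likens AI engineering more to web development than to machine learning research.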

  • The AI Dividend: A Proposal for Sharing the Profits from AI

    The blog post by Bruce Schneier and Barath Raghavan proposes the idea of an "AI Dividend," akin to Alaska's Permanent Fund, where Big Tech companies would pay a small licensing fee for using public data to train their AI models. The fees would be collected into a fund and distributed equally among all residents nationwide. The authors argue that since AI companies are profiting from the public's data without their knowledge or consent, the public should share in the profits. The proposal exempts hobbyists and small businesses from fees, only requiring Big Tech companies to pay. The authors believe this plan could result in an annual dividend payment of a few hundred dollars per person. (Source)

  • AI Matches Top 1% of Human Thinkers in Creativity Test

    A study led by Dr. Erik Guzik from the University of Montana has found that ChatGPT, an application powered by the GPT-4 AI engine, can match the top 1% of human thinkers in a standard test for creativity. The research used the Torrance Tests of Creative Thinking (TTCT), a widely recognized tool for assessing human creativity. The AI application scored in the top percentile for fluency and originality, and in the 97th percentile for flexibility. This is the first time an AI has performed in the top 1% for originality. The research suggests that AI is developing creative abilities on par with or even exceeding human abilities. (Source)

  • The High-Stakes AI Game: Inflection AI Raises the Bar with $1 Billion Investment

    Inflection AI, an AI startup founded in March 2022, has raised $1 billion in its second funding round, setting a new standard for AI startups in terms of financial backing, infrastructure, and software stacks. The company is not just raising funds but also building a powerful system based on Nvidia's "Hopper" H100 GPUs and Quantum-2 InfiniBand networking. Inflection AI's founders include Reid Hoffman, co-founder of LinkedIn, and two long-time AI researchers, Karén Simonyan and Mustafa Suleyman. The company aims to make its AI assistant, Pi, available to everyone on the planet. The funding will be used to build a cloud-based AI cluster with over 22,000 H100 GPU accelerators, which will be among the most powerful systems globally. (Source)

  • NVIDIA and Run:ai Streamline AI Deployment Across Platforms

    NVIDIA and Run:ai have collaborated to streamline the deployment of AI applications across different platforms. NVIDIA's Cloud Native Stack Virtual Machine Image (VMI) allows developers to build AI applications on a GPU-powered on-premises or cloud instance and deploy them on any GPU-powered platform without code changes. The VMI comes with the NVIDIA GPU Operator, which automates the management of the software required for GPUs on Kubernetes. Run:ai's Atlas platform, certified on NVIDIA AI Enterprise, accelerates the data science pipeline by streamlining the development and deployment of AI models. The platform also provides GPU orchestration capabilities to manage AI workloads and hardware resources efficiently. The NVIDIA Cloud Native Stack VMI is available on AWS, Azure, and GCP. (Source)