The AI landscape is shifting almost daily, with new models, features, and initiatives that promise to reshape how we work, create, and communicate. In this month's AI newsletter, we cut through the noise to bring you the most important developments you need to know.
From Bard's new multimodal image inputs to the formation of safety coalitions like the Frontier Model Forum, AI is clearly entering an era of rapid progress and heightened responsibility. But risks remain: AI-generated porn is growing increasingly photorealistic, and a new study reports declining accuracy in ChatGPT over time.
How are we to make sense of it all? Which innovations show the most potential and which require caution? What do the latest advancements mean for developers, businesses, policymakers and society as a whole?
This newsletter curates the news and developments that matter most. The AI train is picking up speed; make sure you're on board and know where it's heading.
Custom Instructions for ChatGPT
OpenAI has introduced custom instructions for ChatGPT, allowing users to tailor the model to their specific needs. This feature, initially available on the Plus plan and soon for all users, lets users add standing preferences or requirements that ChatGPT will consider when generating its responses. For example, a teacher crafting a lesson plan no longer has to repeat that they're teaching 3rd-grade science, and a developer who prefers efficient code in a language other than Python can state that preference once. Custom instructions make conversations more coherent and better aligned with each user's individual needs.
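In effect, a custom instruction behaves like a standing preamble attached to every conversation. A minimal sketch of the equivalent pattern for anyone building on the Chat API (the instruction text and helper function below are illustrative assumptions, not OpenAI's implementation):

```python
# Sketch: emulate ChatGPT custom instructions by prepending the same
# system message to every request. The instruction text is hypothetical.
CUSTOM_INSTRUCTIONS = (
    "I teach 3rd-grade science. Keep explanations age-appropriate "
    "and suggest classroom activities where relevant."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the standing instructions, then the user's actual prompt."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

# Every call reuses the same preamble, so the user never restates it:
messages = build_messages("Draft a lesson plan on the water cycle.")
```

The resulting `messages` list would then be sent to the chat completion endpoint on each turn; only the user prompt changes between requests.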
GitHub Is Starting a Public Beta for Copilot Chat AI
GitHub has announced a limited public beta for its new Copilot Chat feature, designed to assist developers with coding. This ChatGPT-like experience, part of GitHub's Copilot X initiative, integrates with OpenAI's GPT-4 model and is available to all business users through Microsoft's Visual Studio and Visual Studio Code apps. Copilot Chat is contextually aware of the code being typed and any error messages, providing real-time guidance tailored to specific coding projects, along with code analysis and simple troubleshooting. According to GitHub, the tool can significantly increase developer productivity, allowing even inexperienced developers to build applications or debug code in minutes instead of days.
Collaborative Efforts for AI Safety: The Formation of the Frontier Model Forum and the White House's Request for AI Safeguards
Two significant initiatives highlight the AI industry's commitment to responsible development and safety. The Frontier Model Forum, formed by OpenAI, Microsoft, Google, and Anthropic, aims to oversee the safe development of advanced AI models. Concurrently, seven leading AI companies, including Microsoft and Google, have committed to safeguards at the White House's request, focusing on principles of safety, security, and social responsibility. Both initiatives reflect a proactive approach to ethical AI development and a response to growing calls for regulatory oversight.
Google Reportedly Introducing a New AI Tool to Newsrooms
Google is reportedly exploring how artificial intelligence could assist journalists and news publishers, including testing an AI tool that can process information and generate news articles. The tool, known internally as Genesis, has been pitched to several major news organizations. Google envisions the AI tool as a "personal assistant for journalists" that can automate some tasks. However, the company emphasizes that these tools are not intended to replace the essential role journalists play in reporting, creating, and fact-checking their articles. The rapid adoption of generative AI tools has raised concerns over potential issues, including the spread of misinformation and the reinforcement of bias.
As AI Porn Generators Get Better, the Stakes Get Higher
The article explores the rapid development and ethical concerns surrounding AI-generated pornography. As generative AI technology advances, the quality of AI-generated porn has improved, leading to a proliferation of tools and platforms, such as Unstable Diffusion, that enable the creation of explicit content. While some of these images could be mistaken for professional artwork, the ethical questions and real-world impacts have grown more complex. Instances of nonconsensual deepfakes, weaponization against women, and photorealistic AI-generated child sexual abuse material have raised serious concerns. The article also highlights the challenges faced by groups like Unstable Diffusion in balancing freedom of expression with legal and ethical constraints, as well as the potential impact on the livelihoods of workers in adult film and art. The situation underscores the urgent need for responsible regulation and content moderation in this rapidly evolving field.
Stability AI Releases Its Latest Image-Generating Model, Stable Diffusion XL 1.0
AI startup Stability AI has announced the launch of Stable Diffusion XL 1.0, its most advanced text-to-image model to date. Available in open source and through Stability's API and consumer apps, the new model offers improved color accuracy, contrast, shadows, and lighting. With 3.5 billion parameters, it can generate full 1-megapixel resolution images in seconds and is customizable for various concepts and styles. The model also supports advanced text generation, inpainting, outpainting, and image-to-image prompts. Despite these advancements, the open-source nature of Stable Diffusion XL 1.0 raises ethical concerns, as it could be used to generate harmful content like nonconsensual deepfakes. Stability AI has taken steps to mitigate these risks but acknowledges that abuse is possible. The release coincides with new partnerships and features as Stability faces increasing competition in the commercial AI space.
Shopify Sidekick is Like ChatGPT, but for E-commerce Merchants
Shopify has announced an expansion of its generative AI capabilities under the brand Shopify Magic, including a new chatbot-like AI tool called Sidekick. Designed to understand and interpret questions related to business decision-making, Sidekick can assist merchants with tasks such as setting up discounts, segmenting customers, summarizing sales documents, and modifying shop designs. It can also generate content for blog posts, product descriptions, and marketing emails, tailored to the merchants' conversation histories and store policies. Shopify emphasizes that the generated content can be reviewed before going live, and the AI features are not allowed to write or make changes to any production systems. Sidekick represents Shopify's commitment to leveraging AI to support businesses of all sizes, offering a tool akin to OpenAI's ChatGPT but specifically tuned for e-commerce.
Google Bard's New Image Input Feature Takes Multimodality to the Next Level
Google's language model chatbot, Bard, has introduced a new feature that accepts image prompts, making it a multimodal tool comparable to Microsoft's Bing chat, which is powered by OpenAI's GPT-4. The article explores Bard's image input capabilities through various tests, including counting objects and understanding images from ImageNet. While Bard performed exceptionally well at image captioning and classification, it struggled with tasks like counting objects and refused to work with images containing human faces. The article also discusses how Bard's image input features are integrated with multiple Google products like Google Lens and Google Cloud's Vision API. Although not ideal for complex computer vision tasks, Bard's new feature enhances its capabilities for generalized search and information lookup, making it valuable for consumers and developers looking to create more efficient and tailored solutions.
ChatGPT’s Capabilities Are Getting Worse with Age, New Study Claims
A recent study conducted by researchers from Stanford and UC Berkeley has found that OpenAI's ChatGPT seems to be deteriorating in accuracy over time, and the reason for this decline is unclear. The study tested different models behind ChatGPT, GPT-3.5 and GPT-4, on tasks such as solving math problems, answering sensitive questions, writing new code, and visual reasoning. The results showed a significant drop in accuracy, particularly in identifying prime numbers, where GPT-4's accuracy plummeted from 97.6% in March to just 2.4% in June. The study also noted changes in how the models responded to sensitive questions, becoming more concise in refusing to answer. The researchers emphasized the need for continuous monitoring of AI model quality and recommended that users and companies implement monitoring analysis to ensure the chatbot remains effective.
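Part of what makes the primality result so striking is that the task is mechanically verifiable, which is how the researchers could score the models objectively. A minimal checker of the kind such an evaluation could use (standard trial division, not the study's actual code):

```python
def is_prime(n: int) -> bool:
    """Trial division: test divisibility by 2, then odd divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# A model's yes/no answer about a given number can be checked directly:
print(is_prime(7919))  # True: 7919 is prime
print(is_prime(91))    # False: 91 = 7 * 13
```

Since ground truth is this cheap to compute, any drift in the model's answers between snapshots shows up immediately, exactly the kind of continuous monitoring the researchers recommend.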
The AI train is leaving the station; don't get left behind. This newsletter is your ticket to staying informed on all the latest developments.