Following OpenAI’s launch of ChatGPT late last year, 2023 has proven to be a pivotal year for AI, characterized by advances in generative AI, intense competition, and growing concerns about AI ethics and safety.
The field’s rapid growth this year has brought both technological innovations and significant challenges. From leadership turmoil at OpenAI to competition from new players such as Google’s Gemini and Anthropic’s Claude, the year saw a number of major shifts in the generative AI landscape. Alongside these developments, the industry has grappled with cybersecurity risks and debated the ethical implications of rapid advances in AI.
OpenAI fires and rehires CEO Sam Altman
In one of the most surprising news stories of the year, OpenAI co-founder and CEO Sam Altman was abruptly ousted on November 17 by the company’s board of directors, which cited a lack of candor in his communications. Shortly after Altman’s departure, Microsoft announced it would hire him and Greg Brockman, president and co-founder of OpenAI, to lead a new AI research division.
Altman’s ouster and the tumultuous period that followed caused widespread backlash at OpenAI, with 95% of employees threatening to resign in protest of the board’s decision. Less than a week after the initial firing, OpenAI reinstated Altman as CEO, a decision influenced by lengthy negotiations within the board and the outpouring of support from employees.
While Altman’s return was a relief to many, the events shed light on underlying challenges within OpenAI: the tension between its profit-driven ambitions and its original nonprofit mission, and the extent to which the company’s viability is tied to Altman himself. This dynamic, along with the appointment of new business-focused board members, including former Harvard President Larry Summers and former Salesforce co-CEO Bret Taylor, has raised questions about the company’s future direction.
Competitors to ChatGPT emerge
ChatGPT kicked off the generative AI hype in November 2022. And while OpenAI continued to make headlines this year, 2023 also saw the rise of a number of competitors.
Although its DeepMind lab was historically an AI pioneer, Google was initially a laggard in generative AI, as its Bard chatbot struggled with inconsistencies and hallucinations after its launch at the start of the year. But the company’s outlook could change in 2024 following the release of the Gemini multimodal foundation model earlier this month. Gemini, which Google says will power Bard and other Google apps, integrates text, image, audio and video capabilities, potentially revitalizing Google’s position in generative AI.
Meanwhile, Anthropic, an AI startup founded by former members of OpenAI, revealed Claude 2, a large language model aimed at addressing data security and privacy concerns while performing competitively with ChatGPT. Features like the ability to scan large files and Anthropic’s focus on safety set Claude apart from competitors such as ChatGPT and Bard.
And IBM, rebranding its long-standing Watson AI system, entered the fray with Watsonx, a generative AI platform targeting business needs with a focus on data governance, security and model customization. Despite its differentiated approach and focus on hybrid cloud, however, IBM will face challenges related to speed to market and competition from both startups and established technology giants.
Open source AI becomes increasingly viable
In addition to the wide range of commercial options, the open source AI landscape is also expanding. Open source AI models offer an alternative to generative AI services from leading cloud providers, enabling businesses to customize models with their own data. While training and customizing open source models offers greater control and potential savings, it can also pose challenges for businesses.
In February, AWS partnered with Hugging Face, a prominent platform for open source AI models. This collaboration, which made training, tuning and deploying LLMs and vision models more accessible, marked Amazon’s strategic response to generative AI developments from competitors Microsoft and Google. The partnership also gave Hugging Face access to AWS’ extensive infrastructure and developer ecosystem.
Also in February, Meta ventured into the generative AI market with its own LLM, Llama, initially licensed for noncommercial research and designed to be a smaller, more manageable foundation model. However, Llama was leaked online shortly after its release, despite Meta’s plans to restrict access to academics, government agencies and other research organizations.
In July, Meta’s release of the improved Llama 2 marked a significant development in the generative AI market: an open source LLM available for both research and commercial purposes. In partnership with Microsoft, Meta made Llama 2 available in Azure’s AI model catalog and optimized the model for Windows, increasing its appeal to businesses.
OpenAI expands its offerings and commercial footprint
Following the blockbuster launch of ChatGPT in 2022, OpenAI introduced several new offerings in 2023. Some of the most notable include the following:
- Introduction of paid tiers, with ChatGPT Plus in February targeting individual users and small teams, and an enterprise tier in August intended for large organizations. Both offer improved service availability and advanced features such as plugins and internet browsing.
- An upgrade to OpenAI’s flagship LLM in March. GPT-4 is a multimodal version of the GPT model that outperforms the previous GPT-3.5, which powers the free version of ChatGPT.
- New data privacy features for ChatGPT in April, namely the ability for users to disable chat history to prevent OpenAI from using their conversations to retrain its AI models.
- Integration of OpenAI’s image generation model, Dall-E 3, into ChatGPT Plus and Enterprise in October.
- Several announcements during OpenAI’s first Dev Day conference in November. These included GPT-4 Turbo, a cheaper version of GPT-4 with a larger context window, and the launch of GPTs, customizable versions of ChatGPT that users can tailor to specific tasks without writing code.
Concerns emerge about AI safety and security
As generative AI gained traction in 2023, debates over AI safety and security intensified. Popular media often highlighted fears about artificial general intelligence (AGI), a still-hypothetical form of AI capable of matching or even surpassing human intelligence and capabilities.
Turing Award winner Geoffrey Hinton left Google, citing AI safety concerns. “Things like GPT-4 know a lot more than we do,” he told MIT Technology Review at the EmTech Digital 2023 conference in May. His statements echoed similar misgivings in a widely circulated open letter in March advocating a pause in AI development, which asked whether developing “humanly competitive” AI would “risk losing control of our civilization.”
However, many other AI researchers and ethicists have argued that these existential-risk concerns are hyperbolic, because AGI remains speculative; it is not yet known whether such technology can ever be created. From this perspective, focusing on AGI distracts from current, tangible issues like algorithmic bias and the generation of harmful content by existing AI systems. There is also a competitive element, in that AGI discourse serves the interests of large AI companies by presenting AI as a technology so powerful that access to it cannot safely be extended to smaller players.
Among the existing dangers of AI, an obvious risk lies in cybersecurity vulnerabilities, such as ChatGPT’s ability to increase the success and prevalence of phishing scams. In an interview earlier this year, Chester Wisniewski, director and global CTO at security software and hardware provider Sophos, explained how easily ChatGPT can be manipulated for malicious purposes.
“[ChatGPT is] significantly better at writing phishing lures than real humans, or at least the humans writing them,” he told TechTarget Editorial’s Esther Ajao in January. “Most humans who write phishing attacks do not have a high level of English proficiency, and as such, they are not as successful at compromising people. My concerns are really how the social aspect of ChatGPT could be exploited by people who attack us.”
Lev Craig covers AI and machine learning as Site Editor for TechTarget Enterprise AI.