Exclusive

Following OpenAI, Google Changes Tack to Overcome AI Slowdown

Art by Clark Miller

Google has recently struggled to achieve performance gains in its Gemini conversational artificial intelligence at the same rate it did last year, prompting researchers to focus on other ways to eke out gains, according to an employee who has been involved in the effort.

The situation appears similar to the slowdown in AI advances that rival OpenAI has experienced this year, which spurred the ChatGPT owner to use new techniques to overcome the challenge.

Google hasn’t achieved the performance gains some of its leaders were hoping for after dedicating larger amounts of computing power and training data, such as text and images from the web, this person said. Past versions of Google’s flagship Gemini large language model improved at a faster rate when researchers used more data and computing power to train them.

Google’s experience is another indication that a core assumption about how to improve models, known as scaling laws, is being tested. Many researchers believed that models would improve at the same rate as long as they processed more data while using more specialized AI chips, but those two factors don’t seem to be enough.
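For context, the scaling laws in question are usually written as an empirical relationship between a model’s loss and how many parameters and training tokens it is given. A rough sketch of the commonly cited form follows the published Chinchilla analysis; the symbols below are placeholders, not figures reported by Google or OpenAI.

```latex
% Illustrative scaling-law form (after Hoffmann et al., 2022).
% L: pretraining loss, N: parameter count, D: training tokens,
% E, A, B, \alpha, \beta: empirically fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

The assumption being tested is that pushing N and D higher keeps driving the loss down at a predictable rate; the reporting here suggests the returns Google is seeing have flattened relative to that expectation.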

The issue is particularly concerning for Google, as Gemini models have lagged OpenAI’s GPT models in terms of the number of developers and customers who use them. Google has been hoping its advantage over OpenAI in computing resources could help it leapfrog OpenAI’s models in terms of quality. In the meantime, both companies are developing new products, powered by existing versions of the technology, that could automate the tedious or complex work of software programmers and other office workers.

“We’re pleased with the progress we’re seeing on Gemini and we’ll share more when we’re ready,” a Google spokesperson said. The spokesperson said the company is rethinking how it approaches training data and “investing significantly” in data. Google has also succeeded at speeding up how quickly its models can produce answers, which is “important for serving AI at Google’s scale,” the spokesperson said.

At OpenAI, researchers have invented new techniques, such as reasoning models, to make up for the slowdown in gains from traditional scaling-law techniques applied during the model training phase.
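Exactly how these techniques trade extra inference-time compute for quality is not disclosed. One of the simplest published ideas in this vein is to sample several independent attempts at a problem and keep the most common final answer. The sketch below is only an illustration of that general idea, not OpenAI’s or Google’s actual method; generate_answer stands in for a real model call.

```python
import random
from collections import Counter

def generate_answer(prompt: str) -> str:
    # Stand-in for a language-model call; returns canned outputs so the sketch runs end to end.
    return random.choice(["42", "42", "41"])

def answer_with_majority_vote(prompt: str, n_samples: int = 16) -> str:
    # Spend more compute at inference time: sample several attempts, return the most common answer.
    answers = [generate_answer(prompt) for _ in range(n_samples)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

print(answer_with_majority_vote("What is 6 * 7?"))
```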

Google appears to be following suit. In recent weeks, DeepMind has staffed a team within its Gemini unit, led by principal research scientist Jack Rae and former Character.AI co-founder Noam Shazeer, that aims to develop similar capabilities.

Researchers at DeepMind, the Google unit developing Gemini, also have been focusing on making manual improvements to the models. Those include changing their “hyperparameters,” or variables that determine how the model processes information, such as how quickly it draws connections between different concepts or patterns within training data, said the person who is involved in the work. Researchers test different hyperparameters during a process called model tuning to see which variables lead to the best results.
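In the abstract, that tuning loop amounts to trying a set of candidate hyperparameter values and keeping whichever combination scores best on held-out data. The sketch below shows the shape of such a search; train_and_evaluate and the specific learning rates and batch sizes are invented placeholders, since Gemini’s actual hyperparameters and tuning process are not public.

```python
import itertools

def train_and_evaluate(learning_rate: float, batch_size: int) -> float:
    # Placeholder objective: in a real sweep this would train a model and return a validation metric.
    return -abs(learning_rate - 3e-4) - abs(batch_size - 256) / 1000

search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "batch_size": [128, 256, 512],
}

best_score, best_config = float("-inf"), None
for lr, bs in itertools.product(search_space["learning_rate"], search_space["batch_size"]):
    score = train_and_evaluate(lr, bs)
    if score > best_score:
        best_score, best_config = score, {"learning_rate": lr, "batch_size": bs}

print(best_config, best_score)
```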

Google may not be starting from scratch, however. Google researchers previously invented a key technique behind OpenAI’s reasoning models, though one of the Google researchers decamped to OpenAI to work on the technique there.

One problem Google encountered in its traditional approach is that it found copies of the same information within the data it used to develop Gemini, this person said. That duplication may have hurt Gemini’s performance, they said. The Google spokesperson said this type of issue is not new to the team.
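Removing such duplicates is a standard data-pipeline step. As a minimal sketch of the idea, assuming a simple in-memory list of documents: hash each document’s normalized text and drop anything already seen. Production pipelines typically add near-duplicate detection (for example, MinHash) on top of this exact-match pass.

```python
import hashlib

def dedupe_exact(documents):
    # Keep only the first occurrence of each normalized document.
    seen = set()
    unique_docs = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique_docs.append(doc)
    return unique_docs

corpus = ["The cat sat.", "the cat sat.  ", "A different sentence."]
print(dedupe_exact(corpus))  # keeps two documents; the near-identical copy is dropped
```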

Researchers had also hoped that using AI-generated data, also known as synthetic data, as well as audio and video as part of the Gemini training data could lead to significant improvements, but those factors don’t appear to have made a major impact, this person said. (Gemini models are “demonstrating strong performance with [audio and visual] information, and we continue to explore and advance multimodal capabilities,” the spokesperson said.)

OpenAI and other developers also use synthetic data but have found that it can improve AI models only to a limited degree.

Erin Woo contributed to this article. This article has been updated with comments from a Google spokesperson.

Jon Victor is a reporter at The Information covering enterprise software and AI. He can be reached at jon.victor@theinformation.com or on Twitter at @jon_victor_.