Defense confabulations

Claude AI to process secret government data through new Palantir deal

Critics worry Anthropic is endangering its "ethical" AI stance due to defense associations.

Benj Edwards

Anthropic has announced a partnership with Palantir and Amazon Web Services to bring its Claude AI models to unspecified US intelligence and defense agencies. Claude, a family of AI language models similar to those that power ChatGPT, will work within Palantir's platform using AWS hosting to process and analyze data. But some critics have called out the deal as contradictory to Anthropic's widely-publicized "AI safety" aims.

On X, former Google co-head of AI ethics Timnit Gebru wrote of Anthropic's new deal with Palantir, "Look at how they care so much about 'existential risks to humanity.'"

The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.

In a press release, the companies outlined three main tasks for Claude in defense and intelligence settings: performing operations on large volumes of complex data at high speeds, identifying patterns and trends within that data, and streamlining document review and preparation.
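
The press release does not describe how any of this is implemented. Purely as an illustration of the document-review task in generic terms, the sketch below calls a Claude model through Anthropic's public Python SDK; the model name, input file, and prompt are assumptions made for the example and are unrelated to the Palantir, AWS, or IL6 deployments described in this story.

```python
# Minimal illustrative sketch: asking a Claude model to summarize a document
# via Anthropic's public Python SDK (pip install anthropic). This is generic
# API usage, NOT the Palantir/AWS IL6 integration, whose details are not public.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input document used only for this example
with open("report.txt", "r", encoding="utf-8") as f:
    document_text = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model choice for the example
    max_tokens=512,
    messages=[
        {
            "role": "user",
            "content": f"Summarize the key findings of this report:\n\n{document_text}",
        }
    ],
)

print(response.content[0].text)  # the model's plain-text summary
```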

While the partnership announcement suggests broad potential for AI-powered intelligence analysis, it states that human officials will retain their decision-making authority in these operations. As a reference point for the technology's capabilities, Palantir reported that one (unnamed) American insurance company used 78 AI agents powered by their platform and Claude to reduce an underwriting process from two weeks to three hours.

The new collaboration builds on Anthropic's earlier integration of Claude into AWS GovCloud, a service built for government cloud computing. Anthropic, which recently began operations in Europe, has been seeking funding at a valuation up to $40 billion. The company has raised $7.6 billion, with Amazon as its primary investor.

An ethical minefield

Since its founding in 2021, Anthropic has marketed itself as a company that takes an ethics- and safety-focused approach to AI development. The company differentiates itself from competitors like OpenAI by adopting what it calls responsible development practices and self-imposed ethical constraints on its models, such as its "Constitutional AI" system.

As Futurism points out, this new defense partnership appears to conflict with Anthropic's public "good guy" persona, and pro-AI pundits on social media are noticing. Frequent AI commentator Nabeel S. Qureshi wrote on X, "Imagine telling the safety-concerned, effective altruist founders of Anthropic in 2021 that a mere three years after founding the company, they'd be signing partnerships to deploy their ~AGI model straight to the military frontlines."

Anthropic's "Constitutional AI" logo.
Anthropic's "Constitutional AI" logo.
Anthropic's "Constitutional AI" logo. Credit: Anthropic / Benj Edwards
Anthropic 的「憲法 AI」標誌。 圖片來源:Anthropic / Benj Edwards

Aside from the implications of working with defense and intelligence agencies, the deal connects Anthropic with Palantir, a controversial company which recently won a $480 million contract to develop an AI-powered target identification system called Maven Smart System for the US Army. Project Maven has sparked criticism within the tech sector over military applications of AI technology.

It's worth noting that Anthropic's terms of service do outline specific rules and limitations for government use. These terms permit activities like foreign intelligence analysis and identifying covert influence campaigns, while prohibiting uses such as disinformation, weapons development, censorship, and domestic surveillance. Government agencies that maintain regular communication with Anthropic about their use of Claude may receive broader permissions to use the AI models.

Even if Claude is never used to target a human or as part of a weapons system, other issues remain. While Anthropic's Claude models are highly regarded in the AI community, they (like all LLMs) have a tendency to confabulate, potentially generating incorrect information in a way that is difficult to detect.

That's a huge potential problem that could impact Claude's effectiveness with secret government data, and that fact, along with the other associations, has Futurism's Victor Tangermann worried. As he puts it, "It's a disconcerting partnership that sets up the AI industry's growing ties with the US military-industrial complex, a worrying trend that should raise all kinds of alarm bells given the tech's many inherent flaws—and even more so when lives could be at stake."

Benj Edwards Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.