Announcements

Introducing the Model Context Protocol

(Illustration: an abstract illustration of critical context connecting to a central hub)

Today, we're open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.

As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.

MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.

Model Context Protocol

The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
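
To make the server/client split concrete, here is a minimal sketch of an MCP server. It assumes the official Python SDK (the `mcp` package) and its FastMCP helper; the server name, tool, and resource below are illustrative examples, not part of this announcement.

```python
# Minimal MCP server sketch (assumes the official Python SDK: `pip install mcp`).
from mcp.server.fastmcp import FastMCP

# The name identifies this server to connecting MCP clients.
mcp = FastMCP("example-notes")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers (an illustrative tool a client can call)."""
    return a + b


@mcp.resource("notes://{note_id}")
def read_note(note_id: str) -> str:
    """Expose a piece of data as a resource addressed by URI (illustrative)."""
    return f"Contents of note {note_id}"


if __name__ == "__main__":
    # By default this serves over stdio, the transport used for locally
    # installed servers in the Claude Desktop app.
    mcp.run()
```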

Today, we're introducing three major components of the Model Context Protocol for developers:

  • The Model Context Protocol specification and SDKs
  • Local MCP server support in the Claude Desktop apps
  • An open-source repository of MCP servers

Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.

Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms—enabling AI agents to better retrieve relevant information to further understand the context around a coding task and produce more nuanced and functional code with fewer attempts.

"At Block, open source is more than a development model—it’s the foundation of our work and a commitment to creating technology that drives meaningful change and serves as a public good for all,” said Dhanji R. Prasanna, Chief Technology Officer at Block. “Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.”

Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture.

Getting started

Developers can start building and testing MCP connectors today. All Claude.ai plans support connecting MCP servers to the Claude Desktop app.
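
For local testing, a small client script can exercise an MCP server directly, without Claude in the loop. This sketch again assumes the official Python SDK; `server.py` is a placeholder for whatever server script you are testing (for example, the sketch above), and `add` is the illustrative tool defined there.

```python
# Minimal MCP client sketch for testing a local server over stdio
# (assumes the official Python SDK: `pip install mcp`).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server under test as a subprocess; "server.py" is a placeholder.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover what the server exposes, then call one tool.
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])

            result = await session.call_tool("add", {"a": 2, "b": 3})
            print("add(2, 3) ->", result.content)


if __name__ == "__main__":
    asyncio.run(main())
```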

Claude for Work customers can begin testing MCP servers locally, connecting Claude to internal systems and datasets. We'll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.

To start building:

  • Install pre-built MCP servers through the Claude Desktop app
  • Follow our quickstart guide to build your first MCP server
  • Contribute to our open-source repositories of connectors and implementations

An open community

We’re committed to building MCP as a collaborative, open-source project and ecosystem, and we’re eager to hear your feedback. Whether you’re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.