Artificial Intelligence
Create an agentic RAG application for advanced knowledge discovery with LlamaIndex and Mistral in Amazon Bedrock
Agentic Retrieval Augmented Generation (RAG) applications represent an advanced approach in AI that integrates foundation models (FMs) with external knowledge retrieval and autonomous agent capabilities. These systems dynamically access and process information, break down complex tasks, use external tools, apply reasoning, and adapt to various contexts. They go beyond simple question answering by performing multi-step processes, making decisions, and generating complex outputs.
In this post, we demonstrate an example of building an agentic RAG application using the LlamaIndex framework. LlamaIndex is a framework that connects FMs with external data sources. It helps ingest, structure, and retrieve information from databases, APIs, PDFs, and more, enabling agentic and RAG capabilities for AI applications.
This application serves as a research tool, using the Mistral Large 2 FM on Amazon Bedrock to generate responses for the agent flow. The example application interacts with well-known websites, such as arXiv, GitHub, TechCrunch, and DuckDuckGo, and can access knowledge bases containing documentation and internal knowledge.
This application can be further expanded to accommodate broader use cases requiring dynamic interaction with internal and external APIs, as well as the integration of internal knowledge bases to provide more context-aware responses to user queries.
Solution overview
This solution uses the LlamaIndex framework to build an agent flow with two main components: AgentRunner and AgentWorker. The AgentRunner serves as an orchestrator that manages conversation history, creates and maintains tasks, executes task steps, and provides a user-friendly interface for interactions. The AgentWorker handles the step-by-step reasoning and task execution.
For reasoning and task planning, we use Mistral Large 2 on Amazon Bedrock. You can use other text generation FMs available from Amazon Bedrock. For the full list of supported models, see Supported foundation models in Amazon Bedrock. The agent integrates with GitHub, arXiv, TechCrunch, and DuckDuckGo APIs, while also accessing internal knowledge through a RAG framework to provide context-aware answers.
In this solution, we present two options for building the RAG framework:
- Document integration with Amazon OpenSearch Serverless – The first option involves using LlamaIndex to programmatically load and process documents. It splits the documents into chunks using various chunking strategies and then stores these chunks in an Amazon OpenSearch Serverless vector store for future retrieval.
- Document integration with Amazon Bedrock Knowledge Bases – The second option uses Amazon Bedrock Knowledge Bases, a fully managed service that handles the loading, processing, and chunking of documents. This service can quickly create a new vector store on your behalf with a few configurations and clicks. You can choose from Amazon OpenSearch Serverless, Amazon Aurora PostgreSQL-Compatible Edition Serverless, and Amazon Neptune Analytics. Additionally, the solution includes a document retrieval rerank feature to enhance the relevance of the responses.
You can select the RAG implementation option that best suits your preference and developer skill level.
The following diagram illustrates the solution architecture.
In the following sections, we present the steps to implement the agentic RAG application. You can also find the sample code in the GitHub repository.
Prerequisites
The solution has been tested in the AWS Region us-west-2. Complete the following steps before proceeding:
- Set up the following resources:
  - Create an Amazon SageMaker domain.
  - Create a SageMaker domain user profile.
  - Launch Amazon SageMaker Studio, select JupyterLab, and create a space.
  - Select the instance t3.medium and the image SageMaker Distribution 2.3.1, then run the space.
- Request model access:
  - On the Amazon Bedrock console, choose Model access in the navigation pane.
  - Choose Modify model access.
  - Select the models Mistral Large 2 (24.07), Amazon Titan Text Embeddings V2, and Rerank 1.0 from the list, and request access to these models.
- Configure AWS Identity and Access Management (IAM) permissions:
  - In the SageMaker console, go to the SageMaker user profile details and find the execution role that the SageMaker notebook uses. It should look like AmazonSageMaker-ExecutionRole-20250213T123456.
  - In the IAM console, create an inline policy for this execution role so that the role can perform the following actions:
    - Access to Amazon Bedrock services, including:
      - Reranking capabilities
      - Retrieving information
      - Invoking models
      - Listing available foundation models
    - IAM permissions to:
      - Create policies
      - Attach policies to roles within your account
    - Full access to the Amazon OpenSearch Serverless service
- Run the following command in the JupyterLab notebook terminal to download the sample code from GitHub:
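The original command isn't reproduced here; a sketch of the clone step, with placeholders standing in for the repository URL linked in this post:

```bash
# Replace <repository-url> and <repository-name> with the sample code
# repository linked in this post.
git clone <repository-url>
cd <repository-name>
```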
- Finally, install the required Python packages by running the following command in the terminal:
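Assuming the repository follows the common convention of listing its dependencies in a requirements.txt file, the install step looks like this:

```bash
pip install -r requirements.txt
```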
Initialize the models
Initialize the FM used for orchestrating the agentic flow with the Amazon Bedrock Converse API. This API provides a unified interface for interacting with various FMs available on Amazon Bedrock. This standardization simplifies the development process, allowing developers to write code one time and seamlessly switch between different models without adjusting for model-specific differences. In this example, we use the Mistral Large 2 model on Amazon Bedrock.
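A minimal sketch of the initialization, assuming the llama-index-llms-bedrock-converse integration and the Mistral Large 2 (24.07) model ID in us-west-2:

```python
from llama_index.llms.bedrock_converse import BedrockConverse

# Mistral Large 2 (24.07) served through the Amazon Bedrock Converse API.
llm = BedrockConverse(
    model="mistral.mistral-large-2407-v1:0",
    region_name="us-west-2",
)
```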
Next, initialize the embedding model from Amazon Bedrock, which is used for converting document chunks into embedding vectors. For this example, we use Amazon Titan Text Embeddings V2. See the following code:
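A matching sketch for the embedding model, assuming the llama-index-embeddings-bedrock integration; registering both models on Settings makes them the defaults for the rest of the workflow:

```python
from llama_index.core import Settings
from llama_index.embeddings.bedrock import BedrockEmbedding

embed_model = BedrockEmbedding(
    model_name="amazon.titan-embed-text-v2:0",
    region_name="us-west-2",
)

# Use these models as the defaults across LlamaIndex components.
Settings.llm = llm
Settings.embed_model = embed_model
```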
Integrate API tools
Implement two functions to interact with the GitHub and TechCrunch APIs. The APIs shown in this post don’t require credentials. To provide clear communication between the agent and the foundation model, follow Python function best practices, including:
- Type hints for parameter and return value validation
- Detailed docstrings explaining function purpose, parameters, and expected returns
- Clear function descriptions
The following code sample shows the function that integrates with the GitHub API. After the function is created, use the FunctionTool.from_defaults() method to wrap the function as a tool and integrate it seamlessly into the LlamaIndex workflow.
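A sketch of such a function, assuming the public GitHub search endpoint and the requests library; the function name and result formatting are illustrative, not the exact code from the repository:

```python
import requests
from llama_index.core.tools import FunctionTool


def search_github_repos(query: str, max_results: int = 5) -> str:
    """Search GitHub for the most popular repositories matching a query.

    Args:
        query: Search keywords, for example "retrieval augmented generation".
        max_results: Maximum number of repositories to return.

    Returns:
        A formatted string listing repository names, star counts, and URLs.
    """
    response = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "sort": "stars", "order": "desc", "per_page": max_results},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json().get("items", [])
    return "\n".join(
        f"{repo['full_name']} ({repo['stargazers_count']} stars): {repo['html_url']}"
        for repo in items
    )


# Wrap the function as a LlamaIndex tool; the name, signature, and docstring
# are what the FM sees when deciding whether to call it.
github_tool = FunctionTool.from_defaults(fn=search_github_repos)
```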
See the code repository for the full code samples of the function that integrates with the TechCrunch API.
For arXiv and DuckDuckGo integration, we use LlamaIndex’s pre-built tools instead of creating custom functions. You can explore other available pre-built tools in the LlamaIndex documentation to avoid duplicating existing solutions.
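A sketch of loading the pre-built tools, assuming the llama-index-tools-arxiv and llama-index-tools-duckduckgo packages are installed:

```python
from llama_index.tools.arxiv import ArxivToolSpec
from llama_index.tools.duckduckgo import DuckDuckGoSearchToolSpec

# Each tool spec expands into one or more FunctionTool objects.
arxiv_tools = ArxivToolSpec().to_tool_list()
duckduckgo_tools = DuckDuckGoSearchToolSpec().to_tool_list()
```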
RAG option 1: Document integration with Amazon OpenSearch Serverless
Next, programmatically build the RAG component using LlamaIndex to load, process, and chunk documents, and store the embedding vectors in Amazon OpenSearch Serverless. This approach offers greater flexibility for advanced scenarios, such as loading various file types (including .epub and .ppt) and selecting advanced chunking strategies based on file types (such as HTML, JSON, and code).
Before moving forward, you can download some PDF documents for testing from the AWS website using the following command, or you can use your own documents. The following documents are AWS guides that help in choosing the right generative AI service (such as Amazon Bedrock or Amazon Q) based on use case, customization needs, and automation potential. They also assist in selecting AWS machine learning (ML) services (such as SageMaker) for building models, using pre-trained AI, and using cloud infrastructure.
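The original command isn't reproduced here; a sketch of the download step, with placeholders standing in for the PDF links in the post:

```bash
mkdir -p data
# Replace the placeholders with the AWS guide PDF URLs from the post,
# or copy your own documents into the data directory.
wget -P data/ <document-url-1> <document-url-2>
```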
Load the PDF documents using SimpleDirectoryReader() in the following code. For a full list of supported file types, see the LlamaIndex documentation.
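A minimal sketch, assuming the PDFs were saved to the local data directory created earlier:

```python
from llama_index.core import SimpleDirectoryReader

# File type is inferred from the extension; PDFs are parsed page by page.
documents = SimpleDirectoryReader("./data").load_data()
print(f"Loaded {len(documents)} document pages")
```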
Next, create an Amazon OpenSearch Serverless collection as the vector database. Check the utils.py file for details on the create_collection() function.
After you create the collection, create an index to store embedding vectors:
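A sketch of the index creation using the opensearch-py client; the collection endpoint, index name, and field names are placeholders, and AWSV4SignerAuth signs requests for OpenSearch Serverless (service name "aoss"):

```python
import boto3
from opensearchpy import AWSV4SignerAuth, OpenSearch, RequestsHttpConnection

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-west-2", "aoss")

client = OpenSearch(
    hosts=[{"host": "<collection-id>.us-west-2.aoss.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

client.indices.create(
    index="agentic-rag-index",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                # 1024 dimensions to match Amazon Titan Text Embeddings V2.
                "vector_field": {
                    "type": "knn_vector",
                    "dimension": 1024,
                    "method": {"name": "hnsw", "engine": "faiss"},
                },
                "text_field": {"type": "text"},
            }
        },
    },
)
```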
Next, use the following code to implement a document search system using LlamaIndex integrated with Amazon OpenSearch Serverless. It first sets up AWS authentication to securely access OpenSearch Service, then configures a vector client that can handle 1,024-dimensional embeddings (specifically designed for the Amazon Titan Text Embeddings V2 model). The code processes input documents by breaking them into manageable chunks of 1,024 tokens with a 20-token overlap, converts these chunks into vector embeddings, and stores them in the OpenSearch Serverless vector index. You can select a different or more advanced chunking strategy by modifying the transformations parameter in the VectorStoreIndex.from_documents() method. For more information and examples, see the LlamaIndex documentation.
You can add a reranking step in the RAG pipeline, which improves the quality of information retrieved by making sure that the most relevant documents are presented to the language model, resulting in more accurate and on-topic responses:
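A sketch of the reranking step, assuming the llama-index-postprocessor-bedrock-rerank integration; the class name and model identifier below are assumptions to verify against your installed versions:

```python
# Assumed import path for the Bedrock rerank postprocessor integration.
from llama_index.postprocessor.bedrock_rerank import AWSBedrockRerank

reranker = AWSBedrockRerank(
    rerank_model_name="amazon.rerank-v1:0",  # Rerank 1.0 on Amazon Bedrock
    top_n=3,                                 # keep the 3 most relevant chunks
    region_name="us-west-2",
)
```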
Use the following code to test the RAG framework. You can compare results by enabling or disabling the reranker model.
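A minimal way to exercise the pipeline; the question is illustrative, and dropping node_postprocessors lets you compare results without reranking:

```python
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[reranker],  # comment out to compare without reranking
)
response = query_engine.query("Which AWS service should I use to build a chatbot?")
print(response)
```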
Next, convert the vector store into a LlamaIndex QueryEngineTool, which requires a tool name and a comprehensive description. This tool is then combined with other API tools to create an agent worker that executes tasks in a step-by-step manner. The code initializes an AgentRunner to orchestrate the entire workflow, analyzing text inputs and generating responses. The system can be configured to support parallel tool execution for improved efficiency.
You have now completed building the agentic RAG application using LlamaIndex and Amazon OpenSearch Serverless. You can test the chatbot application with your own questions. For example, ask about the latest news and features regarding Amazon Bedrock, or inquire about the latest papers and most popular GitHub repositories related to generative AI.
RAG option 2: Document integration with Amazon Bedrock Knowledge Bases
In this section, you use Amazon Bedrock Knowledge Bases to build the RAG framework. You can create an Amazon Bedrock knowledge base on the Amazon Bedrock console or follow the provided notebook example to create it programmatically. Create a new Amazon Simple Storage Service (Amazon S3) bucket for the knowledge base, then upload the previously downloaded files to this S3 bucket. You can select different embedding models and chunking strategies that work better for your data. After you create the knowledge base, remember to sync the data. Data synchronization might take a few minutes.
To enable your newly created knowledge base to invoke the rerank model, you need to modify its permissions. First, open the Amazon Bedrock console and locate the service role that matches the one shown in the following screenshot.
Choose the role and add the following provided IAM permission policy as an inline policy. This additional authorization grants your knowledge base the necessary permissions to successfully invoke the rerank model on Amazon Bedrock.
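The exact policy ships with the post's repository; a sketch of its likely shape, assuming the Rerank 1.0 model in us-west-2 (adjust the Region and model ARN to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:Rerank"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
    }
  ]
}
```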
Use the following code to integrate the knowledge base into the LlamaIndex framework. Specific configurations can be provided in the retrieval_config parameter, where numberOfResults is the maximum number of retrieved chunks from the vector store, and overrideSearchType has two valid values: HYBRID and SEMANTIC. In the rerankConfiguration, you can optionally provide a rerank modelConfiguration and numberOfRerankedResults to sort the retrieved chunks by relevancy scores and select only the defined number of results. For the full list of available configurations for retrieval_config, refer to the Retrieve API documentation.
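A sketch of the integration, assuming the llama-index-retrievers-bedrock package; the knowledge base ID is a placeholder, and the exact field names for the reranking block follow my reading of the Bedrock Retrieve API and should be verified against that documentation:

```python
from llama_index.retrievers.bedrock import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<knowledge-base-id>",
    retrieval_config={
        "vectorSearchConfiguration": {
            "numberOfResults": 10,
            "overrideSearchType": "HYBRID",
            # Optional reranking of retrieved chunks by relevancy score.
            "rerankingConfiguration": {
                "type": "BEDROCK_RERANKING_MODEL",
                "bedrockRerankingConfiguration": {
                    "modelConfiguration": {
                        "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
                    },
                    "numberOfRerankedResults": 3,
                },
            },
        }
    },
)
```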
Like the first option, you can create the knowledge base as a QueryEngineTool in LlamaIndex and combine it with other API tools. Then, you can create a FunctionCallingAgentWorker using these combined tools and initialize an AgentRunner to interact with them. By using this approach, you can chat with and take advantage of the capabilities of the integrated tools.
Now you have built the agentic RAG solution using LlamaIndex and Amazon Bedrock Knowledge Bases.
Clean up
When you finish experimenting with this solution, use the following steps to clean up the AWS resources to avoid unnecessary costs:
- In the Amazon S3 console, delete the S3 bucket and data created for this solution.
- In the OpenSearch Service console, delete the collection that was created for storing the embedding vectors.
- In the Amazon Bedrock Knowledge Bases console, delete the knowledge base you created.
- In the SageMaker console, navigate to your domain and user profile, and launch SageMaker Studio to stop or delete the JupyterLab instance.
Conclusion
This post demonstrated how to build a powerful agentic RAG application using LlamaIndex and Amazon Bedrock that goes beyond traditional question answering systems. By integrating Mistral Large 2 as the orchestrating model with external APIs (GitHub, arXiv, TechCrunch, and DuckDuckGo) and internal knowledge bases, you’ve created a versatile technology discovery and research tool.
We showed you two complementary approaches to implement the RAG framework: a programmatic implementation using LlamaIndex with Amazon OpenSearch Serverless, providing maximum flexibility for advanced use cases, and a managed solution using Amazon Bedrock Knowledge Bases that simplifies document processing and storage with minimal configuration. You can try out the solution using the following code sample.
For more relevant information, see Amazon Bedrock, Amazon Bedrock Knowledge Bases, Amazon OpenSearch Serverless, and Use a reranker model in Amazon Bedrock. Refer to Mistral AI in Amazon Bedrock to see the latest Mistral models that are available on both Amazon Bedrock and AWS Marketplace.
About the Authors
Ying Hou, PhD, is a Sr. Specialist Solution Architect for Gen AI at AWS, where she collaborates with model providers to onboard the latest and most intelligent AI models onto AWS platforms. With deep expertise in Gen AI, ASR, computer vision, NLP, and time-series forecasting models, she works closely with customers to design and build cutting-edge ML and GenAI applications. Outside of architecting innovative AI solutions, she enjoys spending quality time with her family, getting lost in novels, and exploring the UK’s national parks.
Preston Tuggle is a Sr. Specialist Solutions Architect with the Third-Party Model Provider team at AWS. He focuses on working with model providers across Amazon Bedrock and Amazon SageMaker, helping them accelerate their go-to-market strategies through technical scaling initiatives and customer engagement.