Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
1 Introduction

Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states h_{t}, as a function of the previous hidden state h_{t-1} and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

2 Background
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].
3 Model Architecture
Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations \left(x_{1}, \ldots, x_{n}\right) to a sequence of continuous representations \mathbf{z}=\left(z_{1}, \ldots, z_{n}\right). Given \mathbf{z}, the decoder then generates an output sequence \left(y_{1}, \ldots, y_{m}\right) of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next.
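Schematically, this auto-regressive generation loop can be sketched as follows; encode and decode_step are placeholders standing in for a trained encoder and decoder, not an actual interface of the model.

```python
def greedy_autoregressive_decode(encode, decode_step, source_tokens, bos, eos, max_len):
    # encode: maps the source symbols to continuous representations z.
    # decode_step: maps (z, generated prefix) to the next output symbol.
    z = encode(source_tokens)
    output = [bos]
    for _ in range(max_len):
        next_token = decode_step(z, output)   # consumes previously generated symbols
        output.append(next_token)
        if next_token == eos:
            break
    return output[1:]
```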
Figure 1: The Transformer - model architecture.

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

3.1 Encoder and Decoder Stacks
Encoder: The encoder is composed of a stack of N=6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is \operatorname{LayerNorm}(x+\operatorname{Sublayer}(x)), where \operatorname{Sublayer}(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_{\text{model}}=512.
Decoder: The decoder is also composed of a stack of N=6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.
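As an illustration of this sub-layer structure, the following is a minimal NumPy sketch of \operatorname{LayerNorm}(x+\operatorname{Sublayer}(x)); the learned gain and bias of layer normalization are omitted for brevity, and the function names and epsilon are ours.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position (last axis) to zero mean and unit variance.
    # The learned gain and bias parameters are omitted in this sketch.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # Output of each sub-layer is LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))
```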
3.2 Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.
3.2.1 Scaled Dot-Product Attention
We call our particular attention “Scaled Dot-Product Attention” (Figure 2). The input consists of queries and keys of dimension d_{k}, and values of dimension d_{v}. We compute the dot products of the query with all keys, divide each by \sqrt{d_{k}}, and apply a softmax function to obtain the weights on the values.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

\operatorname{Attention}(Q, K, V)=\operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right) V

The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of \frac{1}{\sqrt{d_{k}}}. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
While for small values of d_{k} the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of d_{k} [3]. We suspect that for large values of d_{k}, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.{ }^{4} To counteract this effect, we scale the dot products by \frac{1}{\sqrt{d_{k}}}.
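As a concrete illustration, here is a minimal NumPy sketch of scaled dot-product attention; the function and argument names are ours, and the optional additive mask anticipates the decoder masking described in Section 3.2.3.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # compatibility of each query with each key
    if mask is not None:
        scores = scores + mask                # mask contains 0 (allowed) or large negative values
    weights = softmax(scores, axis=-1)        # attention weights over the values
    return weights @ V                        # (n_q, d_v)
```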
3.2.2 Multi-Head Attention
Instead of performing a single attention function with d_{\text{model}}-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_{k}, d_{k} and d_{v} dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_{v}-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
\begin{aligned}
\operatorname{MultiHead}(Q, K, V) &= \operatorname{Concat}\left(\text{head}_{1}, \ldots, \text{head}_{h}\right) W^{O} \\
\text{where } \operatorname{head}_{i} &= \operatorname{Attention}\left(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\right)
\end{aligned}
Where the projections are parameter matrices W_{i}^{Q} \in \mathbb{R}^{d_{\text{model}} \times d_{k}}, W_{i}^{K} \in \mathbb{R}^{d_{\text{model}} \times d_{k}}, W_{i}^{V} \in \mathbb{R}^{d_{\text{model}} \times d_{v}} and W^{O} \in \mathbb{R}^{h d_{v} \times d_{\text{model}}}.
In this work we employ h=8 parallel attention layers, or heads. For each of these we use d_{k}=d_{v}=d_{\text{model}}/h=64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
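A sketch of multi-head attention under these settings, reusing the scaled_dot_product_attention sketch from Section 3.2.1; the per-head weight lists and the random initialization below are illustrative only.

```python
import numpy as np

def multi_head_attention(Q, K, V, W_Q, W_K, W_V, W_O):
    # W_Q, W_K, W_V: lists of h projection matrices of shape (d_model, d_k) / (d_model, d_v).
    # W_O: output projection of shape (h * d_v, d_model).
    heads = [scaled_dot_product_attention(Q @ Wq, K @ Wk, V @ Wv)
             for Wq, Wk, Wv in zip(W_Q, W_K, W_V)]
    return np.concatenate(heads, axis=-1) @ W_O

# Illustrative shapes only: h = 8 heads, d_k = d_v = d_model / h = 64.
d_model, h = 512, 8
d_k = d_v = d_model // h
rng = np.random.default_rng(0)
x = rng.normal(size=(10, d_model))                        # 10 positions of d_model features
W_Q = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_K = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_V = [rng.normal(size=(d_model, d_v)) for _ in range(h)]
W_O = rng.normal(size=(h * d_v, d_model))
out = multi_head_attention(x, x, x, W_Q, W_K, W_V, W_O)   # self-attention; shape (10, 512)
```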
3.2.3 Applications of Attention in our Model

The Transformer uses multi-head attention in three different ways:

In “encoder-decoder attention” layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].

The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to -\infty) all values in the input of the softmax which correspond to illegal connections. See Figure 2.
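A sketch of such a mask: entries above the diagonal (positions a query may not attend to) receive a large negative value, standing in for -\infty, before the softmax; the constant and function name are our choices.

```python
import numpy as np

def causal_mask(n, neg=-1e9):
    # 0 where attention is allowed (key position <= query position),
    # a large negative value otherwise; added to the scores before the softmax.
    return np.triu(np.full((n, n), neg), k=1)
```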
3.3 Position-wise Feed-Forward Networks

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
\operatorname{FFN}(x)=\max \left(0, x W_{1}+b_{1}\right) W_{2}+b_{2}
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_{\text{model}}=512, and the inner-layer has dimensionality d_{ff}=2048.
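A minimal sketch of this sub-layer, applied identically to every position; weight shapes follow the dimensions above and the names are illustrative.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # x: (n, d_model); W1: (d_model, d_ff), b1: (d_ff,); W2: (d_ff, d_model), b2: (d_model,)
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2   # ReLU between two linear transformations
```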
3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_{\text{model}}. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by \sqrt{d_{\text{model}}}.
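A sketch of the shared-weight embedding and pre-softmax projection, including the \sqrt{d_{\text{model}}} scaling; the matrix name E and the function names are ours.

```python
import numpy as np

def embed(token_ids, E, d_model):
    # token_ids: (n,) integer ids; E: (vocab_size, d_model) weight matrix shared
    # between both embedding layers and the pre-softmax linear transformation.
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_output, E):
    # The pre-softmax linear transformation reuses the embedding matrix E.
    return decoder_output @ E.T
```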
Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.
3.5 Positional Encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add “positional encodings” to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_{\text{model}} as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9].
In this work, we use sine and cosine functions of different frequencies:
\begin{aligned}
P E_{(pos, 2i)} &= \sin \left(pos / 10000^{2i / d_{\text{model}}}\right) \\
P E_{(pos, 2i+1)} &= \cos \left(pos / 10000^{2i / d_{\text{model}}}\right)
\end{aligned}
where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2\pi to 10000 \cdot 2\pi. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, P E_{pos+k} can be represented as a linear function of P E_{pos}.
We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
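A sketch of the sinusoidal encodings defined above, assuming an even d_{\text{model}} (512 in our model); the function name is ours.

```python
import numpy as np

def positional_encoding(n_positions, d_model):
    # Assumes d_model is even; even dimensions use sine, odd dimensions use cosine.
    pos = np.arange(n_positions)[:, None]                 # (n_positions, 1)
    i = np.arange(d_model // 2)[None, :]                  # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```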
4 Why Self-Attention
In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations \left(x_{1}, \ldots, x_{n}\right) to another sequence of equal length \left(z_{1}, \ldots, z_{n}\right), with x_{i}, z_{i} \in \mathbb{R}^{d}, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.

The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.
A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O\left(\log_{k}(n)\right) in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O\left(k \cdot n \cdot d+n \cdot d^{2}\right). Even with k=n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.
As a side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
5 Training

This section describes the training regime for our models.

5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
5.2 Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).
5.3 Optimizer
We used the Adam optimizer [20] with \beta_{1}=0.9, \beta_{2}=0.98 and \epsilon=10^{-9}. We varied the learning rate over the course of training, according to the formula:

lrate = d_{\text{model}}^{-0.5} \cdot \min\left(step\_num^{-0.5},\; step\_num \cdot warmup\_steps^{-1.5}\right)
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
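A sketch of this schedule with the stated warmup_steps = 4000 and d_{\text{model}} = 512:

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    # Linear warmup for the first warmup_steps steps, then inverse-square-root decay.
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```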
5.4 Regularization

We employ three types of regularization during training:

Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.
Residual Dropout We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_{\text{drop}}=0.1.
Label Smoothing During training, we employed label smoothing of value \epsilon_{ls}=0.1 [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
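As a sketch only: one common way to realize label smoothing with \epsilon_{ls}=0.1, following the uniform smoothing of [36], places 1-\epsilon_{ls} on the correct token and spreads \epsilon_{ls} uniformly over the vocabulary; the exact target distribution used in training is not spelled out here, so treat this as illustrative.

```python
import numpy as np

def smoothed_targets(labels, vocab_size, eps=0.1):
    # labels: (n,) integer ids of the correct tokens.
    targets = np.full((len(labels), vocab_size), eps / vocab_size)
    targets[np.arange(len(labels)), labels] += 1.0 - eps
    return targets   # each row sums to 1
```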
6 Results

6.1 Machine Translation
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate P_{\text{drop}}=0.1, instead of 0.3.
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty \alpha=0.6 [38]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [38].
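A sketch of the checkpoint-averaging step, assuming each checkpoint is stored as a dictionary of parameter arrays with identical keys and shapes; the names are ours.

```python
import numpy as np

def average_checkpoints(checkpoints):
    # checkpoints: list of {parameter_name: np.ndarray}.
    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
            for name in checkpoints[0]}
```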
Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU.{ }^{5}
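For instance, under this estimate the big model's 3.5 days of training on 8 P100 GPUs (9.5 TFLOPS sustained, see footnote 5) corresponds to roughly 3.5 \times 86400 \text{ s} \times 8 \times 9.5 \times 10^{12} \approx 2.3 \times 10^{19} floating point operations.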
6.2 Model Variations
To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013.
Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base model. All metrics are on the English-to-German translation development set, newstest2013. Listed perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to per-word perplexities.
6.3 English Constituency Parsing

To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37].
We trained a 4-layer transformer with d_{\text{model}}=1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkeleyParser corpora with approximately 17M sentences [37]. We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting.
We performed only a small number of experiments to select the dropout, both attention and residual (Section 5.4), learning rates and beam size on the Section 22 development set; all other parameters remained unchanged from the English-to-German base translation model.
Table 4: The Transformer generalizes well to English constituency parsing (Results are on Section 23 of WSJ)
| Parser | Training | WSJ 23 F1 |
| :---: | :---: | :---: |
| Vinyals & Kaiser et al. (2014) [37] | WSJ only, discriminative | 88.3 |
| Petrov et al. (2006) [29] | WSJ only, discriminative | 90.4 |
| Zhu et al. (2013) [40] | WSJ only, discriminative | 90.4 |
| Dyer et al. (2016) [8] | WSJ only, discriminative | 91.7 |
| Transformer (4 layers) | WSJ only, discriminative | 91.3 |
| Zhu et al. (2013) [40] | semi-supervised | 91.3 |
| Huang & Harper (2009) [14] | semi-supervised | 91.3 |
| McClosky et al. (2006) [26] | semi-supervised | 92.1 |
| Vinyals & Kaiser et al. (2014) [37] | semi-supervised | 92.1 |
| Transformer (4 layers) | semi-supervised | 92.7 |
| Luong et al. (2015) [23] | multi-task | 93.0 |
| Dyer et al. (2016) [8] | generative | 93.3 |
During inference, we increased the maximum output length to input length + 300. We used a beam size of 21 and \alpha=0.3 for both WSJ only and the semi-supervised setting.
Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8].

In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences.

7 Conclusion

In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.

For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.
We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.

References
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural machine translation architectures. CoRR, abs/1703.03906, 2017.
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
[7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
[8] Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In Proc. of NAACL, 2016.
[9] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.
[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[12] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.
[13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
[14] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 832-841. ACL, August 2009.
[15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[16] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural Information Processing Systems, (NIPS), 2016.
[17] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.
[18] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2, 2017.
[19] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. In International Conference on Learning Representations, 2017.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[21] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint arXiv:1703.10722, 2017.
[22] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
[23] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114, 2015.
[24] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[25] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
[26] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159. ACL, June 2006.
[27] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016.
[28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.
[29] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 433-440. ACL, July 2006.
[30] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
[31] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
[32] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
[33] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
[34] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440-2448. Curran Associates, Inc., 2015.
[35] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.
[36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[37] Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, 2015.
[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[39] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
[40] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume 1: Long Papers), pages 434-443. ACL, August 2013.
Attention Visualizations
Figure 3: An example of the attention mechanism following long-distance dependencies in the encoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of the verb ‘making’, completing the phrase ‘making…more difficult’. Attentions here shown only for the word ‘making’. Different colors represent different heads. Best viewed in color.

Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution. Top: Full attentions for head 5. Bottom: Isolated attentions from just the word ‘its’ for attention heads 5 and 6. Note that the attentions are very sharp for this word.

Figure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the sentence. We give two such examples above, from two different heads from the encoder self-attention at layer 5 of 6. The heads clearly learned to perform different tasks.
{ }^{*} Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research.
{ }^{\dagger} Work performed while at Google Brain.
{ }^{\ddagger} Work performed while at Google Research.
{ }^{4} To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q \cdot k=\sum_{i=1}^{d_{k}} q_{i} k_{i}, has mean 0 and variance d_{k}.
{ }^{5} We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.