
GTZ: a fast compression and cloud transmission tool optimized for FASTQ files

Yuting Xing, Gen, Zhenguo Wang, Bolun Feng, Zhuo Song and Chengkun Wu

From 16th International Conference on Bioinformatics (InCoB 2017)
Shenzhen, China. 20-22 September 2017

Abstract

Background: The dramatic development of DNA sequencing technology is generating truly big data, demanding ever more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and cheaper, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage.

Results: This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats the different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also compress multiple files or directories at once. Furthermore, as a tool for the cloud computing era, it can either save compressed data locally or transmit it directly into the cloud. We evaluated the performance of GTZ on several diverse FASTQ benchmarks. The results show that in most cases it outperforms many other tools in terms of compression ratio, speed and stability.

Conclusions: GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at: https://github.com/Genetalks/gtz.
Keywords: FASTQ, Compression, General-purpose, Lossless, Parallel compression and transmission, Cloud computing

Background

Next generation sequencing (NGS) has greatly facilitated the development of genome analyses, which is vital for reaching the goal of precision medicine. Yet the exponential growth of accumulated sequencing data poses serious challenges to the transmission and storage of NGS data. Efficient compression methods provide the possibility to address this increasingly prominent problem.
Previously, general-purpose compression tools, such as gzip (http://www.gzip.org/), bzip2 (http://www.bzip.org/) and 7-zip (www.7-zip.org), have been utilized to compress NGS data. These tools do not take advantage of the characteristics of genome data, such as a small alphabet and repeated sequence segments, which leaves room for performance optimization. Recently, some specialized compression tools have been developed for NGS data. These tools are either reference-based or reference-free; the main difference lies in whether extra genome sequences are used as references. Reference-based algorithms encode the differences between the target and reference sequences, and consume more memory to improve compression performance. GenCompress [1] and SimGene [2] use various entropy encoders, such as arithmetic, Golomb and Huffman coding, to compress integer values that describe properties of reads, such as the starting position and the read length. A statistical compression method, GReEn [3], uses an adaptive model to estimate probabilities based on character frequencies, which are then compressed with an arithmetic encoder.
QUIP [4] exploits arithmetic coding with order-3 and higher-order Markov chain models in all three parts of FASTQ data. LW-FQZip [5] utilizes incremental and run-length-limited encoding schemes to compress the metadata and quality scores, respectively; reads are pre-processed by a light-weight mapping model, and the three components are then combined and compressed by a general-purpose tool such as LZMA. Fqzcomp [6] estimates character probabilities by order-k context modelling and compresses NGS data in FASTQ format with the help of arithmetic coders.
Nevertheless, reference-based algorithms can be inefficient when the similarity between target and reference sequences is low, so reference-free methods have also been proposed. Biocompress [7] is a compression method dedicated to genomic sequences. Its main idea is based on the classical dictionary-based Ziv-Lempel [8] compression algorithm: repeats and palindromes are encoded using the length and position of their earliest occurrences. As an extension of biocompress [7], biocompress-2 [9] exploits the same scheme and falls back to order-2 arithmetic coding when no significant repetition exists. The DSRC [10] algorithm splits sequences into blocks and compresses them independently with LZ77 [8] and Huffman [11] encoding. It is faster than QUIP in both compression and decompression speed, but inferior to the latter in terms of compression ratio. DSRC2 [12], the multi-threaded version of DSRC [10], splits the input into three streams for pre-processing; afterwards, metadata, reads, and quality scores are compressed separately. A boosting algorithm, SCALCE [13], which re-organizes the reads, can outperform other algorithms on most datasets in both compression ratio and compression speed.
Nowadays, it is evident that cloud computing has become increasingly important for genomic analyses. However, the above-mentioned tools were developed for local usage: compression has to be completed locally before a data transmission onto the cloud can begin.
AdOC, proposed in [14], is a general-purpose tool that allows the overlap of compression and communication in a distributed computing environment. It presents a model for transport-level compression with dynamic compression level adaptation, which can be used in environments where resource availability and bandwidth vary unpredictably.
Generally, the compression performances of the universal compression algorithms, such as AdOC, are unsatisfactory for NGS datasets.
In this paper, we present GTZ, a lossless and efficient compression tool designed to be used jointly with cloud computing for large-scale genomic data analyses:
  1. GTZ exploits context modelling combined with multiple prediction schemes, and employs parallel processing to improve compression speed.
  2. GTZ can compress directories or folders into a single archive, called a multi-stream file system. This all-in-one scheme serves the purposes of transmission, validation and storage.
  3. GTZ supports random access to files or archives. GTZ utilizes block storage, so that users can extract parts of the genome sequences in a FASTQ file, or some files in a folder, without completely decompressing the archive.
  4. GTZ can transmit compressed blocks to cloud storage while compression is still in progress, a novel feature compared with other compression tools. Overlapping the two stages greatly reduces the total time needed for compression and data transmission onto the cloud. For instance, GTZ can compress and transmit a 200 GB FASTQ file to cloud storage such as AWS or Alibaba Cloud within 14 min.
  5. GTZ provides a Python API, through which users can integrate GTZ in their own applications flexibly.
In the remainder of this paper, we introduce how GTZ works and evaluate its performance on several benchmark datasets using the AWS service.

Methods

GTZ supports efficient parallel compression, parallel transmission and random fetching. Figure 1 demonstrates the workflow of GTZ processing.
GTZ involves procedures on the client side and at the cloud end.
A client takes the following steps:
(1) Read in streams of large data files.
(2) Pre-process the input by dividing data streams into three sub-streams: metadata, base sequence, and quality score.
(3) Buffer sub-streams in local memory and assemble them into different types of data blocks of a fixed size.
(4) Compress assembled data blocks and their descriptions, and then transmit output blocks into the cloud storage.
On the cloud, the following steps are executed:
(1) Create three types of object-oriented containers (shown in Fig. 2), which define a tree structure.
(2) Loop and wait to receive output blocks sent by the client.
Fig. 1 The workflow of GTZ
(3) Save received output blocks into block containers according to their types.
(4) Stop if no more output blocks are received.
We will explain all the steps of processing FASTQ files in further detail below:

The client reading streams of large data files

Raw NGS data files are typically stored in FASTQ format for the convenience of compression. A typical FASTQ file contains four lines per sequence: line 1 begins with the character '@' followed by a sequence identifier; line 2 holds the raw sequence composed of A, C, T and G; line 3 begins with the character '+' and is optionally followed by the same sequence identifier (and any description) again; line 4 holds the quality scores, as ASCII characters, for the sequence characters in line 2. An example of a read is given in Table 1.
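The four-line record layout above can be read directly, four lines at a time. The following minimal Python sketch (not GTZ's actual reader, which streams much larger buffers) illustrates the format; the sample read and its identifier are made up for illustration:

```python
from typing import Iterator, NamedTuple

class FastqRead(NamedTuple):
    identifier: str   # line 1, after the leading '@'
    sequence: str     # line 2: bases A, C, G, T (and N)
    quality: str      # line 4: ASCII-encoded quality scores

def parse_fastq(lines) -> Iterator[FastqRead]:
    """Yield reads from an iterator over FASTQ lines, four lines per read."""
    it = iter(lines)
    for header in it:
        seq = next(it).strip()
        next(it)  # line 3: '+' separator, optionally repeating the identifier
        qual = next(it).strip()
        yield FastqRead(header.strip().lstrip('@'), seq, qual)

reads = list(parse_fastq([
    "@ERR000001.1 example/1",
    "GGTTCCTACTTNAGGGTCATT",
    "+",
    "IIIIIIIIIIIIIIIIIIIII",
]))
```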
Data pre-processing
During the second step, a data stream is split into metadata sub-streams, base sequence sub-streams and quality

Fig. 2 The hierarchy of data containers
Table 1 The format of a FASTQ file
1 @ERR194147.1.HSQ1004:134:C0D8DACXX:1:1104:3874:86,238/1
2 GGTTCCTACTTNAGGGTCATTAAATAGCCCACACGTC
3 +
4
score sub-streams. (Since uninformative comment lines normally do not provide any useful information for compression, comment streams are omitted during pre-processing.) Three types of data pre-processing controllers buffer the sub-streams and save them in data blocks of a fixed size. Afterwards, data blocks with annotations (the number of blocks, the sizes of blocks and the types of streams) are sent to the corresponding compression units. Figure 3 demonstrates how data files are pre-processed with the help of pre-processing controllers and compression units.
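The split-and-block step can be sketched as follows. The 64 KB default block size, the in-memory string buffers and the function names are illustrative assumptions, not GTZ's actual parameters:

```python
def split_streams(fastq_lines, block_size=64 * 1024):
    """Split FASTQ lines into metadata/sequence/quality sub-streams and cut
    each sub-stream into fixed-size blocks; '+' comment lines are dropped."""
    buffers = {"metadata": "", "sequence": "", "quality": ""}
    counters = {k: 0 for k in buffers}
    blocks = []  # (stream type, block number within that stream, payload)

    def flush(kind, force=False):
        while len(buffers[kind]) >= block_size or (force and buffers[kind]):
            blocks.append((kind, counters[kind], buffers[kind][:block_size]))
            buffers[kind] = buffers[kind][block_size:]
            counters[kind] += 1

    for i, line in enumerate(fastq_lines):
        kind = ("metadata", "sequence", None, "quality")[i % 4]
        if kind is None:          # line 3 ('+'): uninformative, omitted
            continue
        buffers[kind] += line + "\n"
        flush(kind)
    for kind in buffers:
        flush(kind, force=True)   # emit the final partial block of each stream
    return blocks

blocks = split_streams(["@read1 lane1", "GGTTCCTA", "+", "IIIIIIII"])
```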

Compressing data

GTZ is a general-purpose compression tool that uses statistical modelling (http://marknelson.us/1991/02/01/arithmetic-coding-statistical-modeling-data-compression/) and arithmetic coding.
Statistical modelling can be categorized into two types: static and adaptive. Conventional methods are normally static, which means probabilities are calculated after sequences are scanned from beginning to end. A static model keeps a static table that records character-frequency counts. Although static models produce relatively accurate results, their drawbacks are obvious:
  1. It is time-consuming to read all the sequences into main memory before compression.
  2. If an input stream does not match the previously accumulated statistics well, the compression ratio will degrade; the output stream may even become larger than the input stream.
In GTZ, we employ an adaptive statistical data compression technique based on context modelling. An adaptive model need not scan the whole sequence and generate probabilities before coding. Instead, adaptive prediction provides on-the-fly reading and compression: probabilities are calculated from the characters already read into memory, and may change as more characters are scanned. Initially, the performance of adaptive statistical modelling may be poor due to the lack of accumulated reads; with more sequences processed, however, the predictions tend to become more accurate.
Every time the compressor encodes a character, it updates the counter in the prediction table. When a new character arrives, GTZ traverses the prediction table, finds every character that has followed the current context before, and compares their appearance frequencies. For instance, if the context ABCD has been followed by X ten times and by Y only once, GTZ will assign a higher probability to X.
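The prediction-table bookkeeping described above can be sketched as an order-k frequency table. The order of 4 and the pure-Python dictionaries are illustrative assumptions, not GTZ's implementation:

```python
from collections import defaultdict

class AdaptiveContextModel:
    """Order-k adaptive model: counts how often each character has followed
    each context seen so far, and turns the counts into probabilities."""
    def __init__(self, order=4):
        self.order = order
        self.table = defaultdict(lambda: defaultdict(int))  # context -> char -> count

    def predict(self, context):
        """Probability distribution over the next character after `context`."""
        counts = self.table[context[-self.order:]]
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()} if total else {}

    def update(self, context, char):
        self.table[context[-self.order:]][char] += 1

# ABCD is followed by X ten times and by Y once, as in the text's example:
model = AdaptiveContextModel(order=4)
history = ""
for ch in "ABCDX" * 10 + "ABCDY":
    model.update(history, ch)
    history += ch
probs = model.predict("ABCD")
```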
The work flow of an adaptive model is depicted in Fig. 4. The box 'Update model' means converting low-order models to high-order models (the meaning of low-order and high-order is discussed in the next subsection).
Adaptive prediction modelling can effectively reduce compression time: there is no need to read all sequences at once, and scanning overlaps with compression.
GTZ utilizes specific compression units for the different kinds of data blocks: a low-order encoder for genetic sequences, a multi-order encoder for quality scores and mixed encoders for metadata. Finally, the outputs of this procedure are blocks of a fixed size.
The main idea of arithmetic coding is to convert reads into a floating-point number ranging from zero to one (precisely, greater than or equal to zero and less than one) based on the predicted character probabilities. If the statistical model estimates every single character accurately, the compressor achieves high compression performance. On the contrary, a poor prediction may expand the original sequence instead of compressing it. Thus, the performance of a compressor largely relies on whether the statistical model can output near-optimal predictive probabilities.
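A toy version of this interval-narrowing idea, with a fixed (non-adaptive) model and plain floats rather than the bit-level arithmetic coder GTZ uses, looks like this:

```python
def arithmetic_encode(message, probs):
    """Narrow [low, high) once per character; any number in the final
    interval identifies the message (toy float version, not production)."""
    # Assign each symbol a cumulative sub-interval of [0, 1), e.g. 'A' -> (0.0, 0.5)
    cum, start = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (start, start + p)
        start += p
    low, high = 0.0, 1.0
    for ch in message:
        span = high - low
        s_lo, s_hi = cum[ch]
        low, high = low + span * s_lo, low + span * s_hi
    return low, high   # any value in [low, high) encodes `message`

low, high = arithmetic_encode("AAB", {"A": 0.5, "B": 0.25, "C": 0.25})
```

With an accurate model the interval stays wide (few output bits); with a poor model it shrinks fast, which is the expansion risk mentioned above.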

A low-order encoder for reads

The simplest implementation of adaptive modelling is order-0. That is, it does not consider any context information: this short-sighted model can only see the current character and makes predictions that are independent of the preceding sequence. Similarly, an order-1 encoder makes predictions based on one preceding character. Consequently, low-order modelling makes little contribution to the performance of compressors; its main advantage is that it is very memory-efficient. Hence, for quality score streams that do not have spatial locality, a low-order model is adequate for a moderate compression ratio.
Fig. 3 Pre-process data files with pre-processing controllers and compression units
Fig. 4 Work flow of a typical statistical modelling
Our tailored low-order encoder for reads is demonstrated in Fig. 5. The first step is to transform sequences with the BWT (Burrows-Wheeler transform) algorithm, which rearranges reads into runs of similar characters. In the second step, zero-order and first-order prediction models are used to calculate the appearance probability of each character. Since poor probability accuracy leads to undesirable encoding results, we add interpolation after quantizing the weighted average probability, to reduce prediction errors and improve compression ratios. In the last step, the bit arithmetic coding algorithm produces decimals ranging from zero to one as outputs to represent the sequences.
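A naive BWT can be written in a few lines; real encoders use suffix arrays to avoid the quadratic rotation sort shown here, so this is only a sketch of the transform itself:

```python
def bwt(text, terminator="$"):
    """Naive Burrows-Wheeler transform: sort all rotations of the input
    (with a terminator appended) and take the last column. O(n^2 log n)."""
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

out = bwt("GATTACA")   # groups similar characters into runs
```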

A multi-order encoder for quality scores

The statistical model must supply a non-uniform probability distribution to the arithmetic coder. High-order modelling assigns high probabilities to characters that appear frequently and low probabilities to those that appear infrequently. As a result, compared with low-order encoders, higher-order encoders can enhance adaptive modelling.
A high-order model considers several characters preceding the current position. It can obtain better compression performance at the expense of more memory usage. Higher-order modelling used to be avoided because of limited memory capacity, which is no longer a problem.
Without transformation, a multi-order encoder for quality scores (see Fig. 6) involves two procedures. Firstly, to generate character probabilities, the input stream flows through an expanding character probability prediction model composed of first-order, second-order, fourth-order and sixth-order prediction models and a matching model. As in the low-order encoder, the character probabilities undergo weighted averaging, quantization and interpolation to obtain the final results. Secondly, the bit arithmetic coding algorithm is used for compression.
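The weighted-averaging step can be sketched as follows; the weights and the two example distributions are assumptions for illustration, since GTZ's exact mixing scheme is not spelled out here:

```python
def mix_predictions(model_probs, weights):
    """Weighted average of several models' probability distributions,
    normalized so the mixed distribution sums to one."""
    assert len(model_probs) == len(weights)
    total_w = sum(weights)
    mixed = {}
    for probs, w in zip(model_probs, weights):
        for ch, p in probs.items():
            mixed[ch] = mixed.get(ch, 0.0) + w * p
    return {ch: p / total_w for ch, p in mixed.items()}

mixed = mix_predictions(
    [{"I": 0.7, "J": 0.3},    # e.g. a low-order model's prediction
     {"I": 0.9, "J": 0.1}],   # e.g. a higher-order model's prediction
    weights=[1.0, 3.0],        # hypothetical weights favouring the higher order
)
```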

A hybrid scheme for metadata

For metadata sub-streams, GTZ first uses delimiters (punctuation marks) to split them into segments, then processes each segment according to its field type:
For numbers in ascending or descending order, we employ incremental encoding to represent the variation of each number relative to its predecessor. For instance, '3458644' is compressed into 3,1,1,3,-2,-2,0. For runs of identical characters, we exploit run-length-limited encoding to record their values and repetition counts. For random numbers of various precisions, we convert their formats with UTF-8 coding, without adding a single separator, and then compress them with a low-order encoder. All other metadata fields are compressed with the low-order encoder directly.
Fig. 5 A low-order encoder scheme
Fig. 6 A multi-order encoder scheme
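The two metadata encodings described above (incremental and run-length) can be sketched directly; these helpers are illustrative, not GTZ's implementation:

```python
def delta_encode(digits):
    """Incremental encoding: first value, then differences to the predecessor."""
    nums = [int(d) for d in digits]
    return [nums[0]] + [b - a for a, b in zip(nums, nums[1:])]

def run_length_encode(text):
    """(character, run length) pairs for runs of identical characters."""
    runs, prev, count = [], None, 0
    for ch in text:
        if ch == prev:
            count += 1
        else:
            if prev is not None:
                runs.append((prev, count))
            prev, count = ch, 1
    if prev is not None:
        runs.append((prev, count))
    return runs
```

`delta_encode("3458644")` reproduces the 3,1,1,3,-2,-2,0 example from the text.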
In conclusion, during this process, sub-streams are fed into a dynamic probability prediction model and an arithmetic encoder, and are transformed into compressed blocks of a fixed size.

Data transmission

The key objective is to transmit output blocks to a given cloud storage platform, together with annotations about the types, sizes and numbers of the data blocks.
Note that different types of encoders may compress at different speeds, which can cause a data pipe blockage. Thus, in our system, a pipe-filter pattern is designed to synchronize input and output speeds: the input flow is blocked when the input stream is faster than the output stream, and the pipe is also blocked when there is no input flow.
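The blocking behaviour of such a pipe-filter stage can be sketched with a bounded queue; the capacity of 4 and the single consumer thread are illustrative assumptions:

```python
import queue
import threading

def pipe_filter(produce, transform, capacity=4):
    """Pipe-filter stage with back-pressure: a bounded queue blocks the
    producer when it outruns the consumer, and the consumer blocks on empty."""
    pipe = queue.Queue(maxsize=capacity)   # bounded: put() blocks when full
    DONE = object()

    def producer():
        for item in produce():
            pipe.put(transform(item))      # blocks if the consumer is slower
        pipe.put(DONE)

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while (item := pipe.get()) is not DONE:  # blocks when the pipe is empty
        results.append(item)
    t.join()
    return results

out = pipe_filter(lambda: range(10), lambda x: x * x)
```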

Storage at the cloud end - Creating an object-oriented nested container system

GTZ creates containers as storage compartments that provide a way to manage instances and store file directories. Containers are organized in a tree structure and can be nested to represent the locations of instances: a root container represents a complete compressed file; a block container includes different types of sub-stream containers in which specific instances are stored. The nesting structure is shown in Fig. 2.
A root container represents a FASTQ file and holds block containers, each of which includes metadata sub-containers, base sequence sub-containers and quality score sub-containers. A metadata sub-container nests repetitive data blocks, random data blocks, incremental data blocks, etc. Base sequence and quality score sub-containers nest instance blocks. Taking base sequences as an example, with N blocks per container, the 0th to (N-1)th output blocks are stored in the 0th block container, the Nth to (2N-1)th output blocks in the 1st block container, and so on.
Table 2 Descriptions of the 8 FASTQ datasets used for performance evaluation

Dataset          | Species              | Size (MB) | Encoding      | No. of quality scores in data file
ERR233152        | P. aeruginosa        | 556       | Sanger        | 32
SRR935126        | A. thaliana          | 9755      | Sanger        | 39
SRR489793        | C. elegans           | 12,807    | Illumina 1.8+ | 38
SRR801793        | L. pneumophila       | 2756      | Sanger        | 38
SRR125858        | H. sapiens           | 50,744    | Sanger        | 39
SRR5419422       | RNA-seq (H. sapiens) | 15,095    | Illumina 1.8+ | 6
ERR1137269       | metagenomes          | 56,543    | Illumina 1.8+ | 7
NA12878 (read 2) | H. sapiens           | 202,631   | Sanger        | 38
Table 3 Compression ratios (%) of different tools on 8 FASTQ datasets

Dataset          | GTZ   | DSRC2 | QUIP  | LW-FQZip | Fqzcomp | LFQC  | pigz
ERR233152        | 15.9  | 16.7  | 19    | 19       | 16.8    |       | 26.4
SRR935126        | 18.6  | 19.6  | 17.7  | 20.5     | 17.8    |       | 30.2
SRR489793        | 22.8  | 22.7  | 22.6  | 25.5     | 22.5    |       | 34.4
SRR801793        | 21.4  | 21.9  | 21.1  | 21.2     | 20.8    |       | 34.1
SRR125858        | 19.4  | 19.5  | 18.9  | 23.1     | 28.9    |       | 31
SRR5419422       | 12.8  | 13.9  | 10.9  | 12.5     | 12      | ERROR | 22
ERR1137269       | 12.2  | 13.4  | 12.8  | 14.3     | 11.9    | ERROR | 21.9
NA12878 (read 2) | 19.8  | 24    | 20.4  | TLE      | 19.9    | TLE   | 24.7
avg              | 17.86 | 18.96 | 17.93 | 19.44    | 18.83   |       | 28.09
SD               | 3.87  | 3.97  | 4.07  | 4.64     | 5.60    | 3.62  | 5.05
CV               | 0.22  | 0.21  | 0.23  | 0.24     | 0.30    | 0.30  |

The best results of all the tools are boldfaced
This kind of hierarchy allows users to maintain a directory structure to manage compressed files, thereby facilitating random access to specific sequences. Decompression and extraction of target data from a compressed archive works as follows: in decompression mode, the system indexes the start line number (given by the user on the command line), fetches the corresponding sequence from its block container, and decompresses the specified number of lines of the sequence.
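Block-level random access of this kind reduces to mapping a requested line range onto block indices. The fixed reads-per-block layout below is an assumption for illustration (GTZ's blocks are fixed-size in bytes, not in reads):

```python
def locate_blocks(start_line, n_lines, reads_per_block):
    """Return the indices of the blocks that must be fetched and decompressed
    to extract `n_lines` FASTQ lines starting at `start_line` (0-based),
    assuming 4 lines per read and a fixed number of reads per block."""
    lines_per_block = 4 * reads_per_block
    first = start_line // lines_per_block
    last = (start_line + n_lines - 1) // lines_per_block
    return list(range(first, last + 1))

# Fetching lines 395..404 when each block holds 100 reads (400 lines)
# touches two blocks, so only those two are decompressed:
needed = locate_blocks(start_line=395, n_lines=10, reads_per_block=100)
```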

Receive data - Receive and store output blocks

The cloud storage platform receives output blocks and descriptive information such as the number of data blocks, the sizes of data blocks and, most importantly, the line number of every base sequence within each data block. This description enables us to directly index sequences by line number and decode only their affiliated blocks rather than extract the whole file. Output blocks are stored in containers of the corresponding types.
What is worth noting is that non-FASTQ files can also be compressed and transmitted through GTZ. Additionally, since GTZ uses object-oriented programming, it is not restricted to interacting with one specific type of cloud storage platform, but is applicable to most existing cloud storage platforms, such as Amazon Web Services and the Alibaba cloud.
Fig. 7 CVs for the compression ratio of different tools

Results and discussion

In this section, we conducted experiments on a 32-core AWS R4.8xlarge instance with 244 GB of memory to evaluate the performance of GTZ in terms of compression ratio and compression speed. During the experiments, the following points should be noted:
(1) Considering that our method is lossless, we exclude lossy methods from the comparison.
(2) NGS data can be stored in either FASTQ or SAM/BAM format; we only take into account tools targeted at FASTQ files.
(3) The comparison is conducted among algorithms that do not reorder input sequences.
We carried out tests on 8 publicly accessible FASTQ datasets, downloaded from the Sequence Read Archive (SRA) initiated by NCBI and from the GCTA competition website (https://tianchi.aliyun.com/mini/challenge.htm#training-profile). To ensure the comprehensiveness of our evaluation, we chose heterogeneous datasets: their sizes range from 556 MB to 202,631 MB; different species and different types of data were chosen, including DNA reads, one RNA-seq dataset of Homo sapiens, one metagenome dataset and read 2 of NA12878 (the GCTA competition dataset). Different quality score encodings, such as Sanger and Illumina 1.8+, were selected to cover different numbers of quality scores in the datasets. Quality scores are logarithmically linked to error probabilities, leading to a larger alphabet than metadata and reads, so encodings with small numbers of quality scores normally contribute to a higher compression performance. Descriptions of the datasets are listed in Table 2. Besides, for comparison, based on a comprehensive literature survey, we selected six state-of-the-art and widely-used lossless compression tools: DSRC2 [12] (the improved version of DSRC [10]), QUIP [4], LW-FQZip [5], Fqzcomp [6], LFQC [15] and pigz. Among them, LW-FQZip [5] and Fqzcomp [6] are representatives of reference-based tools; DSRC2 [12] and QUIP [4] are reference-free methods; pigz is a general-purpose compression tool. All the experimental results are included in Additional file 1.

Table 4 Compression time (s) of different tools on 8 FASTQ datasets

Dataset          | Size (MB) | GTZ   | DSRC2 | QUIP | LW-FQZip | Fqzcomp | LFQC   | pigz
ERR233152        | 556.1     | 19    | 13    | 10   | 284      | 13      | 297    | 3
SRR935126        | 9754.6    | 49    | 40    | 195  | 3966     | 191     | 3610   | 129
SRR489793        | 12,807    | 51    | 49    | 343  | 4893     | 289     | 4253   | 122
SRR801793        | 2756.2    | 43    | 28    | 59   | 1212     | 73      | 1143   | 22
SRR125858        | 50,744    | 178   | 153   | 1044 | 18,300   | 977     | 10,202 | 481
SRR5419422       | 15,095    | 26    | 7     | 329  | 4234     | 267     | ERROR  | 67
ERR1137269       | 56,543    | 117   | 32    | 806  | 12,018   | 851     | ERROR  | 213
NA12878 (read 2) | 202,631   | 820   | 700   | 4703 | TLE      | 4389    | TLE    | 620
Average speed (MB/s) |       | 267.4 | 648.8 | 49.7 | 2.9      | 49.6    | 33.7   | 176.8

The best results of all the tools are boldfaced

Evaluation results

We evaluated the performance of different tools by the following related metrics: the compression ratio, the coefficient of variation (CV) of compression ratios, the compression speed, the total time of compression and transmission to cloud storages. Specifically, the compression ratio is defined as follows:
我们通过以下相关指标来评估不同工具的性能:压缩比、压缩比变异系数(CV)、压缩速度、压缩和传输到云存储的总时间。具体来说,压缩率的定义如下:
According to this definition, a smaller compression ratio represents a more effective compression in terms of size reduction; The coefficient of variation (CV) stands for the extent of variability in relation to the mean and it is defined as the ratio of the standard deviation (SD) divided by the average (avg):
根据这一定义,压缩比越小,表示压缩尺寸缩小的效果越好;变异系数(CV)表示相对于平均值的变异程度,定义为标准偏差(SD)除以平均值(avg)的比值:
A smaller CV indicates better robustness and stability. GTZ not only performs well in compression on local computers, but also achieves satisfactory results in transmission to cloud storages. On local computers, the compression speed is used for evaluation; it can be measured simply by the time taken for compression (by different tools applied to the same data). For cloud transmission, the run time of an algorithm is the sum of compression and transmission time, i.e., from the start of compression to the completion of transmission onto the cloud.
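The two metrics above can be written out concretely. Below is a minimal, illustrative Python sketch of both definitions; it is not part of GTZ itself, and the file names and ratio values are hypothetical:

```python
import os
import statistics

def compression_ratio(original_path, compressed_path):
    """Compression ratio (%) = compressed size / original size * 100.
    A smaller value means a more effective size reduction."""
    return os.path.getsize(compressed_path) / os.path.getsize(original_path) * 100

def coefficient_of_variation(ratios):
    """CV = standard deviation (SD) / average (avg) of per-dataset ratios.
    A smaller value means more stable behaviour across datasets."""
    return statistics.stdev(ratios) / statistics.mean(ratios)

# Hypothetical per-dataset compression ratios (%) for one tool:
ratios = [19.2, 21.5, 18.7, 23.0]
print(coefficient_of_variation(ratios))
```

Because the CV divides by the mean, it allows a fair stability comparison between tools whose average compression ratios differ.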

Compression ratio

Performance evaluation results are presented in Table 3, where the best (i.e., smallest) compression ratio and the best (i.e., smallest) CV are boldfaced. Comparative results of the CV are shown in Fig. 7.
To note, in Table 3, some fields for the dataset NA12878 (read 2, a very large dataset) are filled with "TLE" (Time Limit Exceeded; the threshold is empirically set as 6 h), and some fields for the LFQC tool on the SRR5419422 and ERR1137269 datasets are filled with "Error" (the compressed files cannot be decompressed; those two datasets contain RNA sequences and metagenomics data, respectively). These "outliers" indicate low robustness (for convenience of CV calculation, we simply filter out "TLE" and "Error" entries). For instance, LFQC [15] yields the best result on 5 out of 8 datasets; however, it got "TLE" on three datasets, which means poor stability in compression efficiency. In addition, although the CV of pigz is the lowest, its average compression ratio ranks at the bottom. GTZ ranks second in terms of average compression ratio, and its CV is far below that of LFQC [15] (which has the best average compression ratio). In summary, GTZ not only maintains a better average compression ratio than most of its counterparts, but also exhibits better stability and robustness when dealing with different datasets.

Table 5 Total time (compression time (s) + best-case data upload time) of different tools on 8 FASTQ datasets with maximum bandwidth

| Dataset | Size (MB) | GTZ | DSRC2 | QUIP | LW-FQZip | Fqzcomp | LFQC | pigz |
|---|---|---|---|---|---|---|---|---|
| ERR233152 | 556.1 | 19.0 | 13.4 | 10.4 | 284.4 | 13.4 | 297.4 | 3.4 |
| SRR935126 | 9754.6 | 49.0 | 48.8 | 202.8 | 3973.8 | 198.8 | 3617.8 | 136.8 |
| SRR489793 | 12,807 | 51.0 | 59.2 | 353.2 | 4903.2 | 299.2 | 4263.2 | 132.2 |
| SRR801793 | 2756.2 | 43.0 | 30.2 | 61.2 | 1214.2 | 75.2 | 1145.2 | 24.2 |
| SRR125858 | — | 178.0 | 193.6 | 1084.6 | — | 1017.6 | — | 521.6 |
| SRR5419422 | — | 26.0 | 19.1 | 341.1 | 4246.1 | 279.1 | ERROR | 79.1 |
| ERR1137269 | 56,543 | 117.0 | 77.2 | 851.2 | — | 896.2 | ERROR | 258.2 |
| NA12878 (read 2) | 202,631 | 820.0 | 862.1 | 4865.1 | TLE | 4551.1 | TLE | 782.1 |
| Average speed (MB/s) | | 269.3 | 269.1 | 45.2 | 7.8 | 47.9 | 17.9 | 181.1 |

The best results of all the tools are boldfaced.

Table 6 Total time of different tools on the SRR125858_2 dataset in a real test

| Metrics | GTZ | DSRC2 | QUIP | LW-FQZip | Fqzcomp | LFQC | pigz |
|---|---|---|---|---|---|---|---|
| Compression ratio (%) | 19.2 | 19.2 | 18.7 | 23.2 | 28.7 | 18 | 30.7 |
| Total time (s) | 99 | 122 | 553 | 9283 | 549 | 4982 | 324 |

Compression speed

Results for the compression speed tests are shown in Table 4, and the best results are boldfaced. LFQC [15] and LW-FQZip [5] fail to compress the GCTA dataset NA12878 (read 2) within the time limit (empirically set as 6 h). On the SRR5419422 and ERR1137269 datasets, compressed files generated by LFQC cannot be decompressed, which is considered an error (possibly because SRR5419422 is an RNA dataset and ERR1137269 is a metagenomics dataset). Table 4 reveals that the reference-based methods LW-FQZip [5] and LFQC [15] are very slow on large datasets like NA12878 (read 2). DSRC2 [12], the representative of reference-free methods, performs best in terms of average compression speed; GTZ ranks second in terms of compression time.
However, we are mostly interested in the total time of compression and transmission. Under the condition where the data transmission throughput is at the maximum allowed by the AWS settings, we tested and estimated the total time of all tools; the results are listed in Table 5. To note, this is a very optimistic estimation. Here, only GTZ supports uploading data while compressing; the other tools have to finish compression before submission. We can see that the average compression-and-upload speed of GTZ (269.3 MB/s) is the highest, and DSRC2 comes second with an average speed of 269.1 MB/s. In general, if the input data size is very large, GTZ will be even faster than DSRC2: 7% faster in the case of the SRR125858 dataset (a 50GB dataset).

Table 7 Qualitative performance summary

| Algorithm | Compression speed | Compression ratio |
|---|---|---|
| GTZ | High | Moderate |
| DSRC2 | High | Moderate |
| QUIP | Moderate | Moderate |
| LW-FQZip | Low | Moderate |
| Fqzcomp | Moderate | Low |
| LFQC | Moderate | Low |
| pigz | High | High |
To note, the upload time is estimated with the maximum bandwidth, while in practice the upload speed can be much slower. To verify this, we carried out a real upload test using a relatively big dataset, SRR125858_2.fastq (about half of the SRR125858 dataset). The compression ratios of GTZ and DSRC2 happen to be the same on this dataset, but GTZ finished compression and transmission faster than DSRC2, and in practice the upload took considerably longer than our optimistic estimation. The details are listed in Table 6.
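The advantage of overlapping upload with compression (which, among the tested tools, only GTZ supports) can be sketched with a simple pipeline model. This is an illustration of the estimation logic, not GTZ's actual scheduler, and all numbers are hypothetical:

```python
def sequential_total(size_mb, compress_speed, upload_speed, ratio):
    """Total time when a tool must finish compressing before uploading:
    compression time plus upload time of the compressed output."""
    return size_mb / compress_speed + size_mb * ratio / upload_speed

def pipelined_total(size_mb, compress_speed, upload_speed, ratio):
    """Total time when compressed blocks are uploaded while later blocks
    are still being compressed: with many blocks, the total approaches
    the duration of the slower of the two stages."""
    return max(size_mb / compress_speed, size_mb * ratio / upload_speed)

# Hypothetical example: 50,000 MB input, 300 MB/s compression,
# 100 MB/s upload bandwidth, compression ratio 0.2 (20%)
print(sequential_total(50000, 300, 100, 0.2))  # ≈ 266.7 s (compress, then upload)
print(pipelined_total(50000, 300, 100, 0.2))   # ≈ 166.7 s (stages overlap)
```

The pipelined estimate is always bounded above by the sequential one, which is why the overlap matters most on very large inputs, where both stage times are long.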
In Table 7, we present a qualitative performance summary of all tools. The ratings high, moderate and low indicate the comparison between different tools; note that the compression ratio of a tool is rated low if it is the best compressor or close to the known best algorithm, since a smaller ratio means better compression. GTZ achieves satisfactory results in both compression ratio and compression speed (as well as in the total time when data upload is considered) on the tested datasets.

Compression rate on different data sections

The compression rates of GTZ on the three sections of a FASTQ file are reported in Table 8.
Table 8 The compression ratio (%) of GTZ on the three components of FASTQ files

| Dataset | Metadata | Reads | Quality scores |
|---|---|---|---|
| ERR233152 | 2.62 | 20.6 | 20.8 |
| SRR935126 | 3.29 | 22.2 | 25.3 |
| SRR489793 | 0.01 | 22.7 | 29.95 |
| SRR801793 | 3.73 | 23.15 | 31.1 |
| SRR125858 | 2.81 | 23.3 | 28.25 |
| SRR5419422 | 0.01 | 22.9 | 9.5 |
| ERR1137269 | 3.23 | 24.05 | 19.35 |
| NA12878 (read 2) | 7.59 | 20.4 | 27.3 |
| Average | 2.91 | 22.39 | 23.94 |

Conclusions

The dramatic development of NGS technology has brought challenges to the storage and transmission of genome sequences, and efficient compression tools are a feasible way to address this problem. Therefore, this paper proposed GTZ, an efficient lossless compression tool for cloud computing of FASTQ files. GTZ is the championship-winning solution of the GCTA competition (reports can be found at http://vcbeat.net/35028.html). GTZ integrates context modelling technology with multiple prediction modelling schemes. It also introduces parallel processing for improved and steady compression efficiency. Moreover, it enables random access to specific reads: by virtue of block storage, users can compress and read only parts of genome sequences, without a complete decompression of the original FASTQ file. Another important feature is that it overlaps data transmission with the compression process, which can greatly reduce the total time needed.
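The block-storage idea behind this random access can be illustrated with a generic sketch, using Python's zlib as a stand-in codec; the block size, archive layout and index are illustrative and do not reflect GTZ's actual on-disk format or its arithmetic coder:

```python
import zlib

BLOCK_SIZE = 64 * 1024  # bytes per block (illustrative choice)

def compress_blocks(data):
    """Compress fixed-size blocks independently and record an offset index,
    so that any single block can later be decoded without the others."""
    blocks, index, offset = [], [], 0
    for start in range(0, len(data), BLOCK_SIZE):
        comp = zlib.compress(data[start:start + BLOCK_SIZE])
        blocks.append(comp)
        index.append((offset, len(comp)))
        offset += len(comp)
    return b"".join(blocks), index

def read_block(archive, index, i):
    """Random access: decompress only the i-th block of the archive."""
    offset, length = index[i]
    return zlib.decompress(archive[offset:offset + length])

data = b"@read1\nACGT\n+\nFFFF\n" * 10000
archive, index = compress_blocks(data)
# Fetch one block without decompressing the whole archive:
assert read_block(archive, index, 2) == data[2 * BLOCK_SIZE:3 * BLOCK_SIZE]
```

Because each block is self-contained, the same index also makes it natural to start uploading finished blocks while later blocks are still being compressed.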
We evaluated the performance of GTZ on eight real-world FASTQ datasets and compared it with other state-of-the-art tools. Experimental results validate that GTZ performs well in terms of both compression ratio and compression speed, and that its performance is steady across different datasets. GTZ managed to compress a 200GB FASTQ file and transfer it to cloud storages such as AWS and Alibaba Cloud.
For future work, we will investigate how DSRC2, which exhibits good performance in compression alone, can be optimized for the cloud environment by utilizing the data segmentation and optimization techniques proposed in GTZ.

Additional file

Additional file 1: Compression ratios, compression time and descriptions of the datasets are included in this file. (XLSX)

Funding

Publication of this article was funded by National Natural Science Foundation of China grants (No. 31501073, No. 81522048, No. 81573511), the National Key Research and Development Program (No. 2016YFC0905000), and Genetalks Biotech. Co., Ltd.

Availability of data and materials

GTZ is freely available at https://github.com/Genetalks/gtz.

About this supplement

This article has been published as part of BMC Bioinformatics Volume 18 Supplement 16, 2017: 16th International Conference on Bioinformatics (InCoB 2017): Bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-16.

Authors' contributions

Yuting Xing, Dr. Gen Li and Dr. Chengkun Wu developed the algorithms and drafted the manuscript; they developed the code of GTZ together with Zhenguo Wang and Bolun Feng. Dr. Zhuo Song and Dr. Chengkun Wu proposed the idea of the project, prepared the 8 FASTQ datasets for testing, drafted the discussion and revised the whole manuscript. All the authors have read and approved the manuscript.
Ethics approval and consent to participate
Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Published: 28 December 2017

References

1. Daily K, Rigor P, Christley S, Xie X, Baldi P. Data structures and compression algorithms for high-throughput sequencing technologies. BMC Bioinformatics. 2010;11:514.
2. Kozanitis C, Saunders C, Kruglyak S, Bafna V, Varghese G. Compressing genomic sequence fragments using SLIMGENE. J Comput Biol. 2010;18:401-13.
3. Pinho AJ, Pratas D, Garcia SP. GReEn: a tool for efficient compression of genome resequencing data. Nucleic Acids Res. 2012;40:e27.
4. Jones DC, Ruzzo WL, Peng X, Katze MG. Compression of next-generation sequencing reads aided by highly efficient de novo assembly. Nucleic Acids Res. 2012;40:e171.
5. Zhang Y, Li L, Yang Y, Yang X, He S, Zhu Z. Light-weight reference-based compression of FASTQ data. BMC Bioinformatics. 2015;16:188.
6. Bonfield JK, Mahoney MV. Compression of FASTQ and SAM format sequencing data. PLoS One. 2013;8:e59190.
7. Grumbach S, Tahi F. Compression of DNA sequences. INRIA; 1994.
8. Ziv J, Lempel A. A universal algorithm for sequential data compression. IEEE Trans Inf Theory. 1977;IT-23:337-43.
9. Grumbach S, Tahi F. A new challenge for compression algorithms: genetic sequences. Inf Process Manag. 1994;30:875-86.
10. Deorowicz S, Grabowski S. Compression of DNA sequence reads in FASTQ format. Bioinformatics. 2011;27:860-2.
11. Huffman DA. A method for the construction of minimum-redundancy codes. Proc IRE. 1952;40:1098-101.
12. Roguski L, Deorowicz S. DSRC 2: industry-oriented compression of FASTQ files. Bioinformatics. 2014;30:2213-5.
13. Hach F, Numanagic I, Alkan C, Sahinalp SC. SCALCE: boosting sequence compression algorithms using locally consistent encoding. Bioinformatics. 2012;28:3051-7.
14. Jeannot E, Knutsson B. Adaptive online data compression. In: Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing; 2002. p. 1-10.
15. Nicolae M, Pathak S, Rajasekaran S. LFQC: a lossless compression algorithm for FASTQ files. Bioinformatics. 2015;31:3276-81.


Equal contributors
Genetalks Biotech. Co., Ltd., Beijing 100000, China
School of Computer Science, National University of Defense Technology, Changsha 410000, China