Entropy Law: The Story Behind Data Compression and LLM Performance
Abstract
Data is the cornerstone of large language models (LLMs), but not all data is useful for model learning.
Carefully selected data can better elicit the capabilities of LLMs with much less computational overhead.
Most data selection methods concentrate on evaluating the quality of individual samples, while neglecting the combinatorial effects among samples.
Even if each sample is of perfect quality, their combinations may be suboptimal in teaching LLMs due to their intrinsic homogeneity or contradiction.
In this paper, we aim to uncover the underlying relationships between LLM performance and data selection.
Inspired by the information compression nature of LLMs, we uncover an “entropy law” that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively.
Through both theoretical deduction and empirical evaluation, we find that model performance is negatively correlated to the compression ratio of training data, which usually yields a lower training loss.
Based on the findings of the entropy law, we propose an efficient and universal data selection method named ZIP for training LLMs, which prioritizes data subsets exhibiting a low compression ratio.
Specifically, a multi-stage algorithm selects diverse data in a greedy manner, yielding a data subset with satisfactory diversity.
Extensive experiments have been conducted to validate the entropy law and the superiority of ZIP across different LLM backbones and alignment stages.
We also present an interesting application of entropy law that can detect potential performance risks at the beginning of model training.
Code can be found at https://github.com/USTCStarTeam/ZIP.
1 Introduction
In recent years, Large Language Models (LLMs) have gained significant attention from both academia and industry, applied in various domains, such as chatbots (Ouyang et al., 2022; Achiam et al., 2023), chemistry tools (M. Bran et al., 2024), and programming assistants (GitHub, 2020).
The great success of LLMs depends on their general intelligence obtained from a vast amount of data collected from various sources (Albalak et al., 2024; Wang et al., 2023c).
Through pretraining on trillions of tokens to master diverse knowledge and tuning on smaller instruction data to align models with human preference, LLMs can effectively utilize their knowledge to follow user instructions, do commonsense reasoning, and solve realworld problems (Zhao et al., 2023).
However, not all data are useful for teaching LLMs, especially when computational resources are limited (Albalak et al., 2024).
For example, we can better elicit the capability of LLMs by fine-tuning them on carefully curated samples rather than a large but noisy data collection (Ouyang et al., 2022; Chowdhery et al., 2023; Meta, 2020; Zhou et al., 2023).
However, selecting the proper data for LLM training is quite complicated and abstruse, since the space of data preprocessing and combination is almost unlimited.
Due to the huge computational overhead of LLM training, manual or empirical data selection based on trial-and-error feedback is rather cumbersome and even impractical.
Therefore, automatic data selection methods are necessary for LLM development under limited computational budgets.
Intuitively, highquality samples are expected to have better efficiency in teaching LLMs.
For example, the successful practice of LIMA (Zhou et al., 2023) shows that data quality can have a more powerful effect on LLM performance than data quantity.
Therefore, existing methods usually focus on quality-oriented data selection, based either on heuristic rules (Raffel et al., 2020; Rae et al., 2021; Xie et al., 2023; Chowdhery et al., 2023; Li et al., 2023) or evaluation models (Wettig et al., 2024; Chen et al., 2023; Lu et al., 2023; Liu et al., 2023; Cui et al., 2023).
Heuristic methods typically involve handcrafted rules (e.g., sentence number (Raffel et al., 2020), word count (Rae et al., 2021), length (Shen, 2024)) to evaluate data across multiple dimensions.
Model-based approaches, by contrast, rely on well-established LLMs such as GPT-4 (Achiam et al., 2023) to provide quality assessments of training samples from different views, such as direct scoring (Chen et al., 2023), task tagging (Lu et al., 2023), and pairwise scoring (Liu et al., 2023).
However, most of these approaches evaluate different data samples independently, which neglects the intricate combinatorial effects among samples.
As illustrated in Figure 1, even if each sample is of perfect quality, their combinations may still be suboptimal due to their mutual information redundancy or inconsistency.
Although the qualitybased subset is composed of all three good samples, the knowledge they encode is actually redundant and conflicting.
In contrast, another data subset composed of several relatively lower-quality but diverse samples may convey more information than the above subset in the teaching of LLMs.
Therefore, qualitybased data selection does not fully align with the goal of maximizing the knowledge mastery of LLMs.
In many recent studies, researchers have shown that the basic mechanism of autoregressive language modeling in LLMs is information compression (Delétang et al., 2023; Huang et al., 2024).
Thus, the knowledge condensed by LLMs actually depends on the effective information encoded by training data.
This intuition opens another direction of data selection, i.e., based on the effective information amount of data.
In this paper, we uncover the underlying relations between LLM performance and data homogeneity, which can be measured by various canonical lossless compression algorithms (e.g., DEFLATE in ZIP).
Through both theoretical analysis and empirical experiments, we formulate the “entropy law”, which shows that the compression ratio of training data is a decisive factor affecting model performance, if the overall quality and consistency of selected samples remain unchanged.
Motivated by the entropy law, we propose an effective and efficient data selection algorithm called ZIP to select heterogeneous data with a low compression ratio, which aims to maximize the amount of effective information for LLM learning.
Specifically, we devise a multi-stage greedy strategy to find an approximate solution that guarantees a low compression ratio without exhausting all possible combinations; it iterates continuously until we obtain a predetermined number of samples.
In each iteration, ZIP performs preliminary filtering to choose a smaller pool of candidates, and then selects, in a cascaded manner, a few samples from the reduced pool that minimize the compression ratio of the selected dataset.
By training LLMs on a collection of diverse samples that encode heterogeneous and complementary information, the capabilities of LLMs can be better elicited.
Extensive experiments on different LLM backbones at different stages of LLM alignment demonstrate the superiority of ZIP over various qualitybased baselines.
We also present an interesting application of the entropy law that can detect potential performance risks at the beginning of model training, which can effectively reduce the computational overhead in LLM development.
2 Related Works
2.1 Language Modeling and Information Compression
The relationship between language modeling and data compression has long intrigued researchers (Shannon, 1948, 1951).
Pandey (2024) identified a data-dependent scaling law that takes data’s gzip compressibility into consideration.
Besides, recent empirical studies have confirmed that language models can act as versatile data compressors (Delétang et al., 2023), and the intelligence of LLMs can be quantified by their capacity for text compression (Huang et al., 2024).
Let a text corpus be generated from an underlying distribution $\rho$. A lossless compression algorithm $\mathcal{C}$ is then expected to encode a text sequence $x_{1:n}$ into a bitstream $\mathcal{C}(x_{1:n})$ of minimal length, ensuring that $x_{1:n}$ can be perfectly recovered from $\mathcal{C}(x_{1:n})$. The expected number of bits of an optimal $\mathcal{C}$ is equal to $\mathbb{E}_{x\sim\rho}[-\log_{2}\rho(x)]=\mathbb{E}_{x\sim\rho}[-\sum_{i=1}^{n}\log_{2}\rho(x_{i}\mid x_{1:i-1})]$ (Shannon, 1948).
The underlying distribution $\rho$ is usually unknown in reality, but it can be estimated by a language model $\rho_{\text{model}}$. Then the expected number of bits of an optimal $\mathcal{C}$ can be updated:
$\mathbb{E}_{x\sim\rho}\left[-\sum_{i=1}^{n}\log_{2}\rho_{\text{model}}(x_{i}\mid x_{1:i-1})\right].$  (1)
Equation 1 is the cross-entropy loss employed in training LLMs, thereby establishing a coherent relationship between LLMs and information compression.
This foundational insight paves the way for this work.
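As a toy numeric illustration of Equation 1 (a sketch of ours, not from the paper; the distributions below are hypothetical), the expected code length under a model distribution is exactly the cross-entropy in bits:

```python
import math

# A toy next-token distribution rho and a model's estimate rho_model.
rho = {"a": 0.5, "b": 0.25, "c": 0.25}
rho_model = {"a": 0.4, "b": 0.3, "c": 0.3}

def expected_bits(p, q):
    """Expected code length when symbols drawn from p are coded with q:
    the cross-entropy H(p, q) = -sum_x p(x) * log2 q(x)."""
    return -sum(p[x] * math.log2(q[x]) for x in p)

h_true = expected_bits(rho, rho)         # entropy of rho: the optimal code length
h_model = expected_bits(rho, rho_model)  # code length achievable with the model

# Gibbs' inequality: coding with an imperfect model never beats the source
# entropy, so a better model (lower cross-entropy loss) compresses better.
assert h_model >= h_true  # here 1.529... >= 1.5
```

A language model with a lower cross-entropy loss on a corpus is therefore, by construction, a better compressor of that corpus.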
2.2 Alignment of Large Language Models
Large Language Models (LLMs) have recently gained significant attention from academia and industry.
LLM alignment, which includes supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), has emerged as a crucial technique for adapting LLMs to end tasks using natural language instructions (Zhao et al., 2023; Wang et al., 2023c).
Alignment is performed using instruction datasets consisting of multiple (Instruction, Output) pairs, which require LLMs to follow the instructions and generate corresponding outputs.
Early explorations have focused on constructing or expanding instruction datasets through methods such as crowdsourcing (Wang et al., 2022; Köpf et al., 2024), self-instruction (Taori et al., 2023; Peng et al., 2023; Wang et al., 2023b), or the combination of existing datasets (Wang et al., 2023a; Ivison et al., 2023).
LLMs fine-tuned on these datasets have demonstrated promising capabilities to adhere to instructions across various contexts and align with human expectations.
2.3 Data Selection for LLM Alignment
A growing body of research has emphasized the importance of selecting appropriate data for LLM alignment, which can prevent potential quality issues and optimize computational resource allocation.
As a prominent example, LIMA (Zhou et al., 2023) has demonstrated superior performance by carefully crafting only 1,000 high-quality samples for SFT, highlighting the crucial importance of data quality.
The current literature on selecting alignment data has focused on selecting samples according to individual sample quality, which can be categorized into heuristic methods (Shen, 2024) and model-based methods (Chen et al., 2023; Lu et al., 2023; Liu et al., 2023; Li et al., 2023, 2024; Du et al., 2023).
Heuristic methods typically employ specific criteria, such as response length (Shen, 2024), to guide data selection.
On the other hand, modelbased methods adopt various strategies to leverage the capabilities of established language models for evaluating sample quality.
For example, IFD (Li et al., 2023) measures the change in response loss when instructions are removed, and selects those with the most significant changes.
Building upon IFD, SuperFiltering (Li et al., 2024) introduces a lightweight proxy model for a more efficient calculation of the IFD score.
In addition, other model-based methods employ proprietary LLMs to assess data quality.
In a pioneering work, AlpaGasus (Chen et al., 2023) uses ChatGPT directly to assign data quality scores to samples, while #InsTag (Lu et al., 2023) proposes assigning tags to each sample using ChatGPT and evaluates sample quality based on the number of tags.
DEITA (Liu et al., 2023) uses ChatGPTgenerated data to train two Llamabased scorers, assigning complexity and quality scores to each sample, and ultimately selecting samples with the highest hybrid scores.
However, existing methods are mainly designed to pick data based on sample-wise quality measurements, which are usually weak in reflecting the overall dataset quality.
In this paper, we focus on the relation between performance and dataset quality, which can be efficiently measured by data compression metrics.
3 Entropy Law: Connecting Model Performance with Data Compression
In this section, we provide some theoretical analysis of the relations between data compression and LLM performance.
Intuitively, the correctness and diversity of the training data would affect the performance of the final model.
Meanwhile, the performance of an LLM may be suboptimal if the data have severe intrinsic conflicts or the model has poor mastery of the information encoded by the data, which can be indicated by the training loss.
Based on these assumptions, we denote the performance of an LLM as $Z$, which is expected to be influenced by the following factors:

• Data compression ratio $R$: This metric can be derived by dividing the pre-compression data size by the post-compression size, and can be computed with various off-the-shelf compression algorithms. Intuitively, a dataset with a lower compression ratio indicates a higher information density.

• Training loss $L$: Indicates whether the data are hard for the model to memorize. Given the same base model, a high training loss is usually due to noisy or inconsistent information in the dataset. In practice, the average loss over a small number of training steps in the first training epoch is sufficient to produce an indicative $L$ value while ensuring the model does not overfit the data.

• Data consistency $C$: The consistency of data is reflected by the entropy of the next-token probability given the previous contexts. Higher data consistency usually yields a lower training loss. The performance of LLMs is usually suboptimal if the dataset has poor consistency.^{2}
^{2}Assume we have two question-answer pairs $(q_{1},a_{1})$ and $(q_{2},a_{2})$. When an LLM updates on each QA pair, it learns the mutual information (MI) between the question and answer, i.e., $I(q_{1};a_{1})$ and $I(q_{2};a_{2})$. When it updates on both, it learns the joint MI $I(q_{1},q_{2};a_{1},a_{2})$. If $q_{1}$ and $q_{2}$ are not independent, we have $I(q_{1},q_{2};a_{1},a_{2})<I(q_{1};a_{1})+I(q_{2};a_{2})$; the detailed derivation can be found in Appendix A. This implies that the total knowledge learned by LLMs is narrowed if the answers to similar questions are highly inconsistent.

• Average data quality $Q$: This reflects the average sample-level quality of the data, which can be measured through various objective and subjective aspects.
Given a certain amount of training data, the model performance can be estimated by the above factors:
$Z\propto f(R,L,C,Q),$  (2) 
where $f$ is a hidden function. Given a specific base model, the scale of $L$ usually depends on $R$ and $C$, which can be formulated as:
$L\propto g(R,C).$  (3) 
$L$ is expected to be monotonic in $R$ and $C$, since a dataset with higher homogeneity or better data consistency is easier for a model to learn. Thus, we can rewrite the above formula as follows:
$C\propto g^{\prime}(R,L),$  (4) 
where $g^{\prime}$ is the inverse function of $g$ with respect to $C$. By combining the three above equations, we have:
$Z\propto f(R,L,g^{\prime}(R,L),Q)\propto h(R,L,Q),$  (5) 
where $h$ is another hidden function. If a data selection method does not substantially change the average data quality $Q$, we can approximately regard the variable $Q$ as a constant. Therefore, the final performance can be roughly formulated as follows:
$Z\propto h^{\prime}(R,L),$  (6) 
which means that the model performance is correlated with the data compression ratio and training loss. We name this relationship the “entropy law”.
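To make the role of $R$ concrete, the following is a minimal sketch of ours (not from the paper) that measures the compression ratio of a redundant versus a more diverse dataset, assuming zlib's DEFLATE as the off-the-shelf compressor; the toy corpora are hypothetical:

```python
import zlib

def compression_ratio(samples):
    """Pre-compression size divided by post-compression size (DEFLATE)."""
    raw = "\n".join(samples).encode("utf-8")
    return len(raw) / len(zlib.compress(raw, level=9))

# A homogeneous corpus: near-duplicate samples carry redundant information.
redundant = ["The capital of France is Paris."] * 20
# A heterogeneous corpus of comparable size with more varied content.
diverse = [f"Sample {i}: fact number {i * i} differs from the rest." for i in range(20)]

r_redundant = compression_ratio(redundant)
r_diverse = compression_ratio(diverse)

# The redundant set compresses far better (higher R), so it carries less
# effective information per byte; by the entropy law, training on such data
# is expected to yield weaker downstream performance.
assert r_redundant > r_diverse
```

The same scalar can be computed for any candidate training subset, which is what makes compression-based selection cheap: no model forward passes are needed.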
We can draw two deductions based on the entropy law:

• If we further regard the data consistency as a constant, the training loss is directly influenced by the compression ratio (Eq. 3). Thus, the model performance is controlled by the compression ratio: $Z$ is usually worse if the data compression ratio $R$ is higher, which will be validated by our experiments.

• Given the same compression ratio $R$, a higher training loss means a lower data consistency. Thus, the effective knowledge learned by the model may be more limited. This can be used to predict the performance of an LLM on different data with similar compression ratios and sample qualities. We will show later the application of this deduction in our practice.
Notably, the entropy law reveals a coherent connection between downstream model performance and data compression ratio, setting it apart from the previously proposed data-dependent scaling law by Pandey (2024). Building upon the entropy law, we derive a data selection algorithm in Section 4 and demonstrate its application in practical large-scale LLM development in Section 5.3.
4 ZIP: Lightweight Data Selection for LLM Alignment
Guided by the findings of the entropy law, we propose an effective and efficient method named ZIP to select data samples based on data compression ratios, which aims to maximize the amount of effective information given a limited training data budget. Although there exists a subset with the lowest compression ratio, it is impractical to find it due to the huge combination space of data samples. Thus, we propose an iterative multi-stage greedy algorithm to efficiently obtain an approximate solution with a relatively low compression ratio. In each iteration, we first use a global selection stage to choose a pool of candidate samples with low compression ratios, which aims to find samples with high information density. We then employ a coarse-grained local selection stage that retains a smaller set of samples with the lowest redundancy relative to the already selected samples. Finally, we use a fine-grained local selection stage that minimizes the similarity among the samples to be added. The above process is conducted until we obtain a sufficient amount of data. The workflow of our method is summarized in Algorithm 1, whose details are introduced as follows.
4.1 Global Selection
In general, we maintain an information redundancy state $\pi_{\mathcal{D}}$ that evaluates the “information gain” of each sample. Intuitively, data with high intra-sample information redundancy are unlikely to have good global diversity. For example, a sample with repeated patterns or echoed conversation turns usually has low educational value in LLM training. Thus, we initialize this state by calculating the sample-level compression ratio for the entire dataset $\mathcal{D}$. In each iteration, we select the $K_{1}$ samples with the lowest scores in $\pi_{\mathcal{D}}$ to form an initial candidate pool $\mathcal{D}_{K_{1}}$, which provides a good basis for subsequent local selection.
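This stage can be sketched as follows (our illustration; the function and variable names are ours, and zlib stands in for the paper's compressor):

```python
import zlib

def sample_ratio(text):
    """Sample-level compression ratio: raw size / compressed size."""
    raw = text.encode("utf-8")
    return len(raw) / len(zlib.compress(raw, level=9))

pool = [
    "z" * 200,   # degenerate repetition
    "ha " * 70,  # echoed pattern
    "Photosynthesis converts light into chemical energy stored in glucose.",
    "Dijkstra's algorithm finds shortest paths in weighted graphs.",
]
# Initialize the redundancy state pi with each sample's own ratio.
pi = {s: sample_ratio(s) for s in pool}

# Global selection: keep the K1 samples with the LOWEST ratios, i.e.
# those with the highest intra-sample information density.
K1 = 2
candidates = sorted(pool, key=lambda s: pi[s])[:K1]
# Highly repetitive samples compress extremely well and are excluded.
assert "z" * 200 not in candidates and "ha " * 70 not in candidates
```

Samples full of repeated patterns (like the first two above) have high intra-sample compression ratios and are filtered out before any pairwise comparison is needed.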
4.2 Local Coarsegrained Selection
Since global selection does not fully account for the mutual relations among samples, we further conduct local selection to pick diverse samples. To ensure good computational efficiency, we introduce a coarse-grained selection phase that narrows the candidate pool to a smaller one of $K_{2}$ samples. For each sample in $\mathcal{D}_{K_{1}}$, we first compute the compression ratio of the selected set $\mathcal{D}^{\prime}$ merged with that sample. We use this score to update the information redundancy state $\pi_{\mathcal{D}}$ to better indicate the current information gain of these samples. Based on the scores of the samples in $\mathcal{D}_{K_{1}}$, we select the $K_{2}$ samples with the lowest scores. These samples form a small subset for the final fine-grained selection, where each sample has good diversity with respect to the selected dataset $\mathcal{D}^{\prime}$.
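The scoring in this stage can be sketched as follows (our illustration with hypothetical names; each candidate is scored by the compression ratio of $\mathcal{D}^{\prime}$ merged with that candidate):

```python
import zlib

def ratio(data: bytes) -> float:
    return len(data) / len(zlib.compress(data, level=9))

selected = ["Paris is the capital of France."]  # D' so far
candidates = [
    "Paris is the capital of France, a fact worth repeating.",  # redundant with D'
    "Mount Everest is the tallest mountain above sea level.",   # novel information
]

def merged_ratio(sample):
    """Compression ratio of D' merged with one candidate sample."""
    return ratio("\n".join(selected + [sample]).encode("utf-8"))

K2 = 1
picked = sorted(candidates, key=merged_ratio)[:K2]
# The redundant candidate compresses well against D' (high ratio),
# so the novel sample wins the lowest-ratio slot.
assert picked == ["Mount Everest is the tallest mountain above sea level."]
```

Because DEFLATE finds repeated substrings across the merged text, a candidate that echoes already-selected content produces a high merged ratio and is scored down.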
4.3 Local Finegrained Selection
Although the above stage ensures that the candidate pool has distinct information from the selected set, the information redundancy among the samples within this pool is not yet measured. Thus, we aim to pick samples from this subset that are also diverse from each other. Concretely, we initialize a local selected set $\mathcal{D}_{K_{3}}$ and compute the compression ratio of the union of $\mathcal{D}_{K_{3}}$ and each sample in $\mathcal{D}_{K_{2}}$. We add the sample with the lowest compression ratio to $\mathcal{D}_{K_{3}}$ and remove it from $\mathcal{D}_{K_{2}}$. By repeating this process, we obtain a small subset that contains samples not only different from the selected set $\mathcal{D}^{\prime}$ but also distinct from each other. We conduct the three stages above until the size of $\mathcal{D}^{\prime}$ reaches a predefined data budget $m$. The entire selection process is easy to implement since it is model-free, and it can be accelerated using multiple threads. It can select data efficiently and effectively from a large candidate pool for high-quality LLM training.
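The greedy loop of this stage can be sketched as follows (our illustration under the same zlib assumption; function names and toy samples are ours):

```python
import zlib

def union_ratio(parts):
    """Compression ratio of the concatenation of a list of samples."""
    raw = "\n".join(parts).encode("utf-8")
    return len(raw) / len(zlib.compress(raw, level=9))

def fine_selection(pool_k2, k3):
    """Greedily grow a local set D_K3: at each step, add the sample whose
    inclusion keeps the union's compression ratio lowest (the sample most
    distinct from those already chosen), then remove it from the pool."""
    local, remaining = [], list(pool_k2)
    while remaining and len(local) < k3:
        best = min(remaining, key=lambda s: union_ratio(local + [s]))
        local.append(best)
        remaining.remove(best)
    return local

pool = [
    "Gravity bends light around massive objects.",
    "Gravity bends light around massive objects, as Einstein predicted.",
    "Enzymes lower the activation energy of reactions.",
]
chosen = fine_selection(pool, k3=2)
# The two near-duplicate gravity samples should not both be chosen.
assert "Enzymes lower the activation energy of reactions." in chosen
```

Whichever sample is picked first, the near-duplicate of an already-chosen sample inflates the union's compression ratio and loses to the unrelated one.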
5 Experiments
ZIP is content-agnostic and model-free, making it suitable for various stages of LLM alignment. We systematically evaluate the effectiveness of ZIP through experiments conducted in the SFT and RLHF stages, as described in Sections 5.1 and 5.2, respectively. Subsequently, Section 5.3 presents an in-depth analysis to empirically support the proposed entropy law, including a practical application guided by this law.
5.1 Data Selection for SFT
5.1.1 Setup
Data Pool & Data Selection We follow DEITA (Liu et al., 2023) to establish a large-scale data pool comprising 300K high-quality samples obtained from WizardLM (Xu et al., 2023), ShareGPT (Chiang et al., 2023), and UltraChat (Ding et al., 2023). Subsequently, various data selection techniques are employed to extract a subset of this pool for LLM instruction tuning. Notably, previous studies controlled the data budget by limiting the number of instances, whereas we control the total token count to ensure a fair allocation of the compute budget among all methods. To achieve this, we first select 10,000 samples using ZIP and calculate the corresponding token count. Then, we apply the other methods to select data until the required token count is reached.
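The token-budget matching above can be sketched as follows (our illustration; the whitespace "tokenizer" stands in for a real one, and all names are hypothetical):

```python
def select_until_token_budget(ranked_samples, token_budget, count_tokens):
    """Take samples in a method's ranked order until the cumulative token
    count reaches the shared budget, so all methods use comparable compute."""
    chosen, used = [], 0
    for s in ranked_samples:
        t = count_tokens(s)
        if used + t > token_budget:
            break  # the next sample would exceed the shared budget
        chosen.append(s)
        used += t
    return chosen, used

# Hypothetical example: whitespace word count stands in for real tokens.
samples = ["a b c", "d e", "f g h i", "j"]
chosen, used = select_until_token_budget(
    samples, token_budget=6, count_tokens=lambda s: len(s.split())
)
# "a b c" (3) + "d e" (2) fit within 6 tokens; "f g h i" (4) would not.
assert chosen == ["a b c", "d e"] and used == 5
```

Matching on tokens rather than sample count prevents length-biased methods from silently receiving more training compute.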
Training & Evaluation We fine-tune Mistral-7B (Jiang et al., 2023) and Llama-3-8B (Meta, 2020) on the selected datasets. Other training details can be found in Appendix B. For evaluation, we adopt MT-bench (Zheng et al., 2023) as our benchmark. Specifically, MT-bench is a challenging multi-turn question set with LLM judgments to evaluate model responses, which exhibits a high level of alignment with human preferences.
Baselines We select baselines from two groups. The first group includes heuristic methods: (1) Random, which randomly selects instances from the data pool to verify the fundamental effectiveness of other methods; (2) Cluster, which adopts K-means clustering based on the sample representations and selects cluster centroids; (3) Perplexity, which selects the samples with the highest training loss. The second group includes model-based methods: (1) DEITA (Liu et al., 2023), which employs ChatGPT-generated data to train a Llama-based data complexity evaluator and a quality evaluator, and selects samples with the highest hybrid scores; (2) SuperFiltering (Li et al., 2024), which assesses each sample by calculating the change in response loss upon instruction removal and introduces a lightweight proxy model to calculate the score more efficiently.
5.1.2 Results
Table 1: Comparison of data selection methods (MT-bench score, selection cost, average token length of selected samples, and average quality score).

Model | MT-bench $\uparrow$ | Cost $\downarrow$ | Avg. length | Quality
Mistral-7B-based models with SFT:
Random $\dagger$ | 6.85 | 10s | 976 | 4.08
Cluster | 6.91 | 15h | 970 | 4.05
Perplexity | 6.89 | 8h | 981 | 4.09
SuperFiltering | 6.12 | 14h | 1579 | 4.10
DEITA | 6.82 | 21h | 2048 | 4.03
ZIP $\dagger$ | 7.08 | 4.5h | 543 | 4.00
Llama-3-8B-based models with SFT:
Random $\dagger$ | 7.16 | 10s | 892 | 4.08
Cluster | 7.18 | 16h | 886 | 3.95
Perplexity | 7.09 | 9h | 895 | 3.96
SuperFiltering | 6.59 | 14h | 1481 | 3.99
DEITA | 7.11 | 21h | 2048 | 4.09
ZIP $\dagger$ | 7.28 | 4.5h | 470 | 4.00
Main comparison We compare ZIP with various data selection methods based on Mistral-7B and Llama-3-8B, and the results are presented in Table 1. ZIP outperforms other data selection approaches on all backbones, which can be attributed to ZIP’s ability to model the complex combinatorial effects among samples. Furthermore, our observations indicate that model-based data selection methods often fail to produce satisfactory outcomes when a fixed token budget is given. This is because the sample-level evaluations are not updated correspondingly after some samples are selected, leading to biased evaluations of the remaining samples. Additionally, some of these methods adopt strategies to enhance data diversity, such as DEITA, which controls the representation distances of selected samples. However, these strategies only provide a rough assessment of the combinatorial effects within the representation space, since semantic distances do not necessarily reflect information redundancy.
Selection bias in sample length across different strategies We also provide the average length of tokenized samples in Table 1. The average token length of Random provides an estimate for the entire data pool, which is used to analyze the other methods. From the table, we can observe that Cluster and Perplexity exhibit selection preferences similar to Random. Additionally, DEITA and SuperFiltering predominantly select lengthy data samples. This bias may stem from LLMs’ inclination toward generating longer responses (Saito et al., 2023). However, given the limited budget of selected tokens, choosing excessively lengthy data will reduce the information density and degrade the capabilities of models trained on such data. In contrast, ZIP tends to select shorter samples. Furthermore, we plot the token length distributions of these methods in Figure 2 and Figure 6. Consistent with the previous results, we observe similar distributions for Random, Cluster, and Perplexity. The token length distributions of DEITA and SuperFiltering are severely skewed, deviating greatly from the original data distribution. In contrast to these model-based approaches, ZIP exhibits no bias toward selecting lengthy samples.
Cost comparison of different strategies We provide a detailed cost analysis of each method in Table 1. Except for the Random method, ZIP requires the least time to complete the data selection process, demonstrating greater efficiency than the other methods. Notably, ZIP’s computations are entirely executed on CPUs, resulting in significant cost savings. Furthermore, ZIP is independent of the proprietary LLMs used by DEITA and of the proxy models employed by Cluster, Perplexity, and SuperFiltering. This model-free characteristic endows ZIP with notable efficiency and versatility.
Selected data quality of different strategies We follow AlpaGasus (Chen et al., 2023) to evaluate the quality of each data sample in the selected datasets by prompting ChatGPT, with quality scores ranging from 0 to 5. The quality score of a multi-turn sample is the average score of its turns. The results are presented in Table 1. Surprisingly, the quality scores of the selected datasets are highly similar, even with significant differences in selection mechanisms. This may suggest that the average quality distribution is relatively uniform in the original data pool. Notably, even SOTA model-based methods like DEITA (Liu et al., 2023) and SuperFiltering (Li et al., 2024) select data with similar quality scores, potentially contradicting their original conclusions. We posit that this discrepancy stems from the setting of the data budget, which was controlled by the number of samples in prior studies. Considering the selection bias discussed above, these methods tend to select lengthy samples, resulting in a significantly higher token count compared with baselines. For instance, under this setting, data selected by DEITA possess 2.7 times the number of tokens compared to ZIP. However, we argue it is fairer to control the data budget by token count, since it guarantees a similar compute budget among all methods.^{3} ^{3}In practical implementation, the training steps of all methods are almost equal by employing the packing technique detailed in Axolotl.
5.2 Data Selection for RLHF
Table 2: Results of different data selection strategies in the RLHF stage.

| Model | MT-bench $\uparrow$ | Cost | Avg. length |
| --- | --- | --- | --- |
| Base | 7.18 | N/A | N/A |
| Random $\dagger$ | 7.33 | 5s | 464 |
| Score | 7.30 | N/A | 489 |
| ZIP $\dagger$ | 7.42 | 1.1h | 357 |
5.2.1 Setup
Data Pool & Data Selection The data pool used for preference alignment is a cleaned version of UltraFeedback (Cui et al., 2023; Bartolome et al., 2023), which consists of around 60k samples, each in the form of a "chosen-rejected" pair. As in the SFT stage, we ensure each data selection method selects data with an approximately equal token count. Since a "chosen-rejected" pair encompasses two data points, we select 5,000 pairs with ZIP and then apply the other methods to select data under the corresponding token budget.
Training & Evaluation Building upon the model previously fine-tuned with SFT, we further refine it using RLHF. In particular, we employ Kahneman-Tversky Optimization (KTO) (Ethayarajh et al., 2024), a novel method that shows promising potential for preference alignment. Additional training details can be found in Appendix B. For evaluation, we continue to use MT-bench (Zheng et al., 2023) as our benchmark to assess the capabilities of LLMs fine-tuned with data selected by the various strategies.
Baselines We compare ZIP with the following baselines: (1) Random, which randomly samples "chosen-rejected" pairs from the data pool. (2) Score, which selects the "chosen-rejected" pairs with the highest chosen scores. These scores are obtained through LLM evaluation of response quality (Cui et al., 2023; Bartolome et al., 2023).
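The Score baseline reduces to a top-k sort over LLM-assigned quality scores. A minimal sketch (field names such as `chosen_score` are illustrative, not from the released datasets):

```python
def score_select(pairs: list[dict], n_pairs: int) -> list[dict]:
    """Keep the n_pairs "chosen-rejected" pairs whose chosen response
    received the highest LLM-assigned quality score."""
    return sorted(pairs, key=lambda p: p["chosen_score"], reverse=True)[:n_pairs]

pool = [{"chosen": f"resp {i}", "rejected": "bad", "chosen_score": s}
        for i, s in enumerate([3.0, 9.5, 7.2, 8.8, 1.1])]
top2 = score_select(pool, 2)
assert [p["chosen_score"] for p in top2] == [9.5, 8.8]
```

Note that this scoring is per-pair and ignores how the selected pairs relate to each other, which is exactly the combinatorial effect ZIP is designed to capture.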
5.2.2 Main results
Table 2 presents the results of different data selection strategies on the preference alignment stage of LLMs. Similar to the SFT stage, models aligned with data selected by ZIP yield the best downstream performance, demonstrating the necessity of modeling combinatorial effects. Besides, we find that Score is on par with Random, even though its selection process is far more expensive. This is unsurprising, as Score does not consider combinatorial effects, which may limit the amount of knowledge in the selected dataset.
5.3 Empirical Validation of Entropy Law
In this section, we aim to validate the proposed entropy law. Specifically, we plot the model performance of Mistral-7B and Llama-3-8B against the data compression ratio and the training loss in Figures 3 and 4, respectively. We also plot entropy-law curves by fitting these results. From the two figures, we draw the following observations:
Relationship between model performance, data compression ratio, and training loss In Figure 3(a) and Figure 4(a), LLMs trained on data with a lower compression ratio typically exhibit enhanced performance. Since the learning process of LLMs is highly relevant to information compression, we can regard LLMs as data compressors. Data with a lower compression ratio then carries a higher amount of knowledge, which is more beneficial to the compressors. Besides, a lower compression ratio usually corresponds to a higher training loss, as illustrated in Figures 3(b) and 4(b). This is because data resistant to compression carries more knowledge, posing a greater challenge for LLMs to absorb the encapsulated knowledge.
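The compression ratio driving this analysis can be approximated with an off-the-shelf compressor. A minimal sketch, assuming the common convention ratio = original size / compressed size (so a lower ratio means the data resists compression, i.e. is less redundant); the paper's exact measurement may differ:

```python
import zlib

def compression_ratio(texts: list[str]) -> float:
    """Original byte count divided by zlib-compressed byte count of the
    concatenated corpus. Lower ratio = harder to compress = more information."""
    raw = "\n".join(texts).encode("utf-8")
    return len(raw) / len(zlib.compress(raw, 9))

# Redundant data compresses well (high ratio); diverse data does not (low ratio).
redundant = ["the cat sat on the mat"] * 100
diverse = [f"sample {i}: value={i * i}" for i in range(100)]
assert compression_ratio(redundant) > compression_ratio(diverse)
```

Under this convention, the figure's trend reads naturally: the diverse, low-ratio corpus carries more knowledge per token, which both helps the trained model and raises the first-epoch training loss.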
Model performance interpretation with entropy law Considering the three methods with comparable compression ratios and training loss, namely Random, Cluster, and Perplexity, the corresponding model performances are close. This phenomenon may seem counterintuitive, given the distinct criteria used for data selection. However, it aligns with the predictions of our proposed entropy law: when the average data quality, training loss, and data compression ratio are similar, the model performance is expected to be comparable as well. Thus, the entropy law has the potential to serve as a criterion for predicting the model’s performance on data, thereby guiding the training of LLMs.
Practical application of entropy law Incremental version updates of training data are a common setting in practical LLM development. Usually, the training data amount remains relatively stable, with only a minor portion undergoing modification. We conduct incremental training-data update experiments in real scenarios, with results depicted in Figure 5. Due to confidentiality, only the relative order of the results is provided. Guided by the entropy law, assuming the data quality $Q$ does not significantly decay after each incremental update, we can expect a model performance improvement with a decreased data compression ratio. This expectation is supported by the results from data versions $x_{1}$ to $x_{4}$ in Figure 5, whose compression ratios decrease after each incremental update. However, data version $x_{5}$ exhibits an abnormal increase in the loss and data compression ratio, which serves as an early indicator of potential model performance degradation due to a decline in training data consistency. This prediction is confirmed by subsequent post-training model performance evaluations, as illustrated in Figure 5. Thus, the entropy law can be utilized as a guideline for LLM training to identify potential risks of experimental failure without training the model on the full dataset until convergence, which is particularly significant given the substantial costs associated with training an LLM.
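The early-warning check described here can be sketched as a simple monitor over data versions. This is an illustrative thresholding rule, not the paper's exact procedure, and it assumes the ratio = original/compressed convention:

```python
import zlib

def corpus_ratio(texts: list[str]) -> float:
    """Original / zlib-compressed size; lower = less redundant."""
    raw = "\n".join(texts).encode("utf-8")
    return len(raw) / len(zlib.compress(raw, 9))

def flag_risky_update(prev_ratio: float, new_texts: list[str],
                      tolerance: float = 0.0) -> bool:
    """Flag a new data version whose compression ratio rises above the
    previous version's: under the entropy law, an abnormal ratio increase
    is an early signal of degraded data consistency, before any training."""
    return corpus_ratio(new_texts) > prev_ratio + tolerance

# A version update that injects near-duplicate samples raises the ratio
# and gets flagged, analogous to version x5 in Figure 5.
v_ok = [f"instruction {i}: do task {i % 7}" for i in range(200)]
v_bad = v_ok[:100] + ["repeat this exact line"] * 100
assert flag_risky_update(corpus_ratio(v_ok), v_bad)
```

The appeal of this check is its cost: it runs on CPUs in seconds, whereas confirming the degradation by training to convergence costs GPU-days.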
6 Conclusion
In this paper, we delve deeply into the data selection problem from a data compression perspective. Inspired by the insight that language modeling performs information compression, we propose an entropy law delineating the coherent relationship between model performance, data compression ratio, and training loss. Theoretically guided by the entropy law, we propose a new data selection algorithm, ZIP, which selects data with nearly the lowest compression ratio and is model-free and content-agnostic, rendering it significantly lightweight and versatile. Experimental results demonstrate the effectiveness and efficiency of ZIP across various LLM backbones during the SFT and RLHF stages. Further in-depth analysis provides empirical evidence for the entropy law, which could serve as a criterion for predicting LLM performance on specific data.
References
 Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023).
 Albalak et al. (2024) Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, and William Yang Wang. 2024. A Survey on Data Selection for Language Models. arXiv:2402.16827 [cs.CL]
 Bartolome et al. (2023) Alvaro Bartolome, Gabriel Martin, and Daniel Vila. 2023. Notus. https://github.com/argilla-io/notus.
 Chen et al. (2023) Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, et al. 2023. Alpagasus: Training a better alpaca with fewer data. arXiv preprint arXiv:2307.08701 (2023).
 Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. https://vicuna.lmsys.org (accessed 14 April 2023).
 Chowdhery et al. (2023) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research 24, 240 (2023), 1–113.
 Cui et al. (2023) Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. UltraFeedback: Boosting language models with high-quality feedback. arXiv preprint arXiv:2310.01377 (2023).
 Delétang et al. (2023) Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. 2023. Language Modeling Is Compression. CoRR abs/2309.10668 (2023).
 Ding et al. (2023) Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. 2023. Enhancing Chat Language Models by Scaling High-quality Instructional Conversations. In EMNLP. Association for Computational Linguistics, 3029–3051.
 Du et al. (2023) Qianlong Du, Chengqing Zong, and Jiajun Zhang. 2023. MoDS: Model-oriented data selection for instruction tuning. arXiv preprint arXiv:2311.15653 (2023).
 Ethayarajh et al. (2024) Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. KTO: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306 (2024).
 GitHub (2020) GitHub. 2020. GitHub Copilot. https://github.com/features/copilot/
 Huang et al. (2024) Yuzhen Huang, Jinghan Zhang, Zifei Shan, and Junxian He. 2024. Compression Represents Intelligence Linearly. arXiv preprint arXiv:2404.09937 (2024).
 Ivison et al. (2023) Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew E. Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. CoRR abs/2311.10702 (2023).
 Jiang et al. (2023) Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. CoRR abs/2310.06825 (2023).
 Köpf et al. (2024) Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richárd Nagyfi, et al. 2024. OpenAssistant Conversations: Democratizing large language model alignment. Advances in Neural Information Processing Systems 36 (2024).
 Li et al. (2024) Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou. 2024. Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning. CoRR abs/2402.00530 (2024).
 Li et al. (2023) Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, and Jing Xiao. 2023. From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning. CoRR abs/2308.12032 (2023).
 Liu et al. (2023) Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. 2023. What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning. CoRR abs/2312.15685 (2023).
 Lu et al. (2023) Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models. In The Twelfth International Conference on Learning Representations.
 M. Bran et al. (2024) Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. 2024. Augmenting large language models with chemistry tools. Nature Machine Intelligence (2024), 1–11.
 Meta (2024) Meta. 2024. Llama 3. https://ai.meta.com/blog/meta-llama-3/
 Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems 35 (2022), 27730–27744.
 Pandey (2024) Rohan Pandey. 2024. gzip Predicts Data-dependent Scaling Laws. arXiv preprint arXiv:2405.16684 (2024).
 Peng et al. (2023) Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction Tuning with GPT-4. CoRR abs/2304.03277 (2023).
 Rae et al. (2021) Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446 (2021).
 Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research 21, 140 (2020), 1–67.
 Saito et al. (2023) Keita Saito, Akifumi Wachi, Koki Wataoka, and Youhei Akimoto. 2023. Verbosity bias in preference labeling by large language models. arXiv preprint arXiv:2310.10076 (2023).
 Shannon (1948) Claude E. Shannon. 1948. A mathematical theory of communication. Bell Syst. Tech. J. 27, 3 (1948), 379–423.
 Shannon (1951) Claude E Shannon. 1951. Prediction and entropy of printed English. Bell system technical journal 30, 1 (1951), 50–64.
 Shen (2024) Ming Shen. 2024. Rethinking Data Selection for Supervised Fine-Tuning. CoRR abs/2402.06094 (2024).
 Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model.
 Wang et al. (2023a) Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023a. How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. In NeurIPS.
 Wang et al. (2023b) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-Instruct: Aligning Language Models with Self-Generated Instructions. In ACL (1). Association for Computational Linguistics, 13484–13508.
 Wang et al. (2022) Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. arXiv preprint arXiv:2204.07705 (2022).
 Wang et al. (2023c) Zige Wang, Wanjun Zhong, Yufei Wang, Qi Zhu, Fei Mi, Baojun Wang, Lifeng Shang, Xin Jiang, and Qun Liu. 2023c. Data management for large language models: A survey. arXiv preprint arXiv:2312.01700 (2023).
 Wettig et al. (2024) Alexander Wettig, Aatmik Gupta, Saumya Malik, and Danqi Chen. 2024. QuRating: Selecting High-Quality Data for Training Language Models. arXiv preprint arXiv:2402.09739 (2024).
 Xie et al. (2023) Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy S Liang. 2023. Data selection for language models via importance resampling. Advances in Neural Information Processing Systems 36 (2023), 34201–34227.
 Xu et al. (2023) Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. WizardLM: Empowering Large Language Models to Follow Complex Instructions. CoRR abs/2304.12244 (2023).
 Zhao et al. (2023) Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A Survey of Large Language Models. CoRR abs/2303.18223 (2023).
 Zheng et al. (2023) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. In NeurIPS.
 Zhou et al. (2023) Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. LIMA: Less Is More for Alignment. In NeurIPS.
Appendix A Derivations of joint mutual information of two QA pairs
$$\begin{aligned}
I(q_{1}q_{2};a_{1}a_{2}) &= H(a_{1}a_{2}) - H(a_{1}a_{2}\mid q_{1}q_{2}) \\
&= H(a_{1}a_{2}) - H(a_{1}\mid q_{1}q_{2}) - H(a_{2}\mid a_{1}q_{1}q_{2}) \\
&= H(a_{1}a_{2}) - H(a_{1}\mid q_{1}) - H(a_{2}\mid q_{2}) \\
&\leq H(a_{1}) + H(a_{2}) - H(a_{1}\mid q_{1}) - H(a_{2}\mid q_{2}) \\
&= I(q_{1};a_{1}) + I(q_{2};a_{2}).
\end{aligned}\tag{7}$$
The equality is achieved when $a_{1}$ is independent of $a_{2}$ (similarly, $q_{1}$ needs to be independent of $q_{2}$).
Appendix B Training Details
Platform All experiments were conducted on a platform with 64 Intel Xeon Gold 6326 CPU cores @ 2.90 GHz, two mainstream high-performance GPUs, and 500 GB of memory. The training code is based on the popular open-source framework Axolotl (https://github.com/OpenAccess-AI-Collective/axolotl).
Data preprocessing To format the multi-turn conversation data, we adopt the Vicuna-style template for Mistral-7B and the Llama-3 template for Llama-3-8B. Samples longer than the maximum input sequence length are truncated. Besides, the data is packed to speed up training for SFT.
Hyperparameters For ZIP, the selection numbers $K_{1}$, $K_{2}$, and $K_{3}$ are set to 10000, 200, and 100, respectively. For SFT, we share the following hyperparameters across all backbones: the training batch size is 128, the number of training epochs is 4, the input sequence length is 2048, and the warmup ratio is 0.1. We adopt a different learning rate for each backbone: 4e-6 for Mistral-7B and 1e-5 for Llama-3-8B. For RLHF, the learning rate for KTO is set to 1e-6, and the batch size is set to 128.
Appendix C Token length distribution of more backbones
The token length distribution of the data selected for Llama-3-8B is depicted in Figure 6 and is similar to that of Mistral-7B.
Appendix D Hyperparameter sensitivity
ZIP involves three hyperparameters $K_{1}$, $K_{2}$, and $K_{3}$, introduced for improved efficiency. We investigate the impact of these hyperparameters on model performance, with results depicted in Figure 7.
Perceived sample number in global selection $K_{1}$ $K_{1}$ decides the number of samples to be updated in the global selection stage. We vary $K_{1}$ over {200, 1000, 10000, 20000}, and the results are presented in Figure 7(a). Model performance exhibits an increasing trend as $K_{1}$ increases. When a small $K_{1}$ is specified, ZIP is exposed to only a limited set of samples, degenerating into a variant that consistently selects samples based on individual compression ratios and neglects the modeling of combinatorial effects. Furthermore, the compression ratio associated with the currently selected dataset typically increases with each update, whereas the compression ratios of other samples remain unchanged. Consequently, a very large $K_{1}$ may result in the compression ratios of un-updated samples being underestimated, leading to the selection of inferior samples. As a result, model performance degrades when $K_{1}$ is set to 20,000.
Data pool size of local selection $K_{2}$ $K_{2}$ decides the number of samples selected from the previous $K_{1}$ samples. We vary $K_{2}$ over {100, 200, 500, 1000}, and the results are presented in Figure 7(b). Model performance increases with $K_{2}$, which aligns with intuition since the algorithm can consider the combinatorial effects of more samples. But once $K_{2}$ exceeds a threshold, performance saturates, indicating similar local selection results even with an increased local data budget.
Data budget of local selection $K_{3}$ $K_{3}$ decides the number of samples selected from the previous $K_{2}$ samples. We vary $K_{3}$ over {50, 100, 150, 200}, and the results are presented in Figure 7(c). The results exhibit a trend similar to that of $K_{1}$, yet the underlying causes are inverse. A large $K_{3}$ makes ZIP degenerate into a trivial variant that consistently selects samples based on individual compression ratios. On the other hand, a small $K_{3}$ leads to more frequent compression-ratio updates, which can also lead to underestimated compression ratios for some inferior samples.
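The staged selection described above can be illustrated with a heavily simplified two-stage sketch. This is a hypothetical re-implementation in the spirit of ZIP, not the released algorithm (which maintains cached compression ratios and a third, fine-grained stage; see the linked repository); `csize` uses zlib-compressed size as a cheap entropy proxy:

```python
import zlib

def csize(text: str) -> int:
    """zlib-compressed size of a text (a cheap entropy proxy)."""
    return len(zlib.compress(text.encode("utf-8")))

def marginal_gain(selected_blob: str, candidate: str) -> int:
    """Extra compressed bytes the candidate adds on top of the current
    selection. Redundant candidates add little; diverse ones add a lot."""
    return csize(selected_blob + "\n" + candidate) - csize(selected_blob)

def greedy_zip_sketch(pool: list[str], budget: int, k1: int = 50, k3: int = 5):
    """Each round shortlists k1 candidates by individual compressed size
    (global stage), then greedily adds up to k3 of them by marginal
    compression gain against the selected set (local stage)."""
    selected, remaining = [], list(pool)
    while remaining and len(selected) < budget:
        # Global stage: shortlist candidates that compress worst on their own.
        shortlist = sorted(remaining, key=csize, reverse=True)[:k1]
        blob = "\n".join(selected)
        # Local stage: greedy picks by marginal gain, capturing combinatorial
        # effects that per-sample scoring misses.
        for _ in range(min(k3, budget - len(selected))):
            best = max(shortlist, key=lambda c: marginal_gain(blob, c))
            selected.append(best)
            shortlist.remove(best)
            remaining.remove(best)
            blob = "\n".join(selected)
            if not shortlist:
                break
    return selected

pool = ["duplicate line"] * 20 + [f"unique fact #{i}: {i**2}" for i in range(20)]
picked = greedy_zip_sketch(pool, budget=10)
# Near-duplicates add almost no compressed bytes, so the greedy pass avoids them.
assert sum(p == "duplicate line" for p in picked) <= 1
```

The hyperparameter trade-offs discussed above appear directly in this sketch: a small `k1` starves the local stage of candidates, while a large `k3` defers the recomputation of marginal gains, drifting toward per-sample scoring.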