Best Subset Selection via a Modern Optimization Lens
((This is a revised version from May 2015. The first version was submitted for publication in June 2014.))
Abstract
In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing
200 billion factor speedup in solving Mixed Integer Optimization (MIO) problems.
We present a MIO approach for solving the classical best subset selection problem of choosing $k$ out of $p$ features in linear regression given $n$ observations.
We develop a discrete extension of modern first order continuous optimization methods to find high quality feasible solutions that we use as warm starts
to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if
we terminate the algorithm early, (b) can
accommodate side constraints on the coefficients of the linear regression and (c)
extends to finding best subset solutions for the least absolute deviation loss function.
Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with $n$ in the 1000s and $p$ in the 100s in minutes
to provable optimality,
and finds near optimal solutions for $n$ in the 100s and $p$ in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.
1 Introduction
We consider the linear regression model with response vector $y \in \mathbb{R}^n$,
model matrix $X = [x_1, \ldots, x_p] \in \mathbb{R}^{n \times p}$, regression coefficients $\beta \in \mathbb{R}^p$
and errors $\varepsilon \in \mathbb{R}^n$:

$$y = X\beta + \varepsilon.$$
We will assume that the columns of $X$ have been standardized to have zero means and unit $\ell_2$-norm.
In many important classical and modern statistical applications, it is desirable to obtain a parsimonious fit to the data by finding the best $k$-feature
fit to the response $y$.
Especially in the high-dimensional regime with $p \gg n$, in order to conduct statistically meaningful inference, it is desirable to assume that the true regression coefficient $\beta$ is sparse or may be well approximated by a sparse vector. Quite naturally,
the last few decades have seen a flurry of activity in estimating sparse linear models with good explanatory power.
Central to this statistical task lies the best subset problem [40] with subset size $k$, which is given by the following optimization problem:
$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k, \qquad (1)$$
where the $\ell_0$ (pseudo)norm of a vector $\beta$ counts the number of nonzeros in $\beta$ and is given by $\|\beta\|_0 = \sum_{i=1}^{p} 1(\beta_i \neq 0)$,
where $1(\cdot)$ denotes the indicator function.
The cardinality constraint makes Problem (1) NP-hard [41].
Indeed, state-of-the-art algorithms to solve Problem (1), as implemented in
popular statistical packages, like leaps in R, do not scale to problem sizes larger than $p = 30$.
For this reason, it is not surprising that the best subset problem has been widely dismissed as intractable
by the greater statistical community.
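Though it does not scale beyond very small $p$, Problem (1) is simple to state as code. The following sketch (our illustration, not part of the original paper or the leaps package) solves (1) by complete enumeration over all supports of size $k$, exactly the exponential-cost approach whose limits motivate the MIO formulation developed below:

```python
import itertools
import numpy as np

def best_subset_enumeration(X, y, k):
    """Solve min_b ||y - X b||_2^2  s.t.  ||b||_0 <= k  by brute force
    over all C(p, k) supports of size k.  Returns the (unhalved)
    residual sum of squares, the best support, and its coefficients.
    Only feasible for very small p; cost grows exponentially in p."""
    n, p = X.shape
    best = (np.inf, None, None)
    for support in itertools.combinations(range(p), k):
        cols = X[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        rss = float(np.sum((y - cols @ coef) ** 2))
        if rss < best[0]:
            best = (rss, support, coef)
    return best
```

For instance, with $p = 20$ and $k = 10$ this already requires fitting 184,756 least squares problems, which is why leaps-style branch-and-bound, and ultimately MIO, are needed.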
In this paper we address Problem (1) using modern optimization methods, specifically mixed integer optimization (MIO) and a discrete extension of
first order continuous optimization methods. Using a wide variety of synthetic and real datasets, we demonstrate
that our approach
solves problems with $n$ in the 1000s and $p$ in the 100s in minutes
to provable optimality,
and finds near optimal solutions for $n$ in the 100s and $p$ in the 1000s in minutes. To the best of our knowledge, this is the first time that MIO has been demonstrated to be a tractable solution method for Problem (1).
We note that we use the term tractability not to mean the usual polynomial solvability for problems, but rather the ability to solve problems of realistic size in times that are appropriate
for the applications we consider.
As there is a vast literature on the best subset problem, we next give a brief and selective overview of related approaches for the problem.
Brief Context and Background
To overcome the computational difficulties of the best subset problem,
computationally tractable convex optimization based methods like Lasso [49, 17]
have been proposed as a convex surrogate for Problem (1).
For the linear regression problem, the Lagrangian form of Lasso solves
$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1, \qquad (2)$$
where
the $\ell_1$ penalty on $\beta$, i.e., $\|\beta\|_1 = \sum_i |\beta_i|$,
shrinks the coefficients towards zero and
naturally produces a sparse solution by setting many coefficients to be exactly zero.
There has been a substantial amount of impressive work on
Lasso [23, 15, 5, 55, 32, 59, 19, 35, 39, 53, 50] in terms of algorithms and
understanding of its theoretical properties—see for example the excellent books or surveys [11, 34, 50] and the references therein.
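To make concrete how Problem (2) is typically attacked, the following sketch (ours; production Lasso solvers are far more refined) runs cyclic coordinate descent with soft-thresholding, assuming, as in the text, that the columns of $X$ have unit $\ell_2$-norm:

```python
import numpy as np

def soft_threshold(t, lam):
    """Proximal map of lam*|.|: shrink t towards zero by lam."""
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=500):
    """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1,
    assuming each column of X has unit l2-norm (as in the text)."""
    p = X.shape[1]
    beta = np.zeros(p)
    r = y.astype(float).copy()           # residual y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r + beta[j]  # univariate least squares fit
            new = soft_threshold(rho, lam)
            r += X[:, j] * (beta[j] - new)  # keep residual in sync
            beta[j] = new
    return beta
```

Each coordinate update is exact because the unit-norm assumption makes the curvature of the univariate subproblem equal to one; the $\lambda$-dependent shrinkage applied to every active coefficient is precisely the source of the estimation bias discussed below.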
Indeed, Lasso enjoys several attractive statistical properties and has drawn a significant amount of
attention from the statistics community as well as other closely related fields. Under various conditions on the model matrix $X$ and on $n, p$,
it can be shown that Lasso delivers a sparse model
with good predictive performance [11, 34].
In order to perform
exact variable selection, much stronger assumptions are required [11].
Sufficient conditions under which Lasso gives a sparse model with good predictive performance
are the restricted eigenvalue conditions and compatibility conditions [11]. These involve statements about
the range of the spectrum of sub-matrices of $X$ and are difficult to
verify for a given data-matrix $X$.
An important reason behind the popularity of Lasso is its computational feasibility and scalability to practical sized problems.
Problem (2) is a convex quadratic optimization problem and
there are several efficient solvers for it, see for example [44, 23, 29].
In spite of its favorable statistical properties, Lasso has several shortcomings.
In the presence of noise and correlated variables, in order to deliver a model with good predictive accuracy,
Lasso brings in a large number of nonzero coefficients (all of which are shrunk towards zero) including noise variables.
Lasso leads to biased
regression coefficient estimates, since
the $\ell_1$-norm penalizes the large coefficients more severely than the smaller coefficients.
In contrast, if the
best subset selection procedure decides to include a variable in the model, it brings it in without any shrinkage thereby
draining the effect of its correlated surrogates.
Upon increasing the degree of regularization, Lasso sets more coefficients to zero, but in the process ends up
leaving out true predictors from the active set.
Thus, as soon as certain sufficient regularity conditions on the data are violated,
Lasso becomes suboptimal as (a) a variable selector and (b) in terms of delivering a model with good predictive performance.
The shortcomings of Lasso are also known in the statistics literature.
In fact, there is
a significant gap between what can be achieved via best subset selection and Lasso: this is supported by empirical (for small problem sizes, i.e., $p \le 30$)
and theoretical evidence, see for example, [46, 58, 38, 31, 56, 48] and the references therein.
Some discussion is also presented herein, in Section 4.
To address these shortcomings,
non-convex penalized regression is often used to “bridge” the gap between the convex $\ell_1$ penalty and the
combinatorial $\ell_0$ penalty [38, 27, 24, 54, 55, 28, 61, 62, 57, 13].
Written in Lagrangian form, this gives rise to continuous non-convex
optimization problems of the form:
$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 + \sum_{i=1}^{p} p(|\beta_i|; \gamma; \lambda), \qquad (3)$$
where $p(|\beta_i|; \gamma; \lambda)$ is a non-convex function in $\beta_i$ with $\lambda$ and $\gamma$ denoting the degree of
regularization and non-convexity, respectively.
Typical examples of non-convex penalties include the minimax concave penalty (MCP), the smoothly clipped absolute deviation (SCAD), and
$\ell_\gamma$ penalties (see for example, [27, 38, 62, 24]).
There is strong statistical evidence indicating the usefulness of estimators obtained as minimizers of non-convex penalized problems (3)
over Lasso; see for example [56, 36, 54, 25, 52, 37, 60, 26].
In a recent paper, [60] discuss the usefulness of non-convex penalties over convex penalties (like Lasso) in identifying important covariates, leading
to efficient estimation strategies in high dimensions. They describe interesting connections between $\ell_0$ regularized least squares and
least squares with the hard thresholding penalty, and in the process develop comprehensive global
properties of hard thresholding regularization in terms of various metrics. [26] establish asymptotic equivalence of a wide class of regularization
methods in high dimensions with comprehensive sampling properties on both global and computable solutions.
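To make the family (3) concrete, here is the MCP penalty in code (a sketch based on the standard definition of MCP, with $\lambda$ the regularization level and $\gamma > 1$ the non-convexity parameter; it is not taken from the paper itself):

```python
def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty p(|t|; gamma; lam) for one coefficient.

    Rises like the l1 penalty lam*|t| near zero, then flattens out at
    the constant gamma*lam**2/2 once |t| exceeds gamma*lam, so large
    coefficients incur no additional shrinkage pressure.
    """
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return 0.5 * gamma * lam * lam
```

As $\gamma \to \infty$ the penalty approaches $\lambda |t|$ (Lasso-like), while small $\gamma$ pushes it towards a capped, $\ell_0$-like penalty, which is exactly the "bridge" role described above.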
Problem (3) mainly leads to a family of continuous and non-convex optimization problems. Various effective
nonlinear optimization based methods (see for example [62, 24, 13, 36, 54, 38] and the references therein)
have been proposed in the literature to obtain good local minimizers to Problem (3). In particular [38] proposes Sparsenet, a
coordinate-descent procedure to trace out a surface of local minimizers for Problem (3) for the MCP penalty using effective
warm start procedures.
None of the existing approaches for solving Problem (3), however, come
with guarantees of how close the solutions are to the global minimum of Problem (3).
The Lagrangian version of (1), given by

$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 + \lambda \|\beta\|_0, \qquad (4)$$
may be seen as a special case of (3).
Note that, due to non-convexity, problems (4) and (1) are not equivalent.
Problem (1) allows one to control the exact level of sparsity via the choice of $k$, unlike (4), where there is no clear
correspondence between $\lambda$ and $k$.
Problem (4) is a discrete optimization problem, unlike the continuous optimization problems (3) arising
from continuous non-convex penalties.
Insightful statistical properties of Problem (4) have been explored from a theoretical viewpoint in [56, 31, 32, 48].
[48] points out that (1) is preferable over (4) in terms of superior statistical properties of the
resulting estimator.
The aforementioned papers, however, do not discuss methods to obtain provably optimal solutions to problems (4) or (1),
and to the best of our knowledge, computing optimal solutions to problems (4) and (1)
is deemed intractable.
Our Approach
In this paper, we propose a novel framework via which the best subset selection problem can be solved to optimality or near optimality
in problems of practical interest within a reasonable time frame.
At the core of our proposal is a computationally tractable framework
that brings to bear the power of modern discrete optimization methods: discrete first order methods motivated by first order methods in
convex optimization [45] and mixed integer optimization (MIO), see [4].
We do not guarantee polynomial time solution times as these do not exist for the best subset problem unless P=NP. Rather, our view of computational tractability is the ability of a method to solve problems of practical interest in times that are appropriate for the application addressed.
An advantage of our approach is that it adapts to variants of the best subset
regression problem of the form:

$$\min_{\beta} \; f(y - X\beta) \quad \text{subject to} \quad \|\beta\|_0 \le k, \quad A\beta \le b,$$

where $A\beta \le b$ represents polyhedral constraints and $f(y - X\beta)$ refers to a least absolute deviation or
the least squares loss function on the residuals $y - X\beta$.
Existing approaches in the Mathematical Optimization Literature
In a seminal paper [30], the authors describe a leaps and bounds procedure for computing global solutions to Problem (1)
(for the classical case $n > p$) which can be achieved with computational effort significantly less than complete enumeration.
leaps, a state-of-the-art R package, uses this principle to perform best subset selection for problems with $p$ up to about 30.
[3] proposed a tailored branch-and-bound scheme that can be applied to
Problem (1) using ideas from [30] and techniques in
quadratic optimization, extending and enhancing the proposal of [6].
The proposal of [3] concentrates on obtaining high quality upper bounds for Problem (1)
and is less scalable than the methods presented in this paper.
Contributions
We summarize our contributions in this paper below:
1. We use MIO to find a provably optimal solution for the best subset problem. Our approach has the appealing characteristic that if we terminate the algorithm early, we obtain a solution with a guarantee on its suboptimality. Furthermore, our framework can accommodate side constraints on $\beta$ and also extends to finding best subset solutions for the least absolute deviation loss function.

2. We introduce a general algorithmic framework based on a discrete extension of modern first order continuous optimization methods that provide near-optimal solutions for the best subset problem. The MIO algorithm significantly benefits from solutions obtained by the first order methods and problem specific information that can be computed in a data-driven fashion.

3. We report computational results with both synthetic and real-world datasets that show that our proposed framework can deliver provably optimal solutions for problems of size $n$ in the 1000s and $p$ in the 100s in minutes. For high-dimensional problems with $n$ in the 100s and $p$ in the 1000s, with the aid of warm starts and further problem-specific information, our approach finds near optimal solutions in minutes but takes hours to prove optimality.

4. We investigate the statistical properties of best subset selection procedures for practical problem sizes, which to the best of our knowledge, have remained largely unexplored to date. We demonstrate the favorable predictive performance and sparsity-inducing properties of the best subset selection procedure over its competitors in a wide variety of real and synthetic examples for the least squares and absolute deviation loss functions.
The structure of the paper is as follows.
In Section 2, we present a brief overview of MIO, including a summary of the computational advances it has enjoyed in the last twenty-five years.
We present the proposed MIO formulations for the best subset problem as well as some connections with the compressed sensing literature
for estimating parameters and providing lower bounds for the MIO formulations that improve their computational performance.
In Section 3, we develop a discrete extension of first order methods in convex optimization
to obtain near optimal solutions for the best subset problem and establish its convergence properties—the proposed algorithm and its properties may be of independent interest.
Section 4 briefly reviews some of the statistical properties of the best-subset solution, highlighting the performance gaps in prediction error
over regular Lasso-type
estimators.
In Section 5, we perform a variety of computational tests on synthetic and real datasets
to assess the algorithmic and statistical performance of our approach for the least squares loss function for
both the classical overdetermined case $n > p$ and the high-dimensional case $p \gg n$.
In Section 6, we report computational results for the least absolute deviation loss function.
In Section 7, we include our concluding remarks. Due to space limitations, some of the material has been
relegated to the Appendix.
2 Mixed Integer Optimization Formulations
In this section, we present a brief overview of MIO, including the simply astonishing advances it has enjoyed in the last twenty-five years.
We then present the proposed MIO formulations for the best subset problem as well as some connections with the compressed sensing literature
for estimating parameters. We also present completely data driven methods to estimate parameters in the MIO formulations that improve their computational performance.
2.1 Brief Background on MIO
The general form of a Mixed Integer Quadratic Optimization (MIQO) problem is as follows:
$$\min_{\alpha} \; \alpha^T Q \alpha + \alpha^T a \quad \text{subject to} \quad A\alpha \le b, \quad \alpha_i \in \{0,1\}, \; i \in \mathcal{I}, \quad \alpha_j \in \mathbb{R}_+, \; j \notin \mathcal{I},$$

where $a$, $A$, $b$ and $Q$ (positive semidefinite) are the given parameters of the problem;
$\mathbb{R}_+$ denotes the non-negative reals, the symbol $\le$ denotes element-wise inequalities and
we optimize over $\alpha$ containing both discrete ($\alpha_i, i \in \mathcal{I}$) and continuous ($\alpha_i, i \notin \mathcal{I}$) variables, with $\mathcal{I} \subseteq \{1, \ldots, m\}$. For background on MIO see [4]. Subclasses of MIQO problems include convex quadratic optimization problems ($\mathcal{I} = \emptyset$), mixed integer
($Q = \mathbf{0}$) and linear optimization problems
($\mathcal{I} = \emptyset$, $Q = \mathbf{0}$).
Modern integer optimization solvers such as Gurobi and Cplex are able to tackle MIQO problems.
In the last twenty-five years (1991-2014) the computational power of MIO solvers has increased at an astonishing rate. In [7], to measure the speedup of MIO solvers, the same set of MIO problems were tested on the same computers using twelve consecutive versions of Cplex and version-on-version speedups were reported. The versions tested ranged from Cplex 1.2, released in 1991 to Cplex 11, released in 2007. Each version released in these years produced a speed improvement on the previous version, leading to a total speedup factor of more than 29,000 between the first and last version tested (see [7], [42] for details). Gurobi 1.0, a MIO solver which was first released in 2009, was measured to have similar performance to Cplex 11. Version-on-version speed comparisons of successive Gurobi releases have shown a speedup factor of more than 20 between Gurobi 5.5, released in 2013, and Gurobi 1.0 ([7], [42]). The combined machine-independent speedup factor in MIO solvers between 1991 and 2013 is 580,000.
This impressive speedup factor is due to incorporating both theoretical and practical advances into MIO solvers. Cutting plane theory, disjunctive programming for branching rules, improved heuristic methods, techniques for preprocessing MIOs, using linear optimization as a black box to be called by MIO solvers, and improved linear optimization methods have all contributed greatly to the speed improvements in MIO solvers [7].
In addition, the past twenty years have also brought dramatic improvements in hardware.
Figure 1 shows the exponentially increasing speed of supercomputers over the past twenty years, measured in billion floating point operations per second [1]. The hardware speedup from 1993 to 2013 is approximately $10^{5.5} \approx 320{,}000$.
When both hardware and software improvements are considered, the overall speedup is approximately 200 billion!
Note that the speedup factors cited here refer to mixed integer linear optimization problems, not MIQO problems. The speedup factors for MIQO problems are similar.
MIO solvers provide both feasible solutions as well as lower bounds to the optimal value. As the MIO solver progresses towards the optimal solution, the lower bounds improve and provide an
increasingly better guarantee of suboptimality, which is especially useful
if the MIO solver is stopped before reaching the global optimum. In contrast, heuristic methods do not provide such a certificate of suboptimality.
![Refer to caption](/html/1507.03133/assets/x1.png)
Figure 1: Peak supercomputer speeds recorded from 1993 to 2013.
The belief that MIO approaches to problems in statistics are not practically relevant was formed in the 1970s and 1980s and it was at the time justified.
Given the astonishing speedup of MIO solvers and computer hardware in the last twenty-five years,
the mindset of MIO as theoretically elegant but practically irrelevant is no longer justified. In this paper, we provide empirical evidence of this
fact in the context of the best subset selection problem.
2.2 MIO Formulations for the Best Subset Selection Problem
We first present a simple reformulation of Problem (1) as a MIO (in fact a MIQO) problem:
$$\min_{\beta, z} \; \tfrac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad -\mathcal{M}_U z_i \le \beta_i \le \mathcal{M}_U z_i, \; i = 1, \ldots, p, \quad z_i \in \{0,1\}, \quad \sum_{i=1}^{p} z_i \le k, \qquad (5)$$
where $z \in \{0,1\}^p$ is a binary variable and
$\mathcal{M}_U$ is a constant such that if $\hat\beta$ is a minimizer of Problem (5), then
$\mathcal{M}_U \ge \|\hat\beta\|_\infty$.
If $z_i = 1$, then $|\beta_i| \le \mathcal{M}_U$, and if $z_i = 0$, then $\beta_i = 0$.
Thus, $\sum_i z_i$ is an indicator of the number of non-zeros in $\beta$.
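The role of the binary variables in (5) can be sanity-checked in a few lines (our illustration; the symbols mirror the formulation, with `M_U` the big-M constant):

```python
def feasible_z(beta, M_U, k):
    """Return the cheapest binary z satisfying -M_U*z_i <= beta_i <= M_U*z_i
    and sum(z) <= k, or None if beta itself is infeasible for (5).

    z_i must equal 1 wherever beta_i != 0, so sum(z) counts the nonzeros:
    the constraint sum(z) <= k is exactly the sparsity constraint of (1),
    provided M_U is large enough that no coefficient is clipped.
    """
    if any(abs(b) > M_U for b in beta):
        return None                      # big-M chosen too small
    z = [1 if b != 0 else 0 for b in beta]
    return z if sum(z) <= k else None
```

The second failure mode (a too-small `M_U` excluding the true minimizer) is precisely why the choice of $\mathcal{M}_U$ discussed next matters.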
Provided that $\mathcal{M}_U$ is chosen to be sufficiently large with
$\mathcal{M}_U \ge \|\hat\beta\|_\infty$, a solution to Problem (5) will be a solution to Problem (1).
Of course, $\|\hat\beta\|_\infty$ is not known a priori, and
a small value of $\mathcal{M}_U$ may lead to a solution different from (1).
The choice of $\mathcal{M}_U$ affects the strength of the formulation and is critical for obtaining good lower bounds in practice.
In Section 2.3 we describe how to find appropriate values for $\mathcal{M}_U$.
Note that there are other MIO formulations, presented herein (see Problem (8)), that do not rely on an a priori
specification of $\mathcal{M}_U$. However, we will stick to formulation (5) for the time being, since it provides some interesting
connections to the Lasso.
Formulation (5) leads to interesting insights, especially via the structure of the convex hull of its constraints: the projection onto $\beta$ of the convex hull of the set $\{(\beta, z) : |\beta_i| \le \mathcal{M}_U z_i, \; z_i \in \{0,1\}, \; i = 1, \ldots, p, \; \sum_i z_i \le k\}$ is $\{\beta : \|\beta\|_\infty \le \mathcal{M}_U, \; \|\beta\|_1 \le \mathcal{M}_U k\}$.
Thus, the minimum of Problem (5) is lower-bounded by the optimum objective value of both of the following convex optimization problems:
$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_\infty \le \mathcal{M}_U, \; \|\beta\|_1 \le \mathcal{M}_U k, \qquad (6)$$
$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_1 \le \mathcal{M}_U k, \qquad (7)$$
where (7) is the familiar Lasso in constrained form. This is a weaker relaxation than formulation (6), which, in addition
to the $\ell_1$ constraint on $\beta$, has box constraints controlling the values of the $\beta_i$'s.
It is easy to see that the optimal value of (5) is at least that of (6), which in turn is at least that of (7),
with the inequalities being strict in most instances.
In terms of approximating the optimal solution to Problem (5),
the MIO solver begins by first solving a continuous relaxation of Problem (5).
The Lasso formulation (7) is weaker than this root node relaxation.
Additionally, MIO is typically able to significantly improve the quality of the root node solution as the MIO solver progresses toward the optimal solution.
To motivate the reader, we provide an example of the evolution (see Figure 2)
of the MIO formulation (8) for the Diabetes dataset [23]
(for further details on the dataset see Section 5).
Since formulation (5) is sensitive to the choice of $\mathcal{M}_U$, we consider an alternative MIO formulation based on
Specially Ordered Sets [4], as described next.
Formulations via Specially Ordered Sets
Any feasible solution to formulation (5) will have
$\beta_i (1 - z_i) = 0$ for every $i \in \{1, \ldots, p\}$. This constraint
can be modeled via integer optimization using Specially Ordered Sets of Type 1 [4] (SOS-1). In an SOS-1 constraint, at most one variable in the set can take a nonzero value; that is,
$(\beta_i, 1 - z_i) : \text{SOS-1}$, for every $i = 1, \ldots, p$.
This leads to the following formulation of (1):
$$\min_{\beta, z} \; \tfrac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad (\beta_i, 1 - z_i) : \text{SOS-1}, \quad z_i \in \{0,1\}, \; i = 1, \ldots, p, \quad \sum_{i=1}^{p} z_i \le k. \qquad (8)$$
We note that Problem (8) can in principle be used to obtain global solutions to Problem (1); unlike Problem (5), it does not require any specification of
the parameter $\mathcal{M}_U$.
Figure 2: Typical evolution of the MIO formulation (8) for the Diabetes dataset, for two values of $k$ (left and right panels). The top panels show the evolution of the upper and lower bounds with time; the bottom panels show the corresponding MIO gap over time (in seconds). In both examples, an optimal solution is found within a few seconds, but it takes 10-20 minutes to certify optimality via the lower bounds. Note that the time taken by the MIO to certify convergence to the global optimum increases with $k$.
We now proceed to present a more structured representation of Problem (8). Note that the objective in this problem is a convex quadratic function in the continuous variable $\beta$, which can be formulated explicitly as:

$$\min_{\beta, z} \; \tfrac{1}{2} \beta^T (X^T X) \beta - \langle X^T y, \beta \rangle + \tfrac{1}{2} \|y\|_2^2 \quad \text{subject to} \quad (\beta_i, 1 - z_i) : \text{SOS-1}, \; z_i \in \{0,1\}, \; i = 1, \ldots, p, \quad \sum_{i=1}^{p} z_i \le k, \quad -\mathcal{M}_U \le \beta_i \le \mathcal{M}_U, \quad \|\beta\|_1 \le \mathcal{M}_\ell. \qquad (9)$$
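The algebra behind the quadratic-form objective is just the expansion of the squared loss; the following quick check (ours, on arbitrary synthetic data) confirms that it matches $\tfrac{1}{2}\|y - X\beta\|_2^2$ term by term:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
beta = rng.standard_normal(5)

direct = 0.5 * np.sum((y - X @ beta) ** 2)
# 0.5*b'(X'X)b - <X'y, b> + 0.5*||y||^2, the objective used in (9)
expanded = 0.5 * beta @ (X.T @ X) @ beta - (X.T @ y) @ beta + 0.5 * (y @ y)
assert np.isclose(direct, expanded)
```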
We also provide problem-dependent constants $\mathcal{M}_U$ and $\mathcal{M}_\ell$.
$\mathcal{M}_U$ provides an upper bound on the absolute value of the regression coefficients and $\mathcal{M}_\ell$ provides an upper bound on the $\ell_1$-norm of $\beta$. Adding these bounds typically leads to improved performance of the MIO, especially in
delivering lower bound certificates. In Section 2.3, we describe several approaches to compute these parameters from the data.
We also consider another formulation for (9):

$$\min_{\beta, z, \zeta} \; \tfrac{1}{2} \zeta^T \zeta - \langle X^T y, \beta \rangle + \tfrac{1}{2} \|y\|_2^2 \quad \text{subject to} \quad \zeta = X\beta, \quad (\beta_i, 1 - z_i) : \text{SOS-1}, \; z_i \in \{0,1\}, \; i = 1, \ldots, p, \quad \sum_{i=1}^{p} z_i \le k, \quad -\mathcal{M}_U \le \beta_i \le \mathcal{M}_U, \quad \|\beta\|_1 \le \mathcal{M}_\ell, \quad -\mathcal{M}_U^\zeta \le \zeta_j \le \mathcal{M}_U^\zeta, \; j = 1, \ldots, n, \quad \|\zeta\|_1 \le \mathcal{M}_\ell^\zeta, \qquad (10)$$

where the optimization variables are $\beta \in \mathbb{R}^p$, $\zeta \in \mathbb{R}^n$, $z \in \{0,1\}^p$,
and $\mathcal{M}_U, \mathcal{M}_\ell, \mathcal{M}_U^\zeta, \mathcal{M}_\ell^\zeta$ are problem specific parameters.
Note that the objective function in formulation (10) involves a quadratic form in $n$ variables and a linear function in $p$ variables.
Problem (10) is equivalent to the following variant of the best subset problem:

$$\min_{\beta} \; \tfrac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k, \quad \|\beta\|_\infty \le \mathcal{M}_U, \quad \|\beta\|_1 \le \mathcal{M}_\ell, \quad \|X\beta\|_\infty \le \mathcal{M}_U^\zeta, \quad \|X\beta\|_1 \le \mathcal{M}_\ell^\zeta. \qquad (11)$$
Formulations (9) and (10)
differ in the size of the quadratic forms that are involved. The current state-of-the-art MIO solvers are better-equipped to handle mixed integer linear optimization problems
than MIQO problems.
Formulation (9) has fewer variables but a quadratic form in $p$ variables; we find this formulation more useful in the $n \gg p$ regime, with $p$ in the 100s.
Formulation (10), on the other hand, has more variables, but involves a quadratic form in $n$ variables; this formulation is more useful for high-dimensional problems
with $p \gg n$, with $n$ in the 100s and $p$ in the 1000s.
As we said earlier, the bounds on $\beta$ and $\zeta$ are not required,
but if these constraints are provided, they improve the strength of the MIO formulation. In other words, formulations with
tightly specified bounds provide better lower bounds to the global optimization problem in a specified amount of time, when compared to a MIO formulation
with loose bound specifications.
We next show how these bounds
can be computed from given data.
2.3 Specification of Parameters
Coherence and Restricted Eigenvalues of a Model Matrix
Given a model matrix $X$, [51] introduced the cumulative coherence function

$$\mu[k] := \max_{j} \; \max_{\substack{I \subseteq \{1, \ldots, p\} \setminus \{j\} \\ |I| = k}} \; \sum_{i \in I} |\langle x_i, x_j \rangle|,$$

where $x_1, \ldots, x_p$ represent the columns of $X$, i.e., the features.
For $k = 1$, we obtain the notion of coherence
introduced in [22, 21] as a measure of the maximal pairwise correlation in absolute value of the columns of $X$:
$\mu := \mu[1] = \max_{i \neq j} |\langle x_i, x_j \rangle|$.
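Both quantities are direct computations on the Gram matrix of a column-normalized $X$; a sketch of ours:

```python
import numpy as np

def coherence(X):
    """mu = max_{i != j} |<x_i, x_j>| for unit-l2-norm columns."""
    Xn = X / np.linalg.norm(X, axis=0)
    G = np.abs(Xn.T @ Xn)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

def cumulative_coherence(X, k):
    """mu[k]: for each column x_j, sum its k largest absolute inner
    products with the other columns, then maximize over j."""
    Xn = X / np.linalg.norm(X, axis=0)
    G = np.abs(Xn.T @ Xn)
    np.fill_diagonal(G, 0.0)
    return float(np.sort(G, axis=0)[-k:, :].sum(axis=0).max())
```

The inner maximization over sets $I$ with $|I| = k$ is separable: for a fixed $j$, the worst set is simply the $k$ columns most correlated with $x_j$, so no combinatorial search is needed.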
[16, 14] (see also [11] and references therein) introduced the notion
that a matrix $X$
satisfies a restricted eigenvalue condition if

$$\lambda_{\min}\left(X_I^T X_I\right) \ge \eta_k \quad \text{for every } I \subseteq \{1, \ldots, p\} \text{ with } |I| \le k, \qquad (12)$$

where $\lambda_{\min}(X_I^T X_I)$ denotes the smallest eigenvalue of the matrix $X_I^T X_I$.
An inequality linking $\mu[k]$ and $\eta_k$ is as follows.
The computation of $\mu[k]$ and $\eta_k$ for general $k$ is difficult, while $\mu$ is simple to compute.
Proposition 1 provides bounds for $\mu[k]$ and $\eta_k$ in terms of the coherence $\mu$.
Operator Norms of Submatrices
The $\ell_{p,q}$ operator norm of a matrix $\mathbf{A}$
is
$$\|\mathbf{A}\|_{p,q} := \max_{\|\mathbf{u}\|_p = 1} \|\mathbf{A}\mathbf{u}\|_q.$$
We will use extensively here the $\|\cdot\|_{1,1}$ operator norm.
We assume that each column vector of $\mathbf{X}$
has unit $\ell_2$-norm. The results derived in the next proposition borrow and enhance techniques developed by [51] in the context of
analyzing the $\ell_1$—$\ell_0$ equivalence in compressed sensing.
Proposition 2.
For any $I \subset \{1, \ldots, p\}$ with $|I| \leq k$ we have:
-
(a)
$\|\mathbf{X}_I'\mathbf{X}_I - \mathbf{I}\|_{1,1} \leq \mu[k-1]$;
-
(b)
If the matrix $\mathbf{X}_I'\mathbf{X}_I$ is invertible and $\|\mathbf{X}_I'\mathbf{X}_I - \mathbf{I}\|_{1,1} < 1$, then
$\|(\mathbf{X}_I'\mathbf{X}_I)^{-1}\|_{1,1} \leq (1 - \mu[k-1])^{-1}$.
We note that Part (b) also appears in [51] for
the operator norm $\|\cdot\|_{\infty,\infty}$.
Given a set $I \subset \{1, \ldots, p\}$ with $|I| \leq k$ we let $\hat{\boldsymbol\beta}_I$ denote the
least squares regression coefficients obtained by regressing $\mathbf{y}$ on $\mathbf{X}_I$, i.e.,
$\hat{\boldsymbol\beta}_I = (\mathbf{X}_I'\mathbf{X}_I)^{-1}\mathbf{X}_I'\mathbf{y}$.
If we append $\hat{\boldsymbol\beta}_I$ with zeros in the remaining coordinates we obtain $\hat{\boldsymbol\beta}$
as follows:
$$\hat{\boldsymbol\beta} \in \operatorname*{argmin}_{\boldsymbol\beta :\, \beta_i = 0,\, i \notin I} \|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_2^2.$$
Note that $\hat{\boldsymbol\beta}$ depends on $I$ but we will suppress the dependence on $I$ for notational convenience.
2.3.1 Specification of Parameters in terms of Coherence and Restricted Strong Convexity
Recall that $\mathbf{X}_1, \ldots, \mathbf{X}_p$ represent the columns of $\mathbf{X}$;
and we will use $\mathbf{x}_1, \ldots, \mathbf{x}_n$ to denote the rows of $\mathbf{X}$.
As discussed above, $\|\mathbf{X}_i\|_2 = 1$.
We
order the correlations $|\langle \mathbf{X}_i, \mathbf{y} \rangle|$:
$$|\langle \mathbf{X}_{(1)}, \mathbf{y} \rangle| \geq \cdots \geq |\langle \mathbf{X}_{(p)}, \mathbf{y} \rangle|. \tag{13}$$
We finally denote by
$\|\mathbf{x}_i\|_{1:k}$
the sum of the top $k$ absolute values of the entries of
$\mathbf{x}_i$.
Theorem 2.1.
For any $k \geq 1$, any optimal solution $\hat{\boldsymbol\beta}$ to (1) satisfies bounds (14)–(17), i.e., upper bounds on $\|\hat{\boldsymbol\beta}\|_1$, $\|\hat{\boldsymbol\beta}\|_\infty$, $\|\mathbf{X}\hat{\boldsymbol\beta}\|_1$ and $\|\mathbf{X}\hat{\boldsymbol\beta}\|_\infty$ (Parts (a)–(d)).
We note that in the above theorem, the upper bound in Part (a) becomes infinite as soon as $\mu[k-1] \geq 1$.
In such a case, we can use purely data-driven bounds obtained via convex optimization techniques, as described in Section 2.3.2.
The interesting message conveyed by Theorem 2.1 is that the upper bounds on
$\|\hat{\boldsymbol\beta}\|_1$, $\|\hat{\boldsymbol\beta}\|_\infty$, $\|\mathbf{X}\hat{\boldsymbol\beta}\|_1$ and $\|\mathbf{X}\hat{\boldsymbol\beta}\|_\infty$, corresponding to Problem (11),
can all be obtained in terms of $\mu[k-1]$ and $\eta_k$, quantities of fundamental interest
appearing in the analysis of $\ell_1$ regularization methods and understanding how close they are to $\ell_0$ solutions [51, 22, 21, 16, 14].
On a different note, Theorem 2.1 arises from a purely computational motivation and, quite curiously, involves the same
quantities: cumulative coherence and restricted eigenvalues.
Note that the quantities $\mu[k-1]$ and $\eta_k$ are difficult to compute exactly, but they can be approximated via Proposition 1, which
provides bounds commonly used in the compressed sensing literature.
Of course, approximations to these quantities can also be obtained by using subsampling schemes.
2.3.2 Specification of Parameters via Convex Quadratic Optimization
We provide an alternative, purely data-driven way to compute upper bounds on the parameters
by solving several simple convex quadratic optimization problems.
Bounds on the $\hat\beta_i$'s
For the case $n > p$,
upper and lower bounds on $\hat\beta_i$ can be obtained by solving the following pair of convex optimization problems:
$$u_i^+ := \max_{\boldsymbol\beta} \; \beta_i \;\; \text{s.t.} \;\; \tfrac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_2^2 \leq \mathrm{UB}, \qquad u_i^- := \min_{\boldsymbol\beta} \; \beta_i \;\; \text{s.t.} \;\; \tfrac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_2^2 \leq \mathrm{UB}, \tag{18}$$
for $i = 1, \ldots, p$. Above, UB is an upper bound to the minimum of the $k$-subset least squares problem (1).
$u_i^+$ is an upper bound to $\hat\beta_i$, since the cardinality constraint $\|\boldsymbol\beta\|_0 \leq k$ does not appear in the optimization problem.
Similarly, $u_i^-$ is a lower bound to $\hat\beta_i$.
The quantity $\mathcal{M}_U^i := \max\{|u_i^+|, |u_i^-|\}$
serves as an upper bound
to $|\hat\beta_i|$.
A reasonable choice for UB is obtained by using the discrete first order methods (Algorithms 1 and 2 as described in Section 3) in combination with the
MIO formulation (8) (run for a predefined amount of time).
Having obtained the $\mathcal{M}_U^i$'s as described above, we can obtain upper bounds to
$\|\hat{\boldsymbol\beta}\|_1$ and $\|\hat{\boldsymbol\beta}\|_\infty$ as follows:
$\|\hat{\boldsymbol\beta}\|_1 \leq \max_{|I| \leq k} \sum_{i \in I} \mathcal{M}_U^i$ and
$\|\hat{\boldsymbol\beta}\|_\infty \leq \max_i \mathcal{M}_U^i$.
Similarly, bounds corresponding to Parts (c) and (d) in Theorem 2.1 can be obtained by
using the upper bounds on the $|\hat\beta_i|$'s as described above.
Note that the quantities $u_i^+$ and $u_i^-$ are finite when the level sets of the least squares
loss function are finite. In particular, the bounds are loose when $p > n$. In the following we describe methods to obtain non-trivial bounds on $\langle \mathbf{x}_i, \boldsymbol\beta \rangle$, for $i = 1, \ldots, n$,
that apply for arbitrary $n, p$.
Bounds on the $\zeta_i$'s
We now provide a generic method to obtain upper and lower bounds on the quantities $\zeta_i = \langle \mathbf{x}_i, \boldsymbol\beta \rangle$:
$$v_i^+ := \max_{\boldsymbol\beta} \; \langle \mathbf{x}_i, \boldsymbol\beta \rangle \;\; \text{s.t.} \;\; \tfrac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_2^2 \leq \mathrm{UB}, \qquad v_i^- := \min_{\boldsymbol\beta} \; \langle \mathbf{x}_i, \boldsymbol\beta \rangle \;\; \text{s.t.} \;\; \tfrac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_2^2 \leq \mathrm{UB}, \tag{19}$$
for $i = 1, \ldots, n$. Note that the bounds obtained from (19) are
non-trivial bounds for both the under-determined ($p > n$) and overdetermined cases. The bounds obtained from (19) are valid upper and lower bounds
since we drop the cardinality constraint on $\boldsymbol\beta$. The bounds are finite since for every $i$ the quantity
$\langle \mathbf{x}_i, \boldsymbol\beta \rangle$ remains bounded on the feasible set of Problems (19).
2.3.3 Parameter Specifications from Advanced Warm-Starts
The methods described above in Sections 2.3.1 and 2.3.2 lead to provable bounds on the parameters:
with these bounds, an optimal solution to Problem (11) provides an optimal solution to Problem (1), and vice-versa.
We now describe some other alternatives that lead to excellent parameter specifications in practice.
The discrete first order methods described in the following Section 3 provide good upper bounds to Problem (1).
These solutions, when supplied as a warm-start to the MIO formulation (8), are often improved by MIO,
thereby leading to high quality solutions to Problem (1) within several minutes. If $\hat{\boldsymbol\beta}$ denotes an
estimate obtained from this hybrid approach, then $\tau\|\hat{\boldsymbol\beta}\|_\infty$
with a multiplier $\tau$ greater than one provides a good estimate for the parameter $\mathcal{M}_U$.
A reasonable upper bound to $\|\hat{\boldsymbol\beta}\|_1$ is $k\mathcal{M}_U$.
Bounds on the other quantities can be derived by using
expressions appearing in Theorem 2.1, with the aforementioned bounds on $\|\hat{\boldsymbol\beta}\|_1$ and
$\|\hat{\boldsymbol\beta}\|_\infty$.
2.3.4 Some Generalizations and Variants
Some variations and improvements of the procedures described above are presented in Section A.2 (appendix).
3 Discrete First Order Algorithms
In this section, we develop a discrete extension of first order methods in convex optimization [45, 44]
to obtain near optimal solutions for Problem (1) and its variant for the
least absolute deviation (LAD) loss function. Our approach applies to the problem of minimizing
any smooth convex function subject to cardinality constraints.
We will use these discrete first order methods to obtain solutions to warm start the MIO formulation. In Section 5, we will demonstrate how these methods greatly enhance the performance of the MIO.
3.1 Finding stationary solutions for minimizing smooth convex functions with cardinality constraints
Related work and contributions
In the signal processing literature, [8, 9] proposed iterative hard-thresholding algorithms
in the context of $\ell_0$-regularized least squares problems, i.e., Problem (4). The authors establish convergence properties of the algorithm under
the assumption that $\mathbf{X}$ satisfies coherence [8] or the Restricted Isometry Property [9].
The method we propose here applies to a larger class of cardinality constrained optimization problems of the form (20);
in particular, in the context of Problem (1),
our algorithm and its convergence analysis do not require any form of restricted isometry property on the model matrix $\mathbf{X}$.
Our proposed algorithm borrows ideas from projected gradient descent methods in first order convex optimization problems [45]
and generalizes them to the discrete optimization Problem (20).
We also derive new global convergence results for our proposed algorithms, as presented in Theorem 3.1.
Our proposal, with some novel modifications, also applies to
the non-smooth least absolute deviation loss with cardinality constraints, as discussed in Section 3.3.
Consider the following optimization problem:
$$\min_{\boldsymbol\beta} \; g(\boldsymbol\beta) \quad \text{s.t.} \quad \|\boldsymbol\beta\|_0 \leq k, \tag{20}$$
where $g(\boldsymbol\beta) \geq 0$ is convex and has Lipschitz continuous gradient:
$$\|\nabla g(\boldsymbol\beta) - \nabla g(\widetilde{\boldsymbol\beta})\| \leq \ell \|\boldsymbol\beta - \widetilde{\boldsymbol\beta}\|. \tag{21}$$
The first ingredient of our approach is the observation that when $g(\boldsymbol\beta) = \|\boldsymbol\beta - \mathbf{c}\|_2^2$ for a given $\mathbf{c}$,
Problem (20) admits a closed form solution.
Proposition 3.
If $\hat{\boldsymbol\beta}$ is an optimal solution to the following problem:
$$\hat{\boldsymbol\beta} \in \operatorname*{argmin}_{\|\boldsymbol\beta\|_0 \leq k} \; \|\boldsymbol\beta - \mathbf{c}\|_2^2, \tag{22}$$
then it can be computed as follows: $\hat{\boldsymbol\beta}$ retains the $k$ largest (in absolute value)
elements of $\mathbf{c}$ and sets the rest to zero, i.e., if
$|c_{(1)}| \geq |c_{(2)}| \geq \cdots \geq |c_{(p)}|$
denote the ordered values of the absolute values of the vector $\mathbf{c}$, then:
$$\hat\beta_i = \begin{cases} c_i, & \text{if } |c_i| \geq |c_{(k)}|, \\ 0, & \text{otherwise}, \end{cases} \tag{23}$$
where $\hat\beta_i$ is the $i$th coordinate of $\hat{\boldsymbol\beta}$. We will denote the set of solutions to Problem (22)
by the notation $\mathbf{H}_k(\mathbf{c})$.
Proof.
We provide a proof of this in Section B.2, for the sake of completeness. ∎
Note that we use the notation “argmin” (in Problem (22) and in other places that follow) to denote the set of minimizers of the optimization problem.
The operator (23) is also known as the hard-thresholding operator [20]—a notion that
arises in the context of the following related optimization problem:
$$\hat{\boldsymbol\beta} \in \operatorname*{argmin}_{\boldsymbol\beta} \; \|\boldsymbol\beta - \mathbf{c}\|_2^2 + \lambda \|\boldsymbol\beta\|_0, \tag{24}$$
where $\hat{\boldsymbol\beta}$ admits a simple closed form expression given by $\hat\beta_i = c_i$ if $|c_i| > \sqrt{\lambda}$ and
$\hat\beta_i = 0$ otherwise, for $i = 1, \ldots, p$.
Remark 1.
There is an important difference between the minimizers of Problems (22) and (24). For Problem (24), the smallest (in absolute value) non-zero element in $\hat{\boldsymbol\beta}$ is greater than $\sqrt{\lambda}$ in absolute value. On the other hand, in Problem (22) there is no lower bound on the minimum (in absolute value) non-zero element of a minimizer. This needs to be taken care of while analyzing the convergence properties of Algorithm 1 (Section 3.2).
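For concreteness, the hard-thresholding operator of Proposition 3 can be sketched in a few lines of NumPy (ties at the $k$th largest magnitude are broken by index, so this returns one element of the solution set of Problem (22)):

```python
import numpy as np

def hard_threshold(c, k):
    # H_k(c): retain the k largest-in-magnitude entries of c, zero the rest.
    beta = np.zeros_like(c, dtype=float)
    keep = np.argsort(-np.abs(c), kind="stable")[:k]  # indices of top-k |c_i|
    beta[keep] = c[keep]
    return beta
```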
Given a current solution $\boldsymbol\beta$,
the second ingredient of our approach is to upper bound the function $g(\boldsymbol\eta)$ around $g(\boldsymbol\beta)$.
To do so,
we use ideas from projected gradient descent methods in first order convex optimization problems [45, 44].
Proposition 4.
([45, 44]) For a convex function $g(\boldsymbol\beta)$ satisfying condition (21) and for any $L \geq \ell$ we have:
$$g(\boldsymbol\eta) \leq Q_L(\boldsymbol\eta, \boldsymbol\beta) := g(\boldsymbol\beta) + \frac{L}{2}\|\boldsymbol\eta - \boldsymbol\beta\|_2^2 + \langle \nabla g(\boldsymbol\beta), \boldsymbol\eta - \boldsymbol\beta \rangle \tag{25}$$
for all $\boldsymbol\eta$, with equality holding at $\boldsymbol\eta = \boldsymbol\beta$.
Applying Proposition 3 to the upper bound $Q_L(\boldsymbol\eta, \boldsymbol\beta)$ in Proposition 4 we obtain
$$\operatorname*{argmin}_{\|\boldsymbol\eta\|_0 \leq k} Q_L(\boldsymbol\eta, \boldsymbol\beta) = \mathbf{H}_k\!\left(\boldsymbol\beta - \frac{1}{L}\nabla g(\boldsymbol\beta)\right), \tag{26}$$
where $\mathbf{H}_k(\cdot)$ is defined in (23).
In light of (26) we are now ready to present Algorithm 1 to find a stationary point (see Definition 1)
of Problem (20).
Algorithm 1
Input: $g(\boldsymbol\beta)$, $L$, $\epsilon$.
Output: A first order stationary solution $\boldsymbol\beta^*$.
Algorithm:
-
1.
Initialize with $\boldsymbol\beta_1$ such that $\|\boldsymbol\beta_1\|_0 \leq k$.
-
2.
For $m \geq 1$, apply (26) with $\boldsymbol\beta = \boldsymbol\beta_m$ to obtain $\boldsymbol\beta_{m+1}$ as:
(27) $\boldsymbol\beta_{m+1} \in \mathbf{H}_k\!\left(\boldsymbol\beta_m - \tfrac{1}{L}\nabla g(\boldsymbol\beta_m)\right)$
-
3.
Repeat Step 2, until $\|\boldsymbol\beta_{m+1} - \boldsymbol\beta_m\|_2 \leq \epsilon$.
-
4.
Let $\boldsymbol\beta_m$ denote the current estimate and let $I = \mathrm{Supp}(\boldsymbol\beta_m)$. Solve the continuous optimization problem:
(28) $\min_{\boldsymbol\beta:\, \beta_i = 0,\, i \notin I} \; g(\boldsymbol\beta)$,
and let $\boldsymbol\beta^*$ be a minimizer.
The convergence properties of Algorithm 1 are presented in Section 3.2.
We also present Algorithm 2, a variant of Algorithm 1 with better empirical performance.
Algorithm 2 modifies Step 2 of Algorithm 1 by using a line search. It obtains
$\boldsymbol\eta_m \in \mathbf{H}_k(\boldsymbol\beta_m - \tfrac{1}{L}\nabla g(\boldsymbol\beta_m))$ and
$\boldsymbol\beta_{m+1} = \lambda_m \boldsymbol\eta_m + (1 - \lambda_m)\boldsymbol\beta_m$,
where $\lambda_m \in \operatorname*{argmin}_{\lambda} g(\lambda \boldsymbol\eta_m + (1 - \lambda)\boldsymbol\beta_m)$.
Note that the iterate $\boldsymbol\beta_m$ in Algorithm 2 need not be $k$-sparse (i.e., need not satisfy $\|\boldsymbol\beta_m\|_0 \leq k$); however,
$\boldsymbol\eta_m$ is $k$-sparse ($\|\boldsymbol\eta_m\|_0 \leq k$). Moreover,
the sequence may not lead to a decreasing set of objective values, but it satisfies:
$$g(\boldsymbol\eta_{m+1}) \leq g(\boldsymbol\beta_{m+1}) \leq g(\boldsymbol\eta_m).$$
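For the least squares loss (the case worked out in Section 3.3), Steps 1–4 of Algorithm 1 can be sketched as follows. This is an illustrative implementation under our own choices (the spectral bound for $L$, a norm-difference stopping rule, and a least-squares polish for Step 4), not the paper's exact code:

```python
import numpy as np

def algorithm1_least_squares(X, y, k, max_iter=1000, tol=1e-8):
    """Discrete first order method (sketch of Algorithm 1) for
    min 0.5*||y - X b||^2  subject to  ||b||_0 <= k."""
    n, p = X.shape
    # L >= l, where l = lambda_max(X'X) is the Lipschitz constant of the gradient
    L = np.linalg.eigvalsh(X.T @ X).max() + 1e-8
    b = np.zeros(p)
    for _ in range(max_iter):
        grad = -X.T @ (y - X @ b)            # gradient of g at b
        c = b - grad / L                     # gradient step
        b_new = np.zeros(p)                  # H_k: hard thresholding
        keep = np.argsort(-np.abs(c))[:k]
        b_new[keep] = c[keep]
        if np.linalg.norm(b_new - b) <= tol: # Step 3 stopping rule
            b = b_new
            break
        b = b_new
    # Step 4: least squares restricted to the final support
    support = np.flatnonzero(b)
    if support.size:
        b_polished = np.zeros(p)
        b_polished[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
        return b_polished
    return b
```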
3.2 Convergence Analysis of Algorithm 1
In this section, we study convergence properties of Algorithm 1.
Before we embark on the analysis, we need to define the notion of
first order optimality for Problem (20).
Definition 1.
Given an $L \geq \ell$, the vector $\boldsymbol\eta$ is said to be a first order stationary point of Problem (20) if $\|\boldsymbol\eta\|_0 \leq k$ and it satisfies the following fixed point equation:
$$\boldsymbol\eta \in \mathbf{H}_k\!\left(\boldsymbol\eta - \frac{1}{L}\nabla g(\boldsymbol\eta)\right). \tag{29}$$
Let us give some intuition associated with the above definition.
Consider $\boldsymbol\eta$ as in Definition 1. Since $\|\boldsymbol\eta\|_0 \leq k$, it follows that there is a set $I \subset \{1, \ldots, p\}$ such that
$\eta_i = 0$ for all $i \in I$ and the size of $I^c$ (complement of $I$) is $k$.
Since $\boldsymbol\eta \in \mathbf{H}_k(\boldsymbol\eta - \frac{1}{L}\nabla g(\boldsymbol\eta))$,
it follows that for all $i \notin I$, we have:
$\eta_i = \eta_i - \frac{1}{L}\nabla_i g(\boldsymbol\eta)$,
where $\nabla_i g(\boldsymbol\eta)$ is the $i$th coordinate of $\nabla g(\boldsymbol\eta)$.
It thus follows that $\nabla_i g(\boldsymbol\eta) = 0$
for all $i \notin I$. Since $g(\boldsymbol\eta)$ is convex in $\boldsymbol\eta$, this means that $\boldsymbol\eta$ solves the following convex optimization problem:
$$\min_{\boldsymbol\eta :\, \eta_i = 0,\, i \in I} \; g(\boldsymbol\eta). \tag{30}$$
Note however, that the converse of the above statement is not true. That is, if $I$ is an arbitrary subset of $\{1, \ldots, p\}$ with $|I^c| = k$, then a
solution to the restricted convex problem (30) need not correspond to a first order
stationary point.
Note that any global minimizer to Problem (20) is also a first order stationary point, as defined above (see Proposition 8).
We present the following proposition (for its proof see Section B.6), which sheds light on a first order stationary point $\boldsymbol\eta$ for which $\|\boldsymbol\eta\|_0 < k$.
Proposition 5.
Suppose $\boldsymbol\eta$ satisfies the first order stationary condition (29) and $\|\boldsymbol\eta\|_0 < k$. Then $\nabla g(\boldsymbol\eta) = \mathbf{0}$.
We next define the notion of an $\epsilon$-approximate first order stationary point of Problem (20):
Definition 2.
Given an $\epsilon > 0$ and $L \geq \ell$, we say that $\boldsymbol\eta$ satisfies an $\epsilon$-approximate first order optimality condition of Problem (20) if $\|\boldsymbol\eta\|_0 \leq k$ and for some $\hat{\boldsymbol\eta} \in \mathbf{H}_k(\boldsymbol\eta - \frac{1}{L}\nabla g(\boldsymbol\eta))$, we have $\|\boldsymbol\eta - \hat{\boldsymbol\eta}\|_2 \leq \epsilon$.
Before we dive into the convergence properties of Algorithm 1, we need to introduce some notation.
Let $\boldsymbol\beta_m$ be the sequence generated by Algorithm 1, and define $\mathbf{1}_m \in \{0,1\}^p$ with $1_{m,i} = 1$ if $\beta_{m,i} \neq 0$, and $1_{m,i} = 0$ if $\beta_{m,i} = 0$, $i = 1, \ldots, p$; i.e.,
$\mathbf{1}_m$ represents the sparsity pattern of the support of $\boldsymbol\beta_m$.
Suppose we order the coordinates of $\boldsymbol\beta_m$ by their absolute values: $|\beta_{(1),m}| \geq |\beta_{(2),m}| \geq \cdots \geq |\beta_{(p),m}|$.
Note that by definition (27), $\|\boldsymbol\beta_m\|_0 \leq k$ for all $m$ and $\beta_{(i),m} = 0$ for $i > k$.
We denote $\alpha_{k,m} := |\beta_{(k),m}|$ to be the $k$th largest (in absolute value) entry in $\boldsymbol\beta_m$ for all $m$.
Clearly if $\alpha_{k,m} > 0$ then $\|\boldsymbol\beta_m\|_0 = k$, and if $\alpha_{k,m} = 0$
then $\|\boldsymbol\beta_m\|_0 < k$. Let
$\overline\alpha_k := \limsup_{m \to \infty} \alpha_{k,m}$
and
$\underline\alpha_k := \liminf_{m \to \infty} \alpha_{k,m}$.
Proposition 6.
Consider $g$ and $\ell$ as defined in (20) and (21). Let $\boldsymbol\beta_m$, $m \geq 1$, be the sequence generated by Algorithm 1. Then we have:
-
(a)
For any $L \geq \ell$, the sequence $g(\boldsymbol\beta_m)$ satisfies
(31) $g(\boldsymbol\beta_m) - g(\boldsymbol\beta_{m+1}) \geq \dfrac{L - \ell}{2}\, \|\boldsymbol\beta_{m+1} - \boldsymbol\beta_m\|_2^2$,
is decreasing and converges.
-
(b)
If $L > \ell$, then $\boldsymbol\beta_{m+1} - \boldsymbol\beta_m \to \mathbf{0}$ as $m \to \infty$.
-
(c)
If $L > \ell$ and $\underline\alpha_k > 0$, then the sequence $\mathbf{1}_m$ converges after finitely many iterations, i.e., there exists an iteration index $M^*$ such that $\mathbf{1}_m = \mathbf{1}_{m+1}$ for all $m \geq M^*$. Furthermore, the sequence $\boldsymbol\beta_m$ is bounded and converges to a first order stationary point.
-
(d)
If $L > \ell$ and $\overline\alpha_k = 0$, then $\lim_{m \to \infty} g(\boldsymbol\beta_m) = \min_{\boldsymbol\beta} g(\boldsymbol\beta)$.
-
(e)
Let $L > \ell$, $\overline\alpha_k > 0$, $\underline\alpha_k = 0$, and suppose that the sequence $\boldsymbol\beta_m$ has a limit point. Then $g(\boldsymbol\beta_m) \to \min_{\boldsymbol\beta} g(\boldsymbol\beta)$.
Remark 2.
Note that the existence of a limit point in Proposition 6, Part (e) is guaranteed under fairly weak conditions. One such condition is that the set $\{\boldsymbol\beta : g(\boldsymbol\beta) \leq f, \|\boldsymbol\beta\|_0 \leq k\}$ is bounded for any finite value $f$. In words this means that the $k$-sparse level sets of the function $g(\boldsymbol\beta)$ are bounded.
In the special case where $g(\boldsymbol\beta)$ is the least squares loss function, the above condition is equivalent to
every submatrix $\mathbf{X}_I$ ($|I| = k$) of $\mathbf{X}$ comprising $k$ columns being full rank. In particular, this holds with probability one when
the entries of $\mathbf{X}$ are drawn from a continuous distribution and $k \leq n$.
Remark 3.
Parts (d) and (e) of Proposition 6 are probably not statistically interesting cases, since they correspond to un-regularized solutions of the problem $\min_{\boldsymbol\beta} g(\boldsymbol\beta)$. However, we include them since they shed light on the properties of Algorithm 1.
The conditions assumed in Part (c) imply that the support of $\boldsymbol\beta_m$ stabilizes, and Algorithm 1 behaves like vanilla gradient descent thereafter.
The support of $\boldsymbol\beta_m$ need not stabilize for Parts (d), (e), and thus Algorithm 1 may not behave like vanilla gradient descent after finitely many iterations.
However, the objective values (under minor regularity assumptions) converge to $\min_{\boldsymbol\beta} g(\boldsymbol\beta)$.
We present the following proposition (for proof see Section B.3) about a uniqueness property of the fixed point equation (29).
Proposition 7.
Suppose $L > \ell$ and let $\boldsymbol\eta$ satisfy a first order stationary point as in Definition 1. Then the set $\mathbf{H}_k(\boldsymbol\eta - \frac{1}{L}\nabla g(\boldsymbol\eta))$ has exactly one element: $\boldsymbol\eta$.
The following proposition (for a proof see Section B.4) shows that a global minimizer of Problem (20) is also a first order stationary point.
Proposition 8.
Suppose $L > \ell$ and let $\hat{\boldsymbol\beta}$ be a global minimizer of Problem (20). Then $\hat{\boldsymbol\beta}$ is a first order stationary point.
Proposition 6 establishes that Algorithm 1 either converges to a first order stationary point (Part (c)) or it
converges (under minor technical assumptions) to a
global optimal solution (Parts (d), (e)),
but does not quantify the rate of convergence.
We next characterize the rate of convergence of the algorithm to an $\epsilon$-approximate
first order stationary point.
Theorem 3.1.
Let $L > \ell$ and let $\boldsymbol\beta^*$ denote a first order stationary point of Algorithm 1. After $M$ iterations Algorithm 1 satisfies
$$\min_{1 \leq m \leq M} \|\boldsymbol\beta_{m+1} - \boldsymbol\beta_m\|_2^2 \leq \frac{2\left(g(\boldsymbol\beta_1) - g(\boldsymbol\beta^*)\right)}{M(L - \ell)}, \tag{32}$$
where $g(\boldsymbol\beta_m) \downarrow g(\boldsymbol\beta^*)$ as $m \to \infty$.
Theorem 3.1 implies that for any $\epsilon > 0$ there exists $M = O(1/\epsilon)$ such that for some
$1 \leq m^* \leq M$, we have $\|\boldsymbol\beta_{m^*+1} - \boldsymbol\beta_{m^*}\|_2^2 \leq \epsilon$.
Note that the convergence rates derived above apply for a large class of problems (20), where
the function $g(\boldsymbol\beta) \geq 0$ is convex with Lipschitz continuous gradient (21). Tighter rates may be obtained
under additional structural assumptions on $g(\cdot)$.
For example, the adaptation of Algorithm 1 for Problem (4) was
analyzed in [8, 9] with $\mathbf{X}$
satisfying coherence [8] or the Restricted Isometry Property (RIP) [9].
In these cases, the algorithm can be shown to have
a linear convergence rate [8, 9], where the rate depends upon the RIP constants.
Note that by Proposition 6 the support of $\boldsymbol\beta_m$ stabilizes after finitely many iterations, after which Algorithm 1
behaves like gradient descent on the stabilized support. If $g(\boldsymbol\beta)$ restricted to this support is strongly convex, then Algorithm 1 will enjoy a
linear rate of convergence [45] as soon as the support stabilizes. This behavior is adaptive, i.e., Algorithm 1 does not need to be modified
after the support stabilizes.
The next section describes practical post-processing schemes via which first order stationary points of
Algorithm 1 can be obtained by solving a low dimensional convex optimization problem, as soon as the support is found to stabilize numerically.
In our numerical experiments, this version of Algorithm 1 (with multiple starting points) took at most
a few minutes for the larger values of $p$ and a few seconds for smaller values of $p$.
Polishing coefficients on the active set
Algorithm 1 detects the active set after a few iterations.
Once the active set
stabilizes, the algorithm may take a number of iterations to estimate the values of the regression coefficients on the active set to a
high accuracy level.
In this context, we found the following simple polishing of coefficients to be useful.
When the algorithm has converged to a tolerance of $\epsilon$, we fix the current
active set, $I$, and solve the following lower-dimensional convex optimization problem:
$$\min_{\boldsymbol\beta :\, \beta_i = 0,\, i \notin I} \; g(\boldsymbol\beta). \tag{33}$$
In the context of the least squares and the least absolute deviation problems,
Problem (33) reduces to a smaller dimensional least squares problem and a linear optimization problem, respectively, which can be
solved very efficiently up to a very high level of accuracy.
3.3 Application to Least Squares
For the support constrained problem with squared error loss, we have
$g(\boldsymbol\beta) = \frac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_2^2$ and $\nabla g(\boldsymbol\beta) = -\mathbf{X}'(\mathbf{y} - \mathbf{X}\boldsymbol\beta)$.
The general algorithmic framework developed above applies in a straightforward fashion for this special case.
Note that for this case $\ell = \lambda_{\max}(\mathbf{X}'\mathbf{X})$.
The polishing of the regression coefficients in the active set can be performed via a least squares problem on $\mathbf{y}, \mathbf{X}_I$, where
$I$ denotes the support of the regression coefficients.
3.4 Application to Least Absolute Deviation
We will now show how the method proposed in the previous section applies to the least absolute deviation (LAD) problem with
support constraints in $\boldsymbol\beta$:
$$\min_{\boldsymbol\beta} \; g_1(\boldsymbol\beta) := \|\mathbf{y} - \mathbf{X}\boldsymbol\beta\|_1 \quad \text{s.t.} \quad \|\boldsymbol\beta\|_0 \leq k. \tag{34}$$
Since $g_1(\boldsymbol\beta)$ is non-smooth, our framework does not apply directly.
We smooth the non-differentiable $g_1(\boldsymbol\beta)$
so that we can apply Algorithms 1 and 2.
Observing that
$g_1(\boldsymbol\beta) = \max_{\|\mathbf{w}\|_\infty \leq 1} \langle \mathbf{y} - \mathbf{X}\boldsymbol\beta, \mathbf{w} \rangle$,
we make use of the smoothing technique of [43] to obtain
$$g_1^\tau(\boldsymbol\beta) := \max_{\|\mathbf{w}\|_\infty \leq 1} \left( \langle \mathbf{y} - \mathbf{X}\boldsymbol\beta, \mathbf{w} \rangle - \frac{\tau}{2}\|\mathbf{w}\|_2^2 \right),$$
which is a smooth approximation of $g_1(\boldsymbol\beta)$, with $\ell = \lambda_{\max}(\mathbf{X}'\mathbf{X})/\tau$, for which Algorithms 1 and 2 apply.
In order to obtain a good approximation to Problem (34), we found the following strategy to be useful in practice:
-
1.
Fix $\tau > 0$, initialize with $\boldsymbol\beta^0$ and repeat the following steps [2]—[3] till convergence:
-
2.
Apply Algorithm 1 (or Algorithm 2) to the smooth function $g_1^\tau(\boldsymbol\beta)$. Let $\boldsymbol\beta^*_\tau$ be the limiting solution.
-
3.
Decrease $\tau$ for some pre-defined constant factor, and go back to step [1] with $\boldsymbol\beta^0 = \boldsymbol\beta^*_\tau$. Exit if $\tau$ falls below some pre-defined tolerance.
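Under the smoothing above, the maximizing dual variable is componentwise $w_i = \min\{\max\{r_i/\tau, -1\}, 1\}$ for the residual $\mathbf{r} = \mathbf{y} - \mathbf{X}\boldsymbol\beta$, which gives $g_1^\tau$ a Huber-like closed form. A NumPy sketch of the smoothed loss and its gradient (the helper name is ours):

```python
import numpy as np

def smoothed_lad(X, y, beta, tau):
    # Nesterov-smoothed LAD loss g_1^tau(beta) and its gradient.
    # Componentwise: r^2/(2 tau) if |r| <= tau, else |r| - tau/2.
    r = y - X @ beta
    w = np.clip(r / tau, -1.0, 1.0)   # maximizing dual variable
    loss = float(np.sum(r * w - 0.5 * tau * w * w))
    grad = -X.T @ w                   # gradient with respect to beta
    return loss, grad
```

As tau decreases toward zero the loss approaches the exact LAD objective, which is what the continuation scheme above exploits.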
4 A Brief Tour of the Statistical Properties of Problem (1)
As already alluded to in the introduction, there is a substantial body of impressive work characterizing the theoretical
properties of best subset solutions in terms of various metrics: predictive performance, estimation of regression coefficients, and variable selection properties.
For the sake of completeness, we present a brief review of some of the properties of solutions to Problem (1) in Section C.
5 Computational Experiments for Subset Selection with Least Squares Loss
In this section, we present a variety of computational experiments to assess the algorithmic and statistical performance of our approach. We consider
both the classical overdetermined case with $n > p$ (Section 5.2) and the high dimensional case $p \gg n$ (Section 5.3)
for the least squares loss function with support constraints.
5.1 Description of Experimental Data
We demonstrate the performance of our proposal via a series of experiments on both synthetic and real data.
Synthetic Datasets.
We consider a collection of problems where the rows
$\mathbf{x}_i$, $i = 1, \ldots, n$, are independent realizations from a
$p$-dimensional multivariate normal distribution with mean zero and covariance matrix $\boldsymbol\Sigma := (\sigma_{ij})$.
The columns of the matrix $\mathbf{X}$ were subsequently standardized to have unit $\ell_2$-norm.
For a fixed $\mathbf{X}$, we generated the response $\mathbf{y}$ as follows:
$\mathbf{y} = \mathbf{X}\boldsymbol\beta^0 + \boldsymbol\epsilon$, where
$\epsilon_i \stackrel{\text{iid}}{\sim} N(0, \sigma^2)$. We denote the number of nonzeros in $\boldsymbol\beta^0$ by $k_0$.
The choice of $(\boldsymbol\beta^0, \sigma)$ determines the Signal-to-Noise Ratio (SNR) of the problem, which is defined as:
$$\text{SNR} = \frac{\operatorname{var}(\mathbf{x}'\boldsymbol\beta^0)}{\sigma^2}.$$
We considered the following four different examples:
Example 1:
We took $\sigma_{ij} = \rho^{|i-j|}$ for $i, j \in \{1, \ldots, p\}$.
We consider different values of $k_0$ and $\beta^0_i = 1$ for $k_0$ equi-spaced values of $i$ in the range $\{1, \ldots, p\}$. In the case where exactly equi-spaced values are not possible we rounded the indices to the nearest large integer value.
Example 2:
We took ,
and .
Example 3: We took ,
and and — i.e., a vector with ten nonzero entries, with the nonzero values being
equally spaced in the interval .
Example 4:
We took ,
and , i.e., a vector with six nonzero entries, equally spaced in the interval .
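A sketch of a synthetic-data generator for designs of the Example 1 type (the function, its defaults, and the rounding rule for equi-spaced support indices are our illustrative assumptions):

```python
import numpy as np

def make_example1(n, p, k0, rho, snr, seed=0):
    # Rows of X drawn i.i.d. N(0, Sigma) with Sigma_ij = rho^|i-j|;
    # columns scaled to unit l2-norm; beta0 has k0 (roughly) equi-spaced
    # ones; sigma chosen so that SNR = var(x'beta0)/sigma^2.
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X /= np.linalg.norm(X, axis=0)           # unit-norm columns
    beta0 = np.zeros(p)
    beta0[np.round(np.linspace(0, p - 1, k0)).astype(int)] = 1.0
    mu = X @ beta0
    sigma = np.sqrt(np.var(mu) / snr)        # hit the target SNR
    y = mu + rng.normal(0.0, sigma, size=n)
    return X, y, beta0, sigma
```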
Real Datasets
We considered the Diabetes dataset analyzed in [23]. We used the dataset with all the second order interactions included in the model, which resulted in 64 predictors.
We reduced the sample size to by taking a random sample and standardized the response and the columns of the model matrix to have zero means and unit -norm.
In addition to the above, we also considered a real microarray dataset: the Leukemia data [18]. We downloaded
the processed dataset from http://stat.ethz.ch/~dettling/bagboost.html, which had binary responses and more than 3000 predictors.
We standardized the response and columns of features to have zero means and unit -norm.
We reduced the set of features to 1000 by retaining the features maximally correlated (in absolute value) to the
response. We call the resulting feature matrix with . We then generated a semi-synthetic dataset with
continuous response as , where the first five coefficients of were taken as one and the rest as zero. The noise was distributed as
, with chosen to get a SNR=7.
Computer Specifications and Software
Computations were carried out on a 64-bit linux server (Intel(R) Xeon(R) eight-core processor @ 1.80GHz, 16 GB of RAM)
for the overdetermined case, and on a Dell Precision T7600 computer with an Intel Xeon E52687 sixteen-core processor @ 3.1GHz and 128 GB of RAM for the high-dimensional case.
The discrete first order methods were implemented in Matlab 2012b. We used Gurobi [33] version 5.5 and
the Matlab interface to Gurobi for all of our experiments, apart from the
computations for synthetic data for the larger values of $p$, which were done in Gurobi via its Python 2.7 interface.
5.2 The Overdetermined Regime:
Using the Diabetes dataset and synthetic datasets, we demonstrate the combined effect of using the discrete first order methods with the MIO approach. Together, these methods show improvements in obtaining good upper bounds and in closing the MIO gap to certify global optimality. Using synthetic datasets where
we know the true linear regression model, we perform side-by-side comparisons of this method with several other state-of-the-art algorithms designed to estimate sparse linear models.
5.2.1 Obtaining Good Upper Bounds
We conducted experiments to evaluate the performance of our
methods
in terms of obtaining high quality solutions for Problem (1).
We considered the following three algorithms:
-
(a)
Algorithm 2 with fifty random initializations.² We took the solution corresponding to the best objective value.
-
(b)
MIO with cold start, i.e., formulation (9) with a time limit of 500 seconds.
-
(c)
MIO with warm start. This was the MIO formulation initialized with the discrete first order optimization solution obtained from (a). This was run for a total of 500 seconds.

² We took fifty random starting values around zero; we found empirically that Algorithm 2 provided better upper bounds than Algorithm 1.
To compare the different algorithms in terms of the quality of upper bounds,
we run for every instance all the algorithms and obtain the best solution among them, say, $f_*$. If
$f_{\text{alg}}$ denotes the value of the best subset objective function for method “alg”, then we define the relative accuracy of the solution obtained by “alg” as:
$$\text{Relative Accuracy} = \left(f_{\text{alg}} - f_*\right)/f_*, \tag{35}$$
where $f_*$ is the best objective value, as described above.
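In code, the relative accuracy metric (35) is simply (the names are ours):

```python
def relative_accuracy(f_alg, f_best):
    # (35): relative gap of method "alg" from the best objective value
    # found across all algorithms; the best method scores exactly 0.
    return (f_alg - f_best) / f_best
```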
We performed experiments on the Diabetes dataset for different values of $k$ (see Table 1). For each of the algorithms we report
the amount of time taken by the algorithm to reach the best objective value within the limit of 500 seconds.
| $k$ | Discrete First Order | | MIO Cold Start | | MIO Warm Start | |
|---|---|---|---|---|---|---|
| | Accuracy | Time | Accuracy | Time | Accuracy | Time |
| 9 | 0.1306 | 1 | 0.0036 | 500 | 0 | 346 |
| 20 | 0.1541 | 1 | 0.0042 | 500 | 0 | 77 |
| 49 | 0.1915 | 1 | 0.0015 | 500 | 0 | 87 |
| 57 | 0.1933 | 1 | 0 | 500 | 0 | 2 |

Table 1: Quality of upper bounds for Problem (1) for the Diabetes dataset, for different values of $k$. We observe that MIO equipped with warm starts delivers the best upper bounds in the shortest overall times. The run times of MIO with warm start include the time taken by the discrete first order methods (which was less than a second in all cases).
Using the discrete first order methods in combination with the MIO algorithm resulted in finding the best possible relative accuracy in a matter of a few minutes.
5.2.2 Improving MIO Performance via Warm Starts
We performed a series of experiments on the Diabetes dataset to obtain a globally optimal solution to Problem (1) via our approach and to understand the implications
of using advanced warm starts to the MIO formulation in terms of certifying optimality.
For each choice of $k$ we ran Algorithm 2 with fifty random initializations, which took less than a few seconds to run.
We used the best solution as an advanced warm start to the MIO formulation (9).
For each of these examples, we also ran the MIO formulation
without any warm start information and also without the parameter specifications in Section 2.3 (we refer to this as “cold start”).
Figure 3 summarizes the results. The figure shows that
in the presence of warm starts and problem specific side information, the MIO closes the optimality gap significantly faster.
[Figure 3: four panels for k = 9, 20, 31 and 42; the horizontal axes show time in seconds.]
Figure 3: Evolution of the MIO optimality gap (in log scale) for Problem (1) for the Diabetes dataset, with and without warm starts (and parameter specifications, as in Section 2.3), for different values of k. The MIO significantly benefits from the advanced warm starts delivered by Algorithm 2.
5.2.3 Statistical Performance
We considered datasets as described in Example 1, Section 5.1; we took
different values of n, p and the SNR.
Competing Methods and Performance Measures
For every example, we considered the following learning procedures for comparison purposes:
(a) the MIO approach equipped with warm starts from Algorithm 2 (annotated as "MIO" in the figures), (b) the Lasso, (c) Sparsenet and (d) stepwise regression (annotated as "Step" in the figures).
We used R to compute
Lasso, Sparsenet and stepwise regression
using the glmnet 1.7.3, Sparsenet and Stats 3.0.2 packages respectively, which were all downloaded from CRAN at http://cran.us.r-project.org/.
In addition to the above, we have also performed comparisons with a debiased version of the Lasso: i.e., performing unrestricted least squares on the
Lasso support to mitigate the bias imparted by Lasso shrinkage.
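As a sketch of this debiasing step, assuming a support already extracted from a Lasso fit (the data below are synthetic, not the paper's datasets), one can refit unrestricted least squares on the selected columns:

```python
import numpy as np

def debias_on_support(X, y, support):
    """Unrestricted least squares on the columns in `support`, mitigating
    the shrinkage bias of a Lasso fit with that sparsity pattern."""
    beta = np.zeros(X.shape[1])
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    beta[support] = coef
    return beta

# Toy illustration on synthetic data; the support would normally come
# from the non-zero coefficients of a fitted Lasso model.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_true = np.zeros(10)
beta_true[[0, 3]] = [2.0, -1.5]
y = X @ beta_true + 0.01 * rng.standard_normal(50)
beta_db = debias_on_support(X, y, [0, 3])
```

Note that debiasing leaves the sparsity pattern unchanged; it only removes the shrinkage on the selected coefficients, which is why its benefit is limited when the Lasso support itself is too dense.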
We note that Sparsenet [38] considers a penalized likelihood formulation of the form (3), where the penalty
is given by the generalized MCP penalty family, indexed by (λ, γ), for a family of
values of λ and γ. The family of penalties used by Sparsenet is thus given by:

p(t; λ, γ) = λ ∫₀^|t| (1 − x/(γλ))₊ dx,

for λ ≥ 0 and γ ≥ 1 as described above. As γ → ∞ with λ fixed, we get the ℓ1 penalty λ|t|.
The family above includes as a special case (γ → 1+) the hard thresholding penalty, a penalty recommended in the paper [60] for its useful statistical properties.
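A minimal sketch of the MCP penalty in its standard closed form (the exact parametrization and grid used by Sparsenet may differ; λ and γ below follow the common convention):

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP penalty: lam*|t| - t**2/(2*gamma) for |t| <= gamma*lam,
    and the constant gamma*lam**2/2 beyond that threshold."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

# For large gamma the penalty is close to the lasso penalty lam*|t|,
# while gamma near 1 approaches hard-thresholding behavior.
```

The flat region beyond the threshold γλ is what keeps large coefficients unshrunk, in contrast to the ℓ1 penalty.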
For each procedure, we obtained the "optimal" tuning parameter by selecting the model that achieved
the best predictive performance on
a held-out validation set. Once the model, with coefficient vector β̂, was selected, we obtained the
prediction error as:
Prediction Error = ||Xβ̂ − Xβ⁰||₂² / ||Xβ⁰||₂².   (36)
We report the "prediction error" and the number of non-zeros in the optimal model in our results.
The results were averaged over ten random instances, i.e., different realizations of the noise.
For every run, the training and validation data had a fixed X but random noise ε.
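The metric in (36) can be sketched as follows, on synthetic inputs (not the paper's data); β⁰ is the true coefficient vector and β̂ an estimate:

```python
import numpy as np

def prediction_error(X, beta_hat, beta_0):
    """Prediction error as in (36): ||X beta_hat - X beta_0||^2 / ||X beta_0||^2."""
    num = float(np.sum((X @ beta_hat - X @ beta_0) ** 2))
    den = float(np.sum((X @ beta_0) ** 2))
    return num / den

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))
beta_0 = np.array([1.0, 0.0, -1.0, 0.0, 0.5])
# A perfect estimate has error 0; the all-zero estimate has error exactly 1.
```

The normalization by ||Xβ⁰||₂² makes the metric scale-free, so values are comparable across SNR settings.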
Figure 4 presents results for data generated as per Example 1 with n = 500 and p = 100.
We see that the MIO procedure performs very well across all the examples. Among the methods, MIO performs the best, followed by Sparsenet and
Lasso, with Step(wise) exhibiting the worst performance.
In terms of prediction error, the MIO performs the best, only to be marginally
outperformed by Sparsenet in a few instances. This further illustrates the importance of using non-convex methods in sparse learning.
Note that the MIO approach, unlike Sparsenet, certifies global optimality in terms of solving Problem (1).
However, based on the plots in the upper panel, Sparsenet selects a few redundant variables, unlike MIO.
Lasso delivers quite dense models and pays the price in predictive performance too, by selecting wrong variables.
As the value of the SNR increases, the predictive power of the methods improves, as expected, and the differences in predictive errors between the methods diminish.
Moving from the left panel to the right panel in the figure, the number of non-zeros selected by the Lasso in the optimal model increases.
We also performed experiments with the debiased version of the Lasso. The unrestricted least squares solution on the optimal
model selected by the Lasso (as shown in Figure 4) had worse predictive performance than the Lasso,
with the same sparsity pattern. This is probably due to overfitting, since the model selected by the Lasso is quite dense compared to the true model.
We also tried some variants of the debiased Lasso which led to models with better performance than the Lasso, but the results were inferior compared to MIO; we provide a
detailed description in Section D.2.
[Figure 4: six image panels.]
Figure 4: The figure shows the sparsity (upper panels) and predictive performance (lower panels) of different subset selection procedures for the least squares loss. Here we consider data generated as per Example 1 with n = 500 and p = 100, for three different SNR values ([left panel], [middle panel] and [right panel]). The dashed lines in the top panels represent the true number of non-zeros. For each procedure, the best model was selected as the one delivering the best prediction accuracy on a separate validation set, as in Section 5.2.3.
We also performed experiments with other problem sizes for data generated as per Example 1. We solved the problems to provable optimality and found that the MIO performed very well when compared to other competing methods. We do not report these experiments for brevity.
5.2.4 MIO Model Training
We trained a sequence of best subset models (indexed by k) by applying the MIO approach with warm starts.
Instead of running the MIO solvers from scratch for different values of k, we used callbacks, a feature of integer optimization solvers.
Callbacks allow the user to solve an initial model and then add additional constraints to the model one at a time. These "cuts" reduce the size of the feasible region without having to rebuild the entire optimization model. Thus, in our case, we can save time by building the initial optimization model only once, for the largest value of k. Once the solution for a given k is obtained, a cut of the form ∑ᵢ zᵢ ≤ k − 1 can be added to the model, and the model can be re-solved from this point. We apply this procedure until we arrive at a model with the smallest value of k under consideration.
For each value of k tested, the MIO best subset algorithm was set to stop the first time either an optimality gap of 1%
or a time limit of 15 minutes was reached. Additionally, we only tested values of k from 5 through 25, and used Algorithm 2 to warm start the MIO algorithm.
We observed that it was possible to obtain speedups of a factor of 2-4 by carefully tuning the optimization solver for a particular problem, but chose to maintain generality by solving with default parameters. Thus, we report times not with the intention of accurately benchmarking the best possible time, but rather to show that it is computationally tractable to solve problems to optimality using modern MIO solvers.
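The decreasing-cardinality scheme can be illustrated without a commercial MIO solver by a brute-force stand-in on a tiny problem: the exhaustive search below plays the role of one MIO solve, and the loop over decreasing k mimics adding the cut ∑ᵢ zᵢ ≤ k − 1 through callbacks (feasible only for very small p, unlike the solver-based approach in the text).

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, k):
    """Exhaustive best subset of size at most k: a brute-force stand-in for
    one MIO solve, tractable only for very small p."""
    p = X.shape[1]
    best_rss, best_support = np.inf, ()
    for size in range(k + 1):
        for S in combinations(range(p), size):
            cols = list(S)
            if cols:
                coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
                rss = float(np.sum((y - X[:, cols] @ coef) ** 2))
            else:
                rss = float(np.sum(y ** 2))
            if rss < best_rss:
                best_rss, best_support = rss, S
    return best_rss, best_support

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 6))
beta = np.zeros(6)
beta[[1, 4]] = [3.0, -2.0]
y = X @ beta + 0.1 * rng.standard_normal(40)

# Solve once for the largest k, then tighten the cardinality bound step by
# step, mimicking the cut sum_i z_i <= k - 1 added through solver callbacks.
path = {k: best_subset(X, y, k) for k in range(4, 0, -1)}
```

The residual sum of squares along the path is non-increasing in k, which is why re-solving from the previous incumbent (rather than from scratch) is effective.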
5.3 The High-Dimensional Regime: p ≫ n
In this section, we investigate
(a) the evolution of upper bounds in the high-dimensional regime,
(b) the effect of a bounding box formulation on the speed of closing the optimality gap and
(c) the statistical performance of the MIO approach in comparison to other state-of-the art methods.
5.3.1 Obtaining Good Upper Bounds
We performed tests similar to those in Section 5.2.1 for the p ≫ n regime. We tested a synthetic dataset corresponding to Example 2
for varying SNR values (see Table 2), with a time budget of 500s.
As before, using the discrete first order methods in combination with the MIO algorithm resulted in finding the best possible upper bounds in the shortest possible times.
 | k | Discrete First Order | | MIO Cold Start | | MIO Warm Start |
 | | Accuracy | Time | Accuracy | Time | Accuracy | Time
SNR = 3 | 5 | 0.1647 | 37.2 | 1.0510 | 500 | 0 | 72.2
 | 6 | 0.6152 | 41.1 | 0.2769 | 500 | 0 | 77.1
 | 7 | 0.7843 | 40.7 | 0.8715 | 500 | 0 | 160.7
 | 8 | 0.5515 | 38.8 | 2.1797 | 500 | 0 | 295.8
 | 9 | 0.7131 | 45.0 | 0.4204 | 500 | 0 | 96.0
SNR = 7 | 5 | 0.5072 | 45.6 | 0.7737 | 500 | 0 | 65.6
 | 6 | 1.3221 | 40.3 | 0.5121 | 500 | 0 | 82.3
 | 7 | 0.9745 | 40.9 | 0.7578 | 500 | 0 | 210.9
 | 8 | 0.8293 | 40.5 | 1.8972 | 500 | 0 | 262.5
 | 9 | 1.1879 | 44.2 | 0.4515 | 500 | 0 | 254.2
Table 2: Quality of upper bounds for Problem (1) obtained by Algorithm 2, MIO with cold starts, and MIO warm-started with Algorithm 2, for different SNR values. The MIO approach performs best in terms of obtaining good upper bounds in the shortest time when warm-started with the first order solutions. The metric "Accuracy" is defined in (35). The first order methods are fast but do not necessarily produce the highest quality solutions; the MIO improves the quality of the upper bounds delivered by the first order methods, and their combined effect leads to the best performance.
We also did experiments on the Leukemia dataset. In Figure 5 we demonstrate the evolution of the objective value of the best subset problem for
different values of k. For each value of k, we warm-started the MIO with the solution obtained by Algorithm 2 and allowed the
MIO solver to run for 4000 seconds. The best objective value obtained at the end of 4000 seconds is denoted by f_*. We plot
the Relative Accuracy, i.e., (f_t − f_*)/f_*, where f_t is the objective value obtained after t seconds. The figure shows that
the solution obtained by Algorithm 2 is improved by the MIO on various instances, and that the time taken to improve the upper bounds
depends upon k. In general, for smaller values of k the upper bounds obtained by the MIO algorithm stabilize earlier, i.e., the MIO
finds improved solutions faster than for larger values of k.
![Refer to caption](/html/1507.03133/assets/x16.png)
Figure 5: Behavior of the MIO aided with warm starts in obtaining good upper bounds over time for the Leukemia dataset. The vertical axis shows the Relative Accuracy, i.e., (f_t − f_*)/f_*, where f_t is the objective value obtained after t seconds and f_* denotes the best objective value obtained by the method after 4000 seconds. The colored diamonds correspond to the locations where the MIO (with warm starts) attains the best solution. The figure shows that the MIO improves the solutions obtained by the first order methods in all instances. The time at which the best possible upper bound is obtained depends upon the choice of k; typically, larger values of k make the problem harder, so the best solutions are obtained after a longer wait.
5.3.2 Bounding Box Formulation
With the aid of advanced warm starts as
provided by Algorithm 2, the MIO obtains a very high quality solution very quickly; in most of the examples the solution thus obtained turns out to be the global minimum. However, in the typical "high-dimensional" regime, with p ≫ n, we observe that the certificate of global optimality comes later, as the lower bounds of the problem "evolve" slowly.
This is observed even in the presence of warm starts and using the implied bounds as developed in Section 2.2, and is aggravated
for the cold-started MIO formulation (10).
To address this, we consider the MIO formulation (37), obtained by
adding bounding boxes around a local solution.
These restrictions guide the MIO in restricting its search space and enable the MIO to certify global optimality
inside that bounding box.
We consider the following additional bounding box constraints to the MIO formulation (10):

{β : ||β − β₀||₁ ≤ L^β_loc} ∩ {β : ||Xβ − Xβ₀||₁ ≤ L^ζ_loc},
where β₀ is a candidate sparse solution. The radii of the two ℓ1-balls above, namely L^β_loc and L^ζ_loc,
are user-defined parameters that control the size of the feasible set.
Using the notation ζ = Xβ, we have the following MIO formulation (equipped with the additional bounding boxes):
minimize over (β, ζ, z): (1/2)||y − Xβ||₂², subject to the constraints of formulation (10) together with ||β − β₀||₁ ≤ L^β_loc and ||ζ − Xβ₀||₁ ≤ L^ζ_loc, where ζ = Xβ.   (37)
For large values of L^β_loc (respectively, L^ζ_loc), the constraints on β (respectively, ζ)
become ineffective and one gets back formulation (10). To see the impact of these additional cutting planes in the MIO formulation, we
consider a few examples, as illustrated in Figures 6, 7 and 12.
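As a sketch of the feasibility region these boxes carve out: the ℓ1-ball form and the radius names `L_beta`, `L_zeta` below are our reconstruction, since the exact notation was lost in extraction, and the data are synthetic.

```python
import numpy as np

def in_bounding_boxes(beta, beta_0, X, L_beta, L_zeta):
    """Check the two local l1-ball constraints:
    ||beta - beta_0||_1 <= L_beta and ||X beta - X beta_0||_1 <= L_zeta."""
    ok_beta = float(np.sum(np.abs(beta - beta_0))) <= L_beta
    ok_zeta = float(np.sum(np.abs(X @ (beta - beta_0)))) <= L_zeta
    return ok_beta and ok_zeta

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 8))
beta_0 = np.zeros(8)
beta = beta_0.copy()
beta[0] = 0.1
# With very large radii both constraints are inactive, recovering (10).
assert in_bounding_boxes(beta, beta_0, X, 1e9, 1e9)
```

Shrinking either radius cuts away parts of the search space, which is what lets the solver certify optimality within the box more quickly.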
Interpretation of the bounding boxes
A local bounding box in the variable ζ = Xβ directs the MIO solver to seek candidate solutions that deliver models with
predictive accuracy "similar" (controlled by the radius of the ball) to a reference predictive model, given by Xβ₀.
In our experiments, we typically chose β₀ as the solution delivered by running the MIO (warm-started with a first order solution) for a few hundred to a few thousand seconds.
More generally, β₀ may be selected by any other sparse learning method. In our experiments, we found that
the run-time behavior of the MIO depends upon how correlated the columns of X are; more correlation leads to longer run-times.
Similarly, a bounding box around β directs the MIO to look for solutions in the neighborhood of a reference point β₀.
In our experiments, we chose the reference β₀ as the solution obtained by the MIO (warm-started with a first order solution)
after allowing it to run for a few hundred to a few thousand seconds. We observed that in the presence of bounding boxes
in the ζ-space, the MIO solver certified optimality (finding better solutions in the process) much faster than with the β-bounding box method.
Note that the β-bounding box constraint leads to p additional constraints, while the ζ-bounding box leads to n additional constraints. Thus, when p ≫ n the additional ζ constraints add a smaller number of extra variables than
the β constraints.
Experiments
In the first set of experiments, we consider the Leukemia dataset.
We took two different values of k, and for each case we
ran Algorithm 2 with several random restarts. The best solution thus obtained was used
to warm start the MIO formulation (10), which we ran for an additional 3600 seconds. The solution thus obtained is denoted by
β₀. We then consider formulation (37) for different values of the bounding-box radii (as annotated in Figure 6); the results are displayed in Figure 6.
Leukemia dataset: Effect of a Bounding Box for MIO formulation (37)
[Figure 6: two image panels.]
Figure 6: The effect of the MIO formulation (37) for the Leukemia dataset, for different values of the bounding-box radii. For each value of k, the global minimum obtained is the same for the different choices of the radii.
We consider another set of experiments to demonstrate the performance of the MIO in certifying global optimality for different synthetic datasets,
with varying problem sizes as well as different structures on the bounding box.
In the first case, we generated data as per Example 1.
We consider the case where β₀ is a sparse solution obtained from the
MIO formulation (10), run with a time limit of 1000 seconds after being warm-started with Algorithm 2, with bounding boxes placed around both β₀ and Xβ₀.
The results are displayed in Figure 7 [left panel].
In the second case (with the same data as before) we obtained β₀ in the same fashion as described before; we took a bounding box around Xβ₀, and left the box constraint around β inactive, i.e., we set its radius large enough that it never binds.
We performed two sets of experiments, where the data were generated
based on different SNR values; the results are displayed in Figure 7 with SNR = 1 [middle panel] and
SNR = 3 [right panel].
In the same vein, Figure 12 studies the effect of formulation (37) for
synthetic datasets generated as per Example 1.
Evolution of the MIO gap for (37): effect of the type of bounding box.
[Figure 7: three image panels.]
Figure 7: The effect of the MIO formulation (37) for synthetic datasets as in Example 1, for different values of the bounding-box radii. The [left panel] corresponds to a dataset with SNR = 3 and strong correlation among the columns of X. The SNR value does not seem to have a large effect on the run time of the algorithm (middle and right panels).
5.3.3 Statistical Performance
To understand the statistical behavior of the MIO when compared to other approaches for learning sparse models,
we considered synthetic datasets for a range of values of n and p.
The following methods were used for comparison purposes:
(a) Algorithm 2, where we used fifty different random initializations
and took the solution corresponding to the best objective value;
(b) the MIO approach with warm starts from part (a);
(c) the Lasso solution; and
(d) the Sparsenet solution.
For methods (a) and (b), we considered ten equi-spaced values of k (including the optimal value of k).
For each of the methods, the best model was selected in the same fashion as described in Section 5.2.3, using separate validation sets.
In addition, for some examples, we also study the performance of the debiased version of the Lasso, as described in Section 5.2.3.
In Figure 8 and Figure 9 we present selected representative results from four different examples described in Section 5.1.
[Figure 8: four image panels.]
Figure 8: Sparsity and predictive performance of the different procedures: the [left panels] show Example 1 and the [right panels] show Example 2; for each instance several SNR values are shown.
[Figure 9: four image panels.]
Figure 9: The [left panels] show the performance for data generated according to Example 3; the [right panels] show Example 4.
In Figure 8 the left panel shows the performance of different methods for Example 1.
In this example, there are five non-zero coefficients: the features corresponding to the non-zero coefficients are weakly correlated with each other, and
each feature having a non-zero coefficient is highly correlated with a feature having a zero coefficient. In this situation, the Lasso selects a very dense model,
since it fails to distinguish between a zero and a non-zero coefficient when the variables are correlated: it brings both coefficients into the model (with shrinkage).
MIO (with warm starts) performs the best, both in terms of predictive accuracy and in selecting a sparse set of coefficients. MIO obtains the sparsest model among the
four methods and seems to find solutions with better statistical properties than the models obtained by the first order methods alone.
Interestingly, the "optimal model" selected by the first order methods is denser than that selected by the MIO.
The number of non-zero coefficients selected by MIO remains fairly stable across different SNR values, unlike the other three methods.
For this example, we also experimented with different versions of the debiased Lasso.
In summary, the best debiased Lasso models had performance marginally better than the Lasso but quite inferior to MIO; see Appendix, Section D.2 for further details.
In Figure 8 the right panel shows Example 2, in which all non-zero coefficients equal one. In this example, all the methods
perform similarly in terms of predictive accuracy, because all non-zero coefficients in β⁰ have the same value.
In fact, for the smallest value of the SNR, the Lasso achieves the best predictive model.
In all the cases, however, the MIO achieves the sparsest model with favorable predictive accuracy.
In Figure 9, for both examples, the model matrix X is an iid Gaussian ensemble. The underlying regression coefficient β⁰, however, is
structurally different from that of Example 2 (as in Figure 8, right panel). The structure in β⁰ is responsible for the different statistical behaviors of the four methods
across Figure 8 (right panel) and Figure 9 (both panels). The alternating signs and varying amplitudes of β⁰ are responsible for the poor behavior of the
Lasso. The MIO (with warm starts) seems to be the best among all the methods. For Example 3 (Figure 9, left panel) the
predictive performances of the Lasso and MIO are comparable; the MIO, however, delivers much sparser models than the Lasso.
The key conclusions are as follows:
1. The MIO best subset algorithm has a significant edge in detecting the correct sparsity structure in all examples, compared to Lasso, Sparsenet and the stand-alone discrete first order method.

2. For data generated as per Example 1 with large values of the correlation parameter, the MIO best subset algorithm gives better predictive performance than its competitors.

3. For data generated as per Examples 2 and 3, MIO delivers predictive models similar to the Lasso, but produces much sparser models. In fact, the Lasso seems to perform marginally better than MIO as a predictive model for small values of the SNR.

4. For Example 4, MIO performs the best both in terms of predictive accuracy and in delivering sparse models.
6 Computational Results for Subset Selection with Least Absolute Deviation Loss
In this section, we demonstrate how our method can be used for the best subset selection problem with LAD objective (34).
Since the main focus of this paper is the least squares loss function, we consider only a few representative examples for the LAD case.
The LAD loss is appropriate when the error follows a heavy-tailed distribution.
The datasets used for the experiments parallel those described in Section 5.1, the difference being in the distribution of
the errors: we took the errors iid from a double exponential (Laplace) distribution, whose variance was adjusted to obtain different values of the SNR.
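A sketch of generating such errors: the Laplace (double exponential) distribution with scale b has variance 2b², so matching a target variance σ² requires scale b = σ/√2. The value of σ below is an arbitrary illustrative choice, not a setting from the paper.

```python
import numpy as np

# A Laplace (double exponential) variable with scale b has variance 2*b**2,
# so to target a variance sigma**2 we draw with scale b = sigma / sqrt(2).
rng = np.random.default_rng(4)
n, sigma = 100_000, 2.0
eps = rng.laplace(loc=0.0, scale=sigma / np.sqrt(2.0), size=n)

def lad_loss(y, X, beta):
    """Least absolute deviation loss ||y - X @ beta||_1."""
    return float(np.sum(np.abs(y - X @ beta)))
```

Unlike the squared loss, the ℓ1 loss above grows linearly in the residuals, which is what makes it robust to the occasional very large Laplace error.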
Datasets analysed
We consider a set-up similar to Example 1 (Section 5.1).
Different choices of (n, p) were taken to cover both the overdetermined case (n > p)
and the high-dimensional case (p ≫ n).
The other competing methods used for comparison were (a) the discrete first order method (Section 3.4), (b) the MIO warm-started with the first order solutions and
(c) the LAD loss with ℓ1 regularization:

minimize over β: ||y − Xβ||₁ + λ||β||₁,

which we denote by LAD-Lasso.
The training, validation and testing were done in the same fashion as in the least squares case. For each method, we report the
number of non-zeros in the optimal model and the associated prediction accuracy (36).
[Figure 10: two image panels.]
Figure 10: Sparsity and predictive performance of the different procedures for Problem (34) in the overdetermined case. The data are generated as per Example 1 with double exponential errors; further details are given in the text. The abbreviation "Lasso" refers to the LAD-Lasso. Compared to the LAD-Lasso, the MIO delivers sparser models with better predictive accuracy.
Figure 10 compares the MIO approach with the others for LAD in the overdetermined case (n > p).
Figure 11 does the same for the high-dimensional case (p ≫ n).
The conclusions parallel those for the least squares case. Since, in the example considered, the features corresponding to the non-zero coefficients are weakly correlated with each other and
each feature having a non-zero coefficient is highly correlated with a feature having a zero coefficient, the LAD-Lasso selects an overly dense model and loses out in terms of prediction error.
Both the MIO (with warm starts) and the discrete first order methods behave similarly, and much better than the ℓ1 regularization scheme.
As expected, we observed that subset selection with the least squares loss leads to inferior models for these examples, due to the heavy-tailed distribution of the errors.
The results in this section are similar to the least squares case. The MIO approach provides an edge both in terms of sparsity and predictive accuracy compared to Lasso both for the
overdetermined and the high-dimensional case.
7 Conclusions
In this paper, we have revisited the classical best subset selection problem of choosing k out of p features in linear regression given n observations,
using a modern optimization lens, i.e.,
MIO and a discrete extension of first order methods from continuous optimization. Exploiting the astonishing progress of MIO solvers in the last twenty-five years,
we have shown that this approach solves problems with n in the 1000s and p in the 100s in minutes to provable optimality,
and finds near optimal solutions for n in the 100s and p in the 1000s in minutes. Importantly, the solutions provided by the MIO approach significantly outperform
other state-of-the-art methods like the Lasso in achieving sparse models with good predictive power.
Unlike all other methods, the MIO approach always provides a guarantee on its suboptimality even if the algorithm is terminated early.
Moreover, it can
accommodate side constraints on the coefficients of the linear regression and also
extends to finding best subset solutions for the least absolute deviation loss function.
While continuous optimization methods have played and continue to play an important role in statistics over the years, discrete optimization methods
have not. The evidence in this paper as well as in [2] suggests that MIO methods are tractable and lead to desirable properties
(improved accuracy and sparsity among others) at the expense of higher, but still reasonable, computational times.
Acknowledgements
We would like to thank the Associate editor and two reviewers for their comments that helped us improve the paper.
A major part of the work was performed when R.M. was at Columbia University.
References 參考
-
[1]
Top500 Supercomputer Sites, Directory page for Top500 lists. Result for each
list since June 1993.
http://www.top500.org/statistics/sublist/.
Accessed: 2013-12-04.
Top500 超級電腦站點,Top500 清單的目錄頁。自 1993 年 6 月以來每個清單的結果。造訪時間:2013 年 12 月 4 日。 -
Bertsimas and Mazumder [2014]
Bertsimas 和 Mazumder [2014] D. Bertsimas and R. Mazumder. Least quantile regression via modern optimization. The Annals of Statistics, 42(6):2494–2525, 2014.
D. Bertsimas 和 R. Mazumder。透過現代優化的最小分位數回歸。 《統計年鑑》,42(6):2494–2525,2014 年。 -
Bertsimas and Shioda [2009]
Bertsimas 與 Shioda [2009] D. Bertsimas and R. Shioda. Algorithm for cardinality-constrained quadratic optimization. Computational Optimization and Applications, 43(1):1–22, 2009.
D. Bertsimas 和 R. Shioda。基數約束二次最佳化演算法。計算最佳化與應用,43(1):1–22,2009。 -
Bertsimas and Weismantel [2005]
Bertsimas 和 Weismantel [2005] D. Bertsimas and R. Weismantel. Optimization over integers. Dynamic Ideas Belmont, 2005.
D. Bertsimas 和 R. Weismantel。整數優化。動態想法貝爾蒙特,2005 年。 -
Bickel et al. [2009] 比克爾等人。 [2009]
P. Bickel, Y. Ritov, and A. Tsybakov.
Simultaneous analysis of lasso and dantzig selector.
The Annals of Statistics, pages 1705–1732, 2009.
P. 比克爾、Y. 里托夫和 A. 齊巴科夫。同時分析 lasso 和 dantzig 選擇器。 《統計年鑑》,第 1705-1732 頁,2009 年。 -
Bienstock [1996] 比恩斯托克 [1996]
D. Bienstock.
Computational study of a family of mixed-integer quadratic
programming problems.
Mathematical programming, 74(2):121–140,
1996.
D. 比恩斯托克。一系列混合整數二次規劃問題的計算研究。數學規劃,74(2):121–140, 1996。 -
Bixby [2012] 比克斯比 [2012]
R. E. Bixby.
A brief history of linear and mixed-integer programming computation.
Documenta Mathematica, Extra Volume: Optimization Stories,
pages 107–121, 2012.
R.E.比克斯比。線性和混合整數規劃計算的簡史。數學文獻展,額外卷數:最佳化故事,第 107-121 頁,2012 年。 -
Blumensath and Davies [2008]
T. Blumensath and M. Davies. Iterative thresholding for sparse approximations. Journal of Fourier Analysis and Applications, 14(5-6):629–654, 2008.
Blumensath and Davies [2009]
T. Blumensath and M. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, 2009.
Boyd and Vandenberghe [2004]
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, 2004.
Bühlmann and van-de-Geer [2011]
P. Bühlmann and S. van-de-Geer. Statistics for high-dimensional data. Springer, 2011.
Bunea et al. [2007]
F. Bunea, A. B. Tsybakov, M. H. Wegkamp, et al. Aggregation for gaussian regression. The Annals of Statistics, 35(4):1674–1697, 2007.
Candes et al. [2008]
E. Candes, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008.
Candes [2008]
E. Candes. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9):589–592, 2008.
Candès and Plan [2009]
E. Candès and Y. Plan. Near-ideal model selection by ℓ1 minimization. The Annals of Statistics, 37(5A):2145–2177, 2009.
Candes and Tao [2006]
E. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? Information Theory, IEEE Transactions on, 52(12):5406–5425, 2006.
Chen et al. [1998]
S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
Dettling [2004]
M. Dettling. Bagboosting for tumor classification with gene expression data. Bioinformatics, 20(18):3583–3593, 2004.
Donoho [2006]
D. Donoho. For most large underdetermined systems of equations, the minimal ℓ1-norm solution is the sparsest solution. Communications on Pure and Applied Mathematics, 59:797–829, 2006.
Donoho and Johnstone [1993]
D. Donoho and I. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81:425–455, 1993.
Donoho and Elad [2003]
D. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the National Academy of Sciences, 100(5):2197–2202, 2003.
Donoho and Huo [2001]
D. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. Information Theory, IEEE Transactions on, 47(7):2845–2862, 2001.
Efron et al. [2004]
B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression (with discussion). Annals of Statistics, 32(2):407–499, 2004. ISSN 0090-5364.
Fan and Li [2001]
J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
Fan and Lv [2011]
J. Fan and J. Lv. Nonconcave penalized likelihood with NP-dimensionality. Information Theory, IEEE Transactions on, 57(8):5467–5484, 2011.
Fan and Lv [2013]
Y. Fan and J. Lv. Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association, 108(503):1044–1061, 2013.
Frank and Friedman [1993]
I. Frank and J. Friedman. A statistical view of some chemometrics regression tools (with discussion). Technometrics, 35(2):109–148, 1993.
Friedman [2008]
J. Friedman. Fast sparse regression and classification. Technical report, Department of Statistics, Stanford University, 2008.
Friedman et al. [2007]
J. Friedman, T. Hastie, H. Hoefling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 2(1):302–332, 2007.
Furnival and Wilson [1974]
G. Furnival and R. Wilson. Regression by leaps and bounds. Technometrics, 16:499–511, 1974.
Greenshtein [2006]
E. Greenshtein. Best subset selection, persistence in high-dimensional statistical learning and optimization under ℓ1 constraint. The Annals of Statistics, 34(5):2367–2386, 2006.
Greenshtein and Ritov [2004]
E. Greenshtein and Y. Ritov. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli, 10(6):971–988, 2004.
Gurobi Optimization [2013]
I. Gurobi Optimization. Gurobi optimizer reference manual, 2013. URL http://www.gurobi.com.
Hastie et al. [2009]
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Second Edition: Data Mining, Inference, and Prediction (Springer Series in Statistics). Springer New York, 2 edition, 2009. ISBN 0387848576.
Knight and Fu [2000]
K. Knight and W. Fu. Asymptotics for lasso-type estimators. Annals of Statistics, 28(5):1356–1378, 2000.
Loh and Wainwright [2013]
P.-L. Loh and M. Wainwright. Regularized m-estimators with nonconvexity: Statistical and algorithmic theory for local optima. In Advances in Neural Information Processing Systems, pages 476–484, 2013.
Lv and Fan [2009]
J. Lv and Y. Fan. A unified approach to model selection and sparse recovery using regularized least squares. The Annals of Statistics, pages 3498–3528, 2009.
Mazumder et al. [2011]
R. Mazumder, J. Friedman, and T. Hastie. Sparsenet: Coordinate descent with non-convex penalties. Journal of the American Statistical Association, 117(495):1125–1138, 2011.
Meinshausen and Bühlmann [2006]
N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34:1436–1462, 2006.
Miller [2002]
A. Miller. Subset selection in regression. CRC Press Washington, 2002.
Natarajan [1995]
B. Natarajan. Sparse approximate solutions to linear systems. SIAM journal on computing, 24(2):227–234, 1995.
Nemhauser [2013]
G. Nemhauser. Integer programming: the global impact. Presented at EURO, INFORMS, Rome, Italy, 2013. http://euro2013.org/wp-content/uploads/Nemhauser_EuroXXVI.pdf. Accessed: 2013-12-04.
Nesterov [2005]
Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, Series A, 103:127–152, 2005.
Nesterov [2007]
Y. Nesterov. Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, 2007.
Nesterov [2004]
Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, Norwell, 2004.
Raskutti et al. [2011]
G. Raskutti, M. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. Information Theory, IEEE Transactions on, 57(10):6976–6994, 2011.
Rockafellar [1996]
R. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1996. ISBN 0691015864.
Shen et al. [2013]
X. Shen, W. Pan, Y. Zhu, and H. Zhou. On constrained and regularized high-dimensional regression. Annals of the Institute of Statistical Mathematics, 65(5):807–832, 2013.
Tibshirani [1996]
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996.
Tibshirani [2011]
R. Tibshirani. Regression shrinkage and selection via the lasso: a retrospective. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(3):273–282, 2011.
Tropp [2006]
J. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. Information Theory, IEEE Transactions on, 52(3):1030–1051, 2006.
van de Geer et al. [2011]
S. van de Geer, P. Bühlmann, and S. Zhou. The adaptive and the thresholded lasso for potentially misspecified models (and a lower bound for the lasso). Electronic Journal of Statistics, 5:688–749, 2011.
Wainwright [2009]
M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (lasso). Information Theory, IEEE Transactions on, 55(5):2183–2202, 2009.
Zhang [2010a]
C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010a.
Zhang and Huang [2008]
C.-H. Zhang and J. Huang. The sparsity and bias of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36(4):1567–1594, 2008.
Zhang and Zhang [2012]
C.-H. Zhang and T. Zhang. A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4):576–593, 2012.
Zhang [2010b]
T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. The Journal of Machine Learning Research, 11:1081–1107, 2010b.
Zhang et al. [2014]
Y. Zhang, M. Wainwright, and M. I. Jordan. Lower bounds on the performance of polynomial-time algorithms for sparse linear regression. arXiv preprint arXiv:1402.1918, 2014.
Zhao and Yu [2006]
P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
Zheng et al. [2014]
Z. Zheng, Y. Fan, and J. Lv. High dimensional thresholded regression and shrinkage effect. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(3):627–649, 2014. ISSN 1467-9868.
Zou [2006]
H. Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101:1418–1429, 2006.
Zou and Li [2008]
H. Zou and R. Li. One-step sparse estimates in nonconcave penalized likelihood problems. The Annals of Statistics, 36(4):1509–1533, 2008.
Appendix and Supplementary Material
Appendix A Additional Details for Section 2
A.1 Solving the convex quadratic optimization problems in Section 2.3.2
We show here that the convex quadratic optimization problems appearing in Section 2.3.2 are indeed quite simple and can be solved at small computational cost.
We first consider Problem (18), the computation of , which is a minimization problem. We assume without loss of generality that the feasible set of Problem (18) is non-empty. Thus, by standard results in quadratic optimization [10], it follows that there exists a such that:
where denotes the derivative with respect to , and a that satisfies the above gradient condition must also be feasible for Problem (18). Simplifying the above equation, we get:
where is a vector in whose th coordinate is one and the remaining coordinates are zero. Simplifying the above expression, we have
Above, is the identity matrix of size , is the familiar projection matrix given by , and . (Note that we assume here that , which typically guarantees that is invertible with probability one, provided the entries of are drawn from a continuous distribution.) Observing that , one can readily see that satisfies the above simple quadratic equation. This leads to the solution of , which subsequently yields the optimal value that solves Problem (18).
The above argument readily applies to Problem (18) for the computation of , by writing it as an equivalent minimization problem and observing that:
The above derivation can also be adapted to the case of Problem (19). Towards this end, notice that for estimating , the above steps (for computing ) will be modified: gets replaced by (the th row of ), and denotes the projection matrix onto the column space of , even if the matrix is not invertible (since here we consider arbitrary ).
A.2 Details for Section 2.3.4
Note that in Problem (10) we consider a uniform bound on the 's: for all . Since some of the variables may have larger amplitude than others, it may be reasonable to let the bounds depend upon the variable index . Thus motivated, for added flexibility, one can consider the following (adaptive) bounds on the 's: for . The parameters can be taken as defined in (18).
More generally, one can also consider asymmetric bounds on as: for all .
Note that the above ideas for bounding the 's can also be extended to obtain sample-specific bounds on for .
The bounds on and can also be adapted to the above variable-dependent bounds on the 's.
While the above modifications may lead to marginally improved performance, we do not dwell on them here, mainly for the sake of a clear exposition.
A.3 Proof of Proposition 2
Proof
(a) Given a set , we define and let denote the th entry of . For any we have
(b) Using and standard power-series convergence (which is valid since ) we obtain
A.4 Proof of Theorem 2.1
Proof
(a) Since , we have
(38)
Note that
(39)
Applying Part (b) of Proposition 2 and (39) to (38), we obtain (14).
(b) We write for . If denote the rows of , we have:
(40)
For every we have
(41)
where are the (nonzero) singular values of the matrix . To see how one arrives at (41), let us denote the singular value decomposition of by . We then have
and the singular values of are thus , .
(c) We have
(d) For any vector which has zero entries in the coordinates outside , we have:
leading to (17).
Appendix B Proofs and Technical Details for Section 3
B.1 Proof of Proposition 6
Proof
(a) Let be a vector satisfying . Using the notation we have the following chain of inequalities:
This chain of inequalities leads to:
(47)
Applying (47) for and , the vectors generated by Algorithm 1, we obtain (31). This implies that the objective values are decreasing, and since the sequence is bounded below, we obtain that converges as .
(b) If and , the result follows from part (a).
(c) The condition means that for all sufficiently large, the entry will remain (uniformly) bounded away from zero. We will use this to prove that the support of converges. To establish a contradiction, suppose that the support does not converge. Then there are infinitely many values of such that . Using the fact that for all large , we have
(48)
where are such that . As , the quantity on the rhs of (48) remains bounded away from zero since . This contradicts the fact that , as established in part (b). Thus converges, and since it is a discrete sequence, it converges after finitely many iterations, that is, for all . Algorithm 1 then becomes a vanilla gradient descent algorithm restricted to the space for . Since a gradient descent algorithm for minimizing a convex function over a closed convex set produces a sequence of iterates that converges [47, 45], we conclude that Algorithm 1 converges. Therefore, the sequence converges to , a first order stationary point:
(d) Let denote the set of the largest values of the vector in absolute value. By the definition of , we have
for all with and . Thus,
(49)
Moreover,
Using the fact that , we have
as . Combining with (49), we have:
Since (by hypothesis), the lhs of the above inequality equals zero, which leads to .
(e) We build on the proof of Part (d). It follows from equation (49) (by suitably modifying ' ' to ' ') that:
Note that the lhs of the above inequality is , which is zero (by hypothesis); thus as .
Suppose is a limit point of the sequence . Then there is a subsequence such that and . Using the continuity of the gradient, and hence of the function , we have that as . Thus is a solution to the unconstrained (without cardinality constraints) optimization problem . Since is a decreasing sequence, it converges to the minimum of .
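The iteration analyzed above (Algorithm 1) is a gradient step on the least squares loss followed by hard thresholding onto the set of k-sparse vectors. A minimal numerical sketch of that iteration follows; the function name, the fixed iteration count, and the choice L = λmax(XᵀX) for the step-size constant are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sparse_projected_gradient(X, y, k, iters=500):
    """Discrete first-order method: gradient step on g(b) = 0.5*||y - X b||^2
    with step size 1/L, then keep the k largest entries in absolute value."""
    L = np.linalg.eigvalsh(X.T @ X).max()   # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = -X.T @ (y - X @ b)           # gradient of the least squares loss
        c = b - grad / L                    # gradient step
        keep = np.argsort(-np.abs(c))[:k]   # support of the k largest |c_i|
        b = np.zeros(X.shape[1])
        b[keep] = c[keep]                   # hard thresholding onto k-sparse vectors
    return b
```

Since the objective values are non-increasing along the iterations (part (a) above), starting from b = 0 the final residual can be no larger than ||y||, and each iterate is k-sparse by construction.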
B.2 Proof of Proposition 3
Proof:
We provide a proof of Proposition 3 for the sake of completeness.
It suffices to consider for all . Let be an optimal solution to Problem (22) and let . The objective function is given by . Note that by selecting for , we can make the objective function . Thus, to minimize the objective function, must correspond to the indices of the largest values of .
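The argument above says that the minimizer keeps the k entries that are largest in absolute value and zeroes out the rest. A minimal sketch of this hard-thresholding operation (the function name is ours):

```python
import numpy as np

def hard_threshold(c, k):
    """Return the k-sparse vector keeping the k largest |c_i|, zeroing the rest.

    Per the argument above, the optimal support corresponds to the indices
    of the k largest absolute values of c.
    """
    c = np.asarray(c, dtype=float)
    out = np.zeros_like(c)
    if k > 0:
        idx = np.argsort(-np.abs(c))[:k]  # indices of the k largest |c_i|
        out[idx] = c[idx]
    return out

print(hard_threshold([3.0, -5.0, 1.0, 4.0], 2))  # -> [ 0. -5.  0.  4.]
```

Note that the retained entries keep their original values and signs; only the support is restricted.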
B.3 Proof of Proposition 7
Proof
This follows from Proposition 6, Part (a), which implies that:
for any . Now, by the definition of , we have , which along with implies that the rhs of the above inequality is zero: thus , i.e., . Since the choice of was arbitrary, it follows that is the only element in the set .
B.4 Proof of Proposition 8
Proof
The proof follows by noting that is -sparse, along with Proposition 6, Part (a), which implies that:
for any . Now, by the definition of , we have , which along with implies that the rhs of the above inequality is zero: thus is a first order stationary point.
B.5 Proof of Theorem 3.1
B.6 Proof of Proposition 5
Proof
If is a first order stationary point with , it follows from the argument following Definition 1 that there is a set with such that for all and for all . Let for . Suppose denotes the set of indices corresponding to the top ordered values of . Note that:
(51)
For and we have . This implies that . Since and , it follows that . We thus have that for all . In addition, note that for all . Thus it follows that , and hence .
Appendix C Brief Review of Statistical Properties for the subset selection problem
In this section, for the sake of completeness, we briefly review some of the properties of solutions to Problem (1).
Suppose the linear model assumption is true, i.e., , with . Let denote a solution to (1). [46] showed that, with probability greater than , the worst case (over ) predictive performance has the following upper bound:
(52)
where are universal constants. Similar results also appear in [12, 58]. Interestingly, the upper bound (52) does not depend upon . Unless , the upper bound appearing in (52) is of the order , where the constants are universal. In terms of the expected (worst case) predictive risk, an upper bound is given by [58]:
(53)
where the symbol " " means " " up to some universal constants.
A natural question is how the bounds for Lasso-based solutions compare with (53). In a recent paper [58], the authors derive upper and lower bounds on the prediction performance of the thresholded version of the Lasso solution, which we present briefly. Suppose
denotes a Lasso solution for . Let denote the thresholded version of the Lasso solution, which retains the top entries of in absolute value and sets the remaining entries to zero. The bounds on the predictive performance of Lasso-based solutions depend upon a restricted eigenvalue type condition. Following [58], we define, for any subset , the quantity:
where and .
We say that the matrix satisfies a restricted eigenvalue type condition with parameter if it satisfies the following:
Note that , and is also related to the so-called compatibility condition [11]. In an insightful paper, [58] show that under such restricted eigenvalue type conditions the following holds:
(54)
In particular, the lower bounds apply to bad design matrices for some arbitrarily small scalar . In fact, [58] establish a result stronger than (54), where can be replaced by a -sparse estimate delivered by a polynomial time method. The bounds displayed in (54) show that there is a significant gap between the predictive performance of subset selection procedures (see bound (53)) and Lasso-based -sparse solutions; the magnitude of the gap depends upon how small is. can be small if the pairwise correlations between the features of the model matrix are quite high. These results complement our experimental findings in Section 5.
An in-depth analysis of the properties of solutions to the Lagrangian version of Problem (1), namely Problem (4), is presented in [56]. [46, 56] also analyze the errors in the regression coefficients, , under further minor assumptions on the model matrix . [56, 48] provide an interesting theoretical analysis of the variable selection properties of (1) and (4), showing that subset selection procedures have superior variable selection properties over Lasso-based methods.
In passing, we remark that [56] develop statistical properties of inexact solutions to Problem (4). This may serve as interesting theoretical support for near-global solutions to Problem (1), where certificates of sub-optimality are delivered by our MIO framework in terms of global lower bounds. A precise and thorough understanding of the statistical properties of sub-optimal solutions to Problem (1) is left as an interesting piece of future work.
Appendix D Additional Details on Experiments and Computations
D.1 Some additional figures related to the radii of bounding boxes
Some figures illustrating the effect of the bounding box radii are presented in Figure 12.
Evolution of the MIO gap for (37): effect of bounding box radii.
![Refer to caption](/html/1507.03133/assets/x36.png)
![Refer to caption](/html/1507.03133/assets/x37.png)
![Refer to caption](/html/1507.03133/assets/x38.png)
![Refer to caption](/html/1507.03133/assets/x39.png)
Figure 12: Evolution of the MIO gap for different bounding box radii in the MIO formulation (37). The datasets considered are generated as per Example 1 with and , for different SNR values: [left panels] SNR = 1, [right panels] SNR = 3; for each case, different values of have been considered. The bounding box radii in the top panels are twice those of the corresponding cases in the bottom panels. As expected, the time taken to close the MIO gap depends upon the radii of the boxes. The best solutions obtained were found to be insensitive to the choice of the bounding box radii.
D.2 Lasso, Debiased Lasso and MIO
We present here comparisons of the debiased Lasso with MIO and the Lasso.
Debiasing is often used to mitigate the shrinkage imparted by the Lasso regularization parameter. This is done by performing an unrestricted least squares fit on the support selected by the Lasso. Of course, the results will depend upon the tuning parameter used for the problem. We use two methods towards this end. In the first method, we find the best Lasso solution (by obtaining an optimal tuning parameter based on minimizing the predictive error on a held-out validation set); we then obtain the unregularized least squares solution for that Lasso solution. This typically performed worse than the Lasso in all the experiments we tried; see Tables 3 and 4. The unrestricted least squares solution on the optimal model selected by the Lasso (as shown in Figure 4) had worse predictive performance than the Lasso, with the same sparsity pattern, as shown in Table 3. This is probably due to overfitting, since the model selected by the Lasso is quite dense compared to . Table 4 presents the results for . We consider the same example presented in Figure 9, Example 1. First of all, Table 4 presents the prediction performance of the Lasso after debiasing; we considered the same tuning parameter considered optimal for the Lasso problem. We see that, as in the case of Table 3, debiasing does not lead to improved performance in terms of prediction error.
We thus experimented with another variant of the debiased Lasso, where for every we computed the Lasso solution (2) and obtained by performing an unrestricted least squares fit on the support selected by the Lasso solution at . This method can be thought of as delivering feasible solutions for Problem (1), for a value of determined by the Lasso solution at . The success of this method makes a case in support of using criterion (1). The tuning parameter was then selected by minimizing the predictive performance on a held-out validation set. This method in general performed better than the Lasso, delivering a sparser model with better predictive accuracy. The performance of the debiased Lasso was similar to that of Sparsenet and was in general inferior to MIO by orders of magnitude, especially for problems where the pairwise correlations between the variables were large, the SNR was low, and . The results are presented in Tables 5 and 6 (for the case ) and Tables 7 and 8 (for the case ).
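The debiasing step described above, an unrestricted least squares fit on the support selected by the Lasso, can be sketched as follows. The function name is ours, and the Lasso fit that would produce `coef` at a given tuning parameter is assumed to happen elsewhere:

```python
import numpy as np

def refit_on_support(X, y, coef):
    """Debiased refit: unrestricted least squares on the support of coef.

    coef is a (possibly shrunken) sparse coefficient vector, e.g. a Lasso
    solution at a given tuning parameter; only its support is used here.
    """
    support = np.flatnonzero(coef)
    beta = np.zeros(X.shape[1])
    if support.size:
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta
```

Sweeping the tuning parameter and applying this refit at each value, then validating, gives the second variant described above.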
Debiasing at optimal Lasso model,
SNR | ρ | Ratio: Lasso / Debiased Lasso
---|---|---
6.33 | 0.5 | 0.33
3.17 | 0.5 | 0.54
1.58 | 0.5 | 0.53
6.97 | 0.8 | 0.67
3.48 | 0.8 | 0.64
1.74 | 0.8 | 0.63
8.73 | 0.9 | 1
4.37 | 0.9 | 0.58
2.18 | 0.9 | 0.61
Table 3: Lasso and debiased Lasso for the numerical experiments corresponding to Figure 4, Example 1 with and . Here, "Ratio" equals the ratio of the prediction errors of the Lasso and the debiased Lasso at the optimal tuning parameter selected by the Lasso.
Debiasing at optimal Lasso model,
SNR | ρ | Ratio: Lasso / Debiased Lasso
---|---|---
10 | 0.8 | 0.90
7 | 0.8 | 1.0
3 | 0.8 | 0.91
Table 4: Lasso and debiased Lasso for the numerical experiments corresponding to Figure 9, Example 1 with and . Here, "Ratio" equals the ratio of the prediction errors of the Lasso and the debiased Lasso at the optimal tuning parameter selected by the Lasso.
The performance of this model was comparable with that of Sparsenet; it was better than the Lasso in terms of obtaining a sparser model with better predictive accuracy. However, the performance of MIO was significantly better than the debiased version of the Lasso, especially for larger values of and smaller SNR values.
Sparsity of Selected Models,
SNR | ρ | Lasso | Debiased Lasso | MIO
---|---|---|---|---
6.33 | 0.5 | 27.6 (2.122) | 10.9 (0.65) | 10.8 (0.51)
3.17 | 0.5 | 27.7 (2.045) | 10.9 (0.65) | 10.1 (0.1)
1.58 | 0.5 | 28.0 (2.276) | 10.9 (0.65) | 10.2 (0.2)
6.97 | 0.8 | 34.1 (3.60) | 10.4 (0.15) | 10 (0.0)
3.48 | 0.8 | 34.0 (3.54) | 10.9 (0.55) | 10.2 (0.2)
1.74 | 0.8 | 33.7 (3.49) | 13.7 (1.50) | 10 (0.0)
8.73 | 0.9 | 25.9 (0.94) | 13.9 (0.68) | 10.5 (0.17)
4.37 | 0.9 | 34.6 (3.23) | 18.1 (1.30) | 10.2 (0.25)
2.18 | 0.9 | 34.7 (3.28) | 20.5 (1.85) | 10.1 (0.10)
Table 5: Number of nonzeros in the models selected by the Lasso, debiased Lasso, and MIO, for the numerical experiments corresponding to Figure 4, Example 1 with and . The tuning parameters of all three models were selected separately, based on the best predictive model on a validation set. Numbers within parentheses denote standard errors. The debiased Lasso leads to models that are less dense than the Lasso. When is small and the SNR is large, the debiased Lasso exhibits model sizes similar to MIO. However, for larger values of and smaller SNR values, subset selection leads to solutions that are sparser than the debiased Lasso by orders of magnitude.
Predictive Performance of Selected Models,
SNR | ρ | Lasso | Debiased Lasso | MIO | Ratio: Debiased Lasso / MIO
---|---|---|---|---|---
6.33 | 0.5 | 0.0384 (0.001) | 0.0255 (0.002) | 0.0266 (0.001) | 1.0
3.17 | 0.5 | 0.0768 (0.003) | 0.0511 (0.004) | 0.0478 (0.002) | 1.0
1.58 | 0.5 | 0.1540 (0.007) | 0.1021 (0.009) | 0.0901 (0.009) | 1.1
6.97 | 0.8 | 0.0389 (0.002) | 0.0223 (0.001) | 0.0231 (0.002) | 1.0
3.48 | 0.8 | 0.0778 (0.004) | 0.0464 (0.003) | 0.0484 (0.004) | 1.0
1.74 | 0.8 | 0.1557 (0.007) | 0.1156 (0.008) | 0.0795 (0.008) | 1.5
8.73 | 0.9 | 0.0325 (0.001) | 0.0220 (0.002) | 0.0197 (0.002) | 1.2
4.37 | 0.9 | 0.0632 (0.002) | 0.0532 (0.003) | 0.0427 (0.008) | 1.3
2.18 | 0.9 | 0.1265 (0.005) | 0.1254 (0.006) | 0.0703 (0.011) | 1.8
Table 6: Predictive performance on test data of the Lasso, debiased Lasso, and MIO, for the numerical experiments corresponding to Figure 4, Example 1 with and . Numbers within parentheses denote standard errors. The tuning parameters of all three models were selected separately, based on the best predictive model on a validation set. When is small and the SNR is large, the debiased Lasso performs similarly to MIO. However, for larger values of and smaller SNR values, subset selection outperforms the debiased Lasso-based solutions.
We then follow the method described above (for the case), where we consider a sequence of models and find the that delivers the best predictive model on a held-out validation set.
Sparsity of Selected Models,
SNR | ρ | Lasso | Debiased Lasso | MIO
---|---|---|---|---
10 | 0.8 | 25.7 (1.73) | 7.9 (0.43) | 5 (0.12)
7 | 0.8 | 27.8 (2.69) | 8.1 (0.43) | 5 (0.16)
3 | 0.8 | 28.0 (2.72) | 10.0 (0.88) | 6 (1.18)
Table 7: Number of nonzeros in the models selected by the Lasso, debiased Lasso, and MIO, for the numerical experiments corresponding to Figure 9, Example 1 with and . Numbers within parentheses denote standard errors. The tuning parameters of all three models were selected separately, based on the best predictive model on a validation set. The debiased Lasso leads to models that are less dense than the Lasso, but more dense than those of MIO. The performance gap between MIO and the debiased Lasso widens as the SNR decreases.
Predictive Performance of Selected Models,
SNR | ρ | Lasso | Debiased Lasso | MIO | Ratio: Debiased Lasso / MIO
---|---|---|---|---|---
10 | 0.8 | 0.084 (0.004) | 0.046 (0.003) | 0.014 (0.005) | 3.3
7 | 0.8 | 0.122 (0.005) | 0.070 (0.004) | 0.020 (0.007) | 3.5
3 | 0.8 | 0.257 (0.012) | 0.185 (0.016) | 0.151 (0.027) | 1.2
Table 8: Predictive performance of the Lasso, debiased Lasso, and MIO, for the numerical experiments corresponding to Figure 9, Example 1 with and . Numbers within parentheses denote standard errors. The tuning parameters of all three models were selected separately, based on the best predictive model on a validation set. MIO consistently delivers better predictive models than both the debiased Lasso and the plain Lasso; the debiased Lasso performs better than the plain Lasso.