High-Dimensional Statistics Study Notes 2: The Square Root Lasso
Main reference: Sara van de Geer, *Estimation and Testing Under Sparsity*.
Study notes 1: https://blog.csdn.net/qq_37353305/article/details/89003023
Preface
I originally wanted to go straight to the topic of inference via the LASSO, but the Square Root Lasso will be needed later, and it has its own rationale and advantages; it also gives us a chance to practice the Oracle property from the previous note. (Below, Square Root Lasso is abbreviated SR LASSO.)
SR LASSO
Motivation: In the previous note on the LASSO we took $\epsilon\sim\mathcal{N}_n(0,\sigma^2 I)$ with $\sigma$ known, so we could choose a suitable tuning parameter $\lambda$ such that $\|X^T\epsilon/n\|_\infty<\lambda_0$ holds with high probability. What if $\sigma$ is unknown? To get around this problem, Belloni et al. (2011) proposed the SR LASSO.
Belloni's idea: since $\sigma$ is unknown, estimate it jointly with $\beta$:

$$\left(\hat{\beta}, \hat{\sigma}\right)=\arg\min_{\beta\in\mathbb{R}^{p},\,\sigma>0}\left\{\frac{\|Y-X\beta\|_{n}^{2}}{\sigma}+\sigma+2\lambda\|\beta\|_{1}\right\}.\tag{1}$$

Setting the derivative of the objective with respect to $\sigma$ to zero gives $\hat{\sigma}^2=\|Y-X\hat{\beta}\|_n^2=\|\hat{\epsilon}\|_n^2$; substituting this back into (1) immediately yields the SR LASSO form
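The profiling over $\sigma$ is elementary calculus: for fixed $\beta$, $g(\sigma)=a/\sigma+\sigma$ with $a=\|Y-X\beta\|_n^2$ has derivative $-a/\sigma^2+1$, so the minimizer is $\sigma=\sqrt{a}$ with minimum value $2\sqrt{a}$. A quick numerical sanity check (the value of `a` here is arbitrary):

```python
import numpy as np

a = 2.7                           # stands in for ||Y - X beta||_n^2 at a fixed beta
sigma = np.linspace(0.05, 5.0, 200001)
objective = a / sigma + sigma     # the sigma-dependent part of (1)
minimizer = sigma[np.argmin(objective)]
# the grid minimizer should sit at sqrt(a), with minimum value 2*sqrt(a)
```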
$$\hat{\beta}:=\arg\min_{\beta\in\mathbb{R}^{p}}\left\{\|Y-X\beta\|_{n}+\lambda\|\beta\|_{1}\right\}.$$

We now study its Oracle property by the same method as before. By the KKT conditions,
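To make the estimator concrete, here is a minimal numerical sketch, not Belloni's original conic-programming formulation: it exploits the equivalence with (1) by alternating the update $\hat{\sigma}=\|Y-X\beta\|_n$ with an ordinary lasso at penalty level $\lambda\hat{\sigma}$, the latter solved by coordinate descent. Function and variable names here are my own.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sqrt_lasso(X, Y, lam, n_outer=30, n_inner=100, tol=1e-6):
    """Sketch of the SR LASSO via the joint formulation (1):
    alternate sigma = ||Y - X beta||_n with a lasso at penalty lam * sigma."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n          # ||X_j||_n^2 for each column
    sigma = np.linalg.norm(Y) / np.sqrt(n)     # sigma-hat at beta = 0
    for _ in range(n_outer):
        # inner loop: coordinate-descent lasso with penalty lam * sigma
        for _ in range(n_inner):
            beta_old = beta.copy()
            for j in range(p):
                r_j = Y - X @ beta + X[:, j] * beta[j]   # partial residual
                z = X[:, j] @ r_j / n
                beta[j] = soft_threshold(z, lam * sigma) / col_ss[j]
            if np.max(np.abs(beta - beta_old)) < tol:
                break
        sigma_new = np.linalg.norm(Y - X @ beta) / np.sqrt(n)
        if abs(sigma_new - sigma) < tol * sigma:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma
```

Each coordinate update soft-thresholds the partial-residual correlation; the outer loop typically stabilizes within a few iterations because $\hat{\sigma}$ enters the penalty only multiplicatively.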
$$\frac{X^{T}(Y-X\hat{\beta})/n}{\|Y-X\hat{\beta}\|_{n}}=\lambda\hat{z},$$
where $\hat{z}^T\hat{\beta}=\|\hat{\beta}\|_1$ and $\|\hat{z}\|_\infty\leq1$. Taking the inner product of both sides with $(\beta-\hat{\beta})$ and doing a little algebra gives
$$\|X(\beta_0-\hat{\beta})\|_n^2\leq\|X(\beta-\beta_0)\|_n^2+\frac{(\hat{\beta}-\beta)^TX^T\epsilon}{n}+\lambda\|\hat{\epsilon}\|_n\left(\|\beta\|_1-\|\hat{\beta}\|_1\right),$$

and the last two terms on the right can be rewritten as
$$\|\epsilon\|_n\left(\frac{(\hat{\beta}-\beta)^TX^T\epsilon}{n\|\epsilon\|_n}+\lambda\frac{\|\hat{\epsilon}\|_n}{\|\epsilon\|_n}\left(\|\beta\|_1-\|\hat{\beta}\|_1\right)\right).$$

The point of this rewriting is that when we bound $\left\|\frac{X^T\epsilon}{n\|\epsilon\|_n}\right\|_\infty$, the distribution of $\epsilon/\|\epsilon\|_n$ is free of $\sigma$ (write $\epsilon=\sigma u$ with $u\sim\mathcal{N}_n(0,I)$; then $\epsilon/\|\epsilon\|_n=u/\|u\|_n$), which is exactly what we wanted. Following the steps of the previous note, construct $\mathcal{F}=\left\{\left\|\frac{X^T\epsilon}{n\|\epsilon\|_n}\right\|_\infty\leq\lambda_0\right\}$; since $\lambda_0$ can now be chosen without reference to $\sigma$, we still get $\mathbb{P}[\mathcal{F}]\approx1$, and on $\mathcal{F}$ we have

$$\|X(\beta_0-\hat{\beta})\|_n^2\leq\|X(\beta-\beta_0)\|_n^2+\lambda_0\|\epsilon\|_n\|\beta-\hat{\beta}\|_1+\lambda\|\hat{\epsilon}\|_n\left(\|\beta\|_1-\|\hat{\beta}\|_1\right).$$
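That the self-normalized quantity is pivotal in $\sigma$ can be checked by simulation; this sketch (variable names mine) compares the 95% quantile of $\|X^T\epsilon/(n\|\epsilon\|_n)\|_\infty$ under two very different noise scales:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 200, 50, 4000
X = rng.standard_normal((n, p))

def sup_norm_stat(sigma):
    """Monte Carlo draws of ||X^T eps / (n ||eps||_n)||_inf with eps ~ N(0, sigma^2 I)."""
    eps = sigma * rng.standard_normal((reps, n))
    eps_n = np.linalg.norm(eps, axis=1) / np.sqrt(n)   # ||eps||_n per draw
    return np.abs(eps @ X / n).max(axis=1) / eps_n

q_small = np.quantile(sup_norm_stat(1.0), 0.95)
q_large = np.quantile(sup_norm_stat(100.0), 0.95)
# the two quantiles agree up to Monte Carlo error: the statistic is sigma-free,
# so lambda_0 can be calibrated without knowing sigma
```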
We now use a few tricks to obtain a general Oracle inequality. For convenience, write $\lambda_0^*=\lambda_0\|\epsilon\|_n$ and $\lambda^*=\lambda\|\hat{\epsilon}\|_n$, so that

$$\|X(\beta_0-\hat{\beta})\|_{n}^{2}\leq\|X(\beta-\beta_0)\|_{n}^{2}+\lambda_{0}^{*}\|\beta-\hat{\beta}\|_{1}+\lambda^{*}\|\beta\|_{1}-\lambda^{*}\|\hat{\beta}\|_{1}.$$

The last three terms on the right are at most
$$(\lambda_{0}^{*}+\lambda^{*})\|\beta_{S}-\hat{\beta}_{S}\|_{1}+\lambda_{0}^{*}\|\hat{\beta}_{-S}-\beta_{-S}\|_{1}+2\lambda^{*}\|\beta_{-S}\|_{1}-\lambda^{*}\|\hat{\beta}_{-S}-\beta_{-S}\|_{1},$$

using $\|\beta_S\|_1-\|\hat{\beta}_S\|_1\leq\|\beta_S-\hat{\beta}_S\|_1$ and $\|\beta_{-S}\|_1-\|\hat{\beta}_{-S}\|_1\leq2\|\beta_{-S}\|_1-\|\hat{\beta}_{-S}-\beta_{-S}\|_1$. Rearranging, we get
$$\|X(\beta_{0}-\hat{\beta})\|_{n}^{2}+(\lambda^*-\lambda_0^*)\|\hat{\beta}_{-S}-\beta_{-S}\|_1\leq\|X(\beta_{0}-\beta)\|_{n}^{2}+(\lambda^*+\lambda_0^*)\|\hat{\beta}_{S}-\beta_{S}\|_1+2\lambda^*\|\beta_{-S}\|_1.$$

Let $\lambda_\delta^*=\lambda_U^*+\delta\lambda_L^*$, where $\lambda_U^*=\lambda^*+\lambda_0^*$, $\lambda_L^*=\lambda^*-\lambda_0^*$ and $0<\delta<1$; then
$$\|X(\beta_{0}-\hat{\beta})\|_{n}^{2}+\lambda_L^*\|\hat{\beta}_{-S}-\beta_{-S}\|_1+\delta\lambda_L^*\|\hat{\beta}_{S}-\beta_{S}\|_1\leq\|X(\beta_{0}-\beta)\|_{n}^{2}+\lambda_\delta^*\|\hat{\beta}_{S}-\beta_{S}\|_1+2\lambda^*\|\beta_{-S}\|_1.\tag{2}$$

Of course (think about why!) we only need to consider the case

$$\|X(\beta_{0}-\hat{\beta})\|_{n}^{2}+\delta\lambda_L^*\|\hat{\beta}-\beta\|_1\geq\|X(\beta_{0}-\beta)\|_{n}^{2}+2\lambda^*\|\beta_{-S}\|_1,$$

in which case it follows naturally that
$$(1-\delta)\lambda_L^*\|\hat{\beta}_{-S}-\beta_{-S}\|_1\leq\lambda_\delta^*\|\hat{\beta}_{S}-\beta_{S}\|_1.$$

Let $L=\frac{\lambda_\delta^*}{(1-\delta)\lambda_L^*}$; then the compatibility condition gives
$$\|\beta_S-\hat{\beta}_S\|_1\leq\frac{\sqrt{|S|}\,\|X\hat{\beta}-X\beta\|_n}{\hat{\phi}(L,S)}.\tag{3}$$

By an elementary inequality, combined with (2) and a little computation, we finally obtain

$$\begin{array}{l}{2\delta\lambda^*_{\mathrm{L}}\left\|\hat{\beta}-\beta_{0}\right\|_{1}+\left\|X(\hat{\beta}-\beta_{0})\right\|_{n}^{2}}\\ {\quad\leq\min_{S\subset\{1,\ldots,p\}}\min_{\beta\in\mathbb{R}^{p}}\left\{2\delta\lambda^*_{\mathrm{L}}\left\|\beta-\beta_{0}\right\|_{1}+\left\|X(\beta-\beta_{0})\right\|_{n}^{2}\right.}\\ {\qquad\left.+\frac{(\lambda^*_{\delta})^{2}|S|}{\hat{\phi}^{2}(L,S)}+4\lambda_{0}\|\hat{\epsilon}\|_{n}\left\|\beta_{-S}\right\|_{1}\right\}.}\end{array}$$

Note that here we clearly require $\lambda^*>\lambda_0^*$, i.e., $\frac{\lambda}{\lambda_0}>\frac{\|\epsilon\|_n}{\|\hat{\epsilon}\|_n}$, and also $\|\hat{\epsilon}\|_n\neq0$, i.e., $\|Y-X\hat{\beta}\|_n\neq0$; in other words, we assume there is no overfitting. In fact, we could instead assume $\left|\frac{\|\hat{\epsilon}\|_n}{\|\epsilon\|_n}-1\right|\leq\eta$ for some $0<\eta<1$; with a slight modification of the setup this yields a more general Oracle inequality (try it yourself, it is not shown here).
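One way the omitted computation can go, sketched here under the usual compatibility argument: combine (3) with the elementary inequality $uv\leq u^2+\frac{1}{4}v^2$,

$$\lambda_\delta^*\|\hat{\beta}_S-\beta_S\|_1\leq\frac{\lambda_\delta^*\sqrt{|S|}}{\hat{\phi}(L,S)}\,\|X(\hat{\beta}-\beta)\|_n\leq\frac{(\lambda_\delta^*)^2|S|}{\hat{\phi}^2(L,S)}+\frac{1}{4}\|X(\hat{\beta}-\beta)\|_n^2,$$

then absorb the quadratic term via $\|X(\hat{\beta}-\beta)\|_n^2\leq2\|X(\hat{\beta}-\beta_0)\|_n^2+2\|X(\beta-\beta_0)\|_n^2$ into the two squared terms of (2).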
Multivariate SR LASSO
Notation
$$\|A\|_{1}:=\sum_{j,k}|A_{j,k}|,\qquad\|A\|_{\text{nuclear}}:=\operatorname{trace}\left((A^{T}A)^{1/2}\right).$$

For the multivariate case, i.e., when the response $Y$ is a matrix, the multivariate SR LASSO is defined by
$$\hat{B}:=\arg\min_{B}\left\{\|Y-XB\|_{\text{nuclear}}/\sqrt{n}+\lambda\|B\|_{1}\right\},$$

or equivalently,

$$(\hat{B},\hat{\Sigma})=\arg\min_{B,\,\Sigma>0}\left\{\operatorname{trace}\left((Y-XB)^{T}(Y-XB)\Sigma^{-1/2}\right)/n+\operatorname{trace}\left(\Sigma^{1/2}\right)+2\lambda\|B\|_{1}\right\}.$$

The multivariate SR LASSO is preparation for the nodewise regression later on, with which we will do inference via the LASSO in high dimensions.
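As a quick check of the notation, $\operatorname{trace}((A^TA)^{1/2})$ is exactly the sum of the singular values of $A$; a small numerical verification (array sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

# trace((A^T A)^{1/2}) via the eigenvalues of the PSD matrix A^T A
eigvals = np.linalg.eigvalsh(A.T @ A)
nuclear_via_trace = np.sqrt(np.clip(eigvals, 0.0, None)).sum()

# the same quantity as the sum of singular values of A
singular_values = np.linalg.svd(A, compute_uv=False)
nuclear_via_svd = singular_values.sum()
```

The profiling over $\Sigma$ here plays the same role as the profiling over $\sigma$ in the univariate case.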
References
Sara van de Geer, Estimation and Testing Under Sparsity, Springer, 2016.
A. Belloni, V. Chernozhukov and L. Wang, Square-root lasso: pivotal recovery of sparse signals via conic programming, Biometrika 98(4), 791-806, 2011.