[Paper Reading] Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
Reference:
https://blog.csdn.net/dcz1994/article/details/88837760
The conditional random field is modeled as a Gibbs distribution:
$$P(\mathbf{X} \mid \mathbf{I})=\frac{1}{Z(\mathbf{I})} \exp \left(-\sum_{c \in \mathcal{C}_{\mathcal{G}}} \phi_{c}\left(\mathbf{X}_{c} \mid \mathbf{I}\right)\right)$$
The label assignment is taken as the maximum a posteriori (MAP) configuration of the random field:
$$\mathbf{x}^{*}=\arg \max _{\mathbf{x} \in \mathcal{L}^{N}} P(\mathbf{x} \mid \mathbf{I})$$
The Gibbs energy of the whole random field is:
$$E(\mathbf{x})=\sum_{i} \psi_{u}\left(x_{i}\right)+\sum_{i<j} \psi_{p}\left(x_{i}, x_{j}\right)$$
where $\psi_{u}(x_i)$ and $\psi_{p}(x_i, x_j)$ are the unary and pairwise potentials, respectively.
The pairwise potential takes the form:
$$\psi_{p}\left(x_{i}, x_{j}\right)=\mu\left(x_{i}, x_{j}\right) \underbrace{\sum_{m=1}^{K} w^{(m)} k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)}_{k\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)}$$
This is the potential of a single pairwise clique in the probabilistic graphical model. Here $K$ is the number of Gaussian kernels in the linear combination, each $k^{(m)}$ is a Gaussian kernel over feature vectors $\mathbf{f}_i, \mathbf{f}_j$ with weight $w^{(m)}$, and $\mu(x_i, x_j)$ is the label compatibility function (in the simplest case the Potts model, $\mu(x_i, x_j) = [x_i \neq x_j]$).
For multi-class image segmentation the paper uses contrast-sensitive two-kernel potentials, where $I_i$ and $I_j$ are color vectors and $p_i$ and $p_j$ are pixel positions:
$$k\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)=\underbrace{w^{(1)} \exp \left(-\frac{\left|p_{i}-p_{j}\right|^{2}}{2 \theta_{\alpha}^{2}}-\frac{\left|I_{i}-I_{j}\right|^{2}}{2 \theta_{\beta}^{2}}\right)}_{\text{appearance kernel}}+\underbrace{w^{(2)} \exp \left(-\frac{\left|p_{i}-p_{j}\right|^{2}}{2 \theta_{\gamma}^{2}}\right)}_{\text{smoothness kernel}}$$
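As a concrete illustration, the two-kernel potential can be evaluated directly for a pair of pixels. A minimal NumPy sketch, where the weights `w1`, `w2` and the bandwidths `theta_alpha`, `theta_beta`, `theta_gamma` are illustrative placeholders rather than the paper's tuned values:

```python
import numpy as np

def dense_crf_kernel(p_i, p_j, I_i, I_j,
                     w1=1.0, w2=1.0,
                     theta_alpha=60.0, theta_beta=10.0, theta_gamma=3.0):
    """Contrast-sensitive two-kernel potential k(f_i, f_j).

    p_i, p_j: pixel positions; I_i, I_j: color vectors.
    The hyper-parameters are illustrative defaults, not tuned values.
    """
    d_pos = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2)
    d_col = np.sum((np.asarray(I_i, float) - np.asarray(I_j, float)) ** 2)
    appearance = w1 * np.exp(-d_pos / (2 * theta_alpha ** 2)
                             - d_col / (2 * theta_beta ** 2))
    smoothness = w2 * np.exp(-d_pos / (2 * theta_gamma ** 2))
    return appearance + smoothness
```

For identical pixels both exponents vanish and $k = w^{(1)} + w^{(2)}$; nearby pixels with similar colors get a large appearance term, which (together with a Potts $\mu$) encourages them to share a label.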
Efficient Inference in Fully Connected CRFs
The idea is to approximate the true distribution $P(X)$ by a simpler distribution $Q(X)$ that minimizes the KL divergence $D(Q \| P)$.
The derivation below follows the post FCN(5)——DenseCRF推导; I reproduce it here so it is easier to annotate.
The goal of the variational inference below is to find a fully factorized $Q(x)$ that approximates $P(x)$, reducing the model's complexity; the derivation shows the approximation must be computed by iterative updates. The CRF parameters $\theta$ and $w$ are learned separately with other algorithms.
We first write down the Gibbs distribution of the dense CRF, with $\tilde{P}(X)$ the unnormalized distribution:
$$P(X)=\frac{1}{Z} \tilde{P}(X)=\frac{1}{Z} \exp \left(-\sum_{i} \psi_{u}\left(x_{i}\right)-\sum_{i<j} \psi_{p}\left(x_{i}, x_{j}\right)\right)$$
$$D(Q \| P)=\sum_{x} Q(x) \log \left(\frac{Q(x)}{P(x)}\right)=-\sum_{x} Q(x) \log P(x)+\sum_{x} Q(x) \log Q(x)$$
$$=-E_{X \sim Q}[\log P(X)]+E_{X \sim Q}[\log Q(X)]$$
$$=-E_{X \sim Q}[\log \tilde{P}(X)]+E_{X \sim Q}[\log Z]+\sum_{i} E_{x_{i} \sim Q_{i}}\left[\log Q_{i}\left(x_{i}\right)\right]$$
$$=-E_{X \sim Q}[\log \tilde{P}(X)]+\log Z+\sum_{i} E_{x_{i} \sim Q_{i}}\left[\log Q_{i}\left(x_{i}\right)\right]$$
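This decomposition is easy to verify numerically on a toy model small enough to enumerate. The setup below (random energies over three binary variables, a fully factorized $Q$) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 3 variables with 2 labels each, so the joint state space
# has 2^3 = 8 configurations and all sums can be enumerated exactly.
n_vars, n_labels = 3, 2
energy = rng.normal(size=(n_labels,) * n_vars)  # E(x) for every joint state
P_tilde = np.exp(-energy)                       # unnormalized P~(x) = exp(-E(x))
Z = P_tilde.sum()
P = P_tilde / Z

# Fully factorized Q(X) = prod_i Q_i(x_i)
Q_marg = rng.random((n_vars, n_labels))
Q_marg /= Q_marg.sum(axis=1, keepdims=True)
Q = np.einsum('a,b,c->abc', Q_marg[0], Q_marg[1], Q_marg[2])

# D(Q||P) computed directly vs. via the decomposition
kl_direct = np.sum(Q * np.log(Q / P))
kl_decomposed = (-np.sum(Q * np.log(P_tilde)) + np.log(Z)
                 + np.sum(Q * np.log(Q)))

# Because Q factorizes, its entropy term splits into per-variable sums
entropy_joint = np.sum(Q * np.log(Q))
entropy_factored = np.sum(Q_marg * np.log(Q_marg))
```

Both KL values agree, and the joint $\sum_x Q \log Q$ equals the factored $\sum_i \sum_{x_i} Q_i \log Q_i$, which is exactly the step that splits the entropy term into per-variable expectations.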
Since we are solving for $Q$ and the $\log Z$ term does not depend on $Q$, it can be dropped.
$Q_i(x_i)$ is the probability, given the current input, that variable $i$ takes the label $x_i$.
In addition, each $Q_i$ must satisfy probability normalization:
$$\sum_{x_{i}} Q_{i}\left(x_{i}\right)=1$$
Enforcing this constraint with a Lagrange multiplier gives:
$$L\left(Q_{i}\right)=-E_{X \sim Q}[\log \tilde{P}(X)]+\sum_{i} E_{x_{i} \sim Q_{i}}\left[\log Q_{i}\left(x_{i}\right)\right]+\lambda\left(\sum_{x_{i}} Q_{i}\left(x_{i}\right)-1\right)$$
The last two terms of this expression are straightforward, but the first one is more involved, so we handle it separately:
This is the term that appeared earlier as $-\sum_{x} Q(x) \log \tilde{P}(x)$.
$$-E_{X \sim Q}[\log \tilde{P}(X)]=-\int \prod_{i} Q_{i}\left(x_{i}\right)\left[\log \tilde{P}(X)\right] d X$$
$$=-\int Q_{i}\left(x_{i}\right) \prod_{j \neq i} Q_{j}\left(x_{j}\right)\left[\log \tilde{P}(X)\right] d x_{i}\, d \overline{X}$$
$$=-\int Q_{i}\left(x_{i}\right) E_{\overline{X} \sim Q}\left[\log \tilde{P}\left(X \mid x_{i}\right)\right] d x_{i}$$
With the expression rearranged as above, we take the partial derivative with respect to $Q_i(x_i)$ (using $\partial[Q \log Q] / \partial Q = \log Q + 1$):
$$\frac{\partial L\left(Q_{i}\right)}{\partial Q_{i}\left(x_{i}\right)}=-E_{\overline{X} \sim Q}\left[\log \tilde{P}\left(X \mid x_{i}\right)\right]+\log Q_{i}\left(x_{i}\right)+1+\lambda$$
Setting the derivative to zero yields the extremum:
$$Q_{i}\left(x_{i}\right)=\exp (-1-\lambda) \exp \left(E_{\overline{X} \sim Q}\left[\log \tilde{P}\left(X \mid x_{i}\right)\right]\right)$$
Since $\exp(-1-\lambda)$ is the same constant for every value of $x_i$, it cancels when we renormalize, so the $Q$ function becomes:
$$Q_{i}\left(x_{i}\right)=\frac{1}{Z_{i}} \exp \left(E_{\overline{X} \sim Q}\left[\log \tilde{P}\left(X \mid x_{i}\right)\right]\right)$$
Substituting the definition of $\tilde{P}$ from the beginning (terms that do not involve $x_i$ are constants and get absorbed into the normalizer $Z_i$), we obtain:
$$Q_{i}\left(x_{i}\right)=\frac{1}{Z_{i}} \exp \left(-E_{\overline{X} \sim Q}\left[\psi_{u}\left(x_{i}\right)+\sum_{j \neq i} \psi_{p}\left(x_{i}, x_{j}\right) \,\middle|\, x_{i}\right]\right)$$
Since $x_i$ is fixed inside the conditional expectation, this yields the result from the paper's supplementary material (up to variable naming):
$$Q_{i}\left(x_{i}=l\right)=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{j \neq i} E_{X_{j} \sim Q_{j}}\left[\psi_{p}\left(l, X_{j}\right)\right]\right]$$
Expanding the pairwise potential with its definition:
$$=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} E_{X_{j} \sim Q_{j}}\left[\mu\left(l, X_{j}\right)\right] k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)\right]$$
$$=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} \sum_{l^{\prime} \in \mathcal{L}} Q_{j}\left(l^{\prime}\right) \mu\left(l, l^{\prime}\right) k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)\right]$$
$$=\frac{1}{Z_{i}} \exp \left[-\psi_{u}(l)-\sum_{l^{\prime} \in \mathcal{L}} \mu\left(l, l^{\prime}\right) \sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} Q_{j}\left(l^{\prime}\right) k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right)\right]$$
This completes a message-passing-style derivation. The innermost sum over $j$ is a Gaussian convolution in feature space, which can be evaluated efficiently with approximate (truncated) high-dimensional Gaussian filtering. Carrying the derivation one step further:
$$\tilde{Q}_{i}^{(m)}(l)=\sum_{j \neq i} k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right) Q_{j}(l)=\sum_{j} k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right) Q_{j}(l)-Q_{i}(l)$$

where the second equality uses $k^{(m)}(\mathbf{f}_i, \mathbf{f}_i) = 1$.
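In code, this "full sum minus self term" trick works precisely because a Gaussian kernel satisfies $k^{(m)}(\mathbf{f}_i, \mathbf{f}_i) = \exp(0) = 1$. A naive sketch with an explicit $N \times N$ kernel matrix follows (the paper replaces this $O(N^2)$ product with fast approximate high-dimensional filtering); all names and sizes are illustrative:

```python
import numpy as np

def message_passing(Q, K):
    """Q_tilde[i, l] = sum_{j != i} K[i, j] * Q[j, l].

    Q: (N, L) current marginals; K: (N, N) Gaussian kernel matrix.
    Subtracting Q removes the self term from the full matrix product,
    which is valid because K[i, i] = exp(0) = 1.
    """
    return K @ Q - Q

# Brute-force check on a small random example
rng = np.random.default_rng(1)
N, L = 5, 3
feats = rng.random((N, 2))                      # toy feature vectors f_i
K = np.exp(-np.sum((feats[:, None] - feats[None, :]) ** 2, axis=-1))
Q = rng.random((N, L))
Q /= Q.sum(axis=1, keepdims=True)

reference = np.array([[sum(K[i, j] * Q[j, l] for j in range(N) if j != i)
                       for l in range(L)] for i in range(N)])
```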
Substituting back, the final iterative update is:
$$Q_{i}\left(x_{i}=l\right)=\frac{1}{Z_{i}} \exp \left\{-\psi_{u}(l)-\sum_{l^{\prime} \in \mathcal{L}} \mu\left(l, l^{\prime}\right) \sum_{m=1}^{K} w^{(m)} \sum_{j \neq i} k^{(m)}\left(\mathbf{f}_{i}, \mathbf{f}_{j}\right) Q_{j}\left(l^{\prime}\right)\right\}$$
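Putting the update together, the whole mean-field loop (initialize $Q$ with a softmax of the negated unaries, then iterate message passing, compatibility transform, local update, and renormalization) can be sketched in brute-force $O(N^2)$ form. This is an illustrative toy with explicit kernel matrices standing in for the paper's efficient Gaussian filtering, and every parameter value is a placeholder:

```python
import numpy as np

def mean_field(unary, K_list, w, mu, n_iters=10):
    """Naive dense-CRF mean-field inference (illustrative sketch).

    unary : (N, L) unary energies psi_u(x_i = l)
    K_list: list of (N, N) kernel matrices k^{(m)} with unit diagonal
    w     : list of kernel weights w^{(m)}
    mu    : (L, L) label compatibility, e.g. Potts mu[l, l'] = (l != l')
    """
    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)            # init: softmax of -unary
    for _ in range(n_iters):
        # message passing per kernel; K @ Q - Q drops the j = i term
        msg = sum(w_m * (K @ Q - Q) for w_m, K in zip(w, K_list))
        pairwise = msg @ mu.T                    # compatibility transform
        Q = np.exp(-unary - pairwise)            # local update
        Q /= Q.sum(axis=1, keepdims=True)        # renormalize (the 1/Z_i)
    return Q
```

With Potts compatibility (`mu = 1 - np.eye(L)`), pixels that are close in feature space pull each other toward the same label; with `mu` all zeros the pairwise term vanishes and the loop simply returns the softmax of the negated unaries.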