
Pose Estimation: Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations



Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations


Xianjie Chen and Alan Yuille




Abstract

We present a method for estimating articulated human pose from a single static image, based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms state-of-the-art methods on the LSP and FLIC datasets, and also performs very well on the Buffy dataset without any training on it.



Poster



PDF



Results & Evaluation Code



Full Code



Trained Model

@InProceedings{Chen_NIPS14,
title        = {Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations},
author       = {Xianjie Chen and Alan Yuille},
booktitle    = {Advances in Neural Information Processing Systems (NIPS)},
year         = {2014},
}



Key Ideas

1. Intuition: We can reliably predict the relative positions of a part's neighbors (as well as the presence of the part itself) by observing only the local image patch around it.


2. A Deep Convolutional Neural Network is well suited to extracting information about pairwise part relations, as well as part presence, from local image patches; this information supplies both the unary and the pairwise terms of the graphical model.
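Once the DCNN has produced unary (part presence) and pairwise (relative position) scores, the pose is found by exact max-sum inference on the tree-structured graphical model. The sketch below is not the authors' released code; it is a minimal illustration of tree Viterbi inference where the scores are handed in as plain arrays (in the paper the pairwise term is image dependent, mixing over relation types predicted from the local patch):

```python
import numpy as np

def infer_pose(unary, pairwise, children, root):
    """Max-sum (Viterbi) inference on a tree-structured pose model.

    unary:    dict part -> (K,) array, log-score of placing the part at
              each of K candidate locations.
    pairwise: dict (parent, child) -> (K, K) array, log-score of each
              (parent_loc, child_loc) pair.
    children: dict part -> list of child parts (tree rooted at `root`).
    Returns the best location index for every part.
    """
    msgs, argmax = {}, {}

    def up(p):
        score = unary[p].astype(float).copy()
        for c in children.get(p, []):
            up(c)
            # Message from child c: best child placement per parent location.
            table = pairwise[(p, c)] + msgs[c][None, :]   # (K, K)
            argmax[(p, c)] = table.argmax(axis=1)
            score += table.max(axis=1)
        msgs[p] = score

    up(root)
    best = {root: int(msgs[root].argmax())}

    def down(p):
        for c in children.get(p, []):
            best[c] = int(argmax[(p, c)][best[p]])
            down(c)

    down(root)
    return best

# Toy two-part example: 3 candidate locations per part.
unary = {'torso': np.array([0., 1., 0.]), 'head': np.array([0., 0., 2.])}
pairwise = {('torso', 'head'): np.array([[0., 0., 5.],
                                         [0., 0., 0.],
                                         [0., 0., 0.]])}
best = infer_pose(unary, pairwise, {'torso': ['head']}, 'torso')
# -> {'torso': 0, 'head': 2}: the strong pairwise term outweighs torso's unary.
```

Because the model is a tree, this dynamic program is exact and runs in O(parts x K^2) time, which is what makes combining DCNN scores with a graphical model tractable.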



Estimation Examples




Performance

Comparison of strict PCP results on the Leeds Sport Pose (LSP) Dataset using Observer-Centric (OC) annotations. Numbers are from the corresponding papers or errata.

| Method | Torso | Head | Upper Arms | Lower Arms | Upper Legs | Lower Legs | Mean |
|---|---|---|---|---|---|---|---|
| Ours | 92.7 | 87.8 | 69.2 | 55.4 | 82.9 | 77.0 | 75.0 |
| Pishchulin et al., ICCV'13 | 88.7 | 85.6 | 61.5 | 44.9 | 78.8 | 73.4 | 69.2 |
| Ouyang et al., CVPR'14 | 85.8 | 83.1 | 63.3 | 46.6 | 76.5 | 72.2 | 68.6 |
| Ramakrishna et al., ECCV'14 | 88.1 | 80.9 | 62.3 | 39.1 | 78.9 | 73.4 | 67.6 |
| Eichner & Ferrari, ACCV'12 | 86.2 | 80.1 | 56.5 | 37.4 | 74.3 | 69.3 | 64.3 |
| Pishchulin et al., CVPR'13 | 87.5 | 78.1 | 54.2 | 33.9 | 75.7 | 68.0 | 62.9 |
| Yang & Ramanan, CVPR'11 | 84.1 | 77.1 | 52.5 | 35.9 | 69.5 | 65.6 | 60.8 |
| Kiefel & Gehler, ECCV'14 | 84.4 | 78.4 | 53.3 | 27.4 | 74.4 | 67.1 | 60.7 |
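The tables report strict PCP (Percentage of Correct Parts): a limb counts as correct only if both of its predicted endpoints lie within half the ground-truth limb length of their true positions (the looser variant averages the two endpoint errors instead). A minimal sketch of the per-limb check, not taken from the released evaluation code:

```python
import numpy as np

def strict_pcp(pred, gt, thresh=0.5):
    """Strict PCP for a single limb.

    pred, gt: (2, 2) arrays of limb endpoints [[x1, y1], [x2, y2]].
    Correct only if BOTH predicted endpoints are within
    `thresh` * ground-truth limb length of their endpoints.
    """
    limb_len = np.linalg.norm(gt[0] - gt[1])
    d = np.linalg.norm(pred - gt, axis=1)   # per-endpoint error
    return bool(np.all(d <= thresh * limb_len))

gt = np.array([[0., 0.], [10., 0.]])        # limb length 10, tolerance 5
strict_pcp(np.array([[1., 0.], [10., 3.]]), gt)   # errors 1 and 3 -> True
strict_pcp(np.array([[6., 0.], [10., 0.]]), gt)   # error 6 > 5    -> False
```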
Comparison of strict PCP results on the Leeds Sport Pose (LSP) Dataset using Person-Centric (PC) annotations. Note that both our method and Tompson et al., NIPS'14 (marked *) include the Extended Leeds Sport Pose (ex_LSP) Dataset as training data. Numbers are from the performance evaluation by Pishchulin et al.

| Method | Torso | Head | Upper Arms | Lower Arms | Upper Legs | Lower Legs | Mean |
|---|---|---|---|---|---|---|---|
| Ours* | 96.0 | 85.6 | 69.7 | 58.1 | 77.2 | 72.2 | 73.6 |
| Tompson et al., NIPS'14* | 90.3 | 83.7 | 63.0 | 51.2 | 70.4 | 61.1 | 66.6 |
| Pishchulin et al., ICCV'13 | 88.7 | 85.1 | 46.0 | 35.2 | 63.6 | 58.4 | 58.0 |
| Wang & Li, CVPR'13 | 87.5 | 79.1 | 43.1 | 32.1 | 56.0 | 55.8 | 54.1 |
Comparison of strict PCP results on the Frames Labeled In Cinema (FLIC) Dataset using Observer-Centric (OC) annotations. Numbers are from our evaluation using the prediction results released by the authors.

| Method | Upper Arms | Lower Arms | Mean |
|---|---|---|---|
| Ours | 97.0 | 86.8 | 91.9 |
| Tompson et al., NIPS'14 | 93.7 | 80.9 | 87.3 |
| MODEC, CVPR'13 | 84.4 | 52.1 | 68.3 |
Comparison of PDJ curves of elbows and wrists on the Frames Labeled In Cinema (FLIC) Dataset using Observer-Centric (OC) annotations. The curves compare our method with Tompson et al., NIPS'14, DeepPose, CVPR'14, and MODEC, CVPR'13.


Figure Data: flic_elbows.fig | flic_wrists.fig
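A PDJ (Percentage of Detected Joints) curve counts a joint as detected when its error, normalized by the torso diagonal, falls below a threshold, then sweeps that threshold. A minimal sketch of how such a curve is computed (not the released evaluation code):

```python
import numpy as np

def pdj_curve(pred, gt, torso_diag, thresholds):
    """Detection rate of one joint over N images, per threshold.

    pred, gt:   (N, 2) predicted / ground-truth joint positions.
    torso_diag: (N,) torso diagonal per image, the usual PDJ normalizer.
    thresholds: normalized-distance cutoffs (e.g. np.linspace(0, 0.5, 50)).
    """
    d = np.linalg.norm(pred - gt, axis=1) / torso_diag
    return [float(np.mean(d <= t)) for t in thresholds]

# Two images: normalized errors 0.1 and 0.5.
pdj_curve(np.zeros((2, 2)),
          np.array([[1., 0.], [5., 0.]]),
          np.array([10., 10.]),
          [0.1, 0.5])
# -> [0.5, 1.0]
```

Plotting the rate against the threshold yields exactly the curves shown in the figure data above.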

Cross-dataset PCP results on the Buffy Stickmen Dataset using Observer-Centric (OC) annotations. Numbers are from the corresponding papers.

| Method | Upper Arms | Lower Arms | Mean |
|---|---|---|---|
| Ours* | 96.8 | 89.0 | 92.9 |
| Ours* strict | 94.5 | 84.1 | 89.3 |
| Yang, PAMI'13 | 97.8 | 68.6 | 83.2 |
| Yang, PAMI'13 strict | 94.3 | 57.5 | 75.9 |
| Sapp, ECCV'10 | 95.3 | 63.0 | 79.2 |
| FLPM, ECCV'12 | 93.2 | 60.6 | 76.9 |
| Eichner, IJCV'12 | 93.2 | 60.3 | 76.8 |
Cross-dataset PDJ curves of elbows and wrists on the Buffy Stickmen Dataset using Observer-Centric (OC) annotations. Note that both our method and DeepPose are trained on the FLIC dataset. Compared with the curves on the FLIC dataset, the margin between our method and DeepPose increases significantly, which implies that our model generalizes better.


Figure Data: cross_dataset_buffy_elbows.fig | cross_dataset_buffy_wrists.fig


Related Pages


Nice Performance Evaluation by Pishchulin et al.


Buffy Stickmen Dataset (Buffy)


Leeds Sports Pose Dataset (LSP)


Extended Leeds Sports Pose Dataset (ex_LSP)


Frames Labeled In Cinema Dataset (FLIC)

from: http://www.stat.ucla.edu/~xianjie.chen/projects/pose_estimation/pose_estimation.html