[ICLR2017]Deep Biaffine Attention for Neural Dependency Parsing
2017-09-07 09:23
Dependency parsing currently has two main approaches: the transition-based approach and the graph-based approach. For each, the paper lays out the concrete steps for parsing a sentence into a dependency tree; this paper's method uses the graph-based framework.
The framework diagram from the paper:
![](https://oscdn.geek-share.com/Uploads/Images/Content/201709/6404d5388c321b254aacd8a948aeacd2)
Graph-based approach: parse the sentence from left to right; for each word in the sentence, find that word's head word (i.e., the arc from the word to its head) and the dependency relation type between the two.
Suppose the sentence length is L+1. Then each word has L possible head words (every word other than the word itself can be its head), so the number of classes in the per-word head classification differs from sentence to sentence.
For example:
If sentence 1 has length 10 and sentence 2 has length 7, then word i in sentence 1 has 9 candidate head words, i.e., 9 classes, while word j in sentence 2 has 6 candidate head words.
![](https://oscdn.geek-share.com/Uploads/Images/Content/201709/436e06f62dcaac7d25776e732f4f28ce)
![](https://oscdn.geek-share.com/Uploads/Images/Content/201709/e5350b08b4f7341139623bf6148daf8c)
Here Equation (6) is obtained by applying Equation (2).
For each arc, the number of label classes is the number of dependency labels, which is fixed:
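As a rough illustration of the arc-scoring step, the biaffine attention assigns each (dependent, candidate head) pair a score from a bilinear term plus a head-prior term. The following is a minimal NumPy sketch, not the paper's exact implementation; the variable names, toy dimensions, and random inputs are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 5, 4                            # toy: 5 candidate words, hidden size 4
H_dep  = rng.standard_normal((L, d))   # dependent representations (from MLP over LSTM states)
H_head = rng.standard_normal((L, d))   # head representations
U = rng.standard_normal((d, d))        # biaffine weight matrix
u = rng.standard_normal(d)             # linear term: prior probability of being a head

# score[i, j] = H_dep[i] @ U @ H_head[j] + u @ H_head[j]
scores = H_dep @ U @ H_head.T + H_head @ u   # shape (L, L)

# each word picks its highest-scoring head (a real parser would also
# mask the diagonal so a word cannot be its own head)
pred_heads = scores.argmax(axis=1)
```

Because the score matrix has one column per word in the sentence, the number of "classes" automatically matches the sentence length, which is exactly the variable-class-count situation described above.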
![](https://oscdn.geek-share.com/Uploads/Images/Content/201709/e8795120ed35c2f1e9735e6a48a6c0b2)
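The label classifier, by contrast, scores a fixed set of k dependency labels for one chosen (dependent, head) pair, using one bilinear matrix per label plus a linear term. A minimal NumPy sketch under assumed names and toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 4, 3                               # toy: hidden size 4, k = 3 dependency labels (fixed)
h_dep  = rng.standard_normal(d)           # representation of the dependent word
h_head = rng.standard_normal(d)           # representation of its predicted head
U_label = rng.standard_normal((k, d, d))  # one bilinear matrix per label
W = rng.standard_normal((k, 2 * d))       # linear term over concat(dep, head)
b = rng.standard_normal(k)                # per-label bias

# label_scores[c] = h_dep @ U_label[c] @ h_head + W[c] @ [h_dep; h_head] + b[c]
pair = np.concatenate([h_dep, h_head])
label_scores = np.einsum('d,kde,e->k', h_dep, U_label, h_head) + W @ pair + b
pred_label = int(label_scores.argmax())
```

Since k is the size of the dependency-label inventory, this classifier has the same number of classes for every sentence, unlike the head classifier above.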