Urine Sediment Examination (USE)
JMOS-2018
Contents
- Contents
- 1 Background and Motivation
- 2 Innovation
- 3 Advantages
- 4 Methods (Meta-architectures)
- 5 Experiments
- 6 Conclusion
1 Background and Motivation
Urine sediment analysis of particles in microscopic images can assist physicians in evaluating patients with renal
and urinary tract diseases. Manual urine sediment examination is labor-intensive, subjective, and time-consuming, and traditional automatic algorithms often rely on hand-crafted features for recognition.
OverFeat made one of the earliest efforts to apply deep CNNs to learn highly discriminative yet invariant features for object detection. The authors likewise use deep-learning-based CNN feature extraction in place of hand-crafted features.
Traditional multi-stage methods depend heavily on the accuracy of segmentation and the effectiveness of hand-crafted features.
CNN-based methods can work end-to-end and segmentation-free, and the extracted features are more discriminative.
2 Innovation
- Exploit Faster R-CNN and SSD for urine particle recognition
- Investigate various factors to improve the performance of Faster R-CNN and its variants
- Trim SSD to achieve better performance
3 Advantages
- Best mAP of 84.1% (accurate), with a best AP of 77.2% for cast particles
- Only 72 ms per image across 7 categories (fast)
4 Methods (Meta-architectures)
- MS-FRCNN (multi-scale Faster R-CNN)
- OHEM-FRCNN (Faster R-CNN with online hard example mining)
- Trimmed SSD
Details
Faster R-CNN: shareable CNN feature extraction + region proposal generation + region classification and regression, using a pyramid of anchors.
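As an illustration of the anchor pyramid, here is a minimal sketch. It assumes the standard Faster R-CNN convention of a 16-pixel base window; the scale and ratio values are the common defaults, not necessarily those used in this paper:

```python
import numpy as np

def generate_anchors(base_size=16, ratios=(0.5, 1.0, 2.0), scales=(8, 16, 32)):
    """Generate len(ratios) * len(scales) anchors (x1, y1, x2, y2),
    all centered on a base_size x base_size window."""
    cx = cy = (base_size - 1) / 2.0
    anchors = []
    for r in ratios:
        for s in scales:
            # Keep anchor area ~ (base_size * s)^2 while varying aspect ratio.
            area = float(base_size * s) ** 2
            w = np.sqrt(area / r)
            h = w * r
            anchors.append([cx - (w - 1) / 2, cy - (h - 1) / 2,
                            cx + (w - 1) / 2, cy + (h - 1) / 2])
    return np.array(anchors)

anchors = generate_anchors()
print(anchors.shape)  # -> (9, 4): one anchor per (ratio, scale) pair
```

At every feature-map position, all nine anchors are evaluated, which is what gives the RPN its multi-scale, multi-ratio coverage from a single feature map.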
MS-Faster R-CNN: builds a more sophisticated network for the Fast R-CNN detector by combining both global context and local appearance features.
OHEM-Faster R-CNN: instead of a randomly sampled mini-batch, it automatically selects hard examples by their loss, eliminating several heuristics and hyperparameters in common use.
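The selection step can be sketched as follows. This is a simplified view: real OHEM also runs NMS over the RoIs' losses to suppress highly correlated examples, which is omitted here:

```python
import numpy as np

def select_hard_examples(losses, batch_size):
    """OHEM-style selection: instead of random sampling, keep the
    `batch_size` RoIs with the highest loss for the backward pass."""
    order = np.argsort(losses)[::-1]      # indices sorted by descending loss
    return np.sort(order[:batch_size])    # indices of the hard examples

losses = np.array([0.05, 2.3, 0.4, 1.7, 0.01, 0.9])
hard = select_hard_examples(losses, batch_size=3)
print(hard)  # -> [1 3 5]: the three highest-loss RoIs
```

Because only high-loss RoIs contribute gradients, the easy-background/foreground sampling ratio no longer needs to be hand-tuned.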
SSD: unlike YOLO, it improves detection quality by applying a set of small convolutional filters to multiple feature maps to predict category confidences and box offsets at various scales.
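To see why predicting from multiple feature maps yields so many boxes, here is a back-of-the-envelope count using the standard SSD300 layout (feature-map sizes and default boxes per cell from the original SSD design, not specific to this dataset; the class count of 7 matches this paper's categories and is only illustrative):

```python
def total_predictions(feature_maps, num_anchors, num_classes):
    """Each m x m feature map gets small conv predictors emitting,
    per cell and per default box, class confidences plus 4 box offsets."""
    boxes = sum(m * m * k for m, k in zip(feature_maps, num_anchors))
    return boxes, boxes * (num_classes + 4)

# SSD300 layout: six feature maps, with 4 or 6 default boxes per cell.
maps = [38, 19, 10, 5, 3, 1]
anchors_per_cell = [4, 6, 6, 6, 4, 4]
boxes, outputs = total_predictions(maps, anchors_per_cell, num_classes=7)
print(boxes)  # -> 8732 default boxes, the well-known SSD300 total
```

With only 7 categories, most of these thousands of predictions are redundant, which is the observation behind trimming SSD below.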
Trimmed SSD: the authors' dataset has relatively few categories, so using SSD directly produces a large number of redundant predictions that interfere with the final detection performance. For simplification, the authors remove several top convolutional layers (conv7, conv8, and conv9) from the auxiliary network of SSD, which yields the trimmed SSD.
5 Experiments
5.1 Datasets
The dataset consists of 5,376 annotated images covering 7 categories:
- erythrocyte (red blood cell): 21,815 objects
- leukocyte (white blood cell): 6,169 objects
- epithelial cell: 6,175 objects
- crystal: 1,644 objects
- cast: 3,663 objects
- mycete (mold): 2,083 objects
- epithelial nuclei: 687 objects
Dataset distribution
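Tabulating the per-class object counts above shows how imbalanced the dataset is (erythrocytes alone account for about half of all objects), which helps motivate the hard-example mining used later:

```python
# Object counts per category, as listed above.
counts = {"erythrocyte": 21815, "leukocyte": 6169, "epithelial cell": 6175,
          "crystal": 1644, "cast": 3663, "mycete": 2083,
          "epithelial nuclei": 687}
total = sum(counts.values())
for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    # Print each class with its share of all annotated objects.
    print(f"{name:17s} {n:6d}  {100 * n / total:5.1f}%")
```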
5.2 Training
5.2.1 Feature extractors
(ZF, VGG, ResNet-50, ResNet-101, PVANet)
5.2.2 Training strategies
- 4-step alternating training, as in the original Faster R-CNN
- approximate joint training (end-to-end training)
End-to-end training works better.
5.3 Comparison of different scales and backbones
Results for different backbones and different anchor scales (the aspect ratios are always 1:1, 1:2, and 2:1; since the objects in this dataset are small, more scale options are added) are shown below; PVANet performs best.
5.4 Data augmentation
A horizontal flip is used to augment the training set.
The figure below compares horizontal and vertical flips: each used alone improves performance, but using both together does not. Horizontal flipping is the usual choice; vertical flipping probably also helps here because the objects are cells, whose shapes change little under a vertical flip.
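A minimal sketch of flip augmentation with bounding boxes, assuming (x1, y1, x2, y2) pixel coordinates; this is an illustration, not the authors' actual pipeline:

```python
import numpy as np

def flip_boxes(img, boxes, horizontal=True):
    """Flip an image and its (x1, y1, x2, y2) boxes. Vertical flips are
    plausible here because cell appearance is roughly flip-invariant."""
    H, W = img.shape[:2]
    boxes = boxes.copy()
    if horizontal:
        img = img[:, ::-1]
        # Mirror x-coordinates; x1/x2 swap roles to keep x1 <= x2.
        boxes[:, [0, 2]] = W - 1 - boxes[:, [2, 0]]
    else:
        img = img[::-1, :]
        boxes[:, [1, 3]] = H - 1 - boxes[:, [3, 1]]
    return img, boxes

img = np.zeros((100, 200), dtype=np.uint8)
boxes = np.array([[10, 20, 50, 80]])
_, hb = flip_boxes(img, boxes, horizontal=True)
print(hb)  # x-coordinates mirrored about the image width
```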
5.5 Faster RCNN vs MS-Faster RCNN
From the table, MS-Faster R-CNN performs worse than Faster R-CNN, but as the diversity of anchor scales increases the gap between them narrows, and MS-Faster R-CNN performs better on small objects.
5.6 Faster RCNN vs Faster RCNN+OHEM
Adding OHEM improves the results, and the larger the dataset, the more it benefits.
5.7 SSD vs Trimmed SSD
To fit the small objects, smaller is better: the trimmed SSD outperforms the original SSD.
5.8 Adding bells & whistles
5.8.1 Anchor scales
- the more anchor scales, the better
- the smaller the scales, the better
The figure below shows: (a) proposal recall for different anchor scales, using VGG-16 as an example; (b) proposal recall for different backbones; (c) mAP for different backbones.
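Proposal recall, as plotted in these comparisons, is the fraction of ground-truth boxes covered by at least one proposal above an IoU threshold; a minimal sketch:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box against an array of boxes, all (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def proposal_recall(gt_boxes, proposals, thresh=0.5):
    """Fraction of ground-truth boxes matched by at least one proposal."""
    hits = sum(iou(g, proposals).max() >= thresh for g in gt_boxes)
    return hits / len(gt_boxes)

gt = np.array([[0, 0, 10, 10], [50, 50, 60, 60]])
props = np.array([[1, 1, 11, 11], [100, 100, 110, 110]])
print(proposal_recall(gt, props))  # -> 0.5: one of two GT boxes is covered
```

Adding smaller anchor scales raises this metric on datasets like this one because more tiny ground-truth boxes find a proposal above the IoU threshold.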
5.8.2 Feature extractors
Figure 6(b) shows the results with different backbones.
5.8.3 PVANet vs. VGG-16
Figure 6(c) shows that PVANet's proposal quality is worse (its curve drops faster), but Table 2 shows that its final result is the best. The figure below plots precision against recall at detection time: as recall increases, PVANet's precision drops more slowly than VGG-16's and stays higher.
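The precision-recall curve discussed here is obtained by sweeping a threshold over detection scores; a toy sketch (the detections and ground-truth count are made up for illustration):

```python
import numpy as np

def precision_recall(scores, is_tp, num_gt):
    """Precision/recall points obtained by sweeping the score threshold
    from high to low over ranked detections."""
    order = np.argsort(scores)[::-1]          # rank detections by score
    tp = np.cumsum(np.asarray(is_tp)[order])  # true positives so far
    fp = np.cumsum(1 - np.asarray(is_tp)[order])
    recall = tp / num_gt
    precision = tp / (tp + fp)
    return recall, precision

# Toy detections: score and whether each matched a ground-truth box.
scores = [0.9, 0.8, 0.7, 0.6, 0.5]
is_tp  = [1,   1,   0,   1,   0]
r, p = precision_recall(scores, is_tp, num_gt=4)
print(p[-1])  # -> 0.6: precision once all detections are kept
```

A detector whose precision decays slowly along this curve, as PVANet's does here, keeps its top-ranked detections clean even at high recall.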
6 Conclusion
Building on Faster R-CNN and SSD with their own dataset, the authors use different backbones, anchor scales, and training strategies to improve mAP.
- MS-Faster R-CNN
- Trimmed SSD (with several top layers removed)