HSCA-Net: A Hybrid Spatial-Channel Attention Network in Multiscale Feature Pyramid for Document Layout Analysis

DOI:

https://doi.org/10.37965/jait.2022.0145

Keywords:

layout analysis, attention mechanism, deep learning, deformable convolution

Abstract

Document images often contain diverse page components and complex logical structures, which make the document layout analysis task challenging. Most deep learning-based document layout analysis methods adopt convolutional neural networks (CNNs) as the feature extraction network. In this paper, a hybrid spatial-channel attention network (HSCA-Net) is proposed to improve feature extraction capability by introducing an attention mechanism that explores more salient properties within document pages. The HSCA-Net consists of a spatial attention module (SAM), a channel attention module (CAM), and a purpose-designed lateral attention connection. CAM adaptively adjusts channel feature responses by emphasizing selective information according to the contribution of the features in each channel. SAM guides CNNs to focus on informative content and to capture global context information among page objects. The lateral attention connection incorporates SAM and CAM into a multiscale feature pyramid network while retaining the original feature information. The effectiveness and adaptability of HSCA-Net are evaluated through multiple experiments on the publicly available PubLayNet, ICDAR-POD, and Article Regions datasets. Experimental results demonstrate that HSCA-Net achieves state-of-the-art performance on the document layout analysis task.
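The abstract describes two attention mechanisms: CAM, which reweights channels by their global importance, and SAM, which reweights spatial positions to highlight informative content. Since no code accompanies this page, the following is a minimal NumPy sketch of the *general* channel- and spatial-attention ideas (squeeze-and-excitation-style gating), not the authors' actual HSCA-Net implementation; the bottleneck weights here are random stand-ins for learned parameters.

```python
import numpy as np

def channel_attention(x, reduction=4, seed=0):
    """Channel attention sketch: gate each channel by a learned scalar.
    x: feature map of shape (C, H, W). Weights are random placeholders
    for what would be trained parameters in a real network."""
    c = x.shape[0]
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = x.mean(axis=(1, 2))
    # Excitation: two-layer bottleneck (C -> C/r -> C) with ReLU + sigmoid
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) / np.sqrt(c)
    w2 = rng.standard_normal((c, c // reduction)) / np.sqrt(c // reduction)
    hidden = np.maximum(w1 @ squeezed, 0.0)            # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid, in (0, 1)
    # Rescale each channel by its gate
    return x * gates[:, None, None]

def spatial_attention(x):
    """Spatial attention sketch: gate each (h, w) location.
    x: feature map of shape (C, H, W)."""
    avg_pool = x.mean(axis=0)                          # (H, W)
    max_pool = x.max(axis=0)                           # (H, W)
    gate = 1.0 / (1.0 + np.exp(-(avg_pool + max_pool)))
    # Broadcast the spatial gate across all channels
    return x * gate[None, :, :]
```

Both operations preserve the feature-map shape, which is what allows them to be inserted into a feature pyramid via lateral connections without disturbing the surrounding architecture.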

Published

2022-12-23

How to Cite

Zhang, H., Xu, C., Shi, C., Bi, H., Li, Y., & Mian, S. (2022). HSCA-Net: A Hybrid Spatial-Channel Attention Network in Multiscale Feature Pyramid for Document Layout Analysis. Journal of Artificial Intelligence and Technology, 3(1), 10–17. https://doi.org/10.37965/jait.2022.0145

Section

Research Articles
