Temporal sparse feature auto-combination deep network for video action recognition

Wang, Q., Gong, D., Qi, M., Shen, Y. and Lei, Y. (2018) Temporal sparse feature auto-combination deep network for video action recognition. Concurrency and Computation: Practice and Experience. ISSN 1532-0626.

Full text not available from this repository.

Abstract

In order to deal with action recognition for large-scale video data, we present a spatio-temporal auto-combination deep network, which extracts deep features from short video segments by making full use of the temporal contextual correlation of corresponding pixels across successive video frames. Building on conventional sparse encoding, we further consider the representative features of adjacent hidden-layer nodes according to the similarity of their activation states. A sparse auto-combination strategy is applied to the multiple input maps in each convolution stage. An information constraint on the representative features of the hidden-layer nodes is imposed to handle the adaptive sparse encoding of the topology. As a result, the learned features better represent spatio-temporal transition relationships, and the number of hidden nodes can be restricted to a certain range.
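The full text is not available from this repository, so the exact formulation is not reproduced here. As a rough illustration only, the "sparse auto-combination of multiple input maps" idea can be sketched as forming each output map from a sparsely weighted combination of the input feature maps, with small combination weights driven to zero. The use of NumPy, L1-style soft-thresholding, and the function names below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def soft_threshold(w, lam):
    # L1 proximal step (an assumed sparsification rule): weights with
    # magnitude below lam are set exactly to zero, larger ones shrink.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def sparse_combination(input_maps, weights, lam=0.1):
    """Combine input feature maps with sparsified weights (hypothetical sketch).

    input_maps : (M, H, W) stack of M input feature maps
    weights    : (K, M) combination weights, one row per output map
    Returns (K, H, W) combined maps and the sparsified weight matrix.
    """
    sparse_w = soft_threshold(weights, lam)
    # Each output map k is sum_m sparse_w[k, m] * input_maps[m]
    combined = np.tensordot(sparse_w, input_maps, axes=([1], [0]))
    return combined, sparse_w

# Example: 3 input maps combined into 2 output maps; the near-zero
# weights (0.05 and 0.02) are pruned, so each output map depends on
# only a sparse subset of the inputs.
maps = np.ones((3, 4, 4))
w = np.array([[0.5, 0.05, -0.3],
              [0.02, 0.9, 0.0]])
out, sw = sparse_combination(maps, w, lam=0.1)
```

In a convolutional stage, such a combination step would precede the convolution itself, so that each filter sees only the input maps whose combination weight survived the sparsity constraint.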

We conduct a series of experiments on two public data sets. The experimental results show that our approach is more effective and robust in video action recognition compared with traditional methods.

Item Type: Article
Subjects: Q Science > Q Science (General)
Divisions: Faculty of Social and Applied Sciences > School of Law, Criminal Justice and Computing
Depositing User: Dr Man Qi
Date Deposited: 03 Apr 2018 12:36
Last Modified: 03 Apr 2018 12:36
URI: https://create.canterbury.ac.uk/id/eprint/17180
