ID 50422
creator
Suita, Shunsuke
Nishimura, Takahiro
Tokura, Hiroki
Kasagi, Akihiko
Tabaru, Tsuguchika
subject
Deep learning
Neural Networks
Convolution
Average pooling
GPU
abstract
The main contribution of this paper is to show efficient GPU implementations of the convolution-pooling, in which the pooling follows the multiple convolution. Since the multiple convolution and the pooling operations are performed alternately in the earlier stages of many Convolutional Neural Networks (CNNs), it is very important to accelerate the convolution-pooling. Our new GPU implementation uses two techniques: (1) convolution interchange with direct sum, and (2) conversion to matrix multiplication. These techniques reduce the computational and memory access costs. Furthermore, the convolution interchange is converted to a matrix multiplication, which can be computed very efficiently by cuBLAS. Experimental results on a Tesla V100 GPU show that our new cuDNN-compatible GPU implementation of the convolution-pooling is 2.90 times and 1.43 times faster for fp32 and fp16, respectively, than performing the multiple convolution followed by the pooling with cuDNN, the most popular library of primitives for implementing CNNs on the GPU.
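The convolution interchange with direct sum mentioned in the abstract can be illustrated for a single channel as follows. This is a minimal NumPy sketch of the interchange identity for 2x2 average pooling only, not the paper's GPU implementation; the function names and sizes are hypothetical, and the multi-channel and cuBLAS matrix-multiplication parts are omitted.

    import numpy as np

    def conv2d(x, w):
        # Valid 2-D cross-correlation, single channel, stride 1.
        kh, kw = w.shape
        oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
        return out

    def avgpool2x2(y):
        # 2x2 average pooling with stride 2 (even-sized input assumed).
        return 0.25 * (y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2])

    rng = np.random.default_rng(0)
    x = rng.standard_normal((10, 10))   # input feature map
    w = rng.standard_normal((3, 3))     # convolution kernel

    # Baseline: convolution followed by 2x2 average pooling.
    baseline = avgpool2x2(conv2d(x, w))

    # Interchange: take stride-1 2x2 window sums (scaled) of the input first,
    # then evaluate the convolution only at even output positions, i.e. at a
    # quarter of the positions of the baseline convolution.
    s = 0.25 * (x[:-1, :-1] + x[:-1, 1:] + x[1:, :-1] + x[1:, 1:])
    interchanged = conv2d(s, w)[0::2, 0::2]

    assert np.allclose(baseline, interchanged)

In the paper, this reduced-size convolution is further converted to a matrix multiplication and evaluated with cuBLAS.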
journal title
Journal of Parallel and Distributed Computing
volume
Volume 138
start page
222
end page
229
date of issued
2020-04
publisher
Elsevier
issn
0743-7315
publisher doi
language
eng
nii type
Journal Article
HU type
Journal Articles
DCMI type
text
format
application/pdf
text version
author
rights
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
This is not the published version. Please cite only the published version.
relation url
department
Graduate School of Advanced Science and Engineering
note
Post-print version/PDF may be used in an institutional repository after an embargo period of 24 months.