ID 50422
Authors
Suita, Shunsuke
Nishimura, Takahiro
Tokura, Hiroki
Nakano, Koji (Graduate School of Advanced Science and Engineering)
Itou, Yasuaki (Graduate School of Advanced Science and Engineering)
Kasagi, Akihiko
Tabaru, Tsuguchika
Keywords
Deep learning
Neural Networks
Convolution
Average pooling
GPU
Abstract (English)
The main contribution of this paper is to show efficient GPU implementations of the convolution-pooling, in which the pooling follows the multiple convolution. Since multiple convolution and pooling operations are performed alternately in the earlier stages of many Convolutional Neural Networks (CNNs), it is very important to accelerate the convolution-pooling. Our new GPU implementation uses two techniques: (1) convolution interchange with direct sum, and (2) conversion to matrix multiplication. These techniques reduce both the computational cost and the memory access cost. Further, the interchanged convolution is converted to a matrix multiplication, which cuBLAS computes very efficiently. Experimental results on a Tesla V100 GPU show that our new cuDNN-compatible GPU implementation of the convolution-pooling is 2.90 times and 1.43 times faster for fp32 and fp16, respectively, than performing the multiple convolution followed by the pooling with cuDNN, the most popular library of primitives for implementing CNNs on the GPU.
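Below is a minimal NumPy sketch, not the authors' CUDA/cuDNN implementation, of the convolution interchange with direct sum mentioned in the abstract, assuming a single channel and non-overlapping 2x2 average pooling; all function names here are illustrative. It verifies the identity the technique rests on: average pooling applied after a convolution equals a stride-2 convolution of the window-averaged ("direct sum") input, so the convolution only needs to be evaluated at a quarter of the positions.

import numpy as np

def conv2d_valid(x, w):
    # Plain valid-mode 2-D cross-correlation (the CNN convention).
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            y[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return y

def avg_pool2(y):
    # Non-overlapping 2x2 average pooling.
    h, w = y.shape[0] // 2 * 2, y.shape[1] // 2 * 2
    y = y[:h, :w]
    return (y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2]) / 4.0

rng = np.random.default_rng(0)
x = rng.standard_normal((9, 9))   # input feature map
w = rng.standard_normal((3, 3))   # convolution kernel

# Baseline: convolution first, then 2x2 average pooling (two separate
# primitives, as in a naive cuDNN pipeline).
baseline = avg_pool2(conv2d_valid(x, w))

# Interchange: average each 2x2 window of the *input* first (the direct sum,
# up to the constant 1/4), then evaluate the convolution only at even,
# i.e. stride-2, positions.
s = (x[:-1, :-1] + x[:-1, 1:] + x[1:, :-1] + x[1:, 1:]) / 4.0
interchanged = conv2d_valid(s, w)[0::2, 0::2]

print(np.allclose(baseline, interchanged))  # True

In the paper itself, the interchanged convolution is further lowered to a single matrix multiplication and evaluated with cuBLAS; this sketch only demonstrates the interchange identity, not that GEMM conversion.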
Journal title
Journal of Parallel and Distributed Computing
Volume
138
Start page
222
End page
229
Publication date
2020-04
Publisher
Elsevier
ISSN
0743-7315
Publisher DOI
Language
English
NII resource type
Journal article
Hiroshima University material type
Journal article
DCMI type
text
Format
application/pdf
Author version flag
author
Rights information
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
This is not the published version. Please cite only the published version.
Related information URL
Department
Graduate School of Advanced Science and Engineering
Remarks
Post-print version/PDF may be used in an institutional repository after an embargo period of 24 months.