Title

A CUDA Implementation of the Continuous Space Language Model

Document Type

Article

Publication Date

4-2014

Publication Source

Journal of Supercomputing

Volume

68

Issue

1

Inclusive pages

65-86

DOI

10.1007/s11227-013-1023-7

Publisher

Springer New York LLC

Place of Publication

United States

ISBN/ISSN

0920-8542

Peer Reviewed

yes

Abstract

The training phase of the Continuous Space Language Model (CSLM) was implemented on NVIDIA's Compute Unified Device Architecture (CUDA) hardware/software architecture. A detailed explanation of the CSLM algorithm is provided. The implementation combines CUBLAS library routines, NVIDIA NPP functions, and CUDA kernel calls on three CUDA-enabled devices of varying compute capability, and a time savings over the traditional CPU approach is demonstrated. The efficiency of the CUDA version of the open-source implementation is analyzed and compared with that of a version using the Intel Math Kernel Library (MKL) on a variety of CUDA-enabled and multi-core CPU platforms. It is demonstrated that a substantial performance benefit can be obtained using CUDA, even with non-optimal code. Techniques for optimizing performance are then provided. Furthermore, an analysis is performed to determine the conditions under which the performance of CUDA exceeds that of the multi-core MKL realization.
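
As a hypothetical illustration of the approach the abstract describes (pairing CUBLAS routines with custom CUDA kernels), the sketch below shows how one hidden-layer forward pass of a CSLM-style feed-forward network might be structured: a single-precision GEMM followed by an element-wise tanh activation kernel. All function names, dimensions, and buffer names here are illustrative assumptions, not details taken from the paper.

// Hypothetical sketch: hidden-layer forward pass H = tanh(W * X),
// combining a cuBLAS GEMM with a small custom CUDA kernel.
// Names and dimensions are illustrative only.
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Element-wise tanh activation applied in place by a custom kernel.
__global__ void tanh_activation(float *h, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) h[i] = tanhf(h[i]);
}

// d_W: hidden x input weight matrix, d_X: input x batch activations,
// d_H: hidden x batch output buffer (all column-major, device memory).
void hidden_forward(cublasHandle_t handle, const float *d_W,
                    const float *d_X, float *d_H,
                    int hidden, int input, int batch) {
    const float alpha = 1.0f, beta = 0.0f;
    // H = W * X as a single-precision GEMM on the GPU.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                hidden, batch, input,
                &alpha, d_W, hidden, d_X, input,
                &beta, d_H, hidden);
    // Apply the nonlinearity with one thread per element.
    int n = hidden * batch;
    tanh_activation<<<(n + 255) / 256, 256>>>(d_H, n);
}

Processing the whole minibatch in one GEMM, rather than one vector at a time, is the kind of batching that lets a BLAS-based implementation (CUBLAS or MKL) approach peak throughput.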

Keywords

CUDA, CSLM, GPU, Statistical signal processing, CUBLAS, Math Kernel Library, BLAS, High performance computing

Disciplines

Electrical and Computer Engineering | Engineering | Signal Processing
