Title: |
Scaling to Multiple Graphics Processing Units (GPUs) in TensorFlow. |
Author: |
Park, S. J. |
Keywords: |
Artificial neural networks, Deep learning, Computing system architectures, Graphics processing unit, Iterative training, TensorFlow, GPU (graphics processing unit) scalability, Matrix multiplication
Abstract: |
Although the accuracy of neural networks is surpassing human performance, training a deep neural network is a time-consuming task because of its large number of high-dimensional parameters, and it is not uncommon for training to run for a week. Historically, the size of neural networks has doubled roughly every 2.4 years, exhibiting exponential growth from 1958 to 2014. The increasing size of neural network architectures will likely lead to higher computational complexity that will require scalable solutions. To mitigate the computational requirements and maximize throughput, this work focuses on multi-graphics-processing-unit scalability.
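
For context, the sketch below (not taken from the report) illustrates one common way TensorFlow expresses multi-GPU scaling: synchronous data-parallel training with tf.distribute.MirroredStrategy, which replicates the model on each visible GPU and averages gradients across replicas. The model, dataset, and batch sizes are illustrative assumptions, not the report's actual experimental setup.

```python
# Illustrative sketch: synchronous data-parallel training across all visible
# GPUs using TensorFlow's MirroredStrategy (not the report's own code).
import tensorflow as tf

# MirroredStrategy mirrors variables on each GPU and all-reduces gradients.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas (GPUs):", strategy.num_replicas_in_sync)

with strategy.scope():
    # Any model and optimizer created inside this scope are replicated.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Scale the global batch size with the replica count so each GPU keeps a
# constant per-device batch, which is what preserves per-GPU throughput.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
global_batch = 64 * strategy.num_replicas_in_sync
model.fit(x_train, y_train, batch_size=global_batch, epochs=1)
```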
Report Type: |
Technical report