Speeding Up A.I.

January 10, 2020

USC Viterbi Researchers Win NSF Grant to Tackle the Challenge of Scalable Parallelism.


From left: Professor Yanzhi Wang of Northeastern University, USC Viterbi research associate Ajitesh Srivastava, and USC Viterbi co-PIs Xuehai Qian and Viktor Prasanna. (PHOTO CREDIT: USC Viterbi)

With a new three-year NSF grant, Ming Hsieh Department of Electrical and Computer Engineering researchers hope to solve the problem of scalable parallelism for AI. Co-PIs Professor Viktor Prasanna, Charles Lee Powell Chair in Electrical and Computer Engineering, and Professor Xuehai Qian, both from USC Viterbi, along with USC Viterbi alum and Northeastern University assistant professor Yanzhi Wang and USC Viterbi senior research associate Ajitesh Srivastava, were awarded the $800,000 grant last month.

Parallelism is the ability of an algorithm to perform several computations at the same time rather than sequentially. For artificial intelligence challenges that demand fast solutions, such as the image processing behind autonomous vehicles, parallelism is essential to making these technologies practical in everyday life. Parallelism in neural networks has been explored before, but the difficulty lies in scaling it up to the point where it is usable in time-critical, real-time tasks; hence the name scalable parallelism.
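To make the idea concrete, here is a minimal sketch (not the team's code) contrasting sequential and parallel evaluation of independent tasks, using only Python's standard library; the stand-in task and worker count are illustrative assumptions.

```python
# Minimal sketch: sequential vs. parallel evaluation of independent tasks.
# The workload below is a hypothetical stand-in, not the researchers' code.
from concurrent.futures import ProcessPoolExecutor
import time

def process_frame(frame_id: int) -> int:
    """Stand-in for an expensive, independent computation (e.g., one image)."""
    time.sleep(0.1)  # simulate work
    return frame_id * frame_id

frames = list(range(8))

if __name__ == "__main__":
    # Sequential: each frame waits for the previous one to finish.
    t0 = time.perf_counter()
    seq = [process_frame(f) for f in frames]
    t_seq = time.perf_counter() - t0

    # Parallel: independent frames run on separate worker processes.
    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:
        par = list(pool.map(process_frame, frames))
    t_par = time.perf_counter() - t0

    assert seq == par
    print(f"sequential: {t_seq:.2f}s, parallel: {t_par:.2f}s")
```

The catch, and the "scalable" part of the problem, is that real neural-network computations are not perfectly independent, so the speedup rarely grows in proportion to the number of workers.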

The researchers are approaching the problem by building a new, unified framework. "By integrating the algorithmic aspect of neural networks, the architecture, and the hardware, we hope to establish a new and more efficient model," said Qian. Traditionally, the efficiency of neural networks has been improved through a process known as compression, in which algorithms are made more efficient by reducing the number of computations they must perform. But compression has its limitations. "Improving the computation speed by a factor of ten does not actually lead to an overall speedup by a factor of ten," said Srivastava. "Our unified framework has the potential to bridge that gap."
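One common compression technique is magnitude-based weight pruning; the article does not specify which method the team uses, so the sketch below is purely illustrative of the general idea. It also hints at why the gap Srivastava describes exists: zeroing out weights shrinks the nominal computation count, but the resulting irregular sparsity often maps poorly onto hardware, so compression alone rarely yields a proportional end-to-end speedup.

```python
# Minimal sketch of magnitude-based weight pruning, one common form of
# neural-network compression. Illustrative only; not the team's method.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_pruned = prune_by_magnitude(w, sparsity=0.875)  # keep ~1/8 of the weights
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(w_pruned)}")
```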

In fact, the researchers are already on their way. In recent work, their hardware-aware compression technique turned eight times compression into close to an eight times speedup, all without compromising image-detection accuracy. The next step is to scale the speedup even further. "With the progress we have made so far, we hope that our work will prove to be an important tool in the future of artificial intelligence," Qian said.

Published on January 10th, 2020

Last updated on September 17th, 2021