Baidu Upgrades Neural Net Benchmark

Published: 2017-06-29
Source: EE Times

Baidu updated its open-source benchmark for neural networks, adding support for inference jobs and for low-precision math. DeepBench provides a target for optimizing chips that help data centers build larger and, thus, more accurate models for jobs such as image and natural-language recognition.

The work shows that it’s still early days for neural nets. So far, results running the training version of the spec launched last September are only available on a handful of Intel Xeon and Nvidia graphics processors.

Results for the new benchmark on server-based inference jobs should be available on those chips soon. In addition, Baidu is releasing results on inference jobs run on devices including the iPhone 6, iPhone 7, and a Raspberry Pi board.

Inference in the server has longer latency but can use larger processors and more memory than is available in embedded devices like smartphones and smart speakers. “We’ve tried to avoid drawing big conclusions; so far, we’re just compiling results,” said Sharan Narang, a systems researcher at Baidu’s Silicon Valley AI Lab.

At press time, it was not clear whether Intel would have inference results ready for today's release; the company is still working on results for its massively parallel Knights Mill. AMD expressed support for the benchmark but has yet to release results running it on its new Epyc x86 processors and Radeon Instinct GPUs.



A handful of startups including Cornami, Graphcore, Wave Computing, and Nervana — acquired by Intel — have plans for deep-learning accelerators.

“Chip makers are very excited about this and want to showcase their results, [but] we don’t want any use of proprietary libraries, only open ones, so these things take a lot of effort,” said Narang. “We’ve spoken to Nervana, Graphcore, and Wave, and they all have promising approaches, but none can benchmark real silicon yet.”

The updated DeepBench supports lower-precision floating-point operations and sparse operations for inference to boost performance.
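The trade-off behind reduced precision can be seen with a short sketch (this is not DeepBench's actual code, which benchmarks vendor libraries; it is a minimal NumPy illustration of running the same dense matrix multiply, the core kernel DeepBench times, at float32 and float16). Half precision halves memory and bandwidth per value, while results stay close to the full-precision reference for well-scaled inputs — note that plain NumPy does not accelerate float16, so this shows the accuracy and memory effect, not a speedup.

```python
import numpy as np

def gemm(dtype, n=256, seed=0):
    """Dense n-by-n matrix multiply at the given precision."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n)).astype(dtype)
    b = rng.standard_normal((n, n)).astype(dtype)
    return a @ b

# The same kernel at full and half precision.
c32 = gemm(np.float32)
c16 = gemm(np.float16)

# Half precision stores each value in 2 bytes instead of 4 ...
mem32 = c32.nbytes
mem16 = c16.nbytes

# ... while the result stays close to the float32 reference.
err = float(np.max(np.abs(c32 - c16.astype(np.float32))))
print(f"float32: {mem32} B  float16: {mem16} B  max |diff|: {err:.3f}")
```

Accelerators exploit this by packing twice as many half-precision operands into the same register and memory bandwidth, which is why DeepBench now tracks lower-precision kernels separately.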

“There’s a clear correlation in deep learning of larger models and larger data sets getting better accuracy in any app, so we want to build the largest possible models,” he said. “We need larger processors, reduced-precision math, and other techniques we’re working on to achieve that goal.”
