Amazon on AI and Accelerators

Published: 2018-04-09
By: Ameya360
Source: Rick Merritt

  Not quite 18 months into his job as Mr. AI for Amazon Web Services, Matt Wood is convinced the fledgling business will someday be bigger than the $20 billion/year AWS itself. At a corporate event in San Francisco, Wood talked with EE Times about the status and outlook of deep learning, what Amazon wants from semiconductors for it, and his not-so-strange career path from genomics to cloud computing.

  After earning a PhD in bioinformatics in 2004, Wood went to work for a U.K. institute that handled a third of the initial work decoding the human genome.

  “It was just a sample to get a blueprint. We did 40 other species including zebra fish and the duck-billed platypus — an odd creature with 10 sex chromosomes,” Wood quipped.

  Technology caught up with what was a billion-dollar effort that took a decade. A nearby U.K. startup developed a $100,000 system that could sequence a genome in a week.

  “They were just around the corner, so they sent their first instrument over in the back of a taxi. Within six months we had 200 more, working on thousands of genomes and cell lines,” he recalled.

  The advance was opening the door to leaps such as personalized treatments for cancer. There was just one problem.

  “It was data-intensive — we generated several hundred terabytes a week. We had a data center, but we couldn’t get any more power on the site without spending tens of millions,” he said.

  “With no more storage on the premises, I called a friend who had just joined Amazon. It was around the time AWS was just getting started. They gave me a $300 credit in return for writing a white paper on how to start a cloud-based genomics platform — I still haven’t finished the white paper,” he quipped.

  What he did get was “religion around cloud computing” and a phone call from AWS offering him a job.

  He helped set technical strategy for the team, and since 2008 has had a hand in launching a laundry list of AWS products including Lambda, which AWS pitches as the future of software development. He was also present at the birth of Alexa, Amazon’s virtual voice assistant, embedded in its Echo smart speaker.

  “Echo came from a brainstorm about what things we could build if we had infinite compute. The original idea was like the Star Trek computer you talk to; that was the seed for what became Echo,” he said.

  Wood was tapped to head AWS’s AI efforts in part because of his genomics background.

  “Today’s machine learning uses the same foundational concepts we were using for folding proteins. The big change was in deep convolutions in the networks to build a hierarchical view for interpreting data such as images. Adding deep layers lets you fine-tune a model to distinguish images of cats from dogs, for example,” he said.
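The convolutions Wood describes are simple in isolation: a small kernel slides over an image, and each layer's outputs feed the next, building up the hierarchical view he mentions. A minimal illustration in plain NumPy (not Amazon code — just the basic operation), using a hand-made edge-detection kernel of the kind an early convolutional layer typically learns on its own:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and
    sum the elementwise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge detector: a low-level feature of the kind early
# convolutional layers learn from data during training.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

# Toy 4x4 "image": bright left half, dark right half.
image = np.array([[1.0, 1.0, 0.0, 0.0]] * 4)

features = conv2d(image, edge_kernel)
# The response is largest over the column where brightness changes
# and zero over the flat regions.
```

In a real network, many such kernels run in parallel per layer and their outputs are stacked, so deeper layers combine edges into textures, textures into parts, and parts into whole objects — the cat-versus-dog distinction Wood cites.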

  So far Amazon seems happy with Nvidia’s Volta GPUs, but it’s open to whatever offers the best price/performance. It has not designed its own machine learning accelerator — yet — although it has designed several ASICs for its data centers.

  “What we’ve seen as of today is Volta is exceptionally effective for deep learning training, and in some cases for inference if you have the right workload. We packaged it to get a petaflop in a single instance — that’s materially larger than what is available anywhere else,” Wood said.
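The petaflop figure is a straightforward back-of-envelope calculation, assuming Wood is describing AWS's largest Volta instance of the time (the p3.16xlarge, with eight V100 GPUs, each rated at roughly 125 TFLOPS peak for mixed-precision tensor-core operations):

```python
# Back-of-envelope check on the "petaflop in a single instance" claim.
# Assumes a p3.16xlarge-style configuration: 8 Volta V100 GPUs, each
# rated at ~125 TFLOPS peak for mixed-precision tensor-core math.
gpus_per_instance = 8
tflops_per_gpu = 125  # V100 tensor-core peak, mixed precision

instance_tflops = gpus_per_instance * tflops_per_gpu
print(instance_tflops, "TFLOPS")  # 1000 TFLOPS = 1 petaflop
```

These are peak theoretical rates; sustained training throughput depends on the workload, which is why Wood notes there is still performance to be wrung from optimizations.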

  Volta nearly doubles AI performance over Nvidia’s previous Pascal GPU and “there’s still performance to be wrung from optimizations,” he said. It’s “too early to tell” if Intel’s Nervana or chips from startups such as Graphcore, Wave Computing or Cerebras will offer anything better, he added.

  Google garnered significant attention with its machine-learning ASIC, the Tensor Processing Unit. So far, Amazon has focused on delivering excellent performance on GPUs and CPUs for jobs written in Google’s TensorFlow framework, but it has not crafted its own hardware accelerator.

  That’s not because Amazon is averse to designing chips. “We use custom accelerators across our platform for network security, network switching and for our underlying EC2 platform,” he said.

  AWS also worked with Intel to create an ASIC for DeepLens, a smart camera that runs machine learning inference tasks and connects to AWS services. DeepLens is based on a customized Intel Atom SoC that runs neural net jobs on its embedded GPU block.

  The ASIC in DeepLens re-encodes an H.264 video stream into MPEG and handles other image pre-processing jobs. But the core inferencing jobs run on the Intel GPU and model training is done on Amazon’s servers in the cloud.

  The device is designed to train a new generation of machine-learning developers. Amazon gave dozens away to coders when it was announced at an event last year. At the event here, it held more training sessions on the device and the AWS SageMaker cloud service that provides neural models users can customize.

  “Developers just need to write Lambda code, nothing else,” said Wood, referring to the latest AWS development technique.
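The appeal Wood is pointing at is how little code that amounts to: a Lambda is a single handler function, and AWS handles the provisioning and scaling. A minimal sketch — the handler name and (event, context) signature follow the standard Lambda convention, but the "labels" payload here is a hypothetical inference result, not an actual DeepLens event format:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch. The 'labels' field is a
    hypothetical inference payload used for illustration only."""
    labels = event.get("labels", [])
    top = max(labels, key=lambda l: l["confidence"]) if labels else None
    return {
        "statusCode": 200,
        "body": json.dumps({"top_label": top}),
    }

# Local invocation for illustration; on AWS the platform invokes the
# handler in response to events, with no servers for the developer
# to manage.
result = lambda_handler(
    {"labels": [{"name": "cat", "confidence": 0.92},
                {"name": "dog", "confidence": 0.31}]},
    None,
)
```

The entire deployable unit is the function above; everything else — capacity, scaling, routing — is the platform's concern, which is the "nothing else" in Wood's remark.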

  Amazon already fields a version of Echo with a smart camera. Whether it plans more such consumer products is unclear.

  “We’ve worked with OEMs on the general flow of local inference on the edge device and offloading training to the cloud like Echo does. The components of DeepLens are all Amazon products or open source software,” he said, noting the exception of the image processing ASIC.

  Amazon’s overall model is clear. It wants to get as many people as possible using its data center computers and storage. The company has become at heart a massive collection of networked servers, and the beast needs to be fed.

  These days it’s a data economy in which the company with the most servers and hard disks wins. So far, that’s Amazon, by a wide margin. But it’s still early days for the data tsunami machine learning is expected to generate.

  “Machine learning is like a primordial soup with so many frameworks and algorithms — we want to make it all easy and available,” Wood said in a keynote here.

  Toward that goal, Amazon added new transcription and translation services to its portfolio of image and face recognition, text-to-speech and chatbot tools. It listed some three dozen deep learning customers from airlines to dating sites. They include Cathay Pacific, Dow Jones, Expedia, GE Healthcare, Intuit, Moody’s, the NFL, Tinder and Verizon.

  “It’s in every imaginable use case…I’m a bit biased, but I’m bullish that machine learning will be a larger business than AWS, which is moving at a $20 billion run rate…we haven’t found the limits yet but we will in the fullness of time,” he said in an interview.

  The roadmap is more of everything. Wood’s group is working on optimizing support for all frameworks, creating more software platforms to supplement SageMaker and adding to its library of pre-packaged neural-network models.

  It’s an extension of Amazon’s overall model of disrupting the IT world with low-cost, fast-moving services. In his keynote, Werner Vogels, a chief technologist for Amazon and the public face of AWS, talked about having more, better and cheaper database services than IBM or Oracle and, with AWS Lambda, a lower-cost development method than today’s containers.

  Some observers described Lambda as the ultimate in vendor lock-in, an approach that basically appropriated techniques a third party originally defined. Wood said Lambda represents a leap to simpler code he suggested is portable because it is generally based on Java.

  “It’s a new way of development, you don’t need to think about how many servers you need. It scales, it’s all taken care of for you,” Vogels said.

  What’s clear is that to stay ahead, AWS is rapidly filling out a portfolio of IT products from low-cost storage options to freely issued security certificates.

  “We’ve been architecting AWS to decompose services into small building blocks so you can pick what you need to get the job done instead of using large monolithic blocks. The cloud has revolutionized how we develop software,” Vogels said.

