IBM Simulates Complex Chemistry with Quantum Computing
A novel algorithm developed by IBM scientists is improving the understanding of complex chemical reactions and optimizing quantum computing. The scientists have developed a new approach to simulating molecules on a quantum computer, using a seven-qubit quantum processor to address the molecular structure problem for beryllium hydride (BeH2), the largest molecule simulated on a quantum computer to date, according to IBM. The results are significant because they could lead to practical applications such as the creation of novel materials, the development of personalized drugs, and the discovery of more efficient and sustainable energy sources.

In a telephone interview with EE Times, IBM quantum computing research team member Abhinav Kandala outlined how the team implemented an algorithm that is efficient with respect to the number of quantum operations required for the simulation. Using six qubits of a seven-qubit processor, the researchers were able to measure BeH2's lowest energy state, a key measurement for understanding chemical reactions. The results were just published in the peer-reviewed journal Nature, in a paper that Kandala co-authored.

The paper describes the experimental optimization of Hamiltonian problems with up to six qubits and more than one hundred Pauli terms, determining the ground-state energy for molecules of increasing size, up to BeH2. This was enabled by "a hardware-efficient quantum optimizer with trial states specifically tailored to the available interactions in our quantum processor, combined with a compact encoding of fermionic Hamiltonians and a robust stochastic optimization routine."

Although this model of BeH2 can be simulated on a "classical" computer, the new approach has the potential to scale toward larger molecules traditionally seen as beyond the reach of classical computational methods as more powerful quantum systems are built. Kandala said the experiments ultimately show that a hardware-efficient optimizer implemented on a six-qubit superconducting quantum processor can address molecular problems beyond period 1 elements, up to BeH2.

Essentially, he said, the team turned the traditional approach of forcing previously known classical computing methods onto quantum hardware on its head: they built an algorithm suited to the capabilities of currently available quantum devices. This allows the researchers to extract the maximum quantum computational power for problems that grow exponentially more difficult for classical computers. To characterize that computational power, IBM has adopted a new metric, Quantum Volume, which accounts for the number and quality of qubits, circuit connectivity, and the error rates of operations.

Solving problems in chemistry with traditional computing methods presents several difficulties that quantum computing could potentially overcome. "All of chemistry today deals with approximate methods," said Kandala. "Trying to solve a problem like this on a classical computer is one that has exponential cost."

The problem scales in complexity with the number of orbitals. A molecular orbital is a mathematical function describing the wave-like behavior of an electron in a molecule. "The hope with quantum computing is to deal with the problem in a precise manner," Kandala said. For example, a simulation of the simplest molecule, hydrogen, maps four orbitals onto two qubits.
"As you try to address larger molecules, you have more orbitals to account for, and because you have more orbitals, you need more qubits," said Kandala. "These problems are sufficiently small that you can solve them. The reason we are able to attempt problems like this on a quantum computer is that there are mathematical mappings," he said. "The number of orbitals in a molecule relates to the number of qubits you require in the simulation."

Part of the point of the experiment, said Kandala, is that it offers an opportunity to compare the quantum computing results with a traditional computing approach and to identify errors. "The hope is to get more information beyond the scope of classical computing," he said. "The field is pretty new."

To help showcase how quantum computers are suited to simulating molecules, developers and users of the IBM Q experience can now access a quantum chemistry Jupyter Notebook, which is open source and available through the QISKit GitHub repo. The IBM Q experience was launched a year ago, placing a robust five-qubit quantum computer on the cloud for anyone to access freely, and was recently upgraded to a 16-qubit processor available for beta access.

"We want to build a community," said Kandala. "We want to learn ourselves, but we want other people to learn too."
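The variational idea at the heart of the paper can be sketched in a few lines. The example below is a toy illustration of my own, not IBM's code: it uses a commonly quoted two-qubit Hamiltonian for H2 (far smaller than BeH2) and a single-layer, hardware-efficient-style trial state, written in plain NumPy/SciPy rather than the QISKit stack so that it stays self-contained; a few random restarts stand in for the paper's robust stochastic optimization routine.

```python
# Toy variational quantum eigensolver (VQE) sketch. Illustrative only:
# the Hamiltonian is a textbook two-qubit encoding of H2, not BeH2,
# and the "quantum" part is simulated with classical state vectors.
from functools import reduce
import numpy as np
from scipy.optimize import minimize

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def pauli(*ops):
    """Tensor product of single-qubit operators."""
    return reduce(np.kron, ops)

# Two-qubit H2 Hamiltonian (commonly quoted coefficients, in Hartree).
H = (-1.0524 * pauli(I, I) + 0.3979 * pauli(I, Z) - 0.3979 * pauli(Z, I)
     - 0.0113 * pauli(Z, Z) + 0.1809 * pauli(X, X))

def trial_state(theta):
    """Hardware-efficient-style ansatz: an Ry rotation per qubit + CNOT."""
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
    state = np.kron(ry(theta[0]) @ [1.0, 0.0], ry(theta[1]) @ [1.0, 0.0])
    return cnot @ state

def energy(theta):
    """Cost function <psi(theta)|H|psi(theta)> (amplitudes are real here)."""
    psi = trial_state(theta)
    return float(psi @ H @ psi)

rng = np.random.default_rng(0)
best = min(
    (minimize(energy, x0=rng.uniform(-np.pi, np.pi, 2), method="COBYLA")
     for _ in range(10)),
    key=lambda r: r.fun,
)
print("VQE estimate:          ", best.fun)
print("Exact diagonalization: ", np.linalg.eigvalsh(H)[0])
```

On real hardware, the same optimization loop runs with the energy measured on the quantum processor instead of computed from a state vector, which is why a short, device-tailored ansatz and a noise-robust optimizer matter.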
Release time: 2017-09-15
IBM Goes All Out for AI
IBM is investing $240 million in joint development work with the Massachusetts Institute of Technology to take artificial intelligence to the next level of humanlike capability over the next 10 years. The new MIT-IBM Watson AI Laboratory, co-located with IBM's Watson Health and IBM Security headquarters at Kendall Square (near the MIT campus in Cambridge, Mass.), will gather more than 100 AI experts to advance AI capabilities, physical architectures, and applications, particularly to expand Watson's expertise in health care and cybersecurity.

Beyond advancing the state of the art in current, proven AI algorithms, such as deep learning, the Watson AI Lab will look to create algorithms that mimic other critical brain functions. Concurrently with its technical work, the research effort will evaluate the economic and ethical implications of AI for society, with an emphasis on maximizing AI's positive impact on health care and cybersecurity.

To get started, the Watson AI Lab will concentrate on three gaping needs that go beyond current researchers' almost exclusive concentration on deep-learning algorithms derived from work done in the 1980s and 1990s. First, more-advanced AI algorithms will develop and expand the capabilities of AI based on recent discoveries about parts of the working brain beyond the cortex, where deep learning occurs. For example, humans can come up with ad hoc solutions for problems of any complexity by leveraging continuous learning, which effects radical changes in the brain's structure that allow nonlinear extrapolation. Watson AI Lab researchers will use the new findings about the human brain to create brain-inspired algorithms that render old ones obsolete for tackling, for instance, Big Data conundrums, no matter how complex.

Second, the Watson AI Lab will aim to redefine the physical materials, subsystems, and overall architectures of e-brains, especially those that leverage the analog functions of the brain, to maximize training speed, ease deployment, and incorporate newly minted technologies such as quantum computing. The lab intends to invent brainlike quantum devices and incorporate them into a new breed of analog algorithms that achieve quantum speeds with far less hardware than would be required for analog or digital processors used alone.

Third, in tandem with the work at the Watson AI Lab, the IBM Watson Health and IBM Security headquarters in Kendall Square will develop biomedical and cybersecurity applications that leverage the new AI techniques. The researchers will explore AI solutions for ensuring the privacy of medical data and the personalization of health care, including optimal treatment plans for specific patients, in conjunction with health care delivery to a broader range of people, nations, and enterprises. The hope is that AI can level the playing field for the delivery of both medical and cybersecurity solutions, so that individuals will not have to foot the full bill for the costly services of modern medical and cyber experts.

The fruits of the lab will be split evenly between open-source material that fosters the ethical application of AI for everyone who can benefit from it and private-sector endeavors that encourage MIT faculty and students to commercialize the lab's inventions and innovations.

The creation of the Watson AI Lab builds on a 2016 pact between IBM and MIT's Department of Brain and Cognitive Sciences. IBM also has a five-year, $50 million effort under way with MIT and Harvard University to research AI and genomics.
The Watson AI Lab is by far the most ambitious of these initiatives.  The lab is co-chaired by Dario Gil, IBM Research vice president of AI, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering. Engineers looking for more information on the lab or wishing to join its ranks should contact the Watson AI Lab directly.
Release time: 2017-09-14
IBM Deep Learning Breaks Through
IBM Research has reported an algorithmic breakthrough for deep learning that comes close to achieving the holy grail of ideal scaling efficiency: Its new distributed deep-learning (DDL) software enables a nearly linear speedup with each added processor. The development is intended to achieve a similar speedup for each server added under IBM's DDL algorithm.

The aim "is to reduce the wait time associated with deep-learning training from days or hours to minutes or seconds," according to IBM Fellow and Think blogger Hillery Hunter, director of the Accelerated Cognitive Infrastructure group at IBM Research.

Hunter notes in a blog post on the development that "most popular deep-learning frameworks scale to multiple GPUs in a server, but not to multiple servers with GPUs." The IBM team "wrote software and algorithms that automate and optimize the parallelization of this very large and complex computing task across hundreds of GPU accelerators attached to dozens of servers," Hunter adds.

IBM claims test results of 95 percent scaling efficiency for up to 256 Nvidia Tesla P100 GPUs spread across a 64-node cluster, using the open-source Caffe deep-learning framework. The results were calculated for image-recognition learning but are expected to apply to similar learning tasks. IBM achieved the nearly linear scaling efficiency in 50 minutes of training time. Facebook Inc. previously achieved 89 percent efficiency in 60 minutes of training time on the same data set.

IBM is also claiming a validation-accuracy record of 33.8 percent on 7.5 million images in just seven hours of training on the ImageNet-22k data set, compared with Microsoft Corp.'s previous record of 29.8 percent accuracy in 10 days of training on the same data set. IBM's processor was its PowerAI platform, a 64-node Power8 cluster (plus the 256 Nvidia GPUs) providing more than 2 petaflops of single-precision floating-point performance.

The company is making its DDL suite available free to any PowerAI platform user. It is also offering third-party developers a variety of application programming interfaces to let them select the underlying algorithms that are most relevant to their application.
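To put the "95 percent scaling efficiency" figure in concrete terms, the snippet below computes the effective speedup it implies. This is generic parallel-computing arithmetic, not IBM's benchmark code, and it assumes Facebook's earlier figure was measured at a comparable GPU count.

```python
# Scaling efficiency = measured speedup / ideal (linear) speedup,
# so the speedup actually delivered = processor count * efficiency.

def effective_speedup(n_procs: int, efficiency: float) -> float:
    """Speedup delivered by n_procs at a given scaling efficiency."""
    return n_procs * efficiency

print(effective_speedup(256, 0.95))  # IBM's claim: ~243x a single GPU
print(effective_speedup(256, 0.89))  # Facebook's earlier result: ~228x
```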
Release time: 2017-08-14
IBM Claims Tape Density Record
IBM researchers have set a tape areal-density record of 201 gigabits per square inch, 20 times the areal density of current commercial tape drives, enabling a single palm-sized cartridge to hold 330 terabytes of uncompressed data. IBM Research and Sony Storage Media Solutions, which developed the nano-grained sputtered tape used for the demonstration prototype, described the achievement in Tsukuba, Japan, today (Aug. 2) at The Magnetic Recording Conference (TMRC 2017).

Tape was invented more than 60 years ago and has repeatedly been deemed obsolete, but it remains the dominant method for storing cold data: data that is infrequently accessed but must be maintained, such as tax documents and health care records. The Big Data era has seen a resurgence in popularity for tape, which is valued for its small size and low cost relative to other storage alternatives, as well as for its ability to store not just backup and archival data but also the massive sensor and transactional data streams going up to the cloud. Indeed, business at IBM's tape storage unit grew by 8 percent last year, according to Gartner Inc.

An increase in areal recording density simply means that less space is needed to store massive amounts of seldom-accessed information. IBM has broken the areal-density record for tape storage five times since 2006, with most of the breakthroughs ending up in commercial products. Although the IBM-Sony demonstration at TMRC 2017 is a prototype, IBM suggested that a commercial product could arrive next year.

Sony achieved nano-granularity by sputtering vertically oriented, 7-nanometer magnetic grains on a tape substrate topped with a protective layer and a permanent lubricant. IBM created signal-processing algorithms that use noise-predictive detection to enable a linear density of 818,000 bits per inch of tape using an ultra-narrow, 48-nm tunneling magnetoresistive head. The ultra-narrow tracks enable a thirteenfold increase in track density over IBM's previous generation, to 246,200 tracks per inch at a bit-error rate below 1e-20, the researchers reported.
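As a sanity check, the headline density follows directly from the quoted linear and track densities: 818,000 bits/inch × 246,200 tracks/inch ≈ 2.01 × 10^11 bits per square inch, which is the 201-gigabit-per-square-inch figure; the 330-TB cartridge capacity then follows from the total recordable tape area packed into the cartridge.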
Release time: 2017-08-04
IBM Processor Claims New Level of Data Encryption
IBM claims its new z14 microprocessor is the fastest in the world, enabling encryption of "all data, all the time."

Encryption accelerators encode all data used by real-time analytics, interactions with Internet of Things (IoT) devices, and in-house or cloud applications, all within the same transaction, without changing a single line of application code or impacting throughput, according to IBM. The z14 can perform more than 12 billion encrypted transactions per day, compared with 2.5 billion for the z13, a gain achieved through a 400 percent increase in z14 silicon real estate dedicated to cryptography plus an accelerated PCI-bus Crypto Express card.

"Of the 9 billion records breached in the past five years, only 4 percent were encrypted, leading to a predicted $2 trillion in losses to cybercrime worldwide by 2019," Mike Desens, vice president of IBM Z Systems, told EE Times in an exclusive interview.

Solitaire Interglobal Ltd. (Carpentersville, Ill.), in a report released Monday (July 17), claims IBM's z14 processor encrypts data 18 times faster than x86 platforms and at 5 percent of the cost, while still meeting both Federal Reserve regulations and the European Union's (EU's) General Data Protection Regulation. Gemalto (Belcamp, Md.) also claims IBM's on-chip cryptographic engine can encrypt application programming interfaces (APIs) three times faster than x86 systems.

"The perimeter defense we use today has never been able to keep up with the bad guys," Desens told EE Times. "But as long as we can protect our encryption keys, the z14 puts the hackers out of business."

To protect the keys, the z14 includes tamper-responding hardware on-chip that prevents intruders from getting hold of them by instantly erasing up to millions of encryption keys before they can be stolen. After the intruder is blocked from the system, the keys are automatically reconstituted, meeting Level 4 of the Federal Information Processing Standards (FIPS).

"Encrypted data is only as good as your key protection, which is why we have included tamper-responding hardware in the IBM Z key-management system, which meets the Level 4 Federal Information Processing Standard, where the norm for other high-security computers is just Level 2," Desens told EE Times. "The key difference between Level 4 and Level 2 is our tamper response, which deletes all keys, then automatically reconstitutes them after the intruder is repelled."

IBM also claims its new security measures protect against insider threats from contractors (like Edward Snowden) and any other privileged user, whether the encoded data is in flight, at rest, or currently running.

On the technical side, the z14 processor has access to 32 terabytes of memory (three times more than the z13), 10 times lower latency to mass storage, and three times faster input/output (I/O), and it runs Java 50 percent faster than x86 servers, according to Solitaire. Each z14 processor has up to 10 cores, and a four-drawer rack can pack 170 cores, which together execute 145 billion instructions per second (145,000 MIPS). The processor also incorporates new single-instruction, multiple-data (SIMD) instructions and a special hardware engine providing a guarded storage facility (GSF) for pause-less garbage collection in Java and similar programming languages.

IBM says the z14 has increased on-chip cryptographic performance by 7x over the z13.
In addition, IBM is announcing its next-generation PCIe Crypto Express6S hardware security module, with twice the performance of the prior generation.

"Now the z14 encryption speed is fast enough to encrypt all data automatically, moving away from the perimeter-defense approach that hackers have so easily breached," Desens told EE Times. "IBM's z14 enables the only industry platform that has 100 percent encryption, but the biggest ah-ha when building it was how much easier it makes the lives of users, programmers, and system managers. Now nobody has to pick and choose what to encrypt. Everything is encrypted automatically without changing a line of application code. Even the tasks of IoT designers are simplified."

The new IBM Z Systems can be located in-house, accessed at any of six new IBM Cloud Blockchain data centers (in New York, London, Frankfurt, São Paulo, Tokyo, and Toronto), or used through the "cloud consumption model" (platform-as-a-service) with instant payment for pay-as-you-use microservices.
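For context, the bulk symmetric encryption that the z14 accelerates in silicon is the same class of operation that software performs with crypto libraries today, only much slower and with key material exposed to the host. The sketch below is a generic software illustration using the open-source Python cryptography package; it is not IBM's hardware interface.

```python
# Generic AES-256-GCM round trip, illustrating the kind of symmetric bulk
# encryption the z14 offloads to hardware. Software-only sketch; on the z14
# the key material would stay inside tamper-responding hardware instead.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce, must be unique per message

plaintext = b"transaction record"
ciphertext = aesgcm.encrypt(nonce, plaintext, b"txn-header")  # header as AAD
assert aesgcm.decrypt(nonce, ciphertext, b"txn-header") == plaintext
```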
Release time: 2017-07-19
IBM Optics Go CMOS
Researchers from IBM this week are describing a breakthrough in 60-gigabit-per-second (Gb/s) optical interconnect that the company claims will lead to broad replacement of costlier 56-Gb/s copper interconnects.

At the 2017 Symposia on VLSI Technology and Circuits in Kyoto, Japan, scientists from IBM Research in Zurich will describe an inexpensive 60-Gb/s optical receiver that is expected to be followed next year by a matching optical transmitter. Together, the two devices will form a complete optical transceiver built in CMOS, at costs the company expects to be lower than those of a copper interconnect.

"We are developing a single-lane 60-gigabit-per-second optical receiver with non-return-to-zero (NRZ) signaling, targeting low-cost multi-mode vertical-cavity surface-emitting laser (VCSEL)-based links," Alessandro Cevrero, an engineer at IBM, told EE Times in advance of the symposium.

"The power is way lower than our competitors', ~120 mW for the receiver and eventually below 300 mW for the full transceiver," Cevrero said. "Also, its compact CMOS footprint and low power consumption mean it can be moved closer to the processor or switch chip and eventually even be put in the same package, or even on the processor chip die, providing high-bandwidth connectivity directly from the processor or switch chip spanning up to 100 meters. This covers links from processor to processor, processor to memory, from drawer to drawer inside a rack, and from a rack to a tier-1 Internet switch."

Cevrero said that implementing the devices in CMOS enabled IBM to essentially double the transmission speed, cutting the cost per gigabit per second in half. "Some people believed that a SiGe solution was required to achieve good optical sensitivity at data rates above 32 Gb/s," Cevrero said. "Our work demonstrates that CMOS can achieve the same sensitivity, but at much lower power consumption."

The 60-Gb/s optical link IBM demonstrated still depends on discrete III-V photodetectors (for the receiver) and discrete III-V lasers (for the transmitter), together forming a transceiver that is otherwise all-CMOS. Others, such as Intel (which offers a 25-Gb/s optical transceiver), use silicon photonics to modulate the light from III-V lasers. Intel combines four such channels to achieve 100 Gb/s today, but at much higher cost and power consumption, according to IBM. Intel, however, is shooting for the same goal as IBM by 2020.

IBM's current prototype runs at a wavelength of 850 nanometers, the standard wavelength for VCSEL-based multi-mode optical links, making it suitable for processor-to-memory, processor-to-processor, and server-to-server communications. Once the complete transceiver is demonstrated later this year or in early 2018, the price crossover point will have been reached, according to Thomas Toifl, manager of the high-speed interconnects group at IBM Research in Zurich.

"So far, optical links were always pushed out due to their higher costs, but now we have reached the point where optics are at the same price as electrical links," Toifl told EE Times in advance of the VLSI Symposium. "Electrical links, however, need complex equalization when we go to higher data rates, and hence require more power. Also, their distance is limited to about two meters of cable, compared with 100 meters for our optical solution."
Toifl also claimed that IBM's "breakthrough" CMOS photonics technology provides superior sensitivity (-9 dBm) and is ideal for the high-throughput requirements of cloud computing. The team also claims to have already pushed its existing CMOS circuitry past 70 Gb/s, but is waiting for the III-V photodiodes and vertical-cavity surface-emitting lasers to catch up before publicizing the result.

IBM has already demonstrated graphene photodetectors on silicon-on-insulator substrates and silicon-germanium lasers, but Toifl said the next-generation all-CMOS transceivers will not require germanium, making them exceptionally cool-running.
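The quoted power numbers translate into a convenient energy-per-bit figure for comparing links. The arithmetic below is a back-of-envelope illustration, not a metric IBM published.

```python
# Energy per bit implied by the power figures quoted above.

def pj_per_bit(power_mw: float, rate_gbps: float) -> float:
    """Convert power draw (mW) at a data rate (Gb/s) to picojoules per bit."""
    return (power_mw * 1e-3) / (rate_gbps * 1e9) * 1e12

print(pj_per_bit(120, 60))  # receiver alone: 2.0 pJ/bit
print(pj_per_bit(300, 60))  # full-transceiver target: 5.0 pJ/bit
```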
Release time: 2017-06-09
IBM Claims 5nm Nanosheet Breakthrough
IBM researchers and their partners have developed a new transistor architecture based on stacked silicon nanosheets that they believe will make FinFETs obsolete at the 5nm node.

The architecture, described Monday (June 5) at the 2017 Symposia on VLSI Technology and Circuits in Kyoto, Japan, is the culmination of 10 years of research on nanosheets by IBM, its Research Alliance partners GlobalFoundries and Samsung, and equipment suppliers. Compared with FinFETs, the new architecture consumes far less power, according to the researchers.

The Alliance's breakthrough should enable battery-powered devices like smartphones and other mobile devices to run for two to three days on a single charge, as well as boost the performance of artificial intelligence (AI), virtual reality, and even supercomputers, they say.

Less than two years after developing 7nm test chips with 20 billion transistors, the researchers say they have paved the way for 30 billion transistors on a fingernail-sized chip with quadruple all-around nanowire gates. Test results indicate a 40 percent boost in performance (at the same power as 7nm FinFETs) or up to a 75 percent savings in power compared with today's advanced 10nm transistors.

According to IBM, the new 5nm breakthrough will boost its cognitive computing efforts, as well as everybody's efforts toward higher-throughput cloud computing and deep learning, along with lower power and longer battery life for all mobile Internet-of-Things (IoT) devices.

To achieve the breakthrough, the Research Alliance had to overcome the problems plaguing EUV (extreme ultraviolet) lithography, which was already on its roadmap for producing 7nm FinFETs. Besides the shorter-wavelength advantage of EUV, the Research Alliance also found ways to continuously adjust the width of its nanosheets in both the chip-design and manufacturing-process phases. This fine-tuning of performance-versus-power tradeoffs is impossible for FinFETs, which are constrained by their fin height, rendering them unable to increase current flow for higher performance when scaled to 5nm, according to the researchers.

IBM believes its nanosheet architecture will rank alongside process-technology breakthroughs such as single-cell DRAMs, chemically amplified photoresists, copper interconnects, silicon-on-insulator, strained materials, multi-core processors, immersion lithography, high-k dielectrics, embedded DRAM, 3D chip stacking, and air-gap insulators.

Gary Patton, GlobalFoundries' chief technology officer and head of worldwide R&D, called the announcement "groundbreaking" and said it demonstrates that GlobalFoundries is actively pursuing next-generation technologies at 5nm and beyond.

Also contributing to the Research Alliance's 5nm nanosheets was the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering's NanoTech Complex in Albany, N.Y.
Release time: 2017-06-06
