Expert Panel Debunks AI Hype

Release time: 2017-06-28
Source: EE Times

Neural networks have hit the peak of a hype cycle, according to a panel of experts at an event marking the 50th anniversary of the Alan Turing Award. The technology will see broad use and holds much promise, but it is still in its early days and has its limits.

Many panelists said that artificial intelligence is a misnomer for neural networks, which do not address fundamental types of human reasoning and understanding. Instead, they are tools for what will be a long journey toward building AI.

The discussion of deep learning was particularly relevant given Turing’s vision that machines would someday exceed humans in intelligence. “Turing predicted [that] AI will exceed human intelligence, and that’s the end of the race; if we’re lucky, we can switch them off,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, and an AI researcher who is writing a new edition of a textbook on the field.

“We have at least half a dozen major breakthroughs to come before we get [to AI], but I am pretty sure they will come, and I am devoting my life to figure out what to do about that.”

He noted that a neural network is just one part of Google’s AlphaGo system, which beat the world’s best Go players.

“AlphaGo … is a classical system … and deep learning [makes up] two parts of it … but they found it better to use an expressive program to learn the rules [of the game]. An end-to-end deep learning system would need … [data from] millions of past Go games that it could map to next moves. People tried and it didn’t work in backgammon, and it doesn’t work in chess,” he said, noting that some problems require impossibly large data sets.
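
To make Russell’s point concrete, the end-to-end approach he describes amounts to behavior cloning: fitting a network that maps raw board positions directly to the next move, using an enormous archive of past games. The sketch below is purely illustrative; the architecture, the input encoding and the random stand-in data are assumptions, not a description of AlphaGo.

```python
# Minimal behavior-cloning sketch: "board position -> next move" for a 19x19 board.
# Illustrative only; a real system would need a vast archive of recorded games.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, board_size: int = 19):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * board_size * board_size, board_size * board_size)

    def forward(self, boards):             # boards: (N, 3, 19, 19) planes: black, white, empty
        x = self.conv(boards)
        return self.head(x.flatten(1))     # logits over the 361 board points

model = PolicyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for (position, expert move) pairs -- the data-hungry part Russell describes.
boards = torch.randn(32, 3, 19, 19)
expert_moves = torch.randint(0, 361, (32,))

optimizer.zero_grad()
loss = loss_fn(model(boards), expert_moves)
loss.backward()
optimizer.step()
```

Such a model only interpolates over positions that resemble its training archive, which is why the approach breaks down when the required number of examples becomes impossibly large.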

Russell characterized today’s neural nets as “a breakthrough of sorts … fulfilling their promise from the 1980s … but they lack the expressive power of programming languages and declarative semantics that make database systems, logic programming, and knowledge systems useful.”
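
A toy contrast may help illustrate the expressive power Russell refers to; the facts, names and rule below are hypothetical examples, not something discussed by the panel. A single declarative rule covers every case it quantifies over without any training data, whereas a learned pattern matcher can only generalize from examples it has seen.

```python
# Toy illustration of declarative knowledge: one rule, zero training examples.
parent_of = {("alice", "bob"), ("bob", "carol"), ("dave", "erin")}  # hypothetical facts

def grandparent_of(x, z):
    """Declarative rule: grandparent(x, z) <- parent(x, y) and parent(y, z)."""
    middles = {child for (_, child) in parent_of}
    return any((x, y) in parent_of and (y, z) in parent_of for y in middles)

print(grandparent_of("alice", "carol"))  # True, derived from the rule rather than learned
```

A neural network can approximate the same relation only after seeing many labeled examples, and it cannot state the rule itself in a form that can be queried, composed or verified the way a logic program or database view can.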

Neural nets also lack the wealth of prior understanding that humans bring to problems. “A deep-learning system would never discover the Higgs boson from the raw data” of the Large Hadron Collider, he added. “I worry [that] too much emphasis is put on big data and deep learning to solve all our problems.”

Related reading

Artificial Intelligence (AI) has inspired the general populace, but its rapid rise over the past few years has given many people pause. From realistic concerns about robots taking over jobs to sci-fi scares about robots more intelligent than humans building ever smarter robots themselves, AI inspires plenty of angst.

Within the technology industry, we have a better understanding of the potential of the technology, but the ways in which it will develop are less clear. Semiconductor Engineering asked the community to assess the status of AI and machine learning (ML) and whether they thought the technology was being overhyped.

“What makes AI so interesting is that it’s a global phenomenon with universities, established companies, start-ups and even countries all trying to move the game forward as fast as possible,” says Andrew Grant, senior business development director for Vision & AI at Imagination Technologies. “The Fourth Industrial Revolution is perhaps the first where people can see change happening on an almost daily basis.”

We are still in the early days of this. “In the technology adoption cycle, this technology has moved past the tech enthusiasts and visionaries that define the early market,” says Markus Levy, head of AI at NXP. “We are now standing at the edge of the chasm, which we are successfully crossing to reach the mainstream market. The good news is we know what it takes to cross this chasm, and there are hundreds of companies around the world, including tech bellwether companies, working hard to make that possible. We believe that within the next couple of years this revolutionary technology will have made substantial inroads into the mainstream market. Even though we know that this technology is real and not a passing attempt to grow a market, people will continue to use and misuse the buzzwords until they clearly understand the real meanings.”

It is the creation of those buzzwords that may separate the technical realities from the mainstream’s current perceptions. “ML is just pattern matching at its core, and often the two words are interchanged to sensationalize ongoing research and industry press releases,” says Sharad Singh, digital marketing engineer for Allied Analytics. “AI is definitely overhyped in the media as the next technological breakthrough that has profound life-changing applications, and institutions are cashing in on the hype to promote themselves.”

Some of the changes seen by the mass market may not be life-changing. “It might be overhyped today,” says Benjamin Prautsch, group manager for advanced mixed-signal automation at Fraunhofer EAS. “However, I believe that AI will be a core element in almost every future system. AI won’t be visible, just like the transistor. Its effect, however, will be. AI will not only add new functions to devices, it will also improve electronic design and design automation, and many other fields.”

That is already happening. “AI is broadly deployed today, in many ways you may not notice, such as smart unlock features on your smartphone using your face or fingerprint, predictive text in your emails and instant messages, and efficient energy-management monitoring,” says Steve Roddy, vice president of special projects in the Machine Learning Group at Arm. “However, some AI applications are overhyped, such as self-driving autonomous cars or companion robots replacing human interaction. The technology just isn’t sufficiently advanced for these kinds of things to be routinely and consistently deployed.”

Raymond Nijssen, vice president and chief technologist for Achronix, agrees. “The implications will be much broader than anyone can imagine. We do hear some wild claims, and some of them definitely are overhyped. But it will find its way into our lives and other areas of technology in ways that have not yet been foreseen. There will be a lot of development, but we will encounter some glass ceilings where we had high expectations that will not become reality. That will have a lot to do with where AI is just not intelligent enough.”

The term AI itself is problematic. “It is all about context and whose expectations are considered,” says David White, senior group director for R&D in the Custom IC & PCB Group at Cadence. “I believe there are extremes on both sides of the debate. I don’t believe we are anywhere near true machine intelligence that threatens our safety, and I don’t believe that AI and deep learning are pure hype with no redeeming engineering value. My expectations are that AI and deep learning would provide value in real-world systems for specific tasks, and in that context, I believe we are on track.”

And context is important. “Zachary Lipton, an assistant professor at Carnegie Mellon University, states that the AI hype is blinding people to its limitations and is dangerous in the long run,” says Allied Analytics’ Singh. “He argues that the current state of AI is poorly understood by the public, as the latter often associates AI with self-aware robots taking over humanity. In reality, machines still have a long way to go before being able to develop human-like intelligence. Legendary physicist Stephen Hawking and Tesla founder Elon Musk have both publicly spoken about the dangers of AI, while Microsoft co-founder Bill Gates believes there’s reason to be cautious.”

What complicates the picture is the rate of change. “It’s only a few years since Geoffrey Hinton’s team at the University of Toronto made breakthroughs in CNNs,” points out Grant. “Since then, Google, Facebook and others have made many of their own developments available to the wider audience of data scientists, software developers and hardware teams.”

Understanding the roots of the technology can help. “If you look at AI, the best way to think about it today is a super-universal curve fitting function,” explains Achronix’s Nijssen. “Anything that fits that mold can make a lot of progress beyond what we see now. But there are other forms of intelligence that are not an extrapolation of patterns or images or sequences of events that have been seen before, where actual interpretation and deeper understanding is necessary. Today, that is not part of what is being considered.”
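
Nijssen’s “curve fitting” framing can be made concrete with a toy sketch. The example below is purely illustrative (the network size, learning rate and target function are arbitrary assumptions): a tiny neural network fit by gradient descent to samples of a function it knows nothing else about.

```python
# Toy illustration of the "universal curve fitting" view: fit samples of an unknown
# function with a one-hidden-layer network. All sizes here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                                   # the "pattern" to be fit

W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    h = np.tanh(x @ W1 + b1)                    # hidden activations
    pred = h @ W2 + b2
    err = pred - y                              # gradient of squared error (up to a constant)
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # backpropagate through tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("mean squared error:", float(np.mean((pred - y) ** 2)))
```

Anything that can be cast as this kind of interpolation of observed patterns fits the mold Nijssen describes; tasks that require interpretation beyond the observed pattern do not.
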
The area covered by curve fitting is large. “We still haven’t cataloged all the ways and places where it can be used,” says Peter Glaskowsky, computer system architect at Esperanto Technologies. “Almost anywhere that decisions depend on recognizing repeated patterns, AI will play a role.”

Many of these will continue to involve humans. “There are so many areas that will benefit from the combination of person, machine and AI,” says Imagination’s Grant. “With that combination we can begin to tackle problems that would otherwise elude us. In health care, security and economics, for example, the opportunities are literally endless.”

Taking the human out of the loop is where problems may start. “During this process, it will be important to understand AI’s decision-making so the quality of decisions can be measured,” warns Fraunhofer’s Prautsch. “If the decision, however, gets too much attention over the process of decision-making, then hidden dangers could arise.”

And there will be failures. “There are opportunities within the market for one actor, or one group of actors, to do something that is sub-optimal around AI,” says Marc Naddell, vice president of marketing for Gyrfalcon Technologies. “If they over-promote the capability of the solution, that could result in disappointment. That can be used as evidence that AI does not really live up to the billing.”

NXP’s Levy tackles this problem. “Every technology has the hype cycle with troughs of disillusionment. We view ML and AI as a natural progression of the technological advances that have characterized human evolution for millennia. Look at it this way: humans have become the most successful species because we figured out how to transfer our acquired knowledge, problem-solving skills, and decision-making techniques to our progeny, not through genes, but extra-somatically. We have been doing the same thing to our machines by making them more efficient and smarter, and now the natural progression is to enable them to think. So unlike other technologies, AI and ML are not over-hyped or short-lived. They are fundamental to human nature.”

What of machines creating better machines? At present, the furthest we have gone is to employ these techniques to create better silicon. “There is now unprecedented interest and investment in applying ML to chip design,” says Jeff Dyck, director of engineering/R&D at Mentor, a Siemens Business. “This has led to a new generation of ML practitioners in EDA, many of whom have a solid academic knowledge of ML. They are now developing promising results in controlled environments. However, we are still learning from the school of ML hard knocks about the challenges of bringing ML methods from the lab to production. Perhaps we are at the early stage of a golden age of ML for chip design, but we need to see the promising techniques in the lab successfully move to production for the value to be realized.”

Accelerating development

ML and AI run on very sub-optimal hardware today. “We will see AI processing move from CPUs and GPUs to dedicated AI accelerator chips,” says Glaskowsky. “Because these new devices are designed specifically for machine-learning algorithms, they will deliver better performance at lower prices, and they’ll be much more energy-efficient on the same tasks, typically 10 times better than GPUs and 100 times better than CPUs.”

And we are beginning to see custom silicon being used. “There are dozens of companies bringing AI chips to market in 2019 and 2020,” says Geoff Tate, CEO of Flex Logix. “Many will miss the mark, but some of them will deliver the goods, enabling rapid growth of edge AI. The long-term winners in AI chips will be those who can keep up with the rapid pace of change as neural networks improve.”

According to a recent report by Allied Market Research, the global deep learning chip market is projected to reach $29.4 billion by 2025, growing at a CAGR of 39.9% from 2018 to 2025.

Xilinx has jumped into this market in a big way. “They have invested billions in their Everest platform, expected to tape out by 2018 on 7nm technology,” says Sergio Marchese, technical marketing manager for OneSpin Solutions. “Flexible and powerful hardware platforms supporting heterogeneous computing are crucial to accelerate the development and deployment of machine learning and AI-based applications.”

We have to look at all metrics. “At some point, it is not just about performance,” warns Naddell. “It is about cost of ownership, and that includes energy use.”

Achieving that will require a range of devices. “They will cover a wide range of cost and power points,” says Glaskowsky. “There will be AI chips (and IP blocks for SoC designs) that cost less than a dollar. Big standalone chips may cost over a thousand dollars, but will outperform a box full of GPUs costing far more. Most of the world’s AI processing will shift from legacy platforms to optimized solutions as quickly as the new silicon can be manufactured.”

Some of those devices are already in consumer products. “Neural network accelerators will become ubiquitous, in every device in our environment; indeed, we could call it ambient AI,” says Grant. “As the ability to process complex neural networks increases and the price per device falls, we will see this everywhere, from urban infrastructure providing advanced services such as traffic and building management and security, to monitoring the elderly in care homes.”

There is a lot of work ahead. “The first generation of solutions is not very efficient,” says Nijssen. “Both training and inferencing are done in a very brute-force fashion. GPUs are useful, but they are simple-minded and they don’t allow for things that deviate from just pumping through a lot of MAC functions. There are many techniques that people have not had a chance to try out yet because the field is moving so quickly. Once the dust settles and the way that people do training becomes more uniform, and the algorithms do not change on a daily basis, you will see people pushing down the power consumption curve.”

“In the hardware space, it’s critical to have flexible, scalable and energy-efficient hardware that spans all performance points, from CPUs to GPUs and NPUs,” says Arm’s Roddy. “The market is expanding and will continue to ramp up. AI is here to stay.”

2018-12-26

A lot has been accomplished in the last year to improve the comprehension, accuracy and scalability of artificial intelligence, but 2019 will see efforts focused on eliminating bias and making decision-making more transparent.

Jeff Welser, vice president at IBM Research, says the organization has hit several AI milestones in the past year, and he is predicting three key areas of focus for 2019. Bringing cognitive solutions powered by AI to a platform businesses can easily adopt is a strategic imperative for the company, he said, as is increasing understanding of AI and addressing issues such as bias and trust.

When it comes to advancing AI, Welser said there has been progress in several areas, including comprehension of speech and analysis of images. IBM’s Project Debater work has extended current AI speech-comprehension capabilities beyond simple question-answering tasks, enabling machines to better understand when people are making arguments, he said, taking it beyond just “search on steroids.” One scenario involved asking a question that had no definitive answer: whether government should increase funding for telemedicine.

Just as it is critical to get AI to better understand what is being said, progress has been made in getting it to recognize what it sees faster and more accurately, said Welser. Rather than requiring thousands or possibly millions of labeled images to train a visual recognition model, IBM has demonstrated that it is now possible for AI to recognize new objects with as little as one example as a guideline, which makes AI more scalable.

(Photo caption: IBM Research AI introduced a Machine Listening Comprehension capability for argumentative content stemming from its work on Project Debater, pictured with professional human debater Dan Zafrir in San Francisco. Photo credit: IBM Research.)

Another way that AI learning is becoming scalable is getting AI agents to learn from each other, said Welser. IBM researchers have developed a framework and algorithm to enable AI agents to exchange knowledge, thereby learning significantly faster than with previous methods. In addition, he said, they can learn to coordinate where existing methods fail.

“If you have a more complex task, you don’t have to necessarily train a big system,” Welser said. “But you could take individual systems and combine them to go do that task.”

Progress is also being made in reducing the computational resources necessary for deep learning models. In 2015, IBM outlined how it was possible to train deep learning models using 16-bit precision; today, 8-bit precision is possible without compromising model accuracy across all major AI dataset categories, including image, speech, and text. Scaling of AI can also be achieved through a new neural architecture search technique that reduces the heavy lifting required to design a network.
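
The reduced-precision work Welser mentions can be illustrated in a much simpler form than IBM’s 8-bit training research: post-training quantization of a layer’s weights to 8-bit integers plus a single scale factor. The sketch below is an illustrative assumption, not IBM’s method.

```python
# Illustrative int8 weight quantization: store weights as int8 plus one fp32 scale.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(256, 256)).astype(np.float32)   # stand-in fp32 layer

scale = np.abs(weights).max() / 127.0           # map the largest magnitude onto the int8 range
w_int8 = np.round(weights / scale).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale   # what the arithmetic effectively uses

print("storage: %d -> %d bytes" % (weights.nbytes, w_int8.nbytes))
print("max absolute rounding error:", float(np.abs(weights - w_dequant).max()))
```

Training a model at such low precision, as IBM describes, is harder than merely storing it that way, because the accumulated rounding errors must not destabilize gradient descent.
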
All this progress needs to be tempered by the fact that AI must be trustworthy, and Welser said there will be a great deal of focus on this in the next year. Like any technology, AI can be subject to malicious manipulation, so it needs to be able to anticipate adversarial attacks. Right now, AI can be vulnerable to what are called “adversarial examples,” where a hacker might imperceptibly alter an image to fool a deep learning model into classifying it into any category the attacker desires. IBM Research has made some progress addressing this with an attack-agnostic measure to evaluate the robustness of a neural network and direct systems on how to detect and defend against attacks.

Another conundrum is that neural nets tend to be black boxes: how they come to a decision is not immediately clear, Welser said. This lack of transparency is a barrier to putting trust in AI. Meanwhile, it is also important to eliminate bias as AI is increasingly relied on to make decisions, he said, but that is challenging.

“Up to now we’ve seen mostly that people have been just so excited to design AI systems to be able to do things,” Welser said. “Then afterwards they try and figure out if they’re biased or if they’re robust or if they’ve got some issue with the decisions they’re making.”
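
The “adversarial example” attack described above is commonly illustrated with the fast gradient sign method. The sketch below shows that textbook construction on a stand-in, untrained classifier; it is not IBM’s attack-agnostic robustness measure, and the model and perturbation size are illustrative assumptions.

```python
# Classic FGSM sketch: nudge each pixel in the direction that increases the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in image classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)          # stand-in input image
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()                                               # gradient w.r.t. the *input*

epsilon = 0.03                                                # small, visually imperceptible step
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(1).item())
print("prediction after: ", model(adversarial).argmax(1).item())
```

Against a trained model, a perturbation this small can flip the predicted class even though the two images look identical to a person, which is exactly the vulnerability that robustness measures try to quantify.
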
2018-12-17

Uncle Sam wants to restrict a few good technologies, and it needs engineers to help identify them.

As part of legislation passed this summer, the U.S. Commerce Department put out a call for input by Dec. 19 on which of 14 broad emerging technologies should face export controls. The call quickly got attention from industry veterans and groups concerned that controls could hurt U.S. companies and worsen a growing tech trade war with China.

The call, issued on Nov. 14, listed aspects of biotech, AI, quantum computing, semiconductors, robotics, drones, and advanced materials as possible candidates. It gave special attention to AI, listing 10 specific areas ranging from computer vision and natural-language processing to AI chipsets. In semiconductors, it called out even broader areas, including microprocessor technology, SoCs, stacked memory on chip, and memory-centric logic.

The effort aims to determine which emerging technologies could be strategic to national security and how to identify and control them without “negatively impacting U.S. leadership in the science, technology, engineering, and manufacturing sectors.” It did not define the range of the controls except to say that, “at a minimum, it [would] require a license for [their] export … to countries subject to a U.S. embargo, including those subject to an arms embargo.”

A government spokesperson said that the Commerce Dept. plans to publish proposed controls on emerging technologies after reviewing comments to its call. It will take public comments on the proposed controls before making them final, but the spokesperson gave no timeline for the process.

The Commerce Dept. is expected to issue a second call early next year for guidance on what it calls fundamental, or more mature, technologies, including semiconductors and manufacturing equipment. The actions stem from the Foreign Investment Risk Review Modernization Act (FIRRMA), which aims to use export controls to stem a perceived leaking of sensitive technologies, especially to China.

The bill also expanded the role of the Committee on Foreign Investment in the U.S. Under an 18-month pilot program, CFIUS can now review non-controlling investments in U.S. companies in 27 areas, including semiconductors and semiconductor tools.

More than a dozen reactions to the Commerce call are already live on the government’s website, several pointing out the challenges and dangers of the effort. The Association for Computing Machinery is one of multiple groups requesting up to a 60-day extension of the deadline to submit responses, given the effort’s “enormous import not only to national security but to the future of American technological progress in industry and academia.”

“The list of technologies that Commerce is considering for controls is so broad that restrictions could severely limit opportunities to participate in international markets, weakening U.S. companies and U.S. competitiveness overall,” said Chris Rowen, a serial entrepreneur in semiconductors and now CEO of BabbleLabs, an AI software startup in Campbell, California.

The idea of export controls on AI is “analogous to saying, ‘Let’s not export software because it’s used in military systems,’” said Rowen, who is preparing his own response to the government call. “AI has become a basic software technique. I would not limit it in sweeping ways … they need to focus on areas where the majority of use is associated with the military.”

Nvidia is most likely to feel the impact of any export controls on AI, given that its GPUs are widely used for training neural networks in the data centers of global web giants such as Amazon, Alibaba, and Google. Controlling sales of its GPUs could “represent one of the few temporary choke points in AI development,” said Rowen.

Both Nvidia and Intel declined to comment on the government effort.

The move comes at an interesting moment in the rising trade war between the U.S. and China. President Trump and China’s Xi Jinping are expected to meet in Buenos Aires this weekend. It will be their first encounter since the two started levying increasing tariffs on each other’s goods, moves that industry groups lobbied against.

Looking toward the new export controls, industry representatives “just want to make sure this process is done thoughtfully, with a scalpel and not an ax,” said one expert, who asked not to be named.

One of the trickiest parts of the export controls is untangling so-called dual-use technologies that have clear military and commercial uses.

“We want appropriate controls on a targeted subset of technologies relevant to security interests, but we want to make sure we have access to commercial markets around the world … in addition, it serves no purpose if the U.S. controls technology that’s available elsewhere,” said the expert.

Another challenge is that China, the primary target of the moves, “is a big part of the tech supply chain and one of the largest markets for U.S. semiconductors,” he added.

It is unclear how long the process will take. Government policy makers will need time to sift through what could become hundreds of comments to form proposed export controls. Industry representatives hope they get at least 90 days to review and comment on the proposed rules before they are made final.

“We view this as a really important process that our industry is taking very seriously and plan to engage in, because the outcomes are of great consequence for us,” Christian Troncoso, a policy director at BSA, a Washington-based trade group for companies including Apple, Microsoft, IBM, and Oracle, told the Washington Post.

2018-11-30

Two of Europe’s key electronics and nanotechnologies research institutes, imec in Belgium and CEA-Leti in France, will collaborate to develop a European hub for artificial intelligence and quantum computing.

As security and privacy issues rise up the agenda in almost every organization, the race is on to process more at the edge and put more intelligence at endpoints. In electronics systems design, most of the major chip companies now offer or are developing deep learning and edge AI devices or intellectual property. Edge AI devices are often complete computer sub-systems displaying intelligent behavior locally on the hardware (chips), analyzing their environment and taking the required actions to achieve specific goals.

Edge AI is now considered to hold the promise of solving many societal challenges, from treating diseases that cannot yet be cured today to minimizing the environmental impact of farming. Decentralization from the cloud to the edge is a key challenge for AI technologies applied to large heterogeneous systems, and it requires innovation in a components industry that today relies on powerful but energy-guzzling processors.

This is where imec and CEA-Leti hope to develop a European center of excellence. The two organizations signed a memorandum of understanding during the state visit of French president Emmanuel Macron to Belgium, laying the foundation for a strategic partnership in AI and quantum computing, two key strategic value chains for European industry, to strengthen European strategic and economic sovereignty.

The joint efforts of imec and CEA-Leti underline Europe’s ambition to take a leading role in the development of these technologies. The research centers’ increased collaboration will focus on developing, testing and experimenting with neuromorphic and quantum computing, and should result in the delivery of a digital hardware computing toolbox that can be used by European industry partners to innovate in a wide variety of application domains, from personalized healthcare and smart mobility to the new manufacturing industry and smart energy sectors.

“The ability to develop technologies such as AI and quantum computing, and put them into industrial use across a wide spectrum of applications, is one of Europe’s major challenges,” said Luc Van den hove, president and CEO of imec, in a press statement. “Both quantum and neuromorphic computing (to enable artificial intelligence) are very promising areas of innovation, as they hold a huge industrialization potential.” Van den hove said a stronger collaboration in these domains between imec and CEA-Leti would help to speed up the technologies’ development time, providing them with the critical mass needed to create faster impact.

Emmanuel Sabonnadière, CEA-Leti CEO, said the collaboration with imec, as well as previous innovation-collaboration agreements with Germany’s Fraunhofer Group for Microelectronics, “will focus all three institutes on the task of keeping Europe at the forefront of new digital hardware for AI, HPC and cyber-security applications.”

Imec and CEA-Leti are inviting partners from industry as well as academia to join them and benefit from access to the research centers’ technology, enabling a much higher degree of device complexity, reproducibility and material perfection while sharing the costs of precompetitive research.

2018-11-22