Optimism about artificial intelligence and its potentially explosive power is growing, and the ability to develop high-performance chips suited to the market has become key to the artificial intelligence platform race. As a result, 2016 was a year in which chip companies and Internet giants deployed fully into the chip field. For now, Nvidia maintains an absolute lead. But as giants including Google, Facebook, Microsoft, Amazon and Baidu join the showdown, the future hardware landscape of AI remains unsettled.
In 2016, everyone saw the prospects and potential power of artificial intelligence. But whether for AlphaGo or self-driving cars, implementing any sophisticated algorithm ultimately rests on the computing power of hardware: the ability to develop a chip with high computing power that also meets market demand has become the key to the artificial intelligence platform game.
As a result, 2016 undoubtedly became the year in which chip companies and Internet giants deployed fully into the chip field: CPU giant Intel made three major acquisitions of companies in the artificial intelligence and GPU fields within the year; Google announced the development of its own processing system; and Apple, Microsoft, Facebook and Amazon joined in as well.
Among them, the leader Nvidia's advantages in the field of artificial intelligence have made it the absolute favorite of the capital market: over the past year, shares of Nvidia, once known as a gaming-chip company, soared from a long-steady $30 to $120.
Just as capital markets were hesitating over whether the artificial intelligence boom had pushed Nvidia's price artificially high, on February 10 Nvidia released its earnings for the fourth quarter of 2016: revenue grew 55% year-on-year, and net profit reached $655 million, up 216% from a year earlier.
"Just as Intel, Microsoft and other giants are investing in chips as the foundation of artificial intelligence technology, Nvidia's Q4 earnings show that its nearly 12 years of investment in artificial intelligence chips have begun to yield considerable profit," senior technology commentator Therese Poletti noted after the earnings release.
Research firm Tractica LLC estimates that hardware spending driven by deep learning projects will rise from $43.6 million in 2015 to $4.1 billion in 2024, and that related enterprise software spending will rise from $109 million to $10 billion over the same period.
It is this huge market that has prompted the likes of Google, Facebook, Microsoft, Amazon and Baidu to announce a technological shift toward artificial intelligence. "In artificial intelligence chips, Nvidia retains an absolute lead, but as technologies such as Google's TPU keep coming to market, the future pattern of AI hardware remains an open question," a senior European industry employee, who asked not to be named, told the 21st Century Business Herald.
Nvidia holds a significant lead in GPUs
According to Nvidia's latest annual report, its main businesses all posted double-digit growth. Beyond the gaming business, where it has long led, most of its gains in fact came from two new segments: the data center business and autonomous driving.
The annual report shows the data center business grew 138% and the autonomous driving business grew 52%.
"In fact, this accounts for most of Nvidia's results, because the growth of the data center and autonomous driving businesses is rooted in the development of artificial intelligence and deep learning," an American computer hardware analyst told the 21st Century Business Herald.
In the current field of deep learning, putting a neural network into practice takes two stages: first training, then execution (inference). In today's environment, the training stage relies heavily on the GPU (graphics processing unit, hereafter), which excels at processing large amounts of data; this is the domain where Nvidia has long led through image rendering in games and graphics-intensive applications. In the execution stage, the CPU is needed to handle complex program logic, a field Intel has led for more than a decade.
"Nvidia's current success in fact represents the success of the GPU, and Nvidia was one of the earliest GPU leaders," the analyst said.
Deep learning neural networks, especially those with hundreds or thousands of layers, place very high demands on high-performance computing, and the GPU has a natural advantage in handling such complex operations: its excellent parallel matrix computation ability provides significant speedups for neural network training and classification.
For example, researchers do not define a face by hand from the start; instead, they can show the computer millions of face images and let it work out for itself what a face should look like. When learning from such examples, GPUs can process them faster than traditional processors, speeding up the training process.
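As an illustrative sketch only (Nvidia's actual software stack is not shown, and all sizes and names below are invented), the following NumPy code trains a toy "face vs. not-face" classifier on synthetic data. The point is that the inner loop is dominated by large matrix multiplications, precisely the parallel workload that GPUs accelerate over traditional processors.

```python
import numpy as np

# Toy binary classifier: one dense layer trained by gradient descent.
# The heavy steps are matrix multiplications, which a GPU would run
# in parallel across thousands of cores; here NumPy runs them on CPU.
rng = np.random.default_rng(0)

n_samples, n_pixels = 256, 1024               # e.g. flattened 32x32 images
X = rng.standard_normal((n_samples, n_pixels))
true_w = rng.standard_normal(n_pixels)        # hidden "true" face pattern
y = (X @ true_w > 0).astype(float)            # synthetic labels

w = np.zeros(n_pixels)
lr = 0.1
for _ in range(100):
    logits = X @ w                             # matmul: the GPU-friendly step
    preds = 1.0 / (1.0 + np.exp(-logits))      # sigmoid
    grad = X.T @ (preds - y) / n_samples       # another large matmul
    w -= lr * grad

accuracy = ((X @ w > 0) == (y > 0.5)).mean()   # training accuracy
```

Training a real face recognizer replaces this single layer with many stacked layers, multiplying the matrix-math workload and hence the payoff from GPU parallelism.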
As a result, GPU-based supercomputers have become the option of choice for training all kinds of deep neural networks; the early Google Brain, for instance, used Nvidia GPUs for deep learning. "We're building a camera with a tracking feature, so we need to find the best chip, and the GPU is our first choice," Gunleik Groven, CEO of European AR start-up Quine, told this reporter at CES in January.
Currently, Internet giants such as Google, Twitter, Facebook, Microsoft and Baidu are using these GPU chips to let their servers learn from vast amounts of photos, videos, audio files and social media information, improving software functions such as search and automatic photo tagging. Some carmakers are also using the technology to develop driverless cars that can sense their surroundings and avoid dangerous areas.
In addition to its long lead in GPUs and graphics computing, Nvidia was also among the first tech companies to invest in artificial intelligence. In 2008, Andrew Ng, then at Stanford, published a paper on using CUDA on GPUs for neural network training. In 2012, Alex Krizhevsky, a student of deep learning "giant" Geoffrey Hinton, used Nvidia GeForce graphics cards to set a new image recognition accuracy record in the ImageNet competition; Nvidia CEO Jen-Hsun Huang often cites this as the beginning of Nvidia's focus on deep learning.
There are now about 3,000 AI start-ups around the world, most of which build on hardware platforms provided by Nvidia, according to reports.
"Deep learning has been proven to be very effective," Jen-Hsun Huang said at a press conference on February 10. While Nvidia's GPU computing platform is rapidly expanding its applications in cloud computing, artificial intelligence, gaming and autonomous driving, Huang said that in the next few years deep learning will become a fundamental tool at the core of computing.
AMD and Intel: the giants' AI evolution
Investors and chipmakers watch every move of the Internet giants. Nvidia's data center business, for example, has long provided data services for Google.
Nvidia is not the only GPU leader; Intel and AMD each have their own strengths in this field.
In November 2016, Intel announced an AI processor called Nervana, which it said it would prototype-test by the middle of 2017. If all goes well, the final form of the Nervana chip will be available by the end of 2017. The chip is based on technology from a company, also called Nervana, that Intel acquired earlier; according to Intel, it was the first company in the world to build chips specifically for AI.
Intel has revealed some details about the chip, code-named "Lake Crest", which will use the Nervana Engine and Neon DNN software. The chip accelerates various neural networks, such as those built on Google's TensorFlow framework.
The chip consists of an array of so-called "processing clusters" that handle simplified mathematical operations known as "flexpoints". This approach requires less data than floating-point operations, bringing roughly a tenfold performance improvement.
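Intel has not published the internals of its reduced-precision format, so the following is only a generic block-floating-point sketch of the idea: a whole tensor shares a single exponent, and each element is stored as a small integer, so hardware can use cheap integer arithmetic instead of full floating point. The function names and the 16-bit mantissa width are invented for illustration.

```python
import numpy as np

def to_flex(tensor, mantissa_bits=16):
    """Encode a tensor as integers sharing one exponent (block floating point).

    This is a sketch of the general technique, not Intel's actual format.
    """
    max_int = 2 ** (mantissa_bits - 1) - 1      # largest representable integer
    max_abs = np.abs(tensor).max()
    # Choose the shared exponent so the largest value fits in the integer range.
    exp = np.ceil(np.log2(max_abs / max_int)) if max_abs > 0 else 0
    scale = 2.0 ** exp
    ints = np.round(tensor / scale).astype(np.int32)
    return ints, scale

def from_flex(ints, scale):
    """Decode back to floating point: integer mantissas times the shared scale."""
    return ints.astype(np.float64) * scale

x = np.array([0.5, -1.25, 3.0, 0.001])
ints, scale = to_flex(x)
x_back = from_flex(ints, scale)
err = np.abs(x - x_back).max()                  # worst-case rounding error
```

Because every element shares one exponent, only the integer mantissas move through the arithmetic units, which is where the claimed data-volume and speed advantage over per-element floating point comes from.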
Lake Crest uses proprietary data connections to create larger, faster clusters in ring or other topologies, helping users build larger and more diverse neural network models. The interconnect comprises 12 bidirectional 100 Gbps links, with a physical layer based on 28G SerDes.
TPUs and FPGAs may stage an upset
More companies are trying to trigger a full-scale upset against the chip giants' dominance in GPUs. Representative among them is Google, which announced in 2016 that it would independently develop a new processing system called the TPU.
The TPU is a dedicated chip designed specifically for machine learning applications. By reducing the chip's computational precision, it reduces the number of transistors needed per operation, allowing the chip to perform more operations per second; finely tuned machine learning models thus run faster on the chip, letting users get more intelligent results more quickly. Google embeds the TPU accelerator chip on a circuit board and uses the existing hard disk PCIe interface to plug it into data center servers.
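Google has not published the TPU's internals in this article's timeframe, but the "reduced precision" idea can be sketched generically: weights and activations are mapped to 8-bit integers with a scale factor, the expensive multiply-accumulates run in integer arithmetic, and the result is rescaled afterward. The NumPy code below (all names and sizes invented) shows why fewer bits per operand still gives a close answer.

```python
import numpy as np

def quantize_int8(x):
    """Map float values to int8 using one symmetric scale factor (a sketch)."""
    scale = np.abs(x).max() / 127.0             # largest value maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8)).astype(np.float32)   # toy weight matrix
x = rng.standard_normal(8).astype(np.float32)        # toy activations

qw, sw = quantize_int8(w)
qx, sx = quantize_int8(x)

# Integer matmul (the kind of 8-bit multiply-accumulate a reduced-precision
# accelerator performs), then one float rescale at the end.
y_int = qw.astype(np.int32) @ qx.astype(np.int32)
y_approx = y_int * (sw * sx)
y_exact = w @ x

rel_err = np.abs(y_approx - y_exact).max() / np.abs(y_exact).max()
```

An 8-bit multiplier needs far fewer transistors than a 32-bit floating-point unit, which is the trade behind the article's claim of more operations per second at reduced precision.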
According to Urs Holzle, senior vice president of Google, Google will continue to use both TPUs and GPUs for some time. The GPU, he notes, can perform graphics operations and serve multiple purposes; the TPU is an ASIC, a logic IC designed for a specific use, which makes it faster but costly, since it performs only a single job.
Besides Google, Microsoft is also using a new type of processor called the field-programmable gate array (FPGA).
According to Microsoft, the FPGA already supports Bing; in the future it will drive new search algorithms based on deep neural networks, artificial intelligence modeled on the structure of the human brain, executing some instructions several orders of magnitude faster than ordinary chips. With it, your computer screen would stay blank for only 23 milliseconds instead of 4 seconds.
In the third generation of the design, the chips sit at the edge of each server, plugged directly into the network, while still forming FPGA pools that any machine can access. This began to look like something usable by Office 365, and finally Project Catapult was ready to go online. Moreover, the Catapult hardware accounts for only 30% of the cost of all the other server components and requires less than 10% of the power to run, yet it doubles the processing speed.
In addition, companies such as Nervana and Movidius mimic the GPU's parallel model but focus on moving data more quickly and omit the functions needed for graphics. Other companies, including IBM with its TrueNorth chip, have developed designs inspired by other features of the brain, such as neurons and synapses.
Because deep learning and artificial intelligence promise such an enormous future, the big players are all scrambling for technical advantage. If one of these companies, such as Google, were to replace existing chips with a new type of chip, it would essentially upend the entire chip industry.
"Whether it's Nvidia, Intel, Google or Baidu, everyone is searching for the hardware foundation of the broad artificial intelligence applications of the future," Therese Poletti says.
Many people also share the view of Google vice president Urs Holzle: in the long-term future of artificial intelligence, the GPU will not replace the CPU, and the TPU will not replace the GPU; the chip market will instead see ever greater demand and prosperity.