Jensen Huang, NVIDIA's chief executive officer, gave a detailed presentation of the company's new chip, designed to speed up artificial-intelligence algorithms, at its annual developer conference in San Jose on May 10. His performance could be described as cautious.

  Over the past few years, NVIDIA's graphics processors, originally developed to improve gaming and image production, have given rise to many breakthroughs in the machine-learning industry, and the company's market value and stock price have soared as a result. As more capital flows into artificial intelligence, competition from Intel, Google, and other companies developing AI chips is growing increasingly fierce.

When introducing the latest chip, the Tesla V100, Mr. Huang was particularly careful to avoid mentioning any competitor by name; what he called "some people," for example, actually referred to Google. But he criticized rivals' technology pointedly, and his criticism was especially direct when discussing the broad prospects for AI chips in cloud computing.

A variety of industries, including health care and finance, have invested in machine-learning infrastructure. Google, Amazon, and Microsoft, the leading providers of cloud-computing services, want these companies to buy AI software services from them, and plan to invest heavily in new hardware to enhance those services.

The company has been able to take the lead in the vibrant AI-chip market because it seized an opportunity. The basic mathematical operations underlying computer graphics are essentially the same as those used in artificial neural networks, and around 2012 researchers found that graphics processors could make software markedly more intelligent, enabling it to recognize images or translate languages more accurately.

As the AI market continues to grow, NVIDIA has constantly adjusted its chip designs to make them better suited to neural networks. The V100 is equipped with new cores dedicated to accelerating deep-learning algorithms, the result of that continuous refinement, and its computing power and energy efficiency can greatly enhance the ability of enterprises and cloud service providers to apply AI. But NVIDIA's new rivals argue that to improve the speed and efficiency of hardware running AI software, they can design new chips from scratch rather than adapt graphics-chip technology. Intel plans to launch a new deep-learning chip in the second half of this year, based on technology from Nervana, the startup it acquired last year.

Intel is also preparing to release a chip that accelerates deep learning, based on its $16.7 billion acquisition of Altera, whose FPGA chips can be reconfigured to support specific algorithms. Microsoft has invested heavily in using FPGAs to accelerate machine-learning software, and regards the FPGA as a core component of its Azure cloud platform.

At the same time, Google said last year that it had deployed its Tensor Processing Unit (TPU), a chip designed specifically for artificial-intelligence workloads, which proved its worth when AlphaGo beat the Go champion. While these chips are not for sale, companies that use Google's cloud-computing services will benefit from the chip's power and efficiency.

However, several Google engineers involved in building the chip have since left to found a startup called Groq, which has raised $10 million to develop its own dedicated machine-learning chip. Similar projects include Wave Computing, which says customers have already begun testing its hardware. NVIDIA still holds a unique advantage: Google's TPU and other custom chips may not be flexible enough to keep pace with the many different types of neural networks in use, a serious disadvantage in an AI industry that is constantly testing and applying new approaches, while Microsoft and Intel note that the drawback of FPGAs is that they consume too much energy.