The name NVIDIA will always be linked to the gaming and graphics industry. The company was responsible for a revolution: the first GPUs, better known as graphics cards, which brought realism to 2D and 3D graphics and the ability to render ever more polygons per scene. Remember the GeForce 256, capable of pushing more than 10 million polygons per second? Today that figure seems insignificant.
But the technological evolution of NVIDIA did not stop there.
Many people are still surprised when Jen-Hsun Huang, NVIDIA's CEO, appears at a conference proclaiming that much of the company's revenue now comes from other sources: Deep Learning, cloud computing, and the development of systems for cars (some of them autonomous). NVIDIA has managed to enter and drive many other industries, just as it did with video games, and Deep Learning and Machine Learning are among its main focuses today.
The fact is that NVIDIA has not jumped onto this new revolution as a newcomer merely looking to diversify its income. NVIDIA has been the engine behind these technologies for years now. And as many analysts say: this has only just begun.
2006, the year NVIDIA's revolution beyond video games began
2006 was a turning point for NVIDIA. That year it launched CUDA (Compute Unified Device Architecture), a development kit that would mark a before and after in how GPUs are programmed. Simplifying the concept, the aim was to open up for general use the independent calculations needed to render each pixel, such as shadows, reflections, lighting, or transparency.
Until then, it was unthinkable for scientists to use GPUs in their work, but that changed from that moment on. CUDA makes it possible to use high-level languages such as Python or C++ to program complex calculations and algorithms on GPUs, scheduling jobs in parallel across large amounts of data.
Today the CUDA platform is used in thousands of GPU-accelerated applications and has been a driving force behind thousands of research articles. Some time ago at Engadget we had the opportunity to speak with Manuel Ujaldón, a subject-matter expert who was already telling us about the possibilities of graphics cards.
This new computational paradigm allows "co-processing" split between the CPU and the GPU. CUDA is included in GeForce, ION, Quadro, and Tesla GPUs. Developers can choose among several solutions for programming on CUDA; NVIDIA maintains a vast set of tools and supported platforms within the ecosystem.
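The heart of this model is the data-parallel kernel: the same small function applied independently to many elements at once. As a rough illustration (plain Python/NumPy rather than actual CUDA code, so it runs anywhere), here is the classic "a·x + y" operation written both serially, the way a single CPU core works through it, and as one array-wide operation, the form a GPU can fan out across thousands of threads:

```python
import numpy as np

def saxpy_serial(a, x, y):
    # What a CPU does conceptually: one element after another.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_parallel_style(a, x, y):
    # The same computation expressed as a single data-parallel
    # operation -- the shape of work a GPU (or any vectorized
    # backend) can spread across many execution units at once.
    return a * x + y

x = np.arange(5, dtype=np.float64)   # [0, 1, 2, 3, 4]
y = np.ones(5)
print(saxpy_parallel_style(2.0, x, y))  # [1. 3. 5. 7. 9.]
```

In real CUDA C++ (or a Python wrapper such as Numba or PyCUDA), the second form becomes a kernel launched over a grid of threads, with each thread handling one element or a small slice.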
I have always liked this explanation, in video form, of a CPU versus a GPU.
When Deep Learning knocked on NVIDIA's door
NVIDIA GPUs have enabled parallel work, cutting Machine Learning model training times from weeks to days
Andrew Ng, whom we have already profiled extensively at Engadget, predicted the use of GPUs in the field of artificial intelligence and how they would make it possible to accelerate Deep Learning. He published a paper discussing the topic back in 2008, but it was not until a few years later that several experiments using NVIDIA GeForce cards confirmed it.
One example came in 2012 from Alex Krizhevsky, a PhD student at the University of Toronto, who used two ordinary NVIDIA GeForce graphics cards to process nearly 1.2 million images with an error rate of around 15%, far better than anything anyone had achieved to date.
Around that time, Google Brain, driven by Andrew Ng, achieved its first Deep Learning milestones: it was able to recognize cats across more than 10 million YouTube videos, but with the disadvantage of needing practically an entire data center with more than 2,000 CPUs. Bryan Catanzaro, a researcher at NVIDIA Research, later reproduced a similar experiment replacing those 2,000 CPUs with just 12 NVIDIA GPUs.
Today companies such as Google, Facebook, Microsoft, and Amazon base their infrastructure on NVIDIA GPUs. And it is estimated that, with the Artificial Intelligence boom, there are around 3,000 startups worldwide working on NVIDIA's platform.
NVIDIA GPUs have enabled parallel work, cutting Machine Learning model training times from weeks to days. And the acceleration of this process exceeds the forecasts of Moore's famous law: these same neural networks have achieved a 50x performance improvement in just three years.
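For a sense of scale, a quick back-of-the-envelope comparison (my own arithmetic, assuming the common "doubling every two years" reading of Moore's law and the 50x figure cited above):

```python
# Moore's law is often stated as a doubling roughly every two years,
# so over three years it predicts about 2^(3/2) ~ 2.8x. The
# neural-network speedup cited in the article is 50x over the same span.

years = 3
moore_factor = 2 ** (years / 2)   # doubling every ~2 years
gpu_factor = 50.0                 # figure cited in the article

print(f"Moore's law over {years} years: ~{moore_factor:.1f}x")
print(f"Reported GPU speedup: {gpu_factor:.0f}x, "
      f"about {gpu_factor / moore_factor:.0f}x beyond Moore's pace")
```

That gap, roughly an order of magnitude beyond transistor scaling, is the whole argument for moving this workload onto GPUs.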
Building data centers and smaller AI boards
NVIDIA has a profitable line of components ready to integrate into the most demanding data centers. There is no need to assemble systems GPU by GPU: NVIDIA sells one of the most complex systems in existence today, the NVIDIA DGX-1. It is a supercomputer with eight NVIDIA Tesla P100 GPUs, dual Xeon processors, and 7 TB of SSD storage, delivering 170 teraflops of performance, equivalent to the power of around 250 conventional servers, all in a box the size of a small desktop PC.
It can make quite a gift for projects like OpenAI, the initiative driven by Elon Musk, which a few months ago made quite an event of receiving its unit from the hands of the company's CEO. It will serve to foster the development of tools ranging from basic tasks to advanced work on language learning, image recognition, and the interpretation of expressions.
Robots, drones, and any connected IoT device, powered by NVIDIA
In 2015 NVIDIA released the Jetson TX1 development kit, integrating a 64-bit ARM processor with a GPU based on NVIDIA's Maxwell architecture. This board let the company fully enter the world of smaller devices: mainly drones, small robots, and all kinds of "Internet of Things" hardware.
It recently followed up with the fabulous Jetson TX2, an evolution the size of a credit card that doubles the TX1's power while consuming just 7.5 W. It integrates Gigabit Ethernet, 802.11ac WiFi and Bluetooth connectivity, and plenty of memory: 8 GB of RAM and 32 GB of eMMC storage.
It is geared toward handling two 4K video streams, or managing up to six cameras at once, which makes this board a good fit for intelligent security systems. We will soon see devices that include this hardware, or whatever we want to build as makers, since it is designed for experimenting and building things within its specifications.
The democratization of Deep Learning through cloud computing
The democratization of development tools in the cloud has also spread the use of these GPUs for computation within such systems, with the possibility of scaling to squeeze out every last drop of processing power thanks to the resource management and load balancing these platforms provide.
It is no longer necessary to own an impressive infrastructure. Just take a look at TensorFlow, which makes it possible to apply Deep Learning and other Machine Learning techniques in a very powerful way, or at other platforms such as IBM Watson Developer Cloud, Amazon Machine Learning, or Azure Machine Learning.
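Under the hood, what these frameworks (and the GPUs beneath them) accelerate is mostly repeated, matrix-heavy arithmetic. As a minimal sketch, in plain NumPy rather than TensorFlow itself, here is the gradient-descent loop at the core of model training, fitting a tiny linear model to noiseless synthetic data:

```python
import numpy as np

# Minimal sketch of the training loop Deep Learning frameworks run at
# scale: fit y = w*x + b by gradient descent on mean squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
y = 3.0 * x + 1.0            # synthetic data with known weights

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    err = (w * x + b) - y
    # Gradients of mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to 3.0 and 1.0
```

A framework like TensorFlow expresses this same loop over tensors with millions of parameters, which is exactly the workload that parallelizes well on GPUs.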
Browsing Google Cloud, we see how proudly it advertises GPUs such as the Tesla K80 and Tesla P100 for automating large analytic workloads, announced with great fanfare on the service's home page.
We also see it in Azure, where NVIDIA and Microsoft have signed a strong alliance.
Well-taught autonomous cars, thanks to Deep Learning
One of the most exciting technologies is autonomous cars. We may always talk about Tesla as the innovator in this field, but NVIDIA is making very interesting technical advances, some in collaboration with Tesla and others independently with other manufacturers.
There are agreements such as the one with Mercedes-Benz to develop the brand's digital system, through the Artificial Intelligence operations center the company wants to give each vehicle. And it is not the only one: Honda, Audi, and BMW are also integrating NVIDIA technology.
It also boasts alliances such as the one with Bosch, very focused on IoT, which sees NVIDIA GPUs as a differentiating factor for building an onboard supercomputer able to identify pedestrians or cyclists and alert us within seconds when we perform maneuvers that put our safety at risk.
NVIDIA DRIVE PX2 is its AI platform for accelerating the production of autonomous cars. The size of the palm of a hand, it is starting to make its way into many innovative models. With a power consumption of just 10 watts, it packs a complex neural-network computer and adds an enormous amount of functionality: autocruise, analysis of HD maps updated in milliseconds with information about the surroundings, and so on.
The risks of a future that more manufacturers want to enter
NVIDIA's success in this blue ocean has not gone unnoticed. Deep Learning is today's most coveted technology, and whoever dominates it will gain a big advantage, especially given the industry's bet on it as the technology of the coming years. Dozens of startups focused on developing applications have emerged thanks to this new chip architecture.
Allies of the past and present, such as Google, seem increasingly obsessed with building their own hardware around TensorFlow and, of course, their algorithms for text search and maps. After years of learning on NVIDIA hardware, it did not take long to see something like the Tensor Processing Unit, built exclusively by Google.
And let's not forget Intel and AMD, the CPU makers, who have watched as the GPUs crowned by NVIDIA over these years earn enormous sums of money and become the preferred processing units of the major technology players. Intel is pursuing this market with its Xeon Phi chip, optimized for Deep Learning.
But in the meantime it does not hurt to remember a black mark in the company's history: it failed to endure with its Tegra chips in that first batch of smartphones, where it was a strong ally of Android. We may have to wait to see NVIDIA smartphone chips again; AI and VR could be a good reason for their return.
Who knows whether they will manage to wrest from NVIDIA a piece of the Deep Learning pie it is building around itself.