
From CUDA to AI: the secrets of NVIDIA’s success


NVIDIA is the first company in the history of the chip industry to exceed a trillion dollars in capitalisation. What is the secret of its success?

I am sure that many of you have heard of NVIDIA, and most of you associate it with graphics processors, because almost everyone has heard the phrase “NVIDIA GeForce”.


NVIDIA has recently made financial history in the IT industry. It is the first integrated circuit company to exceed a trillion dollars in market value, and only the fifth technology company ever to reach such a valuation. Previously, only Apple, Microsoft, Alphabet (owner of Google) and Amazon could boast such a high market capitalisation, which is why financiers sometimes spoke of the “Club of Four”, a club NVIDIA has now joined.

Moreover, in terms of market capitalisation, it leaves AMD, Intel, Qualcomm and other technology companies far behind. This would not have been possible without the company’s visionary policy introduced a decade ago.

Read also: Does Elon Musk’s TruthGPT have a future?

Incredible demand for NVIDIA H100 Tensor Core

What is the secret behind this increase in capitalisation? First of all, it is the stock market’s reaction to the success of the NVIDIA H100 Tensor Core chip, which is in high demand among leading cloud infrastructure and online service providers. Amazon, Meta and Microsoft are buying these chips (for their own needs and for those of their partner OpenAI). They are particularly efficient at accelerating the calculations typical of generative artificial intelligence, such as ChatGPT or DALL-E, and they represent an order-of-magnitude leap for accelerated computing. The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability and security for any workload.

Up to 256 H100 GPUs can be connected via NVIDIA NVLink switching to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine for handling language models with trillions of parameters. Combined, the H100’s technology innovations can speed up large language models (LLMs) by an incredible 30 times compared to the previous generation, delivering industry-leading conversational AI that developers consider nearly ideal for machine learning.

However, the H100 did not appear out of thin air, and to be honest, it is not particularly revolutionary. NVIDIA has been investing huge resources in artificial intelligence for many years, more than any other company. As a result, the company, which is mainly associated with the GeForce graphics card brand, can treat the consumer market almost as a hobby. This gives it real leverage among the IT giants, because NVIDIA can now talk to them on equal terms.

Read also: What are 6G networks and why are they needed?

Is artificial intelligence the future?

Today, almost everyone is convinced of this, even the sceptical experts in the field. It is almost an axiom, a truism. Yet NVIDIA knew it 20 years ago. Did I surprise you?

Technically, NVIDIA’s first close contact with artificial intelligence was in 1999, when the GeForce 256 processor was introduced to the market, capable of accelerating machine learning computations. However, it wasn’t until 2006 that NVIDIA began to invest heavily in artificial intelligence, when it introduced the CUDA architecture, which enabled the parallel processing capabilities of GPUs to be used for training and research.

What is CUDA? It is best defined as a parallel computing platform and application programming interface (API) that allows software to use graphics processing units (GPUs) for general-purpose processing, an approach known as general-purpose GPU computing. CUDA is also a software layer that gives direct access to the GPU’s virtual instruction set and parallel computing elements. It is designed to work with programming languages such as C, C++ and Fortran.

It is this accessibility that makes it easier for developers of parallel software to use GPU resources, unlike earlier APIs such as Direct3D and OpenGL, which required advanced graphics programming skills.
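To make the idea concrete, here is a minimal sketch of a CUDA kernel. The article describes the C, C++ and Fortran toolchain; this example instead uses Python through Numba’s CUDA support, purely for brevity, and assumes a CUDA-capable GPU with the numba package installed. The kernel adds two vectors in parallel, one GPU thread per element.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    # Each GPU thread computes exactly one element of the result.
    i = cuda.grid(1)  # absolute index of this thread across the whole grid
    if i < out.shape[0]:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# Copy the inputs to the GPU and allocate space for the result there.
d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_out = cuda.device_array_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](d_a, d_b, d_out)  # launch the kernel

out = d_out.copy_to_host()
assert np.allclose(out, a + b)
```

The equivalent C version would declare the kernel with `__global__` and launch it with the `<<<blocks, threads>>>` syntax, but the point is the same: the programmer writes ordinary scalar code for a single thread, and CUDA maps it onto thousands of GPU cores.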


An important breakthrough came when NVIDIA provided the computing power for the groundbreaking AlexNet neural network, a convolutional neural network (CNN) developed by the Ukrainian-born Alex Krizhevsky in collaboration with Ilya Sutskever and Geoffrey Hinton.

Convolutional neural networks (CNNs) have long been a staple model for object recognition: they are powerful, easy to control and relatively easy to train. Thanks to their strong built-in assumptions about the structure of images, they do not overfit at an alarming rate even on millions of images, and their best achievable accuracy is only slightly worse than that of standard feed-forward networks of similar size, while using far fewer parameters. The only problem was that they were prohibitively expensive to apply to high-resolution images at scale. The size of ImageNet demanded innovations optimised for GPUs that would cut training time while improving performance.


On 30 September 2012, AlexNet took part in the ImageNet Large Scale Visual Recognition Challenge. The network achieved a top-5 error rate of 15.3%, more than 10.8 percentage points lower than that of the runner-up.

The main conclusion of the original paper was that the model’s depth was essential to its high performance. That depth made training computationally very expensive, but it was made feasible by using graphics processing units (GPUs) during the training process.

The AlexNet convolutional neural network itself consists of eight layers: the first five are convolutional layers, some of which are followed by max-pooling layers, and the last three are fully connected layers. The network, with the exception of the last layer, is split into two halves, each running on its own GPU.
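For readers who prefer code to prose, below is a simplified sketch of that eight-layer structure in PyTorch. It is an illustrative, single-GPU rendition: the original network split its layers across two GPUs and used local response normalisation, both of which are omitted here.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Five convolutional layers (some followed by max-pooling) plus three fully connected layers."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),   # conv 1
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), # conv 2
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),# conv 3
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),# conv 4
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),# conv 5
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),   # fc 6
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),          # fc 7
            nn.Linear(4096, num_classes),                            # fc 8
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = AlexNetSketch()
print(model(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])
```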

In other words, thanks in part to NVIDIA’s hardware, AlexNet showed experts and scientists just how powerful such a model can be on very complex data sets. It became a reference architecture for object recognition, and the convolutional networks that descend from it are used very widely across the computer vision side of artificial intelligence.

Read also: Bluesky phenomenon: what is the service and how long will it last?

Artificial intelligence not only in laboratories and data centres

NVIDIA also sees great promise for AI in consumer device technology and the Internet of Things. While competitors are just beginning to consider investing more heavily in this new type of integrated circuit, NVIDIA is already working on miniaturising them. The Tegra K1 chip, developed in collaboration with Tesla and other automotive companies, is perhaps the most important.

The Tegra K1 processor is one of the first NVIDIA processors designed specifically for AI applications in mobile and embedded devices. The Tegra K1 uses the same GPU architecture as NVIDIA’s GeForce, Quadro and Tesla series of graphics cards and systems, which provides high performance and compatibility with graphics and computing standards such as OpenGL 4.4, DirectX 11.2, CUDA 6.5 and OpenCL 1.2. As a result, the Tegra K1 can support advanced artificial intelligence algorithms such as deep neural networks, reinforcement learning, image and speech recognition, and data analytics. The Tegra K1 has 192 CUDA cores.

In 2016, NVIDIA released the Pascal series of processors, optimised to support deep neural networks and other artificial intelligence models. A year later, it introduced the Volta series of processors for AI applications, which are even faster and more energy-efficient. In 2019, NVIDIA announced the acquisition of Mellanox Technologies, a manufacturer of high-performance networking for data centres and supercomputers.


As a result, those data centres and supercomputers now run on NVIDIA processors. In the consumer market, for example, gamers use the revolutionary DLSS image reconstruction algorithm, which lets them enjoy sharper graphics in games without spending a fortune on a graphics card. In the business market, NVIDIA’s chips are widely seen as going far beyond what competitors offer. That is not to say, however, that Intel and AMD have completely slept through the AI revolution.

Read also: The best tools based on artificial intelligence

Intel and AMD in the field of artificial intelligence

Let’s talk about NVIDIA’s direct competitors in this market segment. Intel and AMD are working more and more actively here, but with a long delay.

Intel has acquired several artificial intelligence companies, such as Nervana Systems, Movidius, Mobileye and Habana Labs, to strengthen its portfolio of AI technologies and solutions. It offers hardware and software platforms for artificial intelligence, such as Xeon processors, FPGAs, NNP chips and optimisation libraries, and it collaborates with public and private sector partners to advance AI innovation and education.


AMD has developed the Epyc series of processors and Radeon Instinct graphics cards, which are optimised for AI and deep learning applications. It has partnered with companies such as Google, Microsoft, IBM and Amazon to provide cloud-based AI solutions, and it participates in AI research and development through partnerships with academic institutions and industry organisations. This is all very well, but NVIDIA is already far ahead of both, with incomparably more success in developing and supporting AI algorithms.

NVIDIA has been involved with video games for decades

This is also something that should not be forgotten. NVIDIA does not provide a precise breakdown of its revenues between the consumer and business markets, but they can be estimated based on the operating segments that the company discloses in its financial statements. NVIDIA has four operating segments: gaming, professional visualisation, data centres and automotive.


The gaming segment is primarily aimed at the consumer market, as it includes sales of GeForce graphics cards and Tegra chips for gaming consoles. The professional visualisation segment is primarily aimed at the business market, as it includes sales of Quadro graphics cards and RTX chips for workstations and professional applications. The data centre segment is also primarily aimed at the business market, as it includes sales of GPUs and NPUs (next-generation chips designed specifically for AI rather than graphics) for servers and cloud services. The automotive segment targets both the consumer and business markets, as it includes sales of Tegra and Drive systems for infotainment and autonomous driving.


Based on these assumptions, we can estimate the share of revenues from the consumer and business markets in NVIDIA’s total revenues. According to the latest financial report for 2022, NVIDIA’s revenues by operating segment were as follows:

  • Games: $12.9 billion
  • Professional visualisation: $1.3 billion
  • Data centres: $9.7 billion
  • Automotive: $0.8 billion
  • All other segments: $8.7 billion

NVIDIA’s total revenue was $33.4 billion. Assuming that the automotive segment is split roughly equally between the consumer and business markets, the following proportions can be calculated:

  • Revenue from the consumer market: (12.9 + 0.4) / 33.4 ≈ 0.40 (about 40%)
  • Revenue from the business market: (1.3 + 9.7 + 0.4 + 8.7) / 33.4 ≈ 0.60 (about 60%)

This means that about 40% of NVIDIA’s revenue comes from the consumer market and about 60% from the business market. In other words, the business segment is the main focus, but gaming still brings in solid revenue, and most importantly, both are growing every year.
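As a quick sanity check of that arithmetic, here is a short Python sketch that reproduces the split from the segment figures above. The figures come from the article itself; the 50/50 split of the automotive segment and the treatment of “all other segments” as business revenue are the same assumptions made in the text.

```python
# Segment revenue in billions of USD, as quoted in the article.
segments = {
    "gaming": 12.9,
    "professional_visualisation": 1.3,
    "data_centre": 9.7,
    "automotive": 0.8,
    "other": 8.7,
}

total = sum(segments.values())                         # 33.4
consumer = segments["gaming"] + segments["automotive"] / 2  # automotive split 50/50
business = total - consumer

print(f"Total revenue: ${total:.1f}B")
print(f"Consumer share: {consumer / total:.0%}")   # ~40%
print(f"Business share: {business / total:.0%}")   # ~60%
```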

What will the future bring us?

Obviously, NVIDIA already has a plan to participate in the development of artificial intelligence algorithms. And it is much broader and more promising than any of its direct competitors.

In the last month alone, NVIDIA has announced numerous new investments in artificial intelligence. One of them is the GET3D engine, which is capable of generating complex three-dimensional models of various objects and characters that accurately reflect reality. GET3D can generate about 20 objects per second using a single graphics chip.

Another interesting project should also be mentioned. This is Israel-1, a supercomputer for artificial intelligence applications that NVIDIA is creating in cooperation with the Israeli Ministry of Science and Technology and Mellanox. The machine is expected to have a computing power of more than 7 petaflops and use more than 1,000 NVIDIA A100 Tensor Core GPUs. Israel-1 will be used for research and development in areas such as medicine, biology, chemistry, physics, and cybersecurity. And this is a very promising investment, given the long-term prospects.

We also have another project in the pipeline, NVIDIA ACE. This is a new technology that is set to revolutionise the gaming industry by allowing players to interact with non-player characters (NPCs) in a natural and realistic way. These characters will be able to engage in open dialogue with the player, react to their emotions and gestures, and even express their own feelings and thoughts. NVIDIA ACE uses advanced language models and AI-powered image generators.

NVIDIA has its first trillion dollars, and it looks like more will follow soon. We will definitely follow the company’s success and keep you informed.

Yuri Svitlyk
Son of the Carpathian Mountains, unrecognized genius of mathematics, Microsoft "lawyer", practical altruist, levopravosek