NVIDIA CORPORATION (NVDA)
NVIDIA : Pushing Forward the Frontiers of Natural Language Processing

09/16/2021 | 09:12pm EDT

Idea generation, not hardware or software, needs to be the bottleneck to the advancement of AI, Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, said this week at the AI Hardware Summit.

'We want the inventors, the researchers and the engineers that are coming up with future AI to be limited only by their own thoughts,' Catanzaro told the audience.

Catanzaro leads a team of researchers working to apply the power of deep learning to everything from video games to chip design. At the annual event held in Silicon Valley, he described the work that NVIDIA is doing to enable advancements in AI, with a focus on large language modeling.

CUDA Is for the Dreamers

Training and deploying large neural networks is a tough computational problem, so hardware that's both incredibly fast and highly efficient is a necessity, according to Catanzaro.

But, he explained, the software that accompanies that hardware might be even more important to unlocking further advancements in AI.

'The core of the work that we do involves optimizing hardware and software together, all the way from chips, to systems, to software, frameworks, libraries, compilers, algorithms and applications,' he said. 'We optimize all of these things to give transformational capabilities to scientists, researchers and engineers around the world.'

This end-to-end approach yields chart-topping performance in industry-standard benchmarks, such as MLPerf. It also ensures that developers aren't constrained by the platform as they aim to advance AI.

'CUDA is for the dreamers, CUDA is for the people who are thinking new thoughts,' said Catanzaro. 'How do they think those thoughts and test them efficiently? They need something general and flexible, and that's why we build what we build.'

Large Language Models Are Changing the World

One of the most exciting areas of AI is language modeling, which is enabling groundbreaking applications in natural language understanding and conversational AI.

The complexity of large language models is growing at an incredible rate, with parameter counts doubling every two months.

A well-known example of a large and powerful language model is GPT-3, developed by OpenAI. Packing 175 billion parameters, it required 314 zettaflops (a zettaflop is 10²¹ floating point operations) to train.

'It's a staggering amount of compute,' Catanzaro said. 'And that means language modeling is now becoming constrained by economics.'

Estimates suggest that GPT-3 would cost about $12 million to train and, Catanzaro observed, the rapid growth in model complexity means that, despite NVIDIA's tireless work to advance the performance and efficiency of its hardware and software, the cost to train these models is set to grow.
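As a rough back-of-the-envelope check (not from the article), the 314-zettaflop total can be combined with an assumed sustained per-GPU throughput and an assumed hourly GPU price to see how an estimate on the order of $12 million arises. Every number in the sketch below other than the total compute is an illustrative assumption, roughly in line with V100-era hardware and cloud pricing.

```python
# Back-of-the-envelope estimate of what training GPT-3 might cost.
# The 314 zettaflops total is from the article; the sustained per-GPU
# throughput and the hourly GPU price are illustrative assumptions
# (roughly V100-era numbers), not figures from NVIDIA or OpenAI.

TOTAL_FLOPS = 314e21                 # total training compute (article)
SUSTAINED_TFLOPS_PER_GPU = 30        # assumed sustained mixed-precision throughput
NUM_GPUS = 1024                      # assumed cluster size
PRICE_PER_GPU_HOUR_USD = 4.0         # assumed cloud price per GPU-hour

cluster_flops_per_second = SUSTAINED_TFLOPS_PER_GPU * 1e12 * NUM_GPUS
train_seconds = TOTAL_FLOPS / cluster_flops_per_second
gpu_hours = NUM_GPUS * train_seconds / 3600
cost_usd = gpu_hours * PRICE_PER_GPU_HOUR_USD

print(f"Training time:  {train_seconds / 86400:.0f} days on {NUM_GPUS} GPUs")
print(f"GPU-hours:      {gpu_hours:,.0f}")
print(f"Estimated cost: ${cost_usd / 1e6:.1f} million")
```

With these assumptions the estimate lands near the $12 million figure quoted above; faster hardware or cheaper capacity changes the answer substantially, which is why published estimates vary.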

And, according to Catanzaro, this trend suggests that it might not be too long before a single model requires more than a billion dollars' worth of compute time to train.

'What would it look like to build a model that took a billion dollars to train a single model? Well, it would need to reinvent an entire company, and you'd need to be able to use it in a lot of different contexts,' Catanzaro explained.

Catanzaro expects that these models will unlock an incredible amount of value, inspiring continued innovation. During his talk, Catanzaro showed an example of the surprising capabilities of large language models to solve new tasks without being explicitly trained to do so.

After inputting just a few examples into a large language model - four sentences, two written in English along with their corresponding Spanish translations - he entered a new English sentence, which the model correctly translated into Spanish.
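A minimal sketch of that kind of few-shot prompt is shown below. The example sentences and the placeholder completion call are hypothetical stand-ins for whatever model and interface were actually used in the demo.

```python
# Few-shot translation prompt of the kind described in the talk: two
# English sentences with their Spanish translations, followed by a new
# English sentence for the model to continue. The sentences here and
# the complete() placeholder are hypothetical illustrations, not the
# demo's actual text or API.

prompt = (
    "English: The weather is nice today.\n"
    "Spanish: El clima está agradable hoy.\n"
    "English: Where is the train station?\n"
    "Spanish: ¿Dónde está la estación de tren?\n"
    "English: I would like a cup of coffee.\n"
    "Spanish:"
)

def complete(text: str) -> str:
    """Placeholder for a completion call to a large language model
    (no specific API is assumed here)."""
    raise NotImplementedError

print(prompt)
# A sufficiently large model is expected to continue with something like:
#   " Me gustaría una taza de café."
```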

The model was able to do this despite never being trained to do translation. Instead, it was trained - using, as Catanzaro described, 'an enormous amount of data from the internet' - to predict the next word that should follow a given sequence of text.

To perform that very generic task, the model needed to come up with higher-level representations of concepts, such as the existence of languages in general, English and Spanish vocabularies and grammar, and the concept of a translation task, in order to understand the query and properly respond.

'These language models are first steps towards generalized artificial intelligence with few shot learning, and that is enormously valuable and very exciting,' explained Catanzaro.

A Full-Stack Approach to Language Modeling

Catanzaro then went on to describe NVIDIA Megatron, a framework created by NVIDIA using PyTorch 'for efficiently training the world's largest, transformer-based language models.'

A key feature of NVIDIA Megatron, which Catanzaro notes has already been used by various companies and organizations to train large transformer-based models, is model parallelism.

Megatron supports both inter-layer (pipeline) parallelism, which allows different layers of a model to be processed on different devices, and intra-layer (tensor) parallelism, which allows a single layer to be split across multiple devices.
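To illustrate the intra-layer (tensor) idea, a linear layer's weight matrix can be split column-wise so that each device computes its own slice of the output. The following is a simplified single-process sketch in plain PyTorch, not Megatron's implementation, and it omits the cross-GPU communication a real setup needs.

```python
import torch

# Simplified illustration of intra-layer (tensor) parallelism: a single
# linear layer's weight matrix is split column-wise into shards, and each
# shard could live on a different GPU. This toy runs on one device and
# omits the all-gather a real multi-GPU setup requires.

hidden, out_features, world_size = 1024, 4096, 4

x = torch.randn(8, hidden)                  # one (micro)batch of activations
weight = torch.randn(hidden, out_features)  # full weight of the linear layer

# Column-wise sharding: each "device" owns out_features // world_size columns.
shards = torch.chunk(weight, world_size, dim=1)

# Each device computes its slice of the output independently...
partial_outputs = [x @ w_shard for w_shard in shards]

# ...and the full output is recovered by concatenating the slices
# (in a real system this is a collective communication step across GPUs).
y_parallel = torch.cat(partial_outputs, dim=1)

# Sanity check against the single-device result.
assert torch.allclose(y_parallel, x @ weight, atol=1e-4)
```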

Catanzaro further described some of the optimizations that NVIDIA applies to maximize the efficiency of pipeline parallelism and minimize so-called 'pipeline bubbles,' during which a GPU is not performing useful work.

A batch is split into microbatches, the execution of which is pipelined. This boosts the utilization of the GPU resources in a system during training. With further optimizations, pipeline bubbles can be reduced even more.

Catanzaro described an optimization, recently published, that entails 'round-robining each (pipeline) stage among multiple GPUs so that we can further reduce the amount of pipeline bubble overhead in this schedule.'
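One way to see why microbatching and the interleaved ("round-robin") schedule help is a commonly used approximation for Megatron-style pipeline schedules: with p pipeline stages and m microbatches, the bubble overhead is roughly (p - 1)/m of the ideal compute time, and assigning v interleaved model chunks to each GPU shrinks it by a further factor of v. The sketch below simply evaluates that approximation; the specific p, m, and v values are illustrative, not NVIDIA's configuration.

```python
# Rough pipeline-bubble estimate, following a commonly cited
# approximation for Megatron-style pipeline schedules:
#   bubble ~ (p - 1) / m        of the ideal compute time,
# and with v interleaved model chunks per GPU:
#   bubble ~ (p - 1) / (v * m).
# The values of p, m and v below are illustrative assumptions.

def bubble_fraction(p: int, m: int, v: int = 1) -> float:
    """Approximate pipeline-bubble overhead for p pipeline stages,
    m microbatches per batch, and v interleaved chunks per GPU."""
    return (p - 1) / (v * m)

p = 8  # pipeline stages (GPUs in the pipeline dimension)
for m in (8, 32, 128):
    print(f"m={m:4d}  plain: {bubble_fraction(p, m):6.1%}  "
          f"interleaved (v=4): {bubble_fraction(p, m, v=4):6.1%}")
```

More microbatches and more interleaved chunks both cut idle time, at the cost of the extra communication the next paragraph describes.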

Although this optimization puts additional stress on the communication fabric within the system, Catanzaro showed that, by leveraging the full suite of NVIDIA's high-bandwidth, low-latency interconnect technologies, this optimization is able to deliver sizable speedups when training GPT-3 style models.

Catanzaro then highlighted the impressive performance scaling of Megatron on NVIDIA DGX SuperPOD, achieving 502 petaflops sustained across 3,072 GPUs, representing an astonishing 52 percent of Tensor Core peak at scale.
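The 52 percent figure can be sanity-checked with simple arithmetic, assuming the SuperPOD GPUs are A100s with a 312 TFLOPS dense FP16/BF16 Tensor Core peak; the GPU model and precision are my assumption, not stated in the article.

```python
# Sanity check of the quoted efficiency, assuming A100 GPUs with a
# 312 TFLOPS dense FP16/BF16 Tensor Core peak (assumption; the article
# does not name the GPU or the precision).

sustained_pflops = 502
num_gpus = 3072
peak_tflops_per_gpu = 312

per_gpu_tflops = sustained_pflops * 1e3 / num_gpus   # ~163 TFLOPS per GPU
efficiency = per_gpu_tflops / peak_tflops_per_gpu    # ~0.52

print(f"Sustained per GPU: {per_gpu_tflops:.0f} TFLOPS")
print(f"Fraction of Tensor Core peak: {efficiency:.0%}")
```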

'This represents an achievement by all of NVIDIA and our partners in the industry: to be able to deliver that level of end-to-end performance requires optimizing the entire computing stack, from algorithms to interconnects, from frameworks to processors,' said Catanzaro.

Disclaimer

Nvidia Corporation published this content on 16 September 2021 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 17 September 2021 01:11:04 UTC.


© Publicnow 2021