Just a couple of months ago, California-based microchip maker Nvidia became a $1 trillion company as demand for its advanced microprocessors continued to climb, driven in large part by artificial intelligence developers and the sector’s insatiable hunger for ever more processing power.
On Wednesday, the company released another blockbuster earnings report, riding a wave of market dynamics that has driven its stock price up over 200% since the first of the year.
Nvidia announced it had brought in $13.51 billion in revenue in its second quarter, which ended July 30, up 101% from a year ago and up 88% from the previous quarter. Earnings on that revenue came in at $6.19 billion, up from $656 million in the same quarter last year, an 843% leap.
Nvidia founder and CEO Jensen Huang made no bones about what’s propelling his company to new heights.
“A new computing era has begun,” Huang said in a press release. “Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI.
“NVIDIA GPUs connected by our Mellanox networking and switch technologies and running our CUDA AI software stack make up the computing infrastructure of generative AI.”
Nvidia’s flagship product is its H100 graphics processing unit, or GPU. The mega-powerful chip contains some 80 billion transistors, about 13 billion more than Apple’s latest high-end processor for its MacBook Pro laptop, according to a report from Fortune.
The microprocessor easily outpaces its closest competitors and is the preferred engine for the massive data processing and computation workloads behind emerging artificial intelligence tools like OpenAI’s popular ChatGPT and Google’s Bard.
Market demand for Nvidia’s advanced chips has pushed pricing for the H100 as high as $40,000, according to The New York Times. The high costs have put smaller companies at a disadvantage in advancing their own AI efforts, as better-capitalized teams have snapped up limited supplies of Nvidia products. The company continues to ramp up production, which it outsources to third-party chip manufacturers, and is aiming to triple its chip output in the coming year.
Nvidia’s current market dominance has its roots in Huang’s early prediction that AI would emerge as a significant market player when it came to elevating the need for ultra-advanced chips.
By 2018, Huang was convinced that AI would lead to a technology market shift as significant as Apple’s 2007 introduction of the iPhone, which ignited a mobile computing revolution, according to The Associated Press. Huang was confident enough in his vision of an AI-driven future to bet Nvidia on it, and he doubled down accordingly. At the time, Nvidia’s market value stood at about $120 billion.
“I think it’s safe to say it was worth it to bet the company” on AI, Huang, 60, said during a presentation earlier this month, per AP.
Around the same time Huang was having his epiphany about the kind of chips AI developers would want most, an analysis by OpenAI found that the amount of computational power used to train the largest AI models had doubled every 3.4 months since 2012. From 2012 to 2018, the computing power used for AI development increased roughly 300,000-fold, according to the report.
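As a rough back-of-the-envelope check (a sketch of the arithmetic, not part of the OpenAI analysis itself), those two figures hang together: a 300,000-fold increase works out to about 18 doublings, which at one doubling every 3.4 months spans a little over five years, roughly the window the report covers.

```python
import math

# Back-of-the-envelope check of the OpenAI "AI and Compute" figures cited above.
# Assumption: growth follows a clean exponential with a constant 3.4-month doubling time.

doubling_time_months = 3.4
total_growth = 300_000  # reported 2012-2018 increase in training compute

# How many doublings does a 300,000-fold increase require?
doublings = math.log2(total_growth)                 # ~18.2 doublings
elapsed_months = doublings * doubling_time_months   # ~62 months

print(f"{doublings:.1f} doublings over ~{elapsed_months:.0f} months "
      f"(~{elapsed_months / 12:.1f} years)")
# Prints roughly: 18.2 doublings over ~62 months (~5.2 years),
# consistent with the 2012-2018 window described in the report.
```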
In January of last year, Georgetown University’s Center for Security and Emerging Technology released a report noting that breakthroughs in artificial intelligence have come like clockwork, driven to a significant extent by exponentially growing demand for computing power. One of the largest models, released in 2020, used 600,000 times more computing power than the noteworthy 2012 model that first popularized deep learning, according to the report.
Authors of the report also predicted that the rate of advancement in artificial intelligence could be slowed by several emerging factors, including the cost of training AI models, a limited supply of advanced chips and the data-traffic bottlenecks that arise when training large models across many processors.
“Experts may not agree about which of these is the most pressing, but it is almost certain that they cannot all be managed enough to maintain the last decade’s rate of growth in computing,” the report reads.

