By Emmanuel Ogbodo
Nvidia CEO Jensen Huang believes that the only effective way to fight AI abuse is by deploying more AI. Speaking at an event hosted by the Bipartisan Policy Center in Washington, Huang emphasized that AI’s ability to generate fake data and misinformation at unprecedented speeds means that only advanced AI systems can keep up with and counter these threats.
Huang compared the current situation to cybersecurity, highlighting that “almost every single company” faces potential attacks, and defending against these threats requires AI-driven systems. In a similar manner, combating harmful AI will necessitate the use of even more sophisticated AI tools.
The concern about AI misuse is especially acute in the U.S. as the country approaches its federal elections in November. With the rise of AI-generated misinformation, many fear its potential influence on democracy. A recent Pew Research Center survey found that nearly 60% of Americans are “extremely” or “very” concerned about AI spreading false information about candidates, an anxiety shared by Democrats and Republicans alike. Around 40% believe AI will be used mostly for harmful purposes during the elections, while only 5% expect it to be used mostly for good.
Huang urged the U.S. government to take AI seriously by becoming a practitioner of the technology itself, suggesting that every department, especially the Departments of Energy and Defense, should adopt AI solutions. He also proposed building an AI supercomputer to advance the country’s AI capabilities.
Huang warned that AI’s growth will demand far more power in the future. AI data centers currently consume about 1.5% of the world’s electricity, but he predicts this could increase by 10 to 20 times as AI models begin teaching one another, further driving up energy use. To address this, Huang suggested building data centers near sources of excess energy, such as remote locations with abundant natural resources.
Meanwhile, the debate over AI regulation is intensifying. In California, Governor Gavin Newsom recently vetoed Senate Bill 1047 (SB 1047), which aimed to impose mandatory safety measures on AI systems. The bill, authored by Democratic Senator Scott Wiener, would have required AI developers to implement a “kill switch” for their models and publish plans for mitigating extreme risks, with developers potentially facing legal action if their systems posed ongoing threats. The bill had drawn resistance from major tech companies, including OpenAI, Meta, and Google, and Newsom argued that its stringent standards would stifle innovation and were not the most effective way to address AI’s dangers.