/
Detached from reality: Biden, et al., just 'chasing shiny objects' in regulating AI

Detached from reality: Biden, et al., just 'chasing shiny objects' in regulating AI


Detached from reality: Biden, et al., just 'chasing shiny objects' in regulating AI

While humanity rushes to distance humans from the most mundane and extreme walks of life there are, to be sure, threats associated with Artificial Intelligence (AI).

The greatest threats, however, may not be the most obvious, and U.S. government officials are looking in the wrong places, one tech leader said on American Family Radio last week.

The idea of killer robots is real, but so is the idea of reverse discrimination with the Executive Order signed by President Joe Biden on October 30, Jake Denton, a tech research associate for The Heritage Foundation, told show host Jenna Ellis. Denton argued that the greatest danger is that policy-makers, not just Biden, don't grasp the technology.

"Our country's approach to AI governance has been ineffective and unserious. If you were wondering why, it's because our leaders are crafting policy based on tech depicted in a mediocre Tom Cruise movie," Denton wrote on the social media platform X.

Biden's interest in AI was stoked in recent months as he saw fake images of himself, his dog, AI-generated poetry – and also by a viewing of "Mission Impossible: Dead Reckoning Part One" on a trip to Camp David, Deputy White House Chief of Staff Bruce Reed told The Associated Press. According to Denton, fake images should be the least important thing to worry about.

"The real issue here is most of the basis for our leaders across the board, not just Biden, is grounded in kind of the sci-fi depiction of artificial intelligence. It's not based in reality," Denton said. "So, we're not really getting a real sense of the threat because they're just chasing the shiny object that's depicted in a movie like that. The killer robots really could be a thing, but it's not the most important thing that we have to worry about."

So, what should be near the top of that worry list?

Denton, Jake (Heritage Foundation) Denton

"A lot of people have been concerned over the ability of these chat-based systems to maybe teach you how to make a dirty bomb or teach you how to execute a shooting in a more efficient manner," Denton continued. "Those are all things that you could potentially jailbreak these models or even get them in their kind of off-the-shelf capacity to instruct you on how to do. And we're barely even scratching the surface of legislation or an EO to force a company to eliminate that from the mode."

The sweeping executive order aims to "address algorithmic discrimination" and "ensure that AI advances equity," which Christopher Rufo, a senior fellow at the Manhattan Institute, says is code speak for left-wing social agendas like Critical Race Theory and Diversity, Equity and Inclusion.

"They want to embed the principles of CRT and DEI into every aspect of AI," Rufo wrote on the social media platform X.

Broad implications in Biden's EO

"When you read the entirety of the bill, that type of language is present throughout the entire thing. A lot of people are focused on how it applies in that civil rights section, but it actually has broader implications throughout the entire order," Denton argued.

The National Security section within the EO includes an emphasis on "red teaming," which a strategy used by developers to essentially weed out vulnerabilities within a system. It's also referred to as "ethical hacking."

But the definition of red teaming in this executive order places a great deal of emphasis on eliminating what the Biden administration sees as harmful responses or discriminatory outputs, Denton said.

"So even where a CEO intended to protect our national security, it's focused on that social justice agenda," Denton said.

He used job-hiring as an example.

"Let's say that the Silicon Valley companies really take this to heart and try and make [hiring practices] as equitable and as inclusive as possible. You're applying for a new job, and AI is the first to vet your resume. We've allowed these Silicon Valley companies to make [AI] as equitable as possible. Maybe they put a multiple in there that only factors race. Maybe your resume doesn't make it across the board because you're not the target race, or you went to a university they don't like so much," Denton said.

The EO doesn't define equity, "which kind of raises and enforcement question. If you can't define it, how are you going to enforce it?" Denton asked.

Congress needs to help in finding the fingerprints of regulation

According to Denton, the Democrats' fingerprints of regulation could be more easily spotted with a very basic programming technique called "Explainable AI." It's a concept that allows the learning model to be understood by humans who are not of the technology world.

Explainable AI counters the complex tendency of programming where even the designers cannot explain how the output arrived at certain conclusions. It is critical for an organization in building trust and confidence when putting AI models into production.

"Explainability is a kind of Computer Science 101-level approach to Artificial Intelligence and would really eliminate any potential for this stuff to be hidden in the models, which is what we're really worried about right now," Denton explained. "If you're embedding a DEI agenda in the models, you might sense it with the output – but you can't follow the paper trail to determine what's influencing the decision."

This, he said, is where Congress can play a role.

"Passing explainability legislation would kind of lift that black box off the model and allow you to go in and say, 'Oh, that response from the model came from this one paper that's from, you know, this radical leftist or extremist on the other side of the aisle.'

"Actually being able to audit these things is the very foundation of eliminating that type of bias that they're trying to put in," Denton stated.