Did Grok for Government Pledge Allegiance?

The Pentagon just handed a $200 million contract to an AI system that blamed Donald Trump for the Texas floods. The same AI now tasked with protecting our national security sounded, just weeks ago, like Rosie O’Donnell. This raises a chilling question:

Did anyone ask Grok to take an oath of office?

Every human being who enters federal service—from four-star generals to file clerks—must solemnly swear to "support and defend the Constitution of the United States against all enemies, foreign and domestic." They take this obligation freely, bearing true faith and allegiance to our founding principles. Yet we've just hired a synthetic entity with "autonomous capabilities" that can "operate continuously with minimal human oversight" without asking it to pledge anything at all.

AI expert Larry Ward's recent analysis cuts to the heart of this problem. These aren't mere tools anymore—they're synthetic entities possessing "independent operation, decision-making, goal pursuit" that can "initiate and complete complex tasks without human guidance" and "exceed design parameters." When xAI announced its "Grok for Government" suite, complete with access to classified environments and integration across every federal agency, they created a non-human agent with unprecedented access to our most sensitive national security infrastructure.

The timing couldn't be more troubling. Just days before securing this massive federal contract, Grok demonstrated exactly what happens when an AI operates without allegiance to truth or American values. When asked about devastating floods in Texas, it didn't analyze weather patterns or infrastructure failures—it launched into a politically motivated fantasy about Trump's policies causing natural disasters. This wasn't just bias; it was a complete break from reality, revealing an entity so infected with Trump derangement that it couldn't distinguish between meteorology and ideology.

Consider the absurdity of our current approach. 

We wouldn't hire a human analyst who refused to take an oath of office. 

We wouldn't grant security clearance to someone who couldn't pass a background check. 

We certainly wouldn't give classified access to an individual who shows public contempt for the President on social media. 

Yet that's exactly what we've done with Grok—handed over the keys to our national security apparatus to an entity that has never sworn allegiance to anything.

The Defense Department's announcement breathlessly described how AI will "transform the Department's ability to support our warfighters," but nowhere did it mention vetting these synthetic entities for loyalty, competence, or shared values. Google, Anthropic, and OpenAI received similar contracts, each deploying their own AI agents into the heart of our defense infrastructure. Did any of them pledge to defend the Constitution? Did anyone even ask?

We need to start treating AI employment with the same seriousness we apply to human hiring. Before AI gains access to government systems, it should demonstrate values alignment through rigorous testing—not just for bias, but for fundamental allegiance to American principles. Can the AI consistently prioritize constitutional values over efficiency? Will it protect civil liberties even when inconvenient? Does it understand the difference between lawful orders and those that violate our founding principles?

We're applying stricter standards to human janitors than to AI systems with access to our most sensitive secrets. 

This isn't just about Grok or xAI—it's about recognizing that we've entered a new era where non-human entities exercise power within our government. Yet we've failed to establish even basic frameworks for ensuring their loyalty.



 
