GS.AI: Elon's Half-Baked Attempt at "Government Efficiency"

Just ask the model how to govern better!! Surely nothing bad can happen if generative AI is telling government employees what to do.

First post on beehiiv!

What a time it’s been since Jan 20th, huh? To think the fall of American democracy would truly kick off on MLK Day in 2025 is something I’m sure nobody had on their bingo card.

Rather than provide general political commentary on Elon Musk — the unconfirmed, unelected foreign national who, with no security clearance, has gained control of the United States’ systems and handed those credentials to the other 19-24-year-old members of his “cabinet,” who also have no security clearance — on this fine Monday morning, I’d like to home in on a recent announcement from the new Department of Government Efficiency (DOGE) about deploying a custom AI chatbot in the U.S. General Services Administration (GSA).

We’ll be talking about the announcement itself, its implications, and my thoughts on why this is a dangerous move by DOGE and the GSA, and a step closer to total tech bro oligarchy.

Let’s dive in!

DOGE Announces GS.AI

In an effort to promote “government efficiency,” Elon and his DOGE crew have been seeking to implement a chatbot referred to as GS.AI. You can read more about it in this post from SiliconANGLE.

Assuming this chatbot gets implemented, it would be assisting over 12,000 GSA employees in their daily duties. The actual model is still TBD, as are the exact tasks GS.AI would be helping with, but SiliconANGLE noted some ideas:

  • Improve staffer productivity

  • Analyze the GSA’s procurement agreements and other contracts

  • Serve as a “centralized place for contracts so we can run analysis on them,” as Wired quotes Thomas Shedd, former Tesla employee and current GSA tech lead.

All sounds good on paper, right? Let’s sign that blank check over and get these chatbots moving!

Digging in deeper

In actuality, a lot of what DOGE is trying to implement with GS.AI is, as you can tell, intentionally vague. What exactly does “improve staffer productivity” mean? When we’re looking at analysis of procurement agreements and other contracts, are there examples of what these tools might be used for in this capacity? Of course not.

This move by DOGE capitalizes on the bipartisan notion that the government is inefficient. I think many people agree that government is slow and cumbersome. *Where* the government is inefficient is what’s contentious and potentially problematic long-term: Social Security and Medicare, for example, are programs one side considers unnecessary and cumbersome.

With regard to data analysis, does the GSA really need generative AI to make this happen? Maybe! Procurement records are likely a slew of unstructured data that’s a pain in the ass to clean up for querying and generating reports from. You can’t just import everything into Snowflake and compute average predicted cost vs. average actual cost. If a model can help make these things happen, that’s saving some serious time.
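To make the structuring step concrete, here’s a toy sketch of the kind of pipeline being described: pull cost figures out of free-text contract notes, then compare average predicted vs. actual cost. The regex stands in for the extraction an LLM would actually handle, and the sample notes and field names are entirely hypothetical.

```python
import re

# Hypothetical unstructured contract notes (made up for illustration).
CONTRACTS = [
    "Vendor A cloud migration. Predicted cost: $120,000. Actual cost: $150,000.",
    "Vendor B fleet leasing. Predicted cost: $80,000. Actual cost: $75,000.",
]

def extract_costs(note: str) -> dict:
    """Pull predicted/actual cost out of one free-text contract note.

    A real system would use a model for this extraction; a regex is a
    stand-in that only works because the toy notes share one format.
    """
    predicted = re.search(r"Predicted cost: \$([\d,]+)", note).group(1)
    actual = re.search(r"Actual cost: \$([\d,]+)", note).group(1)
    return {
        "predicted": int(predicted.replace(",", "")),
        "actual": int(actual.replace(",", "")),
    }

rows = [extract_costs(n) for n in CONTRACTS]
avg_predicted = sum(r["predicted"] for r in rows) / len(rows)
avg_actual = sum(r["actual"] for r in rows) / len(rows)
print(avg_predicted, avg_actual)  # -> 100000.0 112500.0
```

Once the messy text is structured, the actual “analysis” is trivial; the hard (and expensive) part is the extraction, which is exactly where a model earns its keep.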

There’s no bias in GS.AI.

My biggest worry with implementing generative AI into government processes isn’t the implementation itself, but what biases the models they use have and how those biases impact government contracts.

LLMs, as many are aware, act like very precise parrots: they regurgitate the information provided to them very, very well. What happens when someone with no possibility of being biased, like Elon Musk, trains (or directs the training of) a model for use in a government setting? Those biases are very likely to come out of the woodwork. We’re already seeing this with new NSF guidance on words that might get a paper flagged. A tool like this might be useful if it were solely ensuring contracts fall within government regulation, but that is extremely unlikely to be the case. Especially if this extremely unbiased Elon Musk, who has no government contracts and no monetary incentive to bias said contract awards, uses it to ensure his companies like Tesla, Starlink, and X all get ahead. Surely there’s no conflict of interest in the head of DOGE also owning these companies?

Another major concern: let’s assume a big player like Microsoft “wins” the contract to implement GS.AI for the GSA. More than likely, they’d be leveraging OpenAI’s models via some flavor of Microsoft Azure. SiliconANGLE predicts as much in their post, and I’ve given some information on this topic in the past.

Now OpenAI’s biases are also affecting the output given to government employees. Could they bias results to favor OpenAI when defining what an “efficient” government contract looks like? Elon doesn’t like Sam Altman, so maybe not. Either way, it opens a door for conflicts of interest to run rampant in a government agency. These clashes of tech oligarch ego surely cannot affect the GSA, or any other agency monitored by DOGE, in any way.

Is there a possibility of this being a good thing?

I think so! These tools can be implemented effectively and actually deliver what Elon is claiming on paper. There’s a possibility, giving him the benefit of the doubt, that Elon could actually make the GSA genuinely more efficient. But some key things would need to happen:

  • The criteria DOGE uses to gauge contract efficiency within the GSA are grounded in government regulation and made publicly available.

  • The tasks GSA employees use AI for are spelled out in some sort of publicly available policy from DOGE.

  • The GS.AI tools are shown to be unbiased, with some sort of report on the contract evaluation process plus examples and evidence.

  • Tesla, Starlink, X, and any other company Elon owns or has a stake in do not gain an unfair advantage in securing government contracts.

Do I realistically think these will happen? Absolutely not. But they’re steps that could be taken to make this whole implementation a lot less scary for the general public and for non-GOP-aligned lawmakers and employees.

Conclusion

Don’t take everything regarding Elon’s efficiency at face value. There are futures where these implementations will be better for us, but even Elon has shown us that bias and conflict of interest can seep into AI usage.

For those trying to implement AI in their organizations, be mindful of bias! Established bias-evaluation frameworks can help you assess and improve your models.

Happy inaugural beehiiv post! Talk with you all soon.

New platform, same promise. Cha Cha pictures are here to stay!
