
Trump’s ‘anti-woke AI’ order could reshape how US tech companies train their models
A recent executive order signed by President Donald Trump, banning “woke AI” and models not deemed “ideologically neutral” from government contracts, is poised to significantly impact how U.S. technology companies develop and train their artificial intelligence systems.
This move comes amid growing concerns over ideological bias in AI models. Western researchers had previously noted that AI tools from Chinese firms like DeepSeek and Alibaba avoided questions critical of the Chinese Communist Party, a stance later confirmed by U.S. officials, who stated these tools are engineered to reflect Beijing’s official narratives. American AI leaders have cited this situation as justification for rapid technological advancement: OpenAI’s chief global affairs officer Chris Lehane framed it as a race against what he termed “Communist-led China’s autocratic AI.”
President Trump’s order explicitly targets Diversity, Equity, and Inclusion (DEI) initiatives, labeling them a “pervasive and destructive” ideology that can “distort the quality and accuracy of the output.” Specifically, it references information related to race, sex, critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism as problematic areas for AI outputs.
Industry experts caution that this directive could foster a chilling effect among developers. Companies might feel compelled to align their AI model outputs and datasets with White House rhetoric in order to secure lucrative federal contracts, which are vital for their often cash-intensive businesses. The executive order was issued on the same day the White House unveiled Trump’s “AI Action Plan,” which prioritizes building AI infrastructure and cutting regulation over mitigating societal risk, with a strong focus on national security and competition with China.
The order mandates that the director of the Office of Management and Budget, along with other key federal administrators, issue guidance to agencies on compliance. During an AI event, Trump declared, “Once and for all, we are getting rid of woke. I will be signing an order banning the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas, such as critical race theory, which is ridiculous. And from now on the U.S. government will deal only with AI that pursues truth, fairness, and strict impartiality.”
However, defining what constitutes “impartial” or “objective” AI presents a significant challenge. Philip Seargeant, a senior lecturer in applied linguistics at the Open University, argues that true objectivity in language, and by extension in AI, is a “fantasy.” Moreover, the administration’s interpretation of “woke” has been broad, encompassing various social and scientific initiatives. Rumman Chowdhury, a data scientist and former U.S. science envoy for AI, noted that anything disliked by the Trump administration is often summarily dismissed as “woke.”
The order defines “truth-seeking” LLMs as those that “prioritize historical accuracy, scientific inquiry, and objectivity,” and “ideological neutrality” as describing LLMs that are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.” These definitions leave ample room for interpretation and potential pressure on AI firms, even though an executive order doesn’t carry the full force of legislation.
Recently, major AI players like OpenAI, Anthropic, Google, and xAI secured contracts totaling up to $200 million each with the Department of Defense to develop AI solutions for national security. It remains uncertain how these companies will navigate the new “anti-woke” directives, or which, if any, are best positioned to benefit.
xAI, particularly with its Grok chatbot, appears most aligned with the order’s stated goals. Elon Musk has positioned Grok as an “anti-woke,” “less biased,” and truth-seeking AI. Grok’s internal prompts reportedly encourage it to challenge mainstream narratives, seek contrarian views, and even reference Musk’s own opinions on contentious subjects. Despite this, Grok has faced criticism for generating antisemitic remarks and praise of Adolf Hitler, raising questions about its impartiality. Stanford law professor Mark Lemley described the executive order as “clearly intended as viewpoint discrimination,” particularly given the government’s recent contract with xAI for “Grok for Government.”
The debate over AI bias isn’t new. Google’s Gemini chatbot, for instance, drew criticism last year for generating historically inaccurate images, such as a black George Washington or racially diverse Nazis, which Trump’s order cites as examples of DEI-influenced AI models. The episode illustrates how both developer caution and training data can produce distorted outputs.
Chowdhury expressed concern that AI companies might actively manipulate training data to conform to political lines. She cited Elon Musk’s previous statements about xAI’s intent to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors,” which could empower Musk to unilaterally determine truth, with profound implications for information access.
The challenge, experts emphasize, is the inherent subjectivity of truth and impartiality in today’s polarized environment. David Sacks, an entrepreneur and investor appointed as Trump’s AI czar, has been a vocal critic of “woke AI,” framing his arguments as a defense of free speech against centralized ideological control. However, as Seargeant points out, even seemingly factual information, like climate science, can be politicized, making true objectivity an elusive goal.