- OMB releases two directives for AI in government entities
- Mandatory investments in AI products made in the US
- Updating policies and procedures within 270 days
- The Stargate project is at the center of these initiatives
OMB has issued two directives for AI in government entities, mandating investment in US-made AI products and requiring agencies to update their policies and procedures within 270 days. Meanwhile, the release of Meta's Llama 4, a direct response to DeepSeek, brings the US back into the AI race, and the Stargate project is taking an increasingly central place in these initiatives.
More About OMB Directives for AI
The two directives take a fairly thorough approach to AI development and adoption, continuing the momentum of the Stargate project.
OMB Memorandum M-25-21, "Accelerating Federal Use of AI through Innovation, Governance, and Public Trust," directs federal agencies to remove bureaucratic barriers to AI adoption and to appoint a Chief AI Officer. It also instructs agencies to invest in US AI products and services and to update their policies within 270 days. There is a strong focus on implementing risk management practices for high-impact AI systems, especially those that affect civil rights and civil liberties.
OMB Memorandum M-25-22, "Driving Efficient Acquisition of Artificial Intelligence in Government," focuses on the economic side: optimizing how federal agencies procure AI technologies while ensuring efficiency, security, and reliability. This includes creating procedures for the efficient acquisition of AI systems, ensuring competition in the AI marketplace and avoiding dependence on a single vendor, adding cross-functional collaboration (IT, security, procurement, legal) for comprehensive evaluation and decision-making, and supporting US-based AI developers and US-made products.
An important complementary event was the release of Meta's Llama 4, which set a new bar for open language models. Specifically, there are three models: Scout, Maverick, and Behemoth.
All models are natively multimodal, perceiving text, images, and video. They were trained on 30 trillion tokens, with 10x more multilingual tokens than Llama 3. The family comes in three sizes:
- Scout (109B) – a model with a 10-million-token context window, a record for an openly released model. It beats Gemma 3 and Gemini 2.0 Flash-Lite, falling slightly short of the full Flash 2.0. It is a MoE model with 16 experts and 109B total parameters (17B active). With quantization, it fits on a single GPU.
- Maverick (400B) – better than Gemini 2.0 Flash and GPT-4o, and roughly on par with the updated DeepSeek V3, while being multimodal and noticeably smaller. Context is 1 million tokens, less than Scout's but well ahead of most competitors. Active parameters are still 17B, but with 128 experts the total comes to 400B. The model can run in fp8 on a single node with 8xH100.
- Behemoth – a giant model with two trillion parameters (288B active, 16 experts). It beats comparable Instruct models by a significant margin. Behemoth is still in training, but its early versions have already been distilled into Scout and Maverick, which noticeably boosted their performance.
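The parameter counts above can be made concrete with some back-of-the-envelope arithmetic: in a MoE model only a fraction of the total weights (the shared layers plus the routed experts) are active per token, but all weights must still fit in memory. The sketch below estimates weight memory at different precisions; the precision-to-bytes mapping is standard, but treating weight memory as simply parameters × bytes-per-parameter is a rough assumption that ignores activations, KV cache, and runtime overhead.

```python
# Rough weight-memory estimates for the three Llama 4 sizes listed above.
# This is a simplification: it counts only the weights themselves and
# ignores activations, KV cache, and framework overhead.

def memory_gb(total_params_b: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: parameter count times bytes per parameter."""
    return total_params_b * 1e9 * bytes_per_param / 1e9

models = {
    # name: (total params in billions, active params in billions, experts)
    "Scout":    (109, 17, 16),
    "Maverick": (400, 17, 128),
    "Behemoth": (2000, 288, 16),
}

for name, (total, active, experts) in models.items():
    fp16 = memory_gb(total, 2)    # 16-bit weights: 2 bytes each
    fp8 = memory_gb(total, 1)     # 8-bit weights: 1 byte each
    int4 = memory_gb(total, 0.5)  # 4-bit quantized weights: half a byte each
    print(f"{name}: {total}B total / {active}B active across {experts} experts; "
          f"~{fp16:.0f} GB fp16, ~{fp8:.0f} GB fp8, ~{int4:.0f} GB int4")
```

This lines up with the claims in the list: Scout at 4-bit needs roughly 55 GB of weights, which fits on a single 80 GB GPU, and Maverick in fp8 needs roughly 400 GB, which fits within the 640 GB of a single 8xH100 node.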
For now this is an Instruct-only release, but Llama 4 Reasoning is on the way.
Conclusion
The race for advanced technology is unfolding amid a forceful trade war and falling stock prices across markets, tech companies in particular. Things seem to be escalating on all fronts, but crypto and AI remain top priorities for the key players.
It is crucial to keep a close eye on these rapidly developing events. Stay informed, assess the situation comprehensively, diversify risks, and adapt your strategy to daily changes.