“Some prospective customers have paused negotiations while assessing the implications of the designation. Others have asked for stronger contractual protections, including broader termination rights, in case regulatory conditions change,” Gogia noted.
This is largely driven, Gogia added, by the structure of the AI solutions market: “AI models are rarely used in isolation. They power developer tools, enterprise platforms, automation systems, and customer-facing applications. When the supplier of a core model receives a risk designation, organizations that depend on that technology may reconsider how much exposure they want.”
Ethical red lines at the center of the dispute
The filing also extends Microsoft’s full support to Anthropic’s ethical red lines, such as refusing to let frontier AI models be used for mass domestic surveillance or in weapons systems, positions that have been the flashpoint of the disagreement between the AI startup and the DoD since February.