Actual infrastructure spending lags slightly behind, Lenovo's Ashley Gorakhpurwalla, executive vice president and president of the infrastructure solutions group, told Computerworld. "When you train foundational models, you start big, and you put all the capital in up front," he said.
But when enterprises deploy AI, such as a chatbot, they start small and slowly scale up. "People deploy, iterate, and move forward," Gorakhpurwalla said. "When you first deploy a chatbot, it's a small expense."
However, even on the spending side, 2026 will be a big inflection year for inference, according to a December report by the Futurum Group. “We’re seeing a clear shift,” Futurum analyst Nick Patience said in the report. “Inference workloads are set to overtake training revenue by 2026.”

