Cybersecurity consultant Brian Levine, executive director of FormerGov, said that the move to delay major AI Act restrictions until 2027 “leaves CIOs in a regulatory limbo, but it doesn’t change the underlying reality: enterprises still own the risk their AI systems create.”
“Whether Brussels enforces the rules next year or two years from now, the operational, legal, and reputational exposure from poorly governed AI is already here. CIOs shouldn’t treat the delay as a reprieve,” Levine said. “The organizations that wait for perfect regulatory clarity are the ones most likely to discover that their models have been quietly generating compliance, privacy, or safety liabilities long before any enforcement clock started ticking.”
Parliament proposed that “for high-risk AI systems specifically listed in the regulation – including those involving biometrics, and those used in critical infrastructure, education, employment, essential services, law enforcement, justice and border management,” the regulation would apply on Dec. 2, 2027. For AI systems “covered by EU sectoral legislation on safety and market surveillance,” it set a date of Aug. 2, 2028. The statement also noted that members are “in favor of giving providers until November 2, 2026 to comply with rules on watermarking AI-created audio, image, video or text content to indicate its origin.”