Reproducing Vulnerabilities in Test Reports

Thurs 20th Feb 14

Three times in the past few months I've been asked by clients to retest previous findings to see if they have been successfully fixed. One of the reports I was given was one I'd written; the other two were by other testers.

For my own report I couldn't remember anything about the test. Reading the report gave me some clues, but I was really lucky and found that I'd left myself a test harness in the client's folder, fully set up to test the vulnerability. One of the other two was testing for a vulnerability I'd never heard of and couldn't find anything about on Google. I finally tracked down the original tester, and it turns out there is a simple tool which tests for the issue; one command-line script later the retest was over. The final issue was one that I knew about, but it also had a really good write-up that, even if I'd not heard of it, included a full walk-through of how to reproduce the test.

So what is my point? Having struggled myself to reproduce two of the three issues, I started wondering what the clients did when they got the reports. Did they manage to reproduce the issues? Did they even understand them? If these clients hadn't had a free retest option, would they have implemented the suggested fix (one report didn't include one) and then left it at that, assuming that because they followed the instructions the fix was perfect and solved their issue? Knowing how hard it is for some fixers to get time allocated to implement a fix, I doubt many of them are going to go out of their way to track down ways to test the issues and then get hold of the tools needed to do the testing.

What can we do to make this better? The obvious answer is to include full steps to reproduce the issue. This is fine for something like basic XSS, where you give them an injection string and tell them which box to enter it in, but what about more obscure vulnerabilities such as a host accepting ICMP redirects or, bear with me on this, weak SSL ciphers?
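For the simple cases, the reproduction step can be almost as short as the finding itself. As a rough illustration only, here is the kind of self-contained check a fixer could run for a reflected XSS finding; the URL, parameter name and marker string are placeholders, not taken from any real report, and a look in a browser is still the final word.

```python
# Rough sketch of a reflected XSS re-check. The URL, parameter name and
# marker string are placeholders; substitute the ones given in the report.
import urllib.parse
import urllib.request

TARGET = "https://example.com/search"                 # hypothetical vulnerable page
PARAM = "q"                                           # hypothetical vulnerable parameter
MARKER = '<script>alert("retest-check")</script>'     # the injection string from the report

url = TARGET + "?" + urllib.parse.urlencode({PARAM: MARKER})
with urllib.request.urlopen(url) as resp:
    body = resp.read().decode(errors="replace")

if MARKER in body:
    print("Payload reflected without encoding: issue still present")
else:
    print("Payload not reflected verbatim: likely fixed, but confirm in a browser")
```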

The more obscure vulnerabilities are usually things that have been picked up by tools such as Nessus and, let's be honest, we as testers often don't know how to manually test them; we just take what Nessus says as being the truth. I find this especially true with low-level issues which, in a normal test, there isn't time to manually verify and which aren't interesting enough to go home and work on in our own time. If you are lucky your client may have a vulnerability scanner and you can tell them to re-run it until the issue goes away, but in my experience clients with scanners are few and far between.

This means we have to find ways for them to reproduce the issue simply, with the least reliance on tools which aren't found in a standard IT department. Imagine trying to include in a report how to boot a Linux live CD, install the missing dependencies, clone a GitHub repo and finally run the tool; it isn't going to happen, especially for a CVSS 2.0 finding. So what can we do? Write a custom test script which bundles everything required into a single Windows app that can be installed without dependencies? If you have the time and budget to do this you are very lucky, especially for the dozen obscure low-level issues that usually come up on larger scans.
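Taking ICMP redirects as an example, the simplest check often isn't over the network at all: if the finding was against a Linux host, the fixer can read the kernel settings on the box itself. This is only a sketch under that assumption; it says nothing about Windows hosts or about devices filtering in front of the box, and the effective behaviour also depends on the per-interface settings and whether forwarding is enabled.

```python
# Rough sketch: read the Linux kernel settings that control whether the
# host accepts ICMP redirects. Run this on the affected host itself.
from pathlib import Path

for iface in ("all", "default"):
    value = Path(f"/proc/sys/net/ipv4/conf/{iface}/accept_redirects").read_text().strip()
    print(f"net.ipv4.conf.{iface}.accept_redirects = {value}  (1 = accept, 0 = ignore)")
```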

Back to weak SSL ciphers. As testers this is a fairly easy one to test: our automated scanners will pick them up and give us a nice list of which are enabled, or, if we want to run a test just looking for SSL problems, the Qualys SSL Labs site does a great job. Again, though, we can't just tell the client to run our automated scanner, and how many employers would drag their testers in for a roasting for including links to a competitor's site in their report? Is there an easy-to-install Windows tool which will do this test? There may be, but I've never bothered looking for it as I've never needed it.
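One middle ground is a short script that needs nothing beyond a stock Python install, which most IT departments can manage even on Windows. The sketch below uses only the standard library to offer a handful of cipher suites I'm treating as weak and reports whether the server will negotiate one of them; the host, port and cipher string are placeholders, and the locally installed OpenSSL still has to support those old suites for the handshake attempt to mean anything.

```python
# Rough sketch: offer only cipher suites we treat as weak and see whether
# the server will negotiate one of them. Host, port and cipher string are
# placeholders; the local OpenSSL build must still include these old suites
# for the handshake attempt to mean anything.
import socket
import ssl

HOST, PORT = "example.com", 443      # hypothetical target from the report
WEAK_CIPHERS = "3DES:RC4"            # OpenSSL cipher string for the suites under test

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE      # the certificate isn't what we're testing here

try:
    ctx.set_ciphers(WEAK_CIPHERS)    # raises SSLError if the local OpenSSL has none of these
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Server negotiated a weak cipher:", tls.cipher()[0])
except ssl.SSLError as exc:
    print("No weak cipher negotiated (or none offered locally):", exc)
```

It isn't a replacement for a full scan, but it gives the fixer a repeatable yes/no answer after they've changed the server configuration.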

I wish this was the point where I could give an amazing answer, tell you exactly how to solve this problem and include perfect scripts which will test everything in your report without you having to lift a finger, but it isn't. I've only just started thinking about this problem, so I have very few answers. For the more common vulnerabilities I'm going to start writing instructions on how to test for them which I can add to my issue library. Some of these will require me to write scripts which I can give to the fixers to help them and, where possible, I'll publish these. For the more obscure vulnerabilities, I'll take it on an issue-by-issue basis.

The problem I think I might find, though, is that once I include steps to reproduce some of the issues in a report, it is going to make the ones without steps stand out, and dedicated fixers who want to run through and prove to themselves that they have fixed everything are going to come back to me asking for the missing steps. This means that if I don't document everything during the allotted time slot for that test, I'm going to have to write those steps while on the clock for a different client and, possibly more seriously, without access to the vulnerable system needed to come up with them. If I have to build a system vulnerable to ICMP redirects before I can work out the steps to manually verify it, the workload is going to be huge and, in cases where the vulnerability is in software I can't get hold of, it will be impossible.

I'd love to know how other people handle this and how many of you include this kind of information in your reports. I'm going to start a discussion on the Paul's Security Weekly mailing list, so if you aren't already a member please join and add your comments. If I get some good feedback I'll do a follow-up article on what has been discussed.

