Interactive Pentesting

Friday 8th April 2016

If you asked any gathering of testers whether they would like to do their tests without having to write the reports afterwards, I'd bet most would jump at the chance. I've been lucky enough recently to do two tests like this, and as both were very successful jobs I thought I'd share the experience.

Whenever I quote for a job, I always break down testing time and reporting time and offer the client different types of reporting. Most take the "full report" option, but these last couple of tests have been different. The first client said they were doing the testing purely as an in-house thing and so didn't need any kind of formal report. The guy I was working with suggested using a shared Google Document as a way for me to record my findings, so we had a good record of what was going on but I didn't need to spend time on a formal write-up at the end. This worked really well: I noted down my findings, dropped in screenshots and added remediation advice just as I would in a full report, but not having to worry about formatting, creating tables of issue severities or any of the other things that a full report includes meant that I could spend a lot more time on the testing. As I wrote things, the client was watching the document, adding comments and asking questions. I was also able to ask questions of him, such as the expected path of data through a certain process, which he then wrote into the document. At the end we had a full record of what had been done, the issues found and a good set of notes which we both agreed on.

I mentioned I was doing this on Twitter and quite a few people said that they wouldn't enjoy having the client basically watching over their shoulder as they logged everything, but I found it made me think more about what I was writing, and the immediate feedback prevented misunderstandings and, I think, made for a much smoother test.

The second test was also an in-house-only web application assessment. This time the client suggested using Slack, which I'd dabbled with before but never really used and didn't fully understand the implications of, but I went for it anyway. For those who haven't used it, Slack is a modern version of IRC. I was given my own "Pentesting" channel, which was open to all the developers, and I got on with testing. Every time I found an issue or had a question I wrote it up and got pretty much immediate feedback. This was great and, by being able to address the whole development team at once, I usually got a response from the person who had written the code rather than having things relayed around. I think it helps that the company use Slack for all their internal chatter, so they are used to getting notifications from it. The reporting wasn't as smooth as with the Google Doc, as I couldn't easily go back and edit things or add formatting, but the short feedback loop made up for that. At the end of the test, I made sure to scroll back through the chat and copy the whole thing into a local document so I had a record of everything in case it was needed.
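If you want to keep that kind of record without the manual scroll-and-copy, the channel history can also be pulled down with a short script. The sketch below is just my illustration (not something from the engagement) using Slack's conversations.history Web API method; the token and channel ID are placeholders, and the token would need the appropriate history scopes.

# Rough sketch: pull a Slack channel's history via the Web API.
# TOKEN and CHANNEL are placeholders; the token needs history scopes.
import requests

TOKEN = "xoxp-placeholder-token"
CHANNEL = "C0123456789"

def fetch_history(token, channel):
    # Page through conversations.history and collect every message.
    messages, cursor = [], None
    while True:
        params = {"channel": channel, "limit": 200}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            "https://slack.com/api/conversations.history",
            headers={"Authorization": "Bearer " + token},
            params=params,
        ).json()
        messages.extend(resp.get("messages", []))
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:  # an empty cursor means no more pages
            break
    return messages

# Messages come back newest first, so reverse for a chronological log.
for msg in reversed(fetch_history(TOKEN, CHANNEL)):
    print(msg.get("ts"), msg.get("user", "?"), msg.get("text", ""))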

I feel that both of these tests delivered a higher quality of testing than a normal test would have, as the feedback really helped me understand what was going on, and the buy-in from the client meant that things were addressed rather than covered up. Both clients were also able to spend their entire budget on testing time rather than having to put some of it aside for the reporting, which obviously helps.

Another good interactive experience I had was with a client who was up against the deadline on getting tested for compliance. The test was of internal infrastructure but was being done remotely over a VPN, through a VM that the client had set up for me. The client had a large number of internal subnets, each with a lot of hosts, so each Nessus scan took around three hours. The client security team were all fairly new, so there was a lot of tidying up to do, but they were on the ball and were fixing things as fast as I was reporting them. In the end I was the bottleneck, as I was having to finish one job before going back to retest their fixes. To get round this I gave them login details to Nessus, showed them how to run the scans they needed and let them get on with it; this way they could do their fixes and check them without having to disturb me from what I was looking at. I also ended up teaching them the other tools which I'd been using to check some of the Nessus findings for false positives. The client loved it and, by the end of the test, had only one issue outstanding, which was on an embedded box they couldn't do anything with.

Now, before people start shouting "the test was under-resourced, I should have had more days or more people", that wasn't the case. Doing a "normal" test, I could have comfortably got through the whole job in the time allotted and then submitted the report for the client to work on. But as the client was keen, and really interested in securing their network, working with them in this way prevented the usual lag of doing the test, writing the report, passing it through QA then finally sending it on, which meant that at the end of the test window they were happy there wasn't going to be a shock and a mad panic followed by a re-test/re-report loop. Another question I'd expect: "Did you trust the client's scan results?" No, I didn't. When they were happy they had cleared a subnet of all issues, I rescanned it myself to confirm it.
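For anyone wanting to take the same approach but script the rescans rather than clicking through the UI, Nessus also exposes a REST API. The sketch below is just my illustration of launching an existing scan and polling until it finishes; the server URL, credentials and scan ID are all placeholders.

# Rough sketch: launch an existing Nessus scan over its REST API and
# poll until it finishes. Server, credentials and scan ID are placeholders.
import time
import requests

NESSUS = "https://nessus.example.local:8834"
SCAN_ID = 42

def get_token(username, password):
    # POST /session returns a token used in the X-Cookie header.
    resp = requests.post(f"{NESSUS}/session",
                         json={"username": username, "password": password},
                         verify=False)  # most installs use a self-signed cert
    return resp.json()["token"]

def launch_and_wait(token, scan_id):
    headers = {"X-Cookie": f"token={token}"}
    requests.post(f"{NESSUS}/scans/{scan_id}/launch",
                  headers=headers, verify=False)
    while True:
        info = requests.get(f"{NESSUS}/scans/{scan_id}",
                            headers=headers, verify=False).json()["info"]
        if info["status"] in ("completed", "canceled", "aborted"):
            return info["status"]
        time.sleep(60)  # the scans here took hours, so poll slowly

token = get_token("scan-user", "scan-password")
print(launch_and_wait(token, SCAN_ID))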

I really enjoyed this test and I think the client did as well: they learnt new tools and techniques and didn't get the shock of a report full of issues being dumped on them at the end. I also learned quite a bit, as I was having to explain in real time some vulnerabilities that I'd not come across before and then discuss possible fixes and mitigations, or why the risk should be accepted. Finally, I felt a lot more was achieved than on a normal test, as I could see the results of my being there as the network was locked down around me; it beats the tests where the year-two report is the same as year one, with only additions and no removals.

I hope these few case studies help others see beyond the "normal" test which a lot of people deliver and which has very little interaction with the client. I've always believed that the client should be fully involved in any testing that is being done, but I'd never managed to get it to this extent before. Now that I have, I'll definitely be offering, even pushing, for it to happen on future tests. Whether I'll get much take-up I don't know, but if I don't try it definitely won't happen.

Some final ideas. When testing on site, ask if the client has anyone who wants to sit with you during the test. I've had this happen a few times and I've really enjoyed it. Watching an admin or a developer see the ways their systems can be abused is really good, and once they start to participate they are often much better than I am at pointing out all the dirty little holes they know are there but would not normally admit to.

When testing on or off site, daily updates are a must. Let the client know what is going on, what you have found, where you have looked and what they need to be fixing. I know a lot of people already do this, but similarly I know a lot of people who store up all the issues until the end-of-test debrief. If the client ignores the mails, that is up to them, but if you keep them informed then you give them the option.

Finally, be enthusiastic about what you are doing. It isn't always easy, but if you show interest and enjoyment in what you are doing it can rub off on the client and help get them to engage, which always makes for better results.
