Tracker Tracking
When doing reconnaissance on clients it is often useful to try to identify other websites or companies related to your target. One way to do this is to look at who is managing the Google Analytics traffic for them and then find what else they manage; sites which share the same tracking code are usually run by the same person or company.
There are a few online services which do this, probably the best known being ewhois, but whenever you use someone else's resources you are at their mercy over things like data accuracy and coverage. If you are working for a small client who hasn't been scanned by them, you won't get any results.
This is where my tracker tracking tool comes in. The tool is in two parts. The first uses the power of the nmap engine to scan all the domains you are interested in and pull back tracking codes; these are output in the standard nmap format along with the page title. The second is a script which takes that output and generates a grouped and sorted CSV file which you can then analyse.
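If you are wondering what the scan is actually looking for, the general idea is to request the page and pull the account number out of the Google Analytics code. The Ruby sketch below shows that idea in isolation; it is an illustration only, not the nmap script itself, and the regex is my assumption about how an ID such as UA-7503551-1 gets reduced to the bare number 7503551.

#!/usr/bin/env ruby
# Illustrative sketch only - the real extraction is done by the
# nmap script. The regex is an assumption about how an account ID
# such as UA-7503551-1 is reduced to the bare number 7503551.

require "net/http"
require "uri"

def tracking_code(host)
  body = Net::HTTP.get(URI("http://#{host}/"))
  # Capture just the account number so that UA-7503551-1 and
  # UA-7503551-2 group together under 7503551
  match = body.match(/UA-(\d+)-\d+/)
  match ? match[1] : nil
end

puts tracking_code("digininja.org")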
Here is the nmap part in action:
nmap --script http-tracker_tracking.nse -p 80 -T 4 zonetransfer.me digininja.org -oA tracking
Starting Nmap 6.00 ( http://nmap.org ) at 2013-03-01 13:46 GMT
Nmap scan report for zonetransfer.me (217.147.180.162)
Host is up (0.024s latency).
PORT STATE SERVICE
80/tcp open http
| http-tracker_tracking:
| Tracking code: 7503551
|_ Page title: ZoneTransfer.me - DigiNinja
Nmap scan report for digininja.org (217.147.180.164)
Host is up (0.025s latency).
rDNS record for 217.147.180.164: www.digininja.org
PORT STATE SERVICE
80/tcp open http
| http-tracker_tracking:
| Tracking code: 7503551
|_ Page title: DigiNinja
Nmap done: 2 IP addresses (2 hosts up) scanned in 0.30 seconds
This shows that both digininja.org and zonetransfer.me share the same tracking code.
You then take the .nmap file which is created and pass that to the second script:
./parse_tracking.rb tracking.nmap tracking.csv
7503551
zonetransfer.me
ZoneTransfer.me - DigiNinja
digininja.org
DigiNinja
As well as creating the CSV file, this outputs the results grouped by code. The final CSV file looks like this:
cat tracking.csv
7503551,zonetransfer.me,ZoneTransfer.me - DigiNinja
7503551,digininja.org,DigiNinja
You can then open this in a spreadsheet and start your analysis. I'd like to output it in a way that better shows off the groupings, so if you can suggest one please get in touch.
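For the curious, the grouping the parser does boils down to something like the simplified Ruby sketch below. It is not the actual parse_tracking.rb code, and it assumes the code, domain and title records have already been pulled out of the .nmap file.

# Simplified sketch of the grouping step, not the actual
# parse_tracking.rb code. Assumes the (code, domain, title)
# records have already been extracted from the .nmap file.
records = [
  ["7503551", "zonetransfer.me", "ZoneTransfer.me - DigiNinja"],
  ["7503551", "digininja.org", "DigiNinja"]
]

# Group by tracking code so related domains sit together
records.group_by { |code, _, _| code }.sort.each do |code, rows|
  puts code
  rows.each do |_, domain, title|
    puts domain
    puts title
  end
end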
Where does the initial list of domains to check come from? That is up to you; you could generate a list based on the market sector your client is in, or maybe their geographical location. For testing I grabbed a list from Alexa.
Download and Samples
Some sample data - This is the result of scanning the top 10,000 entries in Alexa. The scan produced 5650 tracking codes, of which 5149 were unique.
The largest groupings were:
Description                   | Code     | Number of sites
------------------------------|----------|----------------
WordPress/template site       | 11834194 | 9
Porn                          | 28822266 | 9
South American shopping sites | 8863458  | 9
Usage Tips
Nothing special needs to be set up to use either part of this tool; the nmap script can run from the current directory and is simply referenced with the --script argument as shown above. The Ruby script doesn't require any gems and should run on any Ruby install.
If you want to merge multiple nmap scans then, because of the way I parse the .nmap file, you can simply cat them all together into a single file ready to pass to the parser, as shown below. That is what I did to generate the sample output.
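For example, with two hypothetically named scan files:

cat tracking1.nmap tracking2.nmap > tracking_all.nmap
./parse_tracking.rb tracking_all.nmap tracking_all.csv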
If you get the list from Alexa then you need to strip the leading position field from it; this sed command will do that:
sed -i "s/^[0-9]*,//" top-1m.csv
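This turns a line like 1,google.com into just google.com, leaving one domain per line. The cleaned file can then be fed to nmap with its -iL option, something like this (the filename is simply whatever you saved the list as):

nmap --script http-tracker_tracking.nse -p 80 -T 4 -iL top-1m.csv -oA tracking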
To see the largest groupings from an output CSV file (the parser has already grouped the rows by code, so uniq -c can count them directly):
cut -f 1 -d "," top_10000.csv | uniq -c | sort -rn
While building this tool I came across a couple of issues in nmap that are worth mentioning. The first is the way it parses HTTP redirects: there are a few sites which don't fully abide by the RFC but, because nmap does, these sites don't redirect properly within the script. Twitter.com is the first site I found this on but there are others; see this mailing list thread for more information.
The second is a bug in nmap where it fails if given a Location header which can't be correctly parsed. I've reported this and hopefully it will be fixed soon; it explains the occasional error that appears in the sample output.
Finally, before someone points out that there is a Ruby gem to parse the XML file created by nmap: I know. Parsing the plain text file was easier as I already had the code available, and it doesn't require people to install a new gem.
Thanks BruCON
This is the first of my tools sponsored by the BruCON 5x5 award.