Tracker Tracking

When doing reconnaissance on clients, it is often useful to try to identify other websites or companies related to your target. One way to do this is to look at who manages their Google Analytics traffic and then find who else that account manages.

There are a few online services which do this, probably the best known being ewhois, but whenever you use someone else's resources you are at their mercy over things like data accuracy and coverage; if you are working for a small client who hasn't been scanned by them, you won't get any results.

This is where my tracker tracking tool comes in. The tool is in two parts. The first uses the power of the nmap engine to scan all the domains you are interested in and pull back tracking codes; these are then output in the standard nmap format along with the page title. The second is a script I've written which takes that output and generates a grouped and sorted CSV file which you can then analyse.
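To give an idea of what the first stage is looking for, here is a minimal sketch in Ruby (the real NSE script is written in Lua) of pulling Google Analytics account numbers out of a page body. The regex and the sample HTML are my own illustration, not taken from the actual script:

```ruby
# A sketch of the extraction step: find Google Analytics account
# numbers (the digits in UA-XXXXXXX-Y) in a page body.
# The sample HTML and the regex are illustrative only.
def extract_tracking_codes(body)
  body.scan(/UA-(\d+)-\d+/).flatten.uniq
end

html = <<HTML
<script>
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-7503551-1']);
</script>
HTML

puts extract_tracking_codes(html).inspect  # => ["7503551"]
```

Grouping sites by that account number is then all the rest of the tool has to do.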

Here is the nmap part in action:

nmap --script http-tracker_tracking.nse -p 80 -T 4 -oA tracking

Starting Nmap 6.00 ( ) at 2013-03-01 13:46 GMT
Nmap scan report for (
Host is up (0.024s latency).
80/tcp open  http
| http-tracker_tracking: 
|   Tracking code: 7503551
|_  Page title: - DigiNinja

Nmap scan report for (
Host is up (0.025s latency).
rDNS record for
80/tcp open  http
| http-tracker_tracking: 
|   Tracking code: 7503551
|_  Page title: DigiNinja

Nmap done: 2 IP addresses (2 hosts up) scanned in 0.30 seconds

This shows that both hosts share the same tracking code.

You then take the .nmap file which is created and pass it to the second script:

./parse_tracking.rb tracking.nmap tracking.csv
7503551 - DigiNinja

As well as creating the CSV file, this outputs the results grouped by code. The final CSV file looks like this:

cat tracking.csv 
7503551,, - DigiNinja

You can then open this in a spreadsheet and start your analysis. I'd like to output it in a way which shows off the groupings better, so if you can suggest one please get in touch.
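The parsing stage can be sketched like this — a simplified approximation of what parse_tracking.rb does rather than its actual code. It walks the .nmap text output, remembers the current host, and records each tracking code / page title pair against it (the hostnames in the sample are made up):

```ruby
# A simplified sketch of the parser stage -- my own approximation of
# what parse_tracking.rb does, not its actual code.
def group_by_code(nmap_text)
  groups = Hash.new { |h, k| h[k] = [] }
  host = nil
  code = nil
  nmap_text.each_line do |line|
    case line
    when /^Nmap scan report for (.+)/
      host = $1.strip
    when /Tracking code: (\d+)/
      code = $1
    when /Page title: (.*)/
      groups[code] << [host, $1.strip] if code
      code = nil
    end
  end
  groups
end

# The hostnames here are invented for the example.
sample = "Nmap scan report for a.example\n" \
         "|   Tracking code: 7503551\n" \
         "|_  Page title: DigiNinja\n" \
         "Nmap scan report for b.example\n" \
         "|   Tracking code: 7503551\n" \
         "|_  Page title: DigiNinja\n"

group_by_code(sample).each do |tracking_code, sites|
  sites.each { |h, t| puts [tracking_code, h, t].join(",") }
end
```

Because the parser only keys off a handful of line prefixes, it doesn't care how many scan reports are in one file, which is why concatenated .nmap files work (see the usage tips below).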

Where does the initial list of domains to check come from? That is up to you; you could generate a list based on the market sector your client is in, or maybe geographical location. For testing I've grabbed a list from Alexa.

Download and Samples

Tracker Tracking 1.0

Some sample data - This is the result of scanning the top 10,000 entries in Alexa. This produced 5650 tracking codes of which 5149 were unique.

The largest groupings were:

Description                    Code      Number of sites
Wordpress/template site        11834194  9
Porn                           28822266  9
South American shopping sites  8863458   9
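Counts like those come from tallying the first column of the CSV. If you would rather do that in Ruby than with the cut/uniq/sort one-liner given in the usage tips below, a small sketch (the sample rows and hostnames are made up) would be:

```ruby
# Tally the first CSV column and print the largest groups.
# The sample rows and hostnames are invented for the example.
def largest_groups(csv_lines, top = 3)
  counts = Hash.new(0)
  csv_lines.each { |line| counts[line.split(",", 2).first] += 1 }
  counts.sort_by { |_, n| -n }.first(top)
end

rows = ["11834194,a.example,Site A",
        "11834194,b.example,Site B",
        "8863458,c.example,Site C"]

largest_groups(rows).each { |code, n| puts "#{code}: #{n}" }
```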

Usage Tips

Nothing special needs to be set up to use either part of this tool; the nmap script can run from the current directory and is simply referenced with the --script argument as shown above. The Ruby script doesn't require any gems and should run on any Ruby install.

If you want to merge multiple nmap scans then, because of the way I parse the .nmap file, you can simply cat them all together into a single file ready to pass to the parser. That is what I did to generate the sample output.

If you get the list from Alexa then you need to strip the leading position field from it; this sed command will do that:

sed -i "s/^[0-9]*,//" top-1m.csv

To see the largest groupings from an output csv file:

cut -f 1 -d "," top_10000.csv | uniq -c | sort -n

While building this tool I came across a couple of issues in nmap that are worth mentioning. The first is the way it parses HTTP redirects: there are a few sites which don't fully abide by the RFC but, because nmap does, these sites don't redirect properly within the script. The first site I found this on isn't alone; there are others. See this mailing list thread for more information.

The second is a straightforward bug in nmap where it fails if given a Location header field which can't be correctly parsed. I've reported this and hopefully it will be fixed soon. This explains the occasional error that appears in the sample output.

Finally, before someone points out that there is a Ruby gem to parse the XML file created by nmap: I know. Parsing the plain text file, however, was easier as I already had the code available, and it doesn't require people to install a new gem.

Thanks BruCON

This is the first of my tools sponsored by the BruCON 5x5 award.

Support The Site

I don't get paid for any of the projects on this site so if you'd like to support my work you can do so by using the affiliate links below where I either get account credits or cash back. Usually only pennies, but they all add up.