Trap not sprung

I know it’s been only 12 hours, but another 100 MB has gone, and nothing from the trap. So I’ve added a directory, renamed the trap’s PHP file to index.php, and dropped it in there. I have also put an href link inside one of my pages that only a bot which ignores robots.txt will follow. In theory, that should get pretty much every misbehaving spider banned – I hope. Google reckon their bot refreshes its copy of robots.txt every week or so, but I can WHOIS all the IPs anyway and let Googlebot back in at any time, and a week of no spidering won’t exactly harm me.
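For anyone building the same thing, the moving parts look roughly like this – /trap/ is just an example name, and the link is invisible to human visitors:

    # robots.txt – well-behaved spiders read this and stay out
    User-agent: *
    Disallow: /trap/

    <!-- hidden link on a normal page: people never click it, but a bot
         that ignores robots.txt follows it straight into /trap/index.php -->
    <a href="/trap/"><img src="/images/blank.gif" alt="" width="1" height="1" border="0"></a>

Anything that requests /trap/ has, by definition, ignored robots.txt, so the PHP file there can log it or ban it with a fairly clear conscience.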

The other issue is the 403. If the script bans you, you see the generic 403; if I ban you myself, you see a custom 403 – which I still need to update. I can’t seem to get my custom 403 to display in the former case, despite the redirects being at the top of the .htaccess. I don’t suppose that matters too much, and if you have been digging around pages, viewing source to find hidden links and then following them, you deserve to get banned anyway.
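One possible culprit, and this is a guess: if the script bans by writing Deny lines into .htaccess, the banned IP is also forbidden from fetching the custom error page itself, so Apache falls back to its generic 403. The usual fix is to exempt the error page (filename here is an example):

    # serve the custom page for 403s – a local path, not a full URL,
    # or Apache redirects instead of returning a proper 403
    ErrorDocument 403 /403.html

    # let even banned IPs read the error page itself, otherwise
    # they get the generic 403 instead of the custom one
    <Files "403.html">
        Order Allow,Deny
        Allow from all
    </Files>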

One issue I can’t seem to solve is hotlinking – and I have tried. If I use the standard hotlinking .htaccess stuff, it works fine. But if I then put up a screenshot for someone, or someone reaches an image of mine from another site, they see a 403, and that’s not ideal: they’re requesting an image here from a page somewhere else, so the referrer check fails. One way round it is to paste the address straight into the browser, but that’s an ugly solution – anyone got any ideas?
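For reference, the rules in question are the usual mod_rewrite ones, roughly like this (example.com standing in for the real domain). One partial fix would be carving out an unprotected directory for screenshots that are meant to be viewed from elsewhere:

    RewriteEngine On
    # anything in /public/ is exempt – screenshots for other people go there
    RewriteCond %{REQUEST_URI} !^/public/
    # blank referers (pasted URLs, direct requests) get through
    RewriteCond %{HTTP_REFERER} !^$
    # ...as do requests coming from my own pages
    RewriteCond %{HTTP_REFERER} !^http://(www\.)?example\.com/ [NC]
    # everything else asking for an image is refused
    RewriteRule \.(gif|jpe?g|png)$ - [F,NC]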

One thought on “Trap not sprung”

  1. Give the bot trapping time. When I first set up my own traps, it took a few weeks to catch a bad bot. I’m thinking of banning any bots going to a directory called images – nowadays, the in thing for bots is to go after images. You could also set up a ban for the image file types. Another strategy I’ve been pondering is to set up traps for a fake Movable Type install: for instance, ban bots that scan for mt-comments.cgi or mt-tb.cgi, since people like to search for that sort of thing (a sketch of that idea follows).
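     A rough sketch of that bait, assuming the /trap/ directory from the post and that no real Movable Type install exists on the site:

         RewriteEngine On
         # mt-comments.cgi and mt-tb.cgi don't exist here, so only
         # scanners ever ask for them – route them into the trap
         RewriteRule ^mt-(comments|tb)\.cgi$ /trap/index.php [L]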

Comments are closed.