How to prevent a website from being downloaded by an offline browser


Thread: How to prevent a website from being downloaded by an offline browser

  1. #1
    Stefan Walther Guest

    How to prevent a website from being downloaded by an offline browser

    How can I prevent my whole website from being downloaded with an offline browser? Has anybody had experience with this issue?

    Thanks in advance
    Stefan

  2. #2
    Chrace Guest

    RE: How to prevent a website from being downloaded by an offline browser

    As long as you want users to be able to see the pages, they can be downloaded and used offline.

    Since programs like Teleport Pro can emulate IE, it is not possible to detect the browser version and use that as a blocker.

    If it is of interest, you can set the http-equiv "expires" value to make the pages outdated, but of course this can also be altered or interpreted differently on the client side.

    In other words, since everything we as developers send to clients depends on the setup of the client machine, we cannot be 100% sure how it will behave there. After all, the client computer is the user's property to fiddle with, even if we'd like to have more control over it.
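
    For what it's worth, a minimal sketch of that "expires" idea: a meta tag with a date in the past tells the browser the page is already stale, so it should not be reused from a local copy. The date below is just a placeholder, and as noted above an offline browser is free to ignore it:

        <meta http-equiv="expires" content="Mon, 01 Jan 2001 00:00:00 GMT">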

  3. #3
    PratQ Guest

    However...

    You can stop all but the most sophisticated products from downloading your site by using programmatic code to process the links. This is a lot more work in terms of development, and depending on how much you want to protect your site, you may have performance considerations to deal with as well.

    Most spiders enter a page, grab the URLs of the next pages by sniffing out the HREF in the anchor tags, and then issue the GET or even POST requests directly to the server. However, if you set the links to call JavaScript that programmatically builds the URL and navigates, a lot of website snatchers will fail, because they do not run a JavaScript engine and so cannot work out the result of a JavaScript routine.

    If you really want to prevent snatching, never put the actual URL anywhere on the page. Use codes generated in JavaScript and passed on the querystring, or in the form, to an ASP page that determines the next URL to deliver (see the sketch after this post). This could of course have severe performance repercussions on even medium-sized sites, but the web-snatching programs would be pretty much lost.

    As the last poster stated, nothing is 100%. A page available on the web can almost always be grabbed by the client, so if a user wanted to click through every page of the site manually, they could capture the site. But most users want the quick method using some sort of program. The above methods should prevent *most* programs from grabbing sites. The only exception would be programs that fully emulate a user driving a browser, including real-time JavaScript processing.
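
    A minimal sketch of the idea described above. The anchor carries only an opaque code instead of a real address, and a click handler builds the request at navigation time; the page name "nav.asp" and the code values are made-up placeholders for whatever server-side lookup you would implement:

        <a href="javascript:void(0)" onclick="go('p17'); return false;">Products</a>

        <script type="text/javascript">
        // go() assembles the URL only when the link is clicked, so a spider
        // scanning the HTML for HREFs never sees a real address to follow.
        // nav.asp (hypothetical) would look up the code server-side and
        // redirect to the actual page.
        function go(code) {
            window.location.href = "nav.asp?page=" + code;
        }
        </script>

    A spider that does not execute JavaScript only ever sees "javascript:void(0)", so it has nothing to crawl; a tool that fully emulates a browser, as noted above, would still get through.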
