Tip of the Day...

Contrary to popular belief, HTTPS connections to untrusted sites do not enhance privacy.

Transferring information to an untrusted site over a secure connection does not ensure privacy because, by definition, an untrusted site is known to reveal confidential information to others.

As a consequence, prudent users should carefully limit communications with untrusted sites.

The prime concern of the prudent user is privacy. Secure connections make it very difficult for network firewalls to detect leakage of private information from networked computers, because firewalls cannot analyse encrypted traffic in order to detect viruses, malware, and information leakage on networked computers.

So why are organisations such as the Electronic Frontier Foundation pushing for all Web communications to be encrypted? The answer probably lies in their assumption that encryption ensures privacy. Unfortunately, that assumption is only true if the organisation with which we exchange encrypted traffic can be trusted. Users generally trust organisations with which they have a business relationship, such as banks and product vendors, and it goes without saying that such traffic should always be encrypted. If such organisations ever break that trust, users should terminate the business relationship without delay.

An organisation with which a user has no trust relationship has no right to collect and use private information from that user.

Update (8. June, 2013) As the PRISM revelations show, sites run by several big players in the computing industry cannot be trusted to contain user details within the boundaries of their organisation.

Update (8. April, 2014) As the OpenSSL "Heartbleed" vulnerability shows, sites using open source software cannot be trusted to contain user details within the boundaries of their organisation.

Contrary to popular belief, sites running open source software cannot necessarily be trusted.

Open source software can be modified at will, re-compiled and then run on a web server. The site owner is under no obligation to publish such modifications. A web client has no guarantee that the version info reported by the server is accurate. Therefore, sites running open source software cannot necessarily be trusted. Of course, this in no way implies that sites running closed source software are necessarily trustworthy.

Many web sites are trying to force encryption onto their visitors. They even go as far as silently redirecting HTTP traffic to HTTPS sites. This act of impudence should be blocked at all costs, because it is equivalent to saying: "You must trust us, even though you have no idea who we are or what our privacy policies are". By allowing such connections, users are giving the site unhindered passage through their firewall, thus exposing their computers to potentially harmful content.

To ensure both security and privacy, NAT32's honeypot can be used to block all HTTPS access to untrusted sites. Sites to which HTTPS access is allowed are added to a list that includes the names of all business partners. NAT32 can also block all access to black-listed sites. A black-listed site is one that we know is not to be trusted, or one that we know serves only irrelevant, rubbish content (e.g. many forms of advertising). Pages can also be fetched using GET requests that omit cookies and referrers, thus providing a degree of anonymity.
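How NAT32 builds such anonymised GET requests is internal to the product, but the general idea is easy to illustrate. The following Python sketch (the URL and User-Agent string are placeholders, not anything NAT32-specific) fetches a page with a plain GET that sends neither cookies nor a Referer header:

```python
# Minimal sketch of a GET request that omits cookies and referrers.
# This illustrates the general idea only; it is not NAT32's implementation.
from urllib.request import Request, urlopen

def build_request(url: str) -> Request:
    # Build the request by hand so that no cookie jar is consulted and
    # no Referer header is ever added.
    return Request(url, method="GET", headers={
        "User-Agent": "Mozilla/5.0",  # a generic, non-identifying UA string
        # deliberately no "Cookie" and no "Referer" header
    })

def anonymous_get(url: str) -> bytes:
    # urllib sends cookies only if an HTTPCookieProcessor is installed,
    # so each request made this way is stateless.
    with urlopen(build_request(url), timeout=10) as resp:
        return resp.read()
```

Because no cookie handler is installed, successive requests carry no session state, which is what provides the degree of anonymity described above.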

The NAT32 Honeypot is superior to other solutions because unwanted content (including HTML, Javascript, CSS and images) is not downloaded from the Internet in the first place. Even DNS requests for black-listed names are resolved locally and never reach the Internet. This saves bandwidth and ensures that pages load and render rapidly.
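NAT32 performs this local resolution internally. On an ordinary machine, the same effect can be approximated with hosts-file entries, which the resolver consults before any DNS query leaves the computer (the domain names below are hypothetical examples, not an actual black-list):

```
# /etc/hosts  (on Windows: C:\Windows\System32\drivers\etc\hosts)
# Black-listed names resolve locally to an unroutable address,
# so no DNS query for them ever reaches the Internet.
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
```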

Update (14. July, 2017) Many web sites are now tracking users with their own versions of the Piwik (now Matomo) analytics software. What data those sites collect and how they use it is completely unknown to users. Because the tracking scripts are self-hosted, their download cannot be blocked via DNS checking techniques. The only reliable way to detect such tracking is to download the scripts to a honeypot machine for analysis. A decision on whether to block further access to the site can then be made.
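The analysis step can be as simple as scanning the downloaded script for well-known Piwik/Matomo markers. A minimal Python sketch, assuming the script has already been fetched to the honeypot as text (the signature list is illustrative, not exhaustive):

```python
# Minimal sketch: scan a script fetched to the honeypot for well-known
# Piwik/Matomo tracking markers. The signature list is illustrative only.
TRACKING_SIGNATURES = (
    "_paq.push",      # the Piwik/Matomo client-side command queue
    "piwik.php",      # classic Piwik tracking endpoint
    "matomo.php",     # renamed endpoint in current Matomo versions
    "trackPageView",  # standard page-view tracking call
)

def looks_like_tracker(script_text: str) -> list:
    """Return the signatures found in the script (empty list = no match)."""
    return [sig for sig in TRACKING_SIGNATURES if sig in script_text]
```

A non-empty result suggests the site is self-hosting analytics, at which point a decision can be made on whether to black-list it.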

Update (24. April, 2018) Users who have opted to block known tracking sites on their computers often wonder why they are still receiving spam email for their active accounts, even though those accounts reside on their own server and are accessed only via a trusted email client (e.g. Thunderbird). The reason for this is that the email recipient is probably using untrusted software to read such email. For example, if user Alice sends a secure email to user Bob, and Bob uses an untrusted email client or browser to read that email, its content becomes visible to an untrusted third party. Spammers who buy active email addresses from such third parties then spam Alice's account.

Interestingly, if user Alice has been very careful to cover her tracks on the Internet, spammers have only very little additional information about her, and so the generated spam is "generic", in the sense that it does not relate to her interests or the websites she visits. As a result, the spam often contains offensive content that (hopefully) her email client's spam filter will detect and remove.
