I won't rehash too much of the previous post- It's relinked below and is also accessible at
In the comments section, Gallier2 left comments reporting on the results of his search engine testing. It was already clear to me from my stats that censoring had been occurring. Gallier2's testing seals the deal. Yes, Google algorithms are being employed to censor this blog.
The question is how can these algorithms be circumvented?
What can be done to get around them?
Anyone have any ideas?
Gallier2 explained further:
1. The first search was with all the words of the title. So Google will take these words and remove the so-called stop-words, i.e. the small words that are so frequent that they have no discriminating power. Every document in English on the Internet contains 'the', 'a' or 'is', so there's no real point in using them in the filtering code. Then Google will search its index for the documents containing all the other words. The importance of the words will depend on several criteria, for instance:
- if they appear several times in the document,
- if they appear in the title or paragraph header like here
<h3 class='post-title entry-title' itemprop='name'> I’ve Got the Google Algorithm Blues- Project Owl </h3>as these words are in a header, they're more relevant than if they appeared in the body text alone.
- if they have been put in a semantic section (html5) or a keyword list in the header (old html).
- if they appear in the same order as in the search string
- etc.: any other secret sauce, like the number of inbound links and such things
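To make that first point concrete, here's a toy Python sketch of what stop-word filtering plus title-weighted keyword scoring might look like- every document, word list, and weight below is made up for illustration, and this is not Google's actual code:

```python
# Toy sketch of keyword search as described above: strip stop-words,
# then score documents on the remaining terms, counting a match in
# the title/header three times as much as a match in the body.
# (The 3x weight and the corpus are invented for the example.)

STOP_WORDS = {"the", "a", "is", "of", "and"}

# Tiny made-up corpus: each document has a title and a body.
DOCS = {
    "doc1": {"title": "google algorithm blues project owl",
             "body": "a post about search ranking"},
    "doc2": {"title": "gardening tips",
             "body": "the owl visited my garden"},
}

def tokenize(text):
    """Lowercase, split on whitespace, drop stop-words."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def score(doc, terms):
    """Weighted term frequency: title hits count 3x body hits."""
    title = tokenize(doc["title"])
    body = tokenize(doc["body"])
    return sum(3 * title.count(t) + body.count(t) for t in terms)

def search(query):
    """Return (doc, score) pairs with any match, best first."""
    terms = tokenize(query)
    ranked = [(name, score(d, terms)) for name, d in DOCS.items()]
    return sorted([r for r in ranked if r[1] > 0], key=lambda x: -x[1])

print(search("the google algorithm blues"))
# doc1 ranks first: all three non-stop-words hit its title
```

Real engines layer many more signals on top (links, freshness, and whatever secret sauce), but the skeleton- filter stop-words, look up the rest, weight by where they appear- is the same.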
2. The second search had all the words between quotes ("). This means that I wanted to find the documents that contain exactly that sentence. The search criterion is much narrower and the number of results is then obviously very restricted: the longer the sentence, the lower the probability of random text matching it. Of course, if the phrase is a common expression, the number of results will be bigger. In our case, not many pages contain exactly that phrase, so there is little room to play games with the ranking, which is the criterion used when you have to choose some documents from a given set of results. If your index returns 500000 pages and you only present the 100 "best candidates", it's easier to bury the result of an unliked page than if you got only 25 pages from the index.
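Here's the same idea as a tiny Python sketch- two made-up documents, purely for illustration, showing why the quoted search is so much narrower: the plain search only asks whether all the words are present somewhere, while the quoted search demands them contiguously and in order:

```python
# Illustrative contrast (not any search engine's real code) between
# an all-words keyword search and a quoted "exact phrase" search.

# Invented example documents:
DOCS = {
    "doc1": "i have got the google algorithm blues today",
    "doc2": "blues music and the google algorithm were discussed",
}

def keyword_match(doc, query):
    """True if every query word appears somewhere, in any order."""
    words = set(doc.split())
    return all(w in words for w in query.split())

def phrase_match(doc, phrase):
    """True only if the words appear contiguously and in order."""
    return phrase in doc

query = "google algorithm blues"
print([d for d, text in DOCS.items() if keyword_match(text, query)])
# ['doc1', 'doc2'] - both contain all three words
print([d for d, text in DOCS.items() if phrase_match(text, query)])
# ['doc1'] - only one contains the exact phrase
```

With only one page left in the candidate set, there's nothing to rank and nothing to bury- which is exactly why the quoted search is the harder one to game.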
3. The third pic was the same search as the first one, but at DuckDuckGo instead of Google, and as we can see, it put your page in second position (yesterday you were first). So DuckDuckGo didn't discriminate against your blog.
Thanks Gallier2! Good to know duckduckgo is not discriminating- I've been using it for a while myself- Hubby uses it exclusively- So that's one way to get around Google's censorship.
Again, I ask... Anyone have any other suggestions? Because this is a problem that will only get worse if we do nothing!
Muddy Waters: I am the Blues