You know, continuous improvements to search engine analytics wouldn’t be necessary if people focused on providing relevant keywords that genuinely describe their content, so others could find answers to their questions through search. Instead, too many try to game the system with web rings, link exchanges, and keywords carefully selected by popularity to drive traffic to the owner’s web site “to generate advertising revenue” — keywords with absolutely no connection to the site’s real content. Bravo #Google!
I feel that the only thing worse than “optimizing” a web site to manipulate the search process, rather than helping it correctly categorize the site’s content, is scraping. Scrapers are people so lame they can’t even be bothered to write their own blog posts; their only posts are “scraped” — that is, plagiarized — generally without proper attribution, from someone of higher moral character who actually did the work.
There should be a special Hell for scrapers, and a public-domain blacklist that all search engines can use to exclude them might be just the payback they deserve.
I’m not talking about people who write about something they found interesting and then pass along a link so we can read more. Scrapers copy and paste entire content from other sites into their own, usually without attribution, and rarely write anything themselves; they exist to maximize profit by running large numbers of web sites exclusively to grab advertising revenue.
Those who do nothing but scrape should be collated into a special blacklist of scrapers that anyone seeking information can apply to exclude them from all search results.