Nov 25, 2005

Most people who run businesses on the Internet know about the Google and Yahoo / Overture PPC programs. These are considered the first-tier advertising programs for anyone who needs targeted traffic to their website. Google and Yahoo / Overture also have publisher programs, so web publishers can place ads on their websites and earn revenue from the clicks on those ads.

Google’s AdSense program accounted for record profits in the third quarter of 2005 ($957 million total for all programs). The Yahoo / Overture publisher program is still in beta, but it has been a welcome alternative for many who would like to diversify away from Google, are disenchanted with Google, or have been dropped by Google for various reasons.

As a second-tier publisher program, Miva (formerly FindWhat.com) offers an alternative for the same reasons, and the wait in the application process is shorter than for Yahoo / Overture. Miva pays higher click rates than third-tier programs like Searchfeed or Kanoodle, and it features something that Google AdSense refugees will especially appreciate.

It’s been well circulated on the message boards that Google will ban many publishers for ‘invalid clicks’ whether or not they were the source of those clicks. This arbitrary banning of publishers has led many to look for alternatives that are much more publisher-friendly.

In Miva’s Partner Center, invalid clicks are accounted for and integrated into the click results pages. Clicks are categorized as ‘clicks’ and ‘screened’, and neither the publisher nor Miva gets paid for the screened clicks. Miva will also work with publishers by providing the IP addresses of click robots so they can be banned through the webmaster’s host. This matter-of-fact, “Yeah, we know invalid clicks happen and this is just part of business on the web” approach will be refreshing for anyone who cowers in fear of being dropped by their current program over clicks they cannot control. With this publisher-friendly approach, Miva cannot help but succeed in courting publishers who want more alternatives for their web properties.
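
Miva doesn’t publish the mechanics of its screening, but the basic idea, checking each click against a list of known robot IPs and paying for nothing that matches, is simple enough to sketch. The Python below is only an illustration of that idea; the blocklist, the log format and the screen_clicks function are my own invented assumptions, not anything from Miva’s Partner Center.

    # Hypothetical sketch: screen raw ad clicks against a list of known robot IPs.
    # The blocklist and the (ip, ad_url) log format are made up for illustration only.

    ROBOT_IPS = {"203.0.113.7", "198.51.100.42"}  # example addresses from documentation ranges

    def screen_clicks(click_log):
        """Split (ip, ad_url) click records into billable and screened lists."""
        billable, screened = [], []
        for ip, ad_url in click_log:
            if ip in ROBOT_IPS:
                screened.append((ip, ad_url))   # nobody gets paid for these
            else:
                billable.append((ip, ad_url))
        return billable, screened

    clicks = [("203.0.113.7", "/ad1"), ("192.0.2.10", "/ad2"), ("192.0.2.11", "/ad1")]
    paid, junk = screen_clicks(clicks)
    print(len(paid), "billable clicks,", len(junk), "screened clicks")

The same robot IPs would then be blocked outright at the publisher’s host, which is exactly what Miva is pointing webmasters toward.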

Nov 13, 2005

First there was PageRank, then TrustRank, and now there is Spam Mass. Because today’s algorithms rely more upon off-page factors such as backlinking, spammers have taken advantage of this by putting up tens, hundreds and even thousands of spam sites, all aimed at delivering links back to a host site, improving its PageRank and its rankings in the SERPs.

A paper out of Stanford University, authored by Zoltan Gyongyi, Pavel Berkhin, Hector Garcia-Molina, and Jan Pedersen, suggests that a method called “Spam Mass” can help weed out websites that are profiting from having a large number of spam sites link to them.

Spam Mass can identify major cases of spamming by using two kinds of PageRank: regular PageRank and a biased version of PageRank seeded from reputable pages (TrustRank). Where TrustRank tries to estimate the reputation of a particular webpage or website, such as governmental pages, on the assumption that reputable websites only link to other reputable websites, Spam Mass focuses on the untrustworthy sites and on how much they contribute to a designated host site, the recipient of the PageRank that is being passed along to increase that site’s rankings.
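
The paper works this out formally, but the gist is easy to demonstrate. The toy Python below is my own rough sketch, not the authors’ algorithm or data: it computes ordinary PageRank and a second PageRank whose random jumps land only on a small trusted seed set, then reports what share of each page’s score disappears under the trusted version. Pages that owe most of their rank to untrusted links stand out. The graph, the seed set and the 0.85 damping factor are invented assumptions.

    # Rough illustration of the Spam Mass idea, not the paper's exact method.

    def pagerank(links, teleport, damping=0.85, iters=50):
        """Power iteration with a custom teleport (random-jump) distribution."""
        nodes = list(links)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {n: (1 - damping) * teleport.get(n, 0.0) for n in nodes}
            for n, outs in links.items():
                if outs:
                    share = damping * rank[n] / len(outs)
                    for m in outs:
                        new[m] += share
                else:  # dangling page: redistribute its rank via the teleport vector
                    for m in nodes:
                        new[m] += damping * rank[n] * teleport.get(m, 0.0)
            rank = new
        return rank

    # Toy web: "host" is propped up only by spam pages; "good" is linked from a trusted page.
    links = {
        "gov": ["good"], "good": ["gov"],
        "host": ["spam1"],
        "spam1": ["host"], "spam2": ["host"], "spam3": ["host"],
    }
    uniform = {n: 1.0 / len(links) for n in links}   # regular PageRank jumps anywhere
    trusted = {"gov": 1.0}                           # trusted PageRank jumps only to the seed

    pr = pagerank(links, uniform)
    tr = pagerank(links, trusted)
    for page in links:
        mass = (pr[page] - tr[page]) / pr[page]      # share of rank owed to untrusted sources
        print(f"{page}: PageRank={pr[page]:.3f} trusted={tr[page]:.3f} spam mass={mass:.2f}")

In this toy graph, “host” ends up with a spam mass near 1 because essentially all of its PageRank arrives through the spam ring, while “good” scores low (even negative), since its rank is fully accounted for by the trusted seed.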

Spam Mass is another method that may be employed in future algorithm updates to thwart spammers who build bogus websites for the sole purpose of boosting their main website’s rankings. The combination of PageRank, TrustRank (which may have just been deployed with the last Jagger updates) and a future Spam Mass update will undoubtedly bury many spammers’ websites in the rankings, which in turn will mean more relevant results for visitors to the search engines.

Here is the new search engine tag line for when this happens: Better search with less than half the spam. Then again, perhaps you can think of a better one?

Nov 07, 2005

With the latest Google and Yahoo updates, I’ve heard many people say they think reciprocal linking is dead and that it’s a waste of time to trade links now. I’ve even heard that it may be unsafe to trade links. I think differently.

Reciprocal linking may have been devalued somewhat, but that does not mean it is no longer valuable. When site A links to site B, which has valuable content for site A’s visitors, how can that be a waste of time or unsafe? This has been a tried-and-true way of putting your visitors first for years.

Now, for those who are trying to pander to the search engines first instead of to their visitors, yes, this may be trouble. There have been many linking schemes afloat for some time that offer nothing to a website’s visitors and are intended only to trick the search engines into inflating a website’s position in the SERPs. These, I believe, are now being targeted.

Link farms, redirects to remote servers that carry false linking information, paid text links and a few other methods are most likely being targeted with the latest update. But let’s not throw the baby out with the bath water on this one and say that reciprocal linking is bad or has lost all value or is even unsafe.

Reciprocal linking is valuable and safe if you just keep your visitors in mind every time you make a trade. Will your visitors find the site you are linking to, and the content on it, valuable? If you can answer yes, then link to it. You should never be afraid to put your visitors first.

Nov 03, 2005

The Little Engine That Could, AskJeeves, has upgraded its desktop search application. One of the most interesting aspects of this upgrade is not the upgrade itself but how the company defines search on the desktop versus search on the web.

According to Jeeves, search on the web involves some sort of “social relevance” to put results in perspective. Search on the desktop can be a much harder task for the search engine companies, since it usually involves some sort of “personal relevance” to the user. What this means is that on the web, results are refined by a feedback loop and by input from many users who form a social group.

For desktop search, there is no such social group, since there may be only one user, and the feedback loop is limited. Delivering results for a desktop search can be quite a task, considering that the search application and its algorithm have to guess at what is personally relevant to you.

Time is also a consideration in desktop search. Are you searching for a recent document, for the same document over and over, or for one from some time in the past? It will be interesting if one day the search engines (if they’re not already doing this) use some sort of behavior profiling on the desktop, where a user’s search behaviors are stored and mapped, and over time the search appliance adapts to that user’s standard set of behaviors in order to deliver personally relevant results.
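
To my knowledge, none of the vendors have said how, or whether, they do this, so here is that speculation made concrete as a small Python sketch: score each document by how often and how recently this one user has opened it, and let that score stand in for “personal relevance.” The half-life, the open_history structure and the weighting are all invented assumptions, not a description of any real desktop search product.

    import time

    # Purely speculative sketch of desktop "behavior profiling": rank documents
    # by a recency-decayed count of how often this user has opened them.

    HALF_LIFE_DAYS = 14  # an open loses half its weight every two weeks (arbitrary choice)

    def personal_relevance(open_history, now=None):
        """open_history maps a document path to a list of open timestamps (seconds)."""
        now = now or time.time()
        scores = {}
        for doc, opens in open_history.items():
            score = 0.0
            for opened_at in opens:
                age_days = (now - opened_at) / 86400.0
                score += 0.5 ** (age_days / HALF_LIFE_DAYS)  # recent opens count more
            scores[doc] = score
        return scores

    day = 86400
    now = time.time()
    history = {
        "reports/q3.doc":  [now - 2 * day, now - day, now - 3600],  # frequent and recent
        "old/taxes04.xls": [now - 300 * day],                       # touched once, long ago
    }
    for doc, score in sorted(personal_relevance(history, now).items(), key=lambda kv: -kv[1]):
        print(f"{score:5.2f}  {doc}")

A real application would fold a signal like this into ordinary text relevance rather than replace it, but it shows how even a limited, single-user feedback loop could be put to work.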

Anyway, it looks like AskJeeves is on the right track with this upgrade, especially with the Folder Indexing Preferences feature, where the user gets to choose what to index. There are a few other enhancements that Jeeves fans will enjoy as well.