Google Says ...

An unofficial, unaffiliated source of comment and opinion on statements from Google, Google employees, and Google representatives. In no way is this site owned by, operated by, or representative of Google, Google's point of view, policies, or statements.


Friday, September 08, 2006

Who does Google trust now?

What SEOs and Search Engines say about TrustRank and PageRank


Let me say up front that, so far as I am concerned, no one outside of Google is in a position to say definitively or authoritatively how Google determines trust. Nonetheless, many SEOs have been making very ignorant comments about Google and "trust" over the past 18 months or so. The problem began with everyone commenting on Google's listing TrustRank as a service mark. This was a curious situation because the expression TrustRank was coined by Yahoo!, who published a paper in conjunction with Stanford University introducing the TrustRank methodology for calculating PageRank more reliably.
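For anyone who has never actually read that paper, here is a bare-bones sketch of the idea it describes: an ordinary PageRank computation whose "random jump" is restricted to a small, hand-picked seed set of trusted pages, so that trust decays as you move away from the seeds. The graph, seed list, and numbers below are my own toy illustration, not anything Yahoo! or Google has published as code.

```python
# Minimal sketch of the TrustRank idea from the Gyongyi/Garcia-Molina/Pedersen
# paper: ordinary PageRank, except the random jump goes only to a hand-picked
# seed set of trusted pages, so trust decays with distance from the seeds.
# The graph and seed set here are hypothetical examples.

def trustrank(outlinks, seeds, damping=0.85, iterations=50):
    """outlinks: dict page -> list of pages it links to
       seeds:    set of pages a human reviewer has marked trustworthy"""
    pages = set(outlinks) | {p for targets in outlinks.values() for p in targets}
    # Trust is injected only at the seed pages.
    jump = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(jump)
    for _ in range(iterations):
        incoming = {p: 0.0 for p in pages}
        for page, targets in outlinks.items():
            if targets:
                share = trust[page] / len(targets)
                for t in targets:
                    incoming[t] += share
        trust = {p: (1 - damping) * jump[p] + damping * incoming[p] for p in pages}
    return trust

web = {
    "dmoz.example": ["good-site.example", "another-good.example"],
    "good-site.example": ["another-good.example", "spam.example"],
    "spam.example": ["spam2.example"],
    "spam2.example": ["spam.example"],
    "another-good.example": [],
}
print(trustrank(web, seeds={"dmoz.example"}))
```

Run that and you will see the spam pair accumulate almost no trust even though they link to each other enthusiastically, which is the whole point of seeding the calculation with human judgment.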

Many of those SEOs have wrongly assumed (and stated repeatedly) that PageRank serves as the basis for Google's search results rankings. PageRank has apparently always been factored into the algorithm wherever possible, but Apostolos Gerasoulis of Ask has long claimed that Google never fully implemented PageRank anyway.

Matt Cutts has indicated that Google's internal PageRank drives their crawling priorities. I think this probably applies to the main index only, but it may drive Supplemental Index crawling as well.

Google's apparent historical trust in sub-domains


It became apparent to me by early 2005 that Google had begun shifting its priorities in late 2004 (and perhaps earlier that year) to favor pages from older domains that I called Trusted Content Domains. I coined that expression to distinguish those domains from Spam Domains. Spam domains typically fall into one of two groups: one-page doorway domains that redirect to primary content domains, and domains that host a lot of worthless content.

I found that I could add content to an existing domain and see it rank well within a week to a few weeks, while people creating new domains were making no progress after several months. This was a marked change from the way my new-page content achieved rankings a year before. However, it very closely resembled the behavior of sub-domains coming off of primary domains going back to 2001 (see my comment on Aaron's post). I have complained in numerous public forums since 2001 that Google would automatically trust sub-domains. They never seemed to care, and a lot of sub-domain spam has been around for years because of that oversight.

In essence, Google has always seemed to confer on sub-domains, without question, the ability to achieve high rankings in search results. For technical reasons, I long resisted the temptation to hang sub-domains off Xenite.Org. I found that sub-directories often served my purposes, even though they took a little longer to establish relevance. However, now that I am starting to work with more sub-domains, I am concerned that Google may be implementing serious sub-domain analysis and filtration. I may inadvertently trip some filters simply through inexperience and experimentation.

How Google Determines Search Results


Because of SEOs' ridiculous infatuation with link-bombing-based "optimization", the importance of relevance has long gone unheeded in the SEO community. Sergey Brin and Larry Page established in their original paper about Google that determining relevance was the core of their ranking methodology, but this inconvenient fact has been swept under the rug of ranking-through-link-spam.

In January 2006, Matt Cutts published an article in Google's newsletter for Librarians in which he recapped Google's basic ranking strategy. Matt naturally discussed the PageRank algorithm because it is so often referred to, but he emphasized that PageRank is not the key to ranking in Google's search results. In fact, Matt literally wrote that, "in order to present and score" results for a query, Google picks pages that "include the user's query somewhere" and then ranks "the matching pages in order of relevance".
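Taken at face value, Matt is describing a two-step process: first select the pages that contain the query terms somewhere, then order that candidate set by relevance. Here is a deliberately tiny illustration of that shape; the term-frequency scoring function is my own stand-in, not Google's relevance formula.

```python
# Toy illustration of "pick pages that include the query, then rank by
# relevance". The scoring below is a stand-in; Google's actual relevance
# scoring is not public.

def search(query, documents):
    terms = query.lower().split()
    # Step 1: keep only documents that contain every query term somewhere.
    candidates = [doc for doc in documents
                  if all(t in doc["text"].lower() for t in terms)]
    # Step 2: order the matching pages by a (stand-in) relevance score.
    def relevance(doc):
        words = doc["text"].lower().split()
        return sum(words.count(t) for t in terms) / (len(words) or 1)
    return sorted(candidates, key=relevance, reverse=True)

docs = [
    {"url": "a.example/page1", "text": "trust and pagerank at Google"},
    {"url": "b.example/page2", "text": "pagerank pagerank pagerank spam"},
]
for doc in search("pagerank", docs):
    print(doc["url"])
```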

The SEO community continues to look in the wrong direction


Despite this apocalyptic revelation, SEOs have continued to pound the podium in favor of link building. And I will admit to helping them pound the podium with all my link-building articles, although I have tried to point out that links are important for other reasons.

I write about link-building for one reason: since I know how to do it better than most SEOs, I felt it might help to establish my linking credentials in a community obsessed with links. Most of the more popular link schemes owe something to my research over the years anyway -- it's just that the young SEOs are too consumed with their snide tirades to do the research to find out where all their cherished strategies came from.

I didn't invent these linking schemes, but I helped test and prove their effectiveness back in the day when they could truly be efficient and effective. And, sad to say, I probably am one of the grand-daddies of link farming. But you can blame Inktomi for being so darned frustrating. Most of you have no idea of what it really means to have to rank on the basis of linkage. I do. I hope we never have to return to those kinds of search engines.

The consequences of all the bad SEO practices since 2001


When Adam Mathes coined the expression "Google bombing", he was only giving a bad name to a practice that actually went back to the days before Google. Adam noticed how effectively the technique worked for bloggers, but spammers had been link bombing both Google and Inktomi for years. Well, after the media had their day with the new buzzword, a new generation of SEOs began building their business models on the foundation of link building.

After four years of thousands of SEOs blogging, writing articles, and sharing link-based ranking techniques in forums, FAQs, and eBooks, a large community of business decision-makers has been misled into believing that linkage is the key to ranking on Google. And what is truly sad is that it appears to be more true today than it was two years ago, only because Google had to react to the massive onslaught of manipulative linking that has mangled its relevance scoring.

All "white hat" SEOs who practice link-building are as guilty as all "black hat" SEOs and spammers of burning down the trees in our forest and destroying the environment in which we optimize. It will be years before SEOs take responsibility for their ill-considered practices. Black hats at least snicker at the idea of ethical optimization and shamelessly promote their Web sites in whatever way they can. They work on volume and build their networks and just adapt to the algorithm changes.

But the rest of the community has bogged itself down in a blind tradition that was a terrible solution to a non-existing problem in the first place. Now they are chained to the link-building treadmill because even the SEOs who realize there is more to search engine optimization have to deal with unrealistic client demands and expectations. The machine has lurched into high gear and tumbled out of control. Maybe a few of the operators notice they are no longer in charge, but most still mindlessly wade through SEO forums blathering about PR (Toolbar PageRank), "quality links", sending out reciprocal and 1-way link requests, and now TrustRank.

How Important Has Trust Become?


Because of SEO "best practices" based on link-building, Google has gradually gone into high anti-link building gear. Since early 2004 the so-called Sandbox Effect has been debated and tested and evaluated in six thousand directions. Consensus now seems to be settling on the idea that new domains are sandboxed because they lack links from Trusted Content Domains. I credit John Scott with being the first to offer the most reasonable explanation, though he now feels somewhat differently about what causes the effect (things do change).

Since mid-2005, Google has implemented filters against fake link directories, scraped content sites, and RSS-feed driven sites. When I warned Danny Sullivan about these kinds of sites in early 2005, he expressed complete and total ignorance of the problem. Swept up in the fake link directory blitz, however, were many "low quality" SEO directories -- directories set up by people for various reasons, including accruing PageRank, helping other sites build up linkage, and gaming Google.

Another problem that began to get attention from SEOs in late 2004, and which has gradually increased in severity, is the transfer of many legitimate content sites to the Supplemental Index. Only over the past few weeks have I found enough bits and pieces from Google to assemble a coherent idea of what the Supplemental Index may be.

With the rollout of Big Daddy in early 2006, Google exacerbated Webmaster frustrations by increasing main index crawling and decreasing supplemental index crawling. Suddenly, everyone started talking about trust as if they knew what was going on. Remember that I said at the beginning of this post that I don't believe anyone outside Google knows what is going on.

How can trust be algorithmically determined?


But several of us have tried to guess what is happening. Todd Malicoat suggests that it's a trust filter based on Web site age, number and age of backlinks, and total "trustscore" of those backlinks. He adds: "Most trust criteria revolve around some dependence on age, which is actually a pretty good signal of quality". However, we know that Google ignores identified paid links, among other link types, so "total number of backlinks" isn't helpful. Nor do I believe that age really matters as much as I once did.

Neither age of site nor age of links pointing to the site should really matter to how much a site can be trusted. A spammy link that sits around for 3 years is still a spammy link. A spammy site that sits around for 5 years is still a spammy site. I think Todd's third point is closer to the truth, and is really the only one required to explain what Google is doing.
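To make my disagreement with Todd concrete, here is a hypothetical filter of the shape he describes. Every weight and threshold in it is invented for illustration; my point is that you could zero out the age and link-count terms and lose nothing.

```python
# Hypothetical trust filter of the shape Todd describes: site age, backlink
# count/age, and the aggregate trust of the backlinks. Every weight and
# threshold here is invented for illustration.

def passes_trust_filter(site_age_days, backlinks, threshold=1.0,
                        w_age=0.0, w_count=0.0, w_backlink_trust=1.0):
    """backlinks: list of (age_days, trust_score) pairs.
       With w_age and w_count at zero, only the aggregate trust of the
       linking pages matters -- the position argued for in this post."""
    total_backlink_trust = sum(trust for _, trust in backlinks)
    score = (w_age * site_age_days / 365.0
             + w_count * len(backlinks)
             + w_backlink_trust * total_backlink_trust)
    return score >= threshold

# A new site with two links from trusted pages beats an old site with
# a pile of untrusted reciprocal links.
print(passes_trust_filter(30, [(10, 0.6), (5, 0.7)]))    # True
print(passes_trust_filter(1800, [(900, 0.01)] * 50))     # False
```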

But is Google scoring by trust or is it just trusting pages to confer PageRank and Link Anchor Text? In a follow up to his earlier Google Librarian article, Matt Cutts wrote "if more people trust your site, your site is more valuable" (implying that PageRank is used to help determine trustworthiness) and "we examine the content of neighboring pages, which can provide more clues as to whether the page we're looking at is trusted".

Another point Matt recently made was that the sudden appearance of hundreds of thousands of pages can trip a trust filter. That's a high threshold, but I'm sure it's that high for a reason.

Looking for trust in all the wrong neighborhoods


But what constitutes a "neighboring page" for a new domain? Any new page on an existing domain already has neighbors in its sibling pages (found in the same physical folder or directory) and cousins (found in other folders and directories on the same domain or sub-domain). New domains have to be placed into neighborhoods before they can have neighbors. Such neighborhoods are most likely only defined by linkage.

One simple possibility is that if a trusted "expert" or "hub" page links to a new domain, that expert/hub can be used to determine who the neighbors are. But even one expert's opinion isn't very informative. I think that Google looks for a variety of trusted expert opinions. These experts will include well-known human-edited directories with clear, definitive categories, but I think the expert votes also will come from some of the second-tier content sites. Any Web page that links to a group of related Web pages is usually considered to be an expert.

Until Google can form a collective opinion about where a new domain's "neighborhood" is, it isn't in much of a position to determine whether that domain can be trusted. Many SEOs might be quick to say, "See? We do need to submit links to directories!" Maybe, but would you, as a surfer, want to trust a site that is only listed in directories? Why does no one else link to the site? You need more than one kind of expert opinion, in my opinion. Dan Thies suggested as much in late 2005 at the Highrankings Forum (and perhaps elsewhere).

"Well then," some hardcore reciprocators might say, "We just need to submit to directories and get reciprocal links from related pages."

Moving into the wrong neighborhood


But the problem is that Google looks for "excessive reciprocation". Some reciprocation is expected and tolerated. This is the World Wide Web, after all, where sites are expected to link to each other. But if you can only get links from directories and reciprocating sites, you're still not collecting independent opinions or votes of confidence from true authorities.

"Authority pages" has become another SEO buzzword, and I have seldom seen anyone in the SEO community use the expression in a way that conveyed a clear meaning to me. I am sure most people who speak of authority pages have a clear idea of what they mean, and can probably articulate that idea. But I have found no real consensus on what the SEO community collectively means.

I'll go with the traditional HITS definition: an authority page is linked to by many experts. But some experts are more trustworthy than others, and those experts are often linked to by many authority pages. It's all very circular, of course, but I think it's important that new domains be linked from authority pages in clear context. That is, a reciprocal link won't do the trick. You need to have content surrounding or adjoining the link that is relevant to the link anchor text.
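For anyone unfamiliar with the HITS terminology I am borrowing, here is a bare-bones sketch of Kleinberg's hub/authority iteration, using a made-up link graph. The circularity is the point: good hubs (experts) point to good authorities, and good authorities are pointed to by good hubs.

```python
# Bare-bones sketch of Kleinberg's HITS iteration: a page's authority score
# is the sum of the hub scores of pages linking to it, and its hub score is
# the sum of the authority scores of pages it links to. The graph is a
# made-up example.
import math

def hits(outlinks, iterations=30):
    pages = set(outlinks) | {p for targets in outlinks.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        auth = {p: sum(hub[q] for q in pages if p in outlinks.get(q, []))
                for p in pages}
        norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        hub = {p: sum(auth[t] for t in outlinks.get(p, [])) for p in pages}
        norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

graph = {
    "expert1.example": ["authority.example", "other.example"],
    "expert2.example": ["authority.example"],
    "authority.example": [],
    "other.example": [],
}
hub, auth = hits(graph)
print(max(auth, key=auth.get))  # authority.example
```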

But let's back up a moment. Is it not possible that there are sham experts and authorities? Absolutely. So you need to ask whether Google hasn't found a way to favor some neighborhoods over others. One potential trust-impacting factor is who you link to. Matt Cutts has been reluctant to explain why spammy-looking links on one page may be trouble while similar-looking links on another page seem okay.

Neighborhoods must be bubbles of tightly connected Web sites, and the neighborhoods that are most trustworthy are probably linked to by many other neighborhoods. So now we're venturing into the realm of speculation with the concept of NeighborhoodRank. Does Google tag neighborhoods as being more or less trustworthy? If so, then it may be that an entire neighborhood has to gain trust before its member pages earn trust.
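Since this is open speculation, let me at least make it concrete. The sketch below invents a cluster-level trust score: lump pages into neighborhoods, then let each neighborhood earn trust in proportion to the trust of the other neighborhoods that link into it. The grouping, seed values, and damping are all made up; nothing here reflects anything Google has confirmed.

```python
# Purely speculative "NeighborhoodRank": pages are pre-grouped into
# neighborhoods (clusters), and a neighborhood's trust is driven by links
# arriving from other trusted neighborhoods rather than by individual pages.
# The grouping, seed trust values, and damping are all invented.

def neighborhood_trust(page_to_hood, links, seed_trust, rounds=10, damping=0.5):
    """page_to_hood: dict page -> neighborhood id
       links:        list of (from_page, to_page) pairs
       seed_trust:   dict neighborhood id -> initial trust in [0, 1]"""
    hoods = set(page_to_hood.values())
    trust = {h: seed_trust.get(h, 0.0) for h in hoods}
    # Count inter-neighborhood links only; links within a neighborhood
    # say nothing about outside opinion.
    inbound = {h: set() for h in hoods}
    for src, dst in links:
        a, b = page_to_hood[src], page_to_hood[dst]
        if a != b:
            inbound[b].add(a)
    for _ in range(rounds):
        new_trust = {}
        for h in hoods:
            voters = inbound[h]
            vote = sum(trust[v] for v in voters) / len(voters) if voters else 0.0
            # Seed neighborhoods never drop below their hand-assigned trust.
            new_trust[h] = max(seed_trust.get(h, 0.0),
                               (1 - damping) * trust[h] + damping * vote)
        trust = new_trust
    return trust

pages = {"a1": "A", "a2": "A", "b1": "B", "c1": "C"}
links = [("a1", "b1"), ("a2", "b1"), ("b1", "c1"), ("c1", "c1")]
print(neighborhood_trust(pages, links, seed_trust={"A": 1.0}))
```

Notice that neighborhood C only gains trust after B does, which is exactly the "whole neighborhood has to earn trust first" behavior I am speculating about.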

Why link baiting works


This may explain why Rand Fishkin of SEOMoz is able to boost sites past the Sandbox Effect so quickly. When he creates Link Bait, his sites draw linkage from both new neighborhoods and old neighborhoods, and the old neighborhoods undoubtedly include a lot of trusted neighborhoods. His Link Bait domains are therefore drawn into the better neighborhoods because of where they link to and where their inbound linkage comes from.

In other words, successful Link Bait doesn't have to wait for its neighborhood to be approved for trust. It simply joins one or more already established good, trusted neighborhoods.

Why reciprocation sometimes fails


And that may explain why link reciprocation doesn't always work. Some people complain that after gaining several hundred reciprocal links, they still seem to be sandboxed. In evaluating the backlinks for many such sites, I often find they link out to, and receive links from, what I personally would deem to be low-quality sites, many of which appear not to be trusted.

I have my own test for deducing which sites may be trusted and which sites may not be. I don't disclose the test publicly because I don't know how accurate it is and I don't want to give away a possibly useful idea to people whom I don't want to help. My test is quick and simple, but even if it's on the right track I doubt it is 100% reliable. I am developing a couple of other tests to see if I can establish a consensus of results.

In the meantime, the continued emphasis on building links in quantity probably only magnifies the problem for most SEO'd Web sites. The more links SEOs seek out from "tried and true" sources, the longer it probably takes to get sites past the Sandbox Effect. There will be differing degrees of success. Some SEOs most likely have very good sources of linkage. Most probably do not.

Are Supplemental Index Pages 'bad neighborhoods'?


I don't believe so. I think these pages represent documents that have not yet earned trust, but that doesn't mean they are considered to be 'bad'. Matt suggested to one person on his blog that "the best way I know of to move sites from more supplemental to normal is to get high-quality links (don’t bother to get low-quality links just for links’ sake)".

I have more to say about Google's Supplemental Index at my SEO Web site.

Final Word


The bottom line is that we still don't know what Google is doing, but we all agree that they are now being strongly influenced by a need to distinguish which sites can be trusted from those that cannot be trusted. I think there are some highly implausible and convoluted theories being proposed by other people right now. The more complicated a proposed explanation becomes, the less likely it is to be correct. For now, I think Google is looking at aggregate linking relationships to determine where community trust really exists. It's very, very difficult to fake trust from a broad variety of sources.

Simply getting links from free directories, article submission sites, reciprocal link partners, and other popular link sources will probably only extend the time new sites need to earn trust, if for no other reason than that such sites will attract links from trusted neighborhoods only very slowly.

The real question comes down to this: if I am correct, or close to correct, in my analysis, how long will it take for spammers and SEOs to develop methodologies that effectively poison the "good" (trusted) neighborhoods and force Google to develop some filtration methodology?

I think maybe a year, perhaps 18 months. Until then, those SEOs who have inventories of trusted link sources will hoard their wealth and be very, very reluctant to share the gold. After all, the more people who know where to get the good links, the less likely those link sources will continue to be valuable.

Thursday, September 07, 2006

Google shares the love...

The more I read about Google's activities, the more impressed I have become with TSETSB. There are some things Google does that I don't approve of.

Two brief rants before we get to the Google raves


For example, I will never forgive them for Web Accelerator, which I continue to block from my network at the server level. Web Accelerator eats up bandwidth and Google has yet to offer an effective means of compensating Webmasters for needlessly wasted bandwidth. In Google's defense, I will say that their technology only incorporates an incredibly stupid standard that was proposed by people who should have known better than to come up with such a dumb idea. Maybe that's why it's so popular, I don't know. But I fear the day is coming when I'll be blocking all FireFox users from Xenite.Org. Unless they want to pay for subscriptions to our content.

I also complained about Google's removing the fetch date from their cache. I get so many hits from Googlebot in my server logs that figuring out when they actually fetched a file is not easy for me. And when other people ask me to do research on their sites, I am now almost completely blinded in one eye thanks to this ridiculous "improvement" from Google.

Google snuggles with the guv'mint


But enough whining and moaning. There are a lot of great things Google has done, and is doing, that I can talk about. For example, I notice they are snuggling up with the U.S. Government these days. Adam Lasnik is teaching a class on search optimization to government Web designers at Washington University. And Google (Enterprise) has been named the most influential commercial company providing technology to the Federal IT market. It was only a matter of time, I suppose.

But the ever impressive and highly innovative Google Book Search people (who, theoretically, should be on my list of demons because I have published books) have now announced that DIANE Publications has made its entire inventory of reprinted government publications available on Google Book Search. You know, we taxpayers paid for all that data collection and reporting, so it's about time we get access to it. This is actually a public service from Google that should help historians and people who are curious about what sort of publications the government has spent their money on in the past.

Lesson for Business: Share what you do!


What can smaller businesses learn from Google Book Search and Google Enterprise? I'd say that if you have partnerships with larger entities where your services or products play a significant role, you should be writing about those relationships on your corporate blog. Put feature articles on your Web site. Mention your hallmark accomplishments on your company history page. Tell people what you are doing for others, so they can get an idea of what you may be able to do for them.

After hours with Googlers, innovation, and invention


Innovation, of course, doesn't have to come from the corporate production process. One Googler offers a tip for organizing temporarily necessary cell phone numbers. Leave it to someone associated with search to think of prefixing names in a phone list.

Another Googler provides a fantastic report on Maker Faire, where innovation comes to life. The report will take anyone a week to get through, but it's loaded with details, pictures, and video. Oh, my!

Significant revelations from Google


But now we're getting down to today's good stuff.

Google revives Tesseract OCR


First up, Google Code recently announced that Google had revived Hewlett-Packard's OCR technology (HP retired Tesseract in the mid-1990s). Think this is how Google has been scanning all those books? It doesn't matter, because as I pondered the meaning of this post for the umpteenth time, it hit me today: Google may eventually be able to read all those graphics people use on their front pages. You know what I mean: the huge image files that say, "Michael Martinez is the best SEO in the world and you really should be paying him to help you rank at Google".

How many SEOs have complained about having to work around those Greeting Card images? Well, prognosticating what Google will do with its technology is not very productive, but if they are not thinking about how to scan Web greeting images and masthead graphics, they should be. Because there are just too many people who don't understand that a search engine cannot index the text embedded in a .GIF or .JPG.
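For the curious, the newly open-sourced engine can already be pointed at one of those masthead graphics from the command line. A quick sketch (the image filename is hypothetical, and results on decorative Web graphics tend to be rough):

```python
# Hedged sketch: running the open-sourced Tesseract engine over a masthead
# graphic to recover the embedded text. Assumes the `tesseract` command-line
# tool is installed; the image path is a hypothetical example.
import subprocess

def ocr_masthead(image_path, output_base="masthead_text"):
    # Tesseract writes its recognized text to <output_base>.txt
    subprocess.run(["tesseract", image_path, output_base], check=True)
    with open(output_base + ".txt") as f:
        return f.read()

print(ocr_masthead("masthead.tif"))
```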

Vanessa explains SiteLinks


The Webmaster Central Blog explains one of those curious SERP features that have puzzled, bemused, and bedazzled SEOs for years: SiteLinks (love the name, btw). SiteLinks are those sets of tightly compacted deep links that occasionally are included in a site's listing.

Google provides four levels of recognition for a Web site:
  1. A simple listing for a single page in relation to the user's query.

  2. Two listed pages, one indented under the first, in relation to the user's query.

  3. Two listed pages as above, but with an additional tag offering "More pages URL"

  4. One or two listed pages as above, but with a compact list of SiteLinks providing quick access to deeper content


SEOs have lusted after those impressive SiteLinks results ever since they first started appearing. My most important site, Xenite.Org, has so far only achieved level three recognition despite many deep links and deep referrals. A lot of my pages come up in Google SERPs. But it takes more than what I've got so far to hit level four recognition.

NOTE: Some people might argue that having pictures from your site featured above search results, such as for Lucy Lawless, is a fifth level of recognition.

In any event, Google says that SiteLinks are completely automated. Maybe they are, but if any SEOs can figure out how to trigger their generation in SERPs, I think those SEOs will make even more money than before. Frankly, I haven't really tried to figure out the process.

And now, for the gold: Sharding


Do you know what shards are? I only have the vaguest idea, myself. I've watched a number of videos of Googlers making presentations. I've read some technical stuff. But I've never seen a shard in action. Google's database is so large it cannot all be contained on one server. Google reportedly uses up to 1,000 PCs to resolve any query. The database is spread out across some or all of those PCs in what Google calls "shards".

Last month, a Googler went to a BarCamp and made a presentation called Scaling Data On The Cheap. Yup, he talked about shards.

Slide shows don't tell you a great deal when you cannot hear what the speaker has to say. But we can infer a few (possibly very incorrect) ideas from the slides. For example, it appears from one slide that a table could be replicated in multiple shards, split across multiple shards, or comprise a single shard by itself.

Google's original architecture (most likely no longer in use, at least since the January 2006 Big Daddy update, if not earlier) used many tables. There would have to be one or more master tables just to tell the various programs where all the other tables are. The paper says they had identified about 14 million words. Each word would have to have its own index. Rare words (occurring in the fewest documents) would have the smallest tables.

I can envision some programmatic advantages to replicating rare word tables across multiple shards, pairing some rare words with others in specific shards. And obviously large tables for the most common words would probably have to be split across multiple shards.
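To make that concrete, here is a speculative sketch of the allocation policy I just described: rare-word posting lists are replicated whole onto a couple of shards, while common-word lists are split across many shards. The shard counts and thresholds are invented; the slides say nothing that precise.

```python
# Speculative sketch of the shard-allocation idea discussed above: posting
# lists (word -> documents containing the word) are replicated whole when the
# word is rare, and split into pieces across shards when the word is common.
# All thresholds and shard counts are invented for illustration.

NUM_SHARDS = 8
RARE_THRESHOLD = 3        # posting lists this short get replicated
REPLICAS_FOR_RARE = 2     # how many shards hold a copy of a rare word's list

def assign_to_shards(postings):
    """postings: dict word -> list of document ids containing that word.
       Returns dict shard id -> {word: partial or full posting list}."""
    shards = {i: {} for i in range(NUM_SHARDS)}
    for word, docs in postings.items():
        home = hash(word) % NUM_SHARDS
        if len(docs) <= RARE_THRESHOLD:
            # Rare word: replicate the whole (tiny) list on a few shards.
            for r in range(REPLICAS_FOR_RARE):
                shards[(home + r) % NUM_SHARDS][word] = list(docs)
        else:
            # Common word: split the big list across shards, round-robin.
            for i, doc in enumerate(docs):
                shards[(home + i) % NUM_SHARDS].setdefault(word, []).append(doc)
    return shards

index = {
    "xenite": [101, 205],            # rare word: replicated whole
    "google": list(range(1, 41)),    # common word: split across shards
}
for shard_id, contents in assign_to_shards(index).items():
    print(shard_id, {w: len(d) for w, d in contents.items()})
```

The redundancy shows up immediately: lose one machine and the rare words are still available somewhere else, while the common words only lose a slice of their list.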

There is probably only minimal value to an SEO in knowing and understanding how shards actually work, but the slide show implies a great deal of redundancy has been built into Google's system architecture. It's like they have a lot of floppy database thingees that they lay partially across each other like blankets.

Well, it's food for thought, but I've already spent too much time on this post.

Wednesday, September 06, 2006

Getting down to Google Base icks...

Google's AdWords Blog actually shares some interesting information. I don't mean that to sound like a bad thing, but since I don't run AdWords campaigns I don't really read the blog. Shame on me.

So, in Why is the location under my ad?, they explain that if you target your ads by location, then users who are identified by the Google system as coming from that area will be told the ad is relevant to their community. I like that. Can't imagine why anyone would complain, but sometimes business people have a very different set of expectations from me.

In their Get your products into our search results with Google Base post, they share the following advice:
Your site may already be included in our crawl index, but we want to ensure that you also know how you can supplement these results with Google Base - you can submit the products or services that you offer directly to Google Base making them eligible to show on Google.com when a user searches on a relevant query.

What the heck? Seeing as I'm now promoting my SEO Consulting services on a full-time basis, I thought I'd give the system a try.

Unfortunately, the user interface burped. I created a few attributes for my ad, picking from their list of suggested attributes. When I clicked on PUBLISH, the system came back and said there was a problem. It lost the labels for two of the attributes and combined their data into one unnamed field.

I didn't feel like trying again, so I edited the surviving data, put in a new label, and clicked on PUBLISH again. This time the ad went through safely and I'm good to go for 30 days.

I appreciate the Googlers' giving me advice on how to promote my consulting services, but as a programmer with many years' experience I couldn't help but cringe when I saw the bug. I hate it when I find bugs in my own software after it's been deployed. That's just one of the risks programmers face, but it's still annoying.

Browsing further through the blog, I noticed their Printable coupons for local businesses post. OH..MY..GOD.

Why hasn't anyone in the SEO community made a big fuss over this feature? I know some people who need to take advantage of this service.

Hm. I wonder if I can do that....

Anyway, the AdWords blog has turned out to be very useful and interesting to me just in a few minutes' time. That ain't bad for clicking on a previously unvisited link.

Well, in other useful Google blogging, Vanessa Fox discusses better details about when Googlebot last visited a page. She says that Google Cache will now reflect when Googlebot last sought information about a page, rather than when it was actually downloaded.

Um...that's not very helpful to me. I can see how some Webmasters may be pleased with knowing that Googlebot stopped by on September 1, but it won't explain to them that they are looking at a page copy from April 26. Let me explain why this can be a problem.

Googlebot comes by on April 25, fetches my page, and then I update it on June 12. Googlebot dutifully grabs the page on June 12 and then my server crashes. I restore from a backup made on June 11 and my server will think the page hasn't changed when Googlebot comes back on June 15.

Now, ideally, I want my June 12 version of the page. But for reasons beyond my control I cannot reproduce that page until, say, August 15. If Google dutifully indexes and caches the page in a matter of days, they are out of sync with my restored Web page.

Does this scenario happen? Well, server crashes happen all the time. It's anyone's guess as to how backups are restored and how the servers figure out whether to send a code 304 (Not Modified) or not. But it's a hole in the methodology and the blog doesn't address it to allay my fears and concerns.

I can also tweak my server and screw up its ability to send a code 304 at the right time. What if I accidentally configure my server to send a code 304 every time? Now Google's cache is telling me it visited the page on August 18 but I'm still seeing the restored June 11 backup. What's up with that? After August 15, I think I should be seeing my August 15 update, but because I've misconfigured my server, Google says it visited the page on August 18 and grabbed the pre-August copy from the restored June 11 backup (in truth, it grabbed nothing but I don't know that).
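To spell out the failure mode: a conditional fetch comes down to the server comparing the crawler's If-Modified-Since date against the file's modification time, and a restored backup can carry an older timestamp than content the crawler has already seen. A simplified, hypothetical sketch of that comparison:

```python
# Simplified sketch of the conditional-GET decision described above: the
# server answers 304 (Not Modified) when the file's modification time is not
# newer than the crawler's If-Modified-Since date. A backup restored with an
# old timestamp keeps producing 304s even though the crawler's cached copy
# no longer matches what visitors see.
from datetime import datetime

def respond(file_mtime, if_modified_since):
    if file_mtime <= if_modified_since:
        return "304 Not Modified"      # crawler keeps its cached copy
    return "200 OK (send fresh copy)"

# Googlebot's cached copy is from June 12; the restored backup carries a
# June 11 timestamp, so the June 15 crawl gets a 304.
crawler_copy_date = datetime(2006, 6, 12)
restored_backup_mtime = datetime(2006, 6, 11)
print(respond(restored_backup_mtime, crawler_copy_date))   # 304 Not Modified
```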

My point is that most Webmasters don't read the Google blogs and they are not going to understand what they are seeing with these dates.

In my opinion, the reported date needs to be the date the file was pulled. If Google really feels anyone needs to see from Google's side when Googlebot last dropped by, the ideal thing to do is show both dates (in my humble opinion).

Sorry, Vanessa, but this latest improvement is an "ick" in my book.

Tuesday, September 05, 2006

Introducing...Google User (or not)

Seems like Google is trying to bend over backwards to prove its services are really all about the user experience.

A few days ago, they invited Philipp Lenssen to write about '55 Ways To Have Fun With Google'. In one fell swoop, they pretty much promoted everything about their business through a custom-written user testimonial. Will Philipp sell a few more copies of his book? Probably.

The Google Enterprise Blog featured a Map of Whole Foods. Talk about some major exposure. Okay, maybe a lot of people don't read the Enterprise Blog, but it's the flagship of the Google star fleet (how could I let an opportunity for a pun like that pass me by?).

The Whole Foods app entry should drive some curious traffic to the company's Web site. Maybe a lot of those people will be interested in organic foods. I don't know. But it's exposure that is hard to get in today's search-dominated Web.

Google Book Search, possibly the most innovative blog in the Google stable, has announced that you can now add Google Booksearch to your site. Folks, this is a significant tool that many hobbyists will latch on to. Business sites will eventually figure out ways to use it, too. I've already got some ideas rolling around my head. I just need time to site down and play with it ("site down" is not a typo).

An earlier blog post from Google Books a couple of weeks ago also has me thinking. They announced Authors@Google. You know, anyone with a video camera can now create a featured speakers program that is hosted by YouTube, Google Video, and similar services. Just stick your company logo on a wall somewhere, stand in for a minute to introduce your guest speaker, and then let him or her plug a book, business, or concept.

Did I say that Google Book Search is the most innovative of the Google blogs? Let me put this as delicately as possible: if you're an SEO and you don't read this blog, you're an idiot. There, I've said it. Many SEOs think I believe all SEOs are idiots. Well, that's not true. Just SEOs who think PageRank converges to an average of 1 and SEOs who don't have sense enough to read Google Book Search. You don't stay ahead of the pack by running with the crowd. Get out there in front and take some chances. Read things the other people don't read.

Speaking of Google Book Search, Philipp Lenssen's guest post on the official Google blog got me to thinking about Google's Public domain treasures, where you can download public domain books. A savvy Web marketer would publish a book and make it freely available for download from Google Books. It should not be long before we see books promoting "Buy my services!" on every page becoming available on Google Books. If only I were as smarmy as some of the other Web marketing gurus out there.

When Google renamed Site Maps as Webmaster Central, I noticed a lot of snickering among SEOs because it just didn't seem like a Webmaster Central type of station. Nonetheless, Vanessa Fox is posting some great content on the Webmaster Central blog. Her article on how accented characters and interface languages impact search is a must-read for anyone dealing with international-language sites (and custom-language sites, assuming you want to optimize for constructed languages that use accented characters).

A lot of recent Google blog posts have emphasized the user experience and how users can benefit from Google's services. What we can take away from this sampling of posts is that any business with a service or product can enhance its visibility and traffic by providing insightful, innovative, and intriguing tips and suggestions on how to utilize those services and products.

And for those of you still living in the SEO dark ages: such content produces a lot of linkage.

Remember: it's all about the user experience. Make that a good experience, and the users will love you.