
Archive for The Junk Drawer

Is Google out to destroy the secondary keyword market (before it has even matured)?

It didn’t take long to wrap my head around the collateral damage that would occur when Google suppresses outbound referring URL information for patrons who are logged in while they search. An article forwarded to me by a friend in the business laid out some of the damage that was expected. My friend went on to point out that some of his competition might be impacted by Google’s actions; he was not entirely displeased.

Impact

Google Luigi smashes the query string, removing it from the secondary keyword market

I’m trying to gauge the impact, so I looked around for some back-of-the-napkin numbers, using the Google search engine of course. Be mindful, these are Internet numbers, so they are obviously beyond reproach. I started with the assumption that most folks who have a Google account are probably using Gmail, so that would be the most impacted user base. According to the Panda-algorithm-chosen web page at the top of my search results, Gmail had about 193 million users at the end of 2010. If Google trusts itself, then this is probably accurate information.

That’s not everybody. I asked Amazon, and it’s sort of saying that there are 1.7 billion unique Internet users. So Google’s got about 11% who could potentially be logged in while using the search engine. I don’t log in at work, so let’s assume that half of all people are like me and do half of their searching at work; shaving that quarter off drops the 11% to 8.25% of Internet searches impacted by this. Maybe we’ll give Google+ a little love and just say 10%. Sound good? . . . Yeah, I think so too. You know what 10% looks like to me? It looks like an experiment.
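If you want to replay the napkin math, and it is nothing more authoritative than that, here is a quick Python sketch using the same Internet-sourced numbers from above. The 50/50 work-search discount is the assumption I made in the paragraph above, not a measured figure.

```python
# Back-of-the-napkin math from the paragraph above, all numbers taken at
# Internet face value (~193M Gmail accounts at the end of 2010, ~1.7B
# unique Internet users).
gmail_users = 193_000_000
internet_users = 1_700_000_000

logged_in_share = round(gmail_users / internet_users, 2)   # ~0.11

# Assumption from the post: half of all people are like me and do half of
# their searching at work while logged out, so shave 0.5 * 0.5 = 25% off.
logged_out_discount = 0.5 * 0.5
impacted_share = logged_in_share * (1 - logged_out_discount)  # 0.0825

print(f"logged-in share of searches:  {logged_in_share:.0%}")          # 11%
print(f"after the work discount:      {impacted_share:.2%}")           # 8.25%
print(f"with a little Google+ love:   {round(impacted_share, 1):.0%}") # 10%
```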

This web page is fat


Tim Berners-Lee, Creator of the World Wide Web. Photo Attribution: http://www.flickr.com/photos/captsolo/ / CC BY-SA 2.0

Do we really need to keep calling them web pages? What started out as an open document format for the exchange of data between researchers has quickly evolved into a new application platform where most of the computing happens in the cloud instead of on your desktop computer. The WorldWideWeb was created in 1990 by Tim Berners-Lee while he was working at CERN. His objective at the time was to enable his scientific colleagues to exchange research information electronically in a commonly consumable format. The Web allowed researchers to link their documents to one another via a hyperlink, which acted as a live cross-reference within the document itself. No longer would a person reading a paper by one researcher need to dig up another paper referenced in the document; they could just click on the link and go straight to it.

For a couple of years the Web developed quietly. Pages were small and purposeful; browsers were few and eventually dominated by Mosaic. But in 1994 Netscape Navigator showed up at the party with some seed money and a business plan. It would take a year or two to unseat Mosaic, but eventually Navigator enjoyed the widest user base on the Internet. The party, for Navigator, lasted until 1997. By that time the average web page size was about 44 kilobytes, according to a survey conducted at Georgia Tech. During its reign Netscape introduced on-the-fly loading, which rendered the page as elements were downloaded instead of waiting until the whole page had arrived. This let a user read the text of the page while the graphics were still loading, a major breakthrough for the mostly dial-up user base.
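Just to make the idea concrete, here is a toy Python sketch (emphatically not how Navigator actually did it) of the difference between waiting for a complete download and rendering elements as they trickle in over a slow link.

```python
import time

def fetch_in_chunks():
    """Pretend network stream: yields pieces of a page with dial-up-ish delays."""
    for chunk in ["<h1>Title</h1>", "<p>First paragraph</p>", "<img src='big.gif'>"]:
        time.sleep(0.5)  # simulated slow link
        yield chunk

def render_after_download():
    # The pre-Navigator style: nothing appears until every byte has arrived.
    page = "".join(fetch_in_chunks())
    print("render:", page)

def render_on_the_fly():
    # The on-the-fly style: each element is drawn as soon as it arrives,
    # so the text is readable while the images are still downloading.
    for chunk in fetch_in_chunks():
        print("render:", chunk)

if __name__ == "__main__":
    render_on_the_fly()
```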