Google+ has not overtaken social networking sites like Facebook and Twitter, but it is turning heads. Hundreds of Facebook users in India are already on Google+ through its beta testing phase.
Only a few months after accusing Microsoft’s Bing of copying its search results, Google was found to be indexing millions of image thumbnails from Bing itself.
A recent TED presentation by Eli Pariser, author of the new and very interesting book “The Filter Bubble: What the Internet Is Hiding From You,” offers a synopsis of how Google’s personalization algorithms affect search results: what Google shows you is influenced by your own search history and other online activity. Systems such as Amazon, Yahoo, Bing, and eBay likewise depend heavily on personalization to serve you results. Traditional databases do not use profiles (yet), but they are often built on Verity, Vivisimo, Autonomy, FAST, and other mathematically based search software, so they could, and do, serve up different results whenever the vectors are reset, that is, every time additional data is added to the system through updates or metadata enrichment.
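The point about results shifting whenever data is added is easy to see with plain TF-IDF scoring, the kind of term-weighting math underlying many of these engines. This is a minimal sketch with made-up documents, not any vendor's actual algorithm: because every term's inverse document frequency depends on the whole collection, adding one new document changes the scores of documents that were already indexed.

```python
import math
from collections import Counter

def tfidf_scores(docs, query):
    """Score each document against the query with simple TF-IDF."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: how many documents contain each term.
    df = Counter(term for tokens in tokenized for term in set(tokens))
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # Smoothed IDF so unseen terms don't divide by zero.
        score = sum(
            tf[t] * math.log((n + 1) / (df[t] + 1))
            for t in query.lower().split() if t in tf
        )
        scores.append(round(score, 3))
    return scores

docs = ["apple pie recipe", "apple stock price"]
print(tfidf_scores(docs, "apple price"))

# Adding one more document changes every term's document frequency,
# so the scores of the two existing documents shift as well.
docs.append("stock price history")
print(tfidf_scores(docs, "apple price"))
```

Running it shows the first two documents' scores move after the third is added, even though their text never changed; that is the "vectors reset" effect described above.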
In a familiar story, we learned that Lucid Imagination released an update to their LucidWorks Enterprise product this week that includes a way of connecting the search tool directly to SharePoint repositories.
Universal search: a useful tool for making Web searches more contextual, or too complex for overall search engine optimization? Getting a good ranking in universal search results can help boost a company's overall search visibility and brand awareness, but many B2B marketers aren't taking advantage of it. Why?
After running its own CSI-like sting, Google claims that Microsoft’s search product, Bing, is copying Google’s indexed search results and passing them off as its own. Microsoft says it isn’t doing anything wrong.
Security and privacy issues around tracking are in every headline you read these days. Firefox started the buzz with its announcement of a “do not track” tool that lets consumers comprehensively request not to be followed wherever they go on the Web. The move came in response to a Federal Trade Commission request.
The economics of the Web have reversed the original business model for online information upon which businesses like LexisNexis and Dialog were built. Through those services, users paid up to $4 for individual articles from daily newspapers that originally cost 25 cents on the newsstand. That model is obviously dead today, when the cost of an individual article – even articles from leading trade magazines and scholarly journals – is effectively zero. Does that mean that publishers, aggregators, and other content owners should police the Web to ensure their content is not freely distributed? Not at all – one needs only look at the recent case of Wikileaks to see that it is impossible to keep any content from showing up freely on the Web. As they say, the genie is already out of the bottle, so the only logical step is figuring out how to make money in the current environment. This is where taxonomies can add value – by enabling the creation of new information products that connect disparate pieces of content with high-value applications and new markets.
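To make the "connecting disparate pieces of content" idea concrete, here is a small sketch using entirely hypothetical taxonomy terms and article names. Each specific term rolls up to a broader concept, so two articles tagged with different vocabulary can still be linked through a shared ancestor, which is the basic mechanism behind taxonomy-driven information products.

```python
# Hypothetical mini-taxonomy: each term maps to its broader concept.
taxonomy = {
    "lithium-ion": "batteries",
    "solid-state cell": "batteries",
    "batteries": "energy storage",
    "pumped hydro": "energy storage",
}

def broader_concepts(term):
    """Walk up the taxonomy from a term to all of its broader concepts."""
    chain = []
    while term in taxonomy:
        term = taxonomy[term]
        chain.append(term)
    return chain

def related(articles, a, b):
    """Two articles are related if their expanded tag sets overlap."""
    expand = lambda tags: {c for t in tags for c in [t] + broader_concepts(t)}
    return bool(expand(articles[a]) & expand(articles[b]))

# Two articles with no tags in common on the surface.
articles = {
    "ev-review": ["lithium-ion"],
    "grid-study": ["pumped hydro"],
}
print(related(articles, "ev-review", "grid-study"))  # both roll up to "energy storage"
```

A keyword match alone would never connect these two articles; the taxonomy's broader-concept chain is what surfaces the relationship, and that link is the raw material for a new information product.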