Google has awarded over $1.2 million to support research in several areas of natural language understanding related to Google's concept of the Knowledge Graph. This substantial investment in machine learning is motivated by its need to advance search technology.
A webinar is scheduled to provide an update on Google's products and services, including what is on the horizon. The webinar, offered by the National Federation of Advanced Information Services (NFAIS), will take place on September 26, 2012.
Yippy, Inc. has entered into an agreement to merge with MuseGlobal, a leading provider of content integration and data virtualization services. This combination of search and content integration professionals has the potential to provide a range of resources that few others, including big players like Google or Microsoft, possess. The combined companies will create an information cloud that could represent a significant shift in the business of enterprise search.
At the beginning of the year, one author predicted that 2012 would be the year of the semantic web. At the halfway point, he revisits those predictions to see where they stand now.
Indexing enables accurate, consistent retrieval across the full depth and breadth of a collection. This does not mean that the statistics-based systems the government loves so much will go away; rather, those systems are learning to embrace the addition of taxonomy terms as indexing.
Only a few months after accusing Microsoft's Bing of copying its search results, Google was found to be indexing millions of image thumbnails from Microsoft's Bing search engine.
A recent TED presentation by Eli Pariser, author of the new and very interesting book "The Filter Bubble: What the Internet Is Hiding From You," offers a synopsis of how Google's personalization algorithms affect search results. Google results are influenced by your own search history and other online activity. Shopping and search systems such as Amazon, Yahoo, Bing, and eBay depend heavily on personalization to serve you results. Traditional databases do not use profiles (yet), but they are often based on Verity, Vivisimo, Autonomy, FAST, and other mathematically based search software, so they could serve up different results, and they do, whenever the vectors are reset; that is, every time additional data is added to the system through updates or metadata enrichment.