October 7, 2010 – The biggest challenge for computer intelligence is unraveling semantics: understanding the meaning of language. Beyond a word's dictionary definition, context shapes everything. A team of researchers at Carnegie Mellon University has been working since the beginning of the year on fine-tuning a computer system that tries to master semantics by learning more like a human does.
This interesting piece of news comes from The New York Times article, “Aiming to Learn as We Do, a Machine Teaches Itself.” The researchers primed the computer with some basic knowledge in various categories and set it loose on the Web with a mission to teach itself. The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts. These facts are grouped into semantic categories: cities, companies, sports teams, actors, universities, plants, and 274 others.
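To make the idea of learning facts from text patterns concrete, here is a minimal sketch of how pattern-based harvesting might work. The seed patterns, category names, and the extract_category_facts function are all invented for illustration; they are not NELL's actual code, and the real system is, of course, far more sophisticated:

```python
import re

# Toy sketch of pattern-based category learning (illustrative only):
# each seed regex captures a noun phrase, which becomes a candidate
# member of the associated semantic category.
CATEGORY_PATTERNS = {
    "city": [
        re.compile(r"cities such as ([A-Z][A-Za-z]+)"),
        re.compile(r"the city of ([A-Z][A-Za-z]+)"),
    ],
    "company": [
        re.compile(r"companies like ([A-Z][A-Za-z]+)"),
    ],
}

def extract_category_facts(text):
    """Return a set of (category, entity) pairs found by any seed pattern."""
    facts = set()
    for category, patterns in CATEGORY_PATTERNS.items():
        for pattern in patterns:
            for match in pattern.finditer(text):
                facts.add((category, match.group(1)))
    return facts

sample = ("Tourists flock to cities such as Boston, and "
          "companies like Google have offices in the city of Austin.")
print(sorted(extract_category_facts(sample)))
# [('city', 'Austin'), ('city', 'Boston'), ('company', 'Google')]
```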
NELL also learns facts that express relations between members of two categories, for example, that a particular athlete plays for a particular sports team; there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year and will steadily expand.
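A similarly hedged sketch of what relation learning between two categories could look like. The "plays for" pattern, the playsFor relation name, and the example sentence are assumptions made for illustration, not NELL's actual machinery:

```python
import re

# Toy sketch of relation learning: a text pattern links a member of
# one category (an athlete) to a member of another (a sports team),
# yielding a relation fact expressed as a triple.
PLAYS_FOR = re.compile(
    r"([A-Z][a-z]+ [A-Z][a-z]+) plays for (?:the )?([A-Z][A-Za-z ]+)"
)

def extract_plays_for(text):
    """Return (athlete, 'playsFor', team) triples matched in the text."""
    return [(m.group(1), "playsFor", m.group(2).strip())
            for m in PLAYS_FOR.finditer(text)]

print(extract_plays_for("Peyton Manning plays for the Indianapolis Colts."))
# [('Peyton Manning', 'playsFor', 'Indianapolis Colts')]
```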
This is an exciting project and a very interesting article; the scale of that ever-growing knowledge base leaves me in awe. It will be interesting to see the outcome of this semantic project.
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.