Google’s Knowledge Graph Expands Worldwide, but in English Only
In his blog post “Building the search engine of the future, one baby step at a time,” Amit Singhal of Google outlined three key points in working towards a “Star Trek-like” computer able to answer any question. This time, let us focus on “1. Understanding the world”: the rollout of the Google Knowledge Graph (a database covering over 500 million people, places, and things, with some 3.5 billion facts) to English-speaking populations outside the United States of America. The expansion furthers Google’s drive towards a semantic web, in which analysis of the meanings of words, phrases, and concepts outweighs simple keyword recognition, building on earlier work by Google and other search engines. In addition to Freebase, which Google purchased in 2010 and which forms the bulk of the Knowledge Graph, Mr. Singhal stated that content was also added from Wikipedia and the CIA World Factbook.
As of August 11, 2012, Google’s Inside Search page on the Knowledge Graph noted that collections and lists from the Knowledge Graph database were being rolled out over the following few days. Once users have access, they will be able to use the visual Knowledge Graph Carousel, a scrollable strip of choices near the top of the results page. The Knowledge Graph also takes the regional context of words and phrases into account: searching for AC/DC on Google Australia surfaces the Australian music group, searching for Prairie Oyster on Google Canada surfaces the Canadian music group, and searching for “Kings” on Google.com brings up information on the California-based sports teams the Sacramento Kings (basketball) and the Los Angeles Kings (hockey), as well as the American television series Kings.
Dr. Stephen Wolfram, creator of the Wolfram Alpha computational knowledge engine, panned Google’s recent search updates and was unimpressed by the inclusion of Wikipedia information in the Knowledge Graph, as Google works towards the kind of natural-language framework that Wolfram Alpha already uses. Dr. Wolfram cited 25 years of work and research behind what Wolfram Alpha is today. Wolfram Alpha draws its data from many databases covering daily weather records, financial and economic information, United Nations data, motion pictures, fictional characters, and astronomical, geographical, and biographical information. A million lines of mathematical code form the backbone of the algorithms and models used to interpret that data. Wolfram Alpha’s linguistic layer was developed to understand natural language, especially queries phrased as questions. Computation and interpolation of the data are also used to predict answers about the future. Dr. Wolfram said that Wolfram Alpha seeks multi-layered answers to queries, going beyond a single response.
Next time, we will take a look at the last of Amit Singhal’s three key points…
See also the related blog posts: