Google Ngram Viewer
The Google Ngram Viewer or Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in sources printed between 1500 and 2019[1][2][3][4][5] in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish.[2][6] There are also some specialized English corpora, such as American English, British English, and English Fiction.[7]
The program can search for a word or a phrase, including misspellings or gibberish.[6] The n-grams are matched with the text within the selected corpus, optionally using case-sensitive spelling (which compares the exact use of uppercase letters),[8] and, if found in 40 or more books, are then displayed as a graph.[9]
The Google Ngram Viewer supports searches for parts of speech and wildcards.[7] It is routinely used in research.[10][11]
History
The program was developed by Jon Orwant and Will Brockman and released in mid-December 2010.[2][3] It was inspired by a prototype called "Bookworm" created by Jean-Baptiste Michel and Erez Aiden from Harvard's Cultural Observatory, Yuan Shen from MIT, and Steven Pinker.[12]
The Ngram Viewer was initially based on the 2009 edition of the Google Books Ngram Corpus. As of July 2020, the program supports 2009, 2012, and 2019 corpora.
Operation and restrictions
Commas delimit user-entered search terms, each indicating a separate word or phrase to find.[9] The Ngram Viewer returns a plotted line chart within seconds of the user pressing the Enter key or the "Search" button on the screen.
To adjust for the fact that more books were published in some years than in others, the data is normalized by the number of books published in each year, so the chart shows a relative frequency rather than a raw count.[9]
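The normalization idea can be sketched in a few lines. The match counts below are real values for the word "Wikipedia" from the English 1-grams (shown in the Corpora section), but the yearly totals are hypothetical placeholders, not actual corpus figures:

```python
# Sketch of the normalization step: each year's raw match_count is divided
# by a per-year total, so years with more published material do not dominate.
# match_counts: real values for "Wikipedia" from the English 1-grams.
# yearly_totals: hypothetical placeholder totals for illustration only.
match_counts = {2006: 9818, 2007: 20017, 2008: 33722}
yearly_totals = {2006: 9.0e9, 2007: 1.0e10, 2008: 1.1e10}  # hypothetical

relative_frequency = {
    year: match_counts[year] / yearly_totals[year] for year in match_counts
}
for year, freq in sorted(relative_frequency.items()):
    print(f"{year}: {freq:.3e}")
```

With real totals taken from the corpus's total_counts file, this ratio is what the viewer plots on the y-axis.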
Due to limitations on the size of the Ngram database, only matches found in at least 40 books are indexed; without this threshold, the database could not store all possible combinations.[9]
Typically, search terms cannot end with punctuation, although a separate full stop (a period) can be searched for.[9] An ending question mark (as in "Why?") will trigger a second, separate search for the question mark itself.[9]
Omitting the periods in abbreviations allows a form of matching: for example, searching for "R M S" matches "R.M.S.", as opposed to "RMS".
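Queries like the ones described above can also be issued by constructing a graph URL directly. The parameter names (content, year_start, year_end, corpus, smoothing) appear in the viewer's own graph links; the helper below is a hypothetical convenience for building such a URL, not an official API:

```python
from urllib.parse import urlencode

# Sketch: build a Ngram Viewer graph URL. Parameter names are taken from the
# query strings the viewer itself generates; the function is illustrative.
def ngram_url(phrases, year_start=1900, year_end=2020, corpus=15, smoothing=0):
    params = {
        "content": ",".join(phrases),  # commas delimit separate search terms
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,      # numeric corpus identifier
        "smoothing": smoothing,
    }
    return "https://books.google.com/ngrams/graph?" + urlencode(params)

print(ngram_url(["Wikipedia"]))
```

For instance, `ngram_url(["Wikipedia"])` reproduces the kind of link cited in the references for the "Wikipedia" graph.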
Corpora
The corpora used for the search are composed of a total_counts file and 1-gram, 2-gram, 3-gram, 4-gram, and 5-gram files for each language. Each file contains tab-separated data, one record per line, in the following formats:[13]
- total_counts file
  - year TAB match_count TAB page_count TAB volume_count NEWLINE
- Version 1 ngram file (generated in July 2009)
  - ngram TAB year TAB match_count TAB page_count TAB volume_count NEWLINE
- Version 2 ngram file (generated in July 2012)
  - ngram TAB year TAB match_count TAB volume_count NEWLINE
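A Version 2 record can be parsed with a few lines of code. This is a minimal sketch of the documented line format; the type and function names are illustrative, not part of any official tooling:

```python
from typing import NamedTuple

class NgramRecord(NamedTuple):
    """One line of a Version 2 ngram file (names chosen for illustration)."""
    ngram: str
    year: int
    match_count: int
    volume_count: int

def parse_v2_line(line: str) -> NgramRecord:
    # Version 2 format: ngram TAB year TAB match_count TAB volume_count NEWLINE
    ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
    return NgramRecord(ngram, int(year), int(match_count), int(volume_count))

record = parse_v2_line("Wikipedia\t2008\t33722\t6825\n")
print(record)
```

Version 1 lines parse the same way, with an extra page_count field between match_count and volume_count.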
The Google Ngram Viewer uses match_count to plot the graph.
As an example, the word "Wikipedia" from the Version 2 file of the English 1-grams is stored as follows:[14]
ngram | year | match_count | volume_count |
---|---|---|---|
Wikipedia | 1904 | 1 | 1 |
Wikipedia | 1912 | 11 | 1 |
Wikipedia | 1924 | 1 | 1 |
Wikipedia | 1925 | 11 | 1 |
Wikipedia | 1929 | 11 | 1 |
Wikipedia | 1943 | 11 | 1 |
Wikipedia | 1946 | 11 | 1 |
Wikipedia | 1947 | 11 | 1 |
Wikipedia | 1949 | 11 | 1 |
Wikipedia | 1951 | 11 | 1 |
Wikipedia | 1953 | 22 | 2 |
Wikipedia | 1955 | 11 | 1 |
Wikipedia | 1958 | 1 | 1 |
Wikipedia | 1961 | 22 | 2 |
Wikipedia | 1964 | 22 | 2 |
Wikipedia | 1965 | 11 | 1 |
Wikipedia | 1966 | 15 | 2 |
Wikipedia | 1969 | 33 | 3 |
Wikipedia | 1970 | 129 | 4 |
Wikipedia | 1971 | 44 | 4 |
Wikipedia | 1972 | 22 | 2 |
Wikipedia | 1973 | 1 | 1 |
Wikipedia | 1974 | 2 | 1 |
Wikipedia | 1975 | 33 | 3 |
Wikipedia | 1976 | 11 | 1 |
Wikipedia | 1977 | 13 | 3 |
Wikipedia | 1978 | 11 | 1 |
Wikipedia | 1979 | 112 | 12 |
Wikipedia | 1980 | 13 | 4 |
Wikipedia | 1982 | 11 | 1 |
Wikipedia | 1983 | 3 | 2 |
Wikipedia | 1984 | 48 | 3 |
Wikipedia | 1985 | 37 | 3 |
Wikipedia | 1986 | 6 | 4 |
Wikipedia | 1987 | 13 | 2 |
Wikipedia | 1988 | 14 | 3 |
Wikipedia | 1990 | 12 | 2 |
Wikipedia | 1991 | 8 | 5 |
Wikipedia | 1992 | 1 | 1 |
Wikipedia | 1993 | 1 | 1 |
Wikipedia | 1994 | 23 | 3 |
Wikipedia | 1995 | 4 | 1 |
Wikipedia | 1996 | 23 | 3 |
Wikipedia | 1997 | 6 | 1 |
Wikipedia | 1998 | 32 | 10 |
Wikipedia | 1999 | 39 | 11 |
Wikipedia | 2000 | 43 | 12 |
Wikipedia | 2001 | 59 | 14 |
Wikipedia | 2002 | 105 | 19 |
Wikipedia | 2003 | 149 | 53 |
Wikipedia | 2004 | 803 | 285 |
Wikipedia | 2005 | 2964 | 911 |
Wikipedia | 2006 | 9818 | 2655 |
Wikipedia | 2007 | 20017 | 5400 |
Wikipedia | 2008 | 33722 | 6825 |
The graph plotted by the Google Ngram Viewer from the above data can be viewed online.[15]
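Rows like those in the table above can be extracted from a downloaded dataset shard, such as the gzipped "w" shard of the English 1-grams cited in the references. The following is a sketch under that assumption; the function name and structure are illustrative:

```python
import gzip

# Sketch: yield (year, match_count, volume_count) for one word from a
# downloaded Version 2 1-gram shard (e.g. googlebooks-eng-all-1gram-
# 20120701-w.gz). Each line is ngram TAB year TAB match_count TAB volume_count.
def rows_for(path: str, target: str):
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
            if ngram == target:
                yield int(year), int(match_count), int(volume_count)
```

Calling `rows_for(path, "Wikipedia")` on the real shard would reproduce the rows tabulated above.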
Criticism
The data set has been criticized for its reliance upon inaccurate OCR, an overabundance of scientific literature, and its inclusion of large numbers of incorrectly dated and categorized texts.[16][17] Because of these errors, and because the corpus is not controlled for bias[18] (such as the increasing share of scientific literature, which makes other terms appear to decline in popularity), it is risky to use it to study language or test theories.[19] Since the data set does not include metadata, it may not reflect general linguistic or cultural change[20] and can only hint at such an effect.
Guidelines for doing research with data from Google Ngram have been proposed that address many of the issues discussed above.[21]
OCR issues
Optical character recognition, or OCR, is not always reliable, and some characters may not be scanned correctly. In particular, systematic errors like the confusion of "s" and "f" in pre-19th-century texts (due to the use of the long s, which was similar in appearance to "f") can cause systematic bias. Although Google Ngram Viewer claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.[22][23]
See also
References
- "Quantitative analysis of culture using millions of digitized books" JB Michel et al, Science 2011, DOI: 10.1126/science.1199644
- "Google Ngram Database Tracks Popularity Of 500 Billion Words" Huffington Post, 17 December 2010, webpage: HP8150.
- "Google's Ngram Viewer: A time machine for wordplay", Cnet.com, 17 December 2010, webpage: CN93.
- "A Picture is Worth 500 Billion Words – By Rusty S. Thompson", HarrisburgMagazine.com, 20 September 2011, webpage: HBMag20.
- Google SearchLiaison. "The Google Books Ngram Viewer has now been updated with fresh data through 2019". Twitter. Retrieved 2020-08-11.
- "Google Books Ngram Viewer - University at Buffalo Libraries", Lib.Buffalo.edu, 22 August 2011, webpage: Buf497 Archived 2013-07-02 at the Wayback Machine.
- Google Books Ngram Viewer info page: https://books.google.com/ngrams/info
- "Google Ngram Viewer - Google Books", Books.Google.com, May 2012, webpage: G-Ngrams.
- "Google Ngram Viewer - Google Books" (Information), Books.Google.com, December 16, 2010, webpage: G-Ngrams-info: notes bigrams and use of quotes for words with apostrophes.
- Greenfield P. M. (2013). The changing psychology of culture from 1800 through 2000. Psychological Science, 24(9), 1722–1731. https://doi.org/10.1177/0956797613479387
- Younes, N., & Reips, U.-D. (2018). The changing psychology of culture in Germany: A Google Ngram study. International Journal of Psychology, 53(S1), 53-62. https://doi.org/10.1002/ijop.12428
- The RSA (4 February 2010). "Steven Pinker - The Stuff of Thought: Language as a window into human nature" – via YouTube.
- "Google Books Ngram Viewer".
- googlebooks-eng-all-1gram-20120701-w.gz at http://storage.googleapis.com/books/ngrams/books/datasetsv2.html
- https://books.google.com/ngrams/graph?content=Wikipedia&year_start=1900&year_end=2020&corpus=15&smoothing=0&share=&direct_url=t1%3B%2CWikipedia%3B%2Cc0
- Google Ngrams: OCR and Metadata Archived 2016-04-27 at the Wayback Machine. ResourceShelf, 19 December 2010
- Nunberg, Geoff (16 December 2010). "Humanities research with the Google Books corpus". Archived from the original on 10 March 2016.
- Pechenick, Eitan Adam; Danforth, Christopher M.; Dodds, Peter Sheridan; Barrat, Alain (7 October 2015). "Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution". PLOS ONE. 10 (10): e0137041. arXiv:1501.00960. Bibcode:2015PLoSO..1037041P. doi:10.1371/journal.pone.0137041. PMC 4596490. PMID 26445406.
- Zhang, Sarah. "The Pitfalls of Using Google Ngram to Study Language". WIRED. Retrieved 2017-05-24.
- Koplenig, Alexander (2015-09-02). "The impact of lacking metadata for the measurement of cultural and linguistic change using the Google Ngram data sets—Reconstructing the composition of the German corpus in times of WWII". Digital Scholarship in the Humanities (published 2017-04-01). 32 (1): 169–188. doi:10.1093/llc/fqv037. ISSN 2055-7671.
- Younes, N., & Reips, U.-D. (2019). Guidelines for improving the reliability of Google Ngram studies: Evidence from religious terms. PLoS One, 14(3): e0213554. https://doi.org/10.1371/journal.pone.0213554
- Google n-grams and pre-modern Chinese. digitalsinology.org.
- When n-grams go bad. digitalsinology.org.
Bibliography
- Lin, Yuri; et al. (July 2012). "Syntactic Annotations for the Google Books Ngram Corpus" (PDF). Proceedings of the 50th Annual Meeting. Demo Papers. Jeju, Republic of Korea: Association for Computational Linguistics. 2: 169–174. 2390499.
Whitepaper presenting the 2012 edition of the Google Books Ngram Corpus