
A Bible verse may not be the first thing we expect to see when reaching for a newspaper. Today, newspapers are expected to be impartial columns of facts. Op-eds and editorials have their place, but that’s not what most people consider news.
However, America has changed a lot in the past 200 years (go figure). America’s Public Bible, a project spearheaded by Lincoln Mullen, assistant professor of History and Art History at George Mason University, shows just how much American newspapers used to rely on a much older publication: the King James Bible.
Biblical verse was (and still is) deeply ingrained in American culture. Mullen’s project strives to give some insight into how the Bible informed and even supplemented American reporting in the 19th and 20th centuries. The Public Bible allows visitors to see how specific verses rose or declined in popularity over time. What were Americans thinking and feeling during important points in the country’s history? The Civil War? World War I? Visualizing which verses resonated with people at different moments of upheaval not only offers a unique window into their hearts and minds, but also sheds new light on how the interpretation of scripture has changed over time.
As for the inner workings of the Public Bible itself, prominent Digital Humanist Miriam Posner suggests a useful paradigm for breaking such projects into three crucial parts: sources, processing, and presentation. For sources, Mullen drew on almost 10.7 million pages of newspapers available through the Library of Congress’s Chronicling America API. For processing, Mullen trained a machine-learning algorithm to detect altered quotations of, and even allusions to, biblical verse by feeding it nearly 1,700 hand-classified quotations. Finally, for presentation, the website itself offers a simple but powerful graphing tool to display the data.
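Mullen’s actual classifier is far more sophisticated, but the core idea of the processing step — measuring how much of a verse’s wording survives on a newspaper page, even when the quotation is altered — can be illustrated with a toy sketch. Everything below (the function names, the example page text, the n-gram overlap approach) is my own illustrative assumption, not Mullen’s method:

```python
import re

def ngrams(text, n=4):
    """Break text into a set of lowercase word n-grams, ignoring punctuation."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def quotation_score(page_text, verse, n=4):
    """Fraction of the verse's n-grams that also appear on the page.

    A score near 1.0 suggests a quotation (even a lightly altered one);
    a score near 0.0 suggests the verse is absent.
    """
    verse_grams = ngrams(verse, n)
    if not verse_grams:
        return 0.0
    return len(verse_grams & ngrams(page_text, n)) / len(verse_grams)

# Hypothetical example: a verse embedded in surrounding newspaper prose.
verse = "Suffer little children to come unto me and forbid them not"
page = ("The preacher reminded his flock to suffer little children "
        "to come unto me, and forbid them not, he said.")

print(quotation_score(page, verse))   # high overlap despite added words
print(quotation_score("A report on railroad schedules.", verse))  # no overlap
```

A real system would feed features like this (plus many others) into a classifier trained on the hand-labeled examples, letting it learn the threshold between true quotations, loose allusions, and coincidental phrasing.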
While Mullen’s graphs aren’t the most visually stunning Digital Humanities project around, the real achievement is the analysis going on behind the scenes. For anyone familiar with machine learning, the significance of Mullen’s work is immediately apparent. Teaching a computer, a bundle of especially precocious rocks, to recognize a specific range of complex human allusions is no small feat.
This project is a very cool demonstration of how artificial intelligence can teach a machine to analyze text. I think it would be even cooler if the project were expanded into the 21st century.