In late July 2013, two agents of the British intelligence services entered the headquarters of the Guardian newspaper in London. Under their watchful eye, and at the order of the British government, three Guardian journalists reluctantly armed themselves with power drills and destroyed beyond recognition the circuit boards, disks, and external hard drives that had once contained the Snowden files. To those who witnessed the destruction of the drives, including former Guardian editor Alan Rusbridger, the scene was “chilling — reminiscent of burned books.” And indeed, it was a book burning, if of a peculiar sort. The Snowden archives were massive, likely containing enough data to fill hundreds if not thousands of printed books if ever committed to paper. When those drills began to cut into circuits, an entire library burned.
Luckily, they just made a copy.
That’s the thing about information, especially digital information: no matter how hard you try, it’s very hard to kill. As a “digital native” I’ve seen that firsthand — I spent middle school watching teacher after teacher desperately try (and fail) to play whack-a-mole with SparkNotes users, and, less humorously, I spent high school watching Donald Trump’s Big Lie spread through the internet like wildfire.
Yet even after all I saw, when adults tried to tell me to fear the digital world — pointing out the dangers of the “dark web,” of social media addiction, and of that much-feared boogeyman, declining mental health — I never quite believed them. Sure, the web was a big place that occasionally tried to sell me on the “proven existence” of lizard people in the sewers, but nevertheless, all I could feel when I opened a search engine, always ready with a question, was awe. Awe, that I, some random kid from the Michigan suburbs, could ask any question, and the world would answer. Awe, that such a vast sea of knowledge, the collected dreaming and wondering and learning of the human race, could be there on the other side of the screen, close enough to touch.
Newspapers of the 20th century were born in a time of information scarcity, when it was both physically and economically hard to find things out. If you wanted to know how the US was faring in Vietnam, for example, the best way to go about it was to get some people together, give them a camera and maybe a little training if you had the money, point them at Vietnam, and then trust them to go there and tell you stuff when they got back. But going to Vietnam was expensive, in both blood and treasure, and most people didn’t have the inclination, resources, or skills to do it. So we trained special people to go places and see things, and we called them reporters, and large groups of consumers, mainly newspaper subscribers, put their weekly paper money into a big hat that subsidized the cost for the professionals. (For more on this idea and what follows, check out David Roberts’ “Advice for Aspiring Explainer Journalists” at Vox).
This worked because most people, informationally speaking, were starving: there wasn’t enough news, and the only way to get it was to pay for it. Is it any wonder, then, that the print newspaper has been so devastated by the digital revolution? In less than a century, we’ve gone from a world in which it took days, if not weeks, for a letter to travel from London to New York, to a world in which I can pull a piece of metal out of my pocket at any time and, in a keystroke or two, learn anything the human race has ever known. We no longer live in an age of information hunger, but information glut — and that poses its own set of problems.
The first, of course, is that in the informational sea — as in any sea — it is possible to drown. The mere fact that we have so much information does not automatically mean that all of it is easy to make sense of, or even that all of it is reliably true.
As a result, the playing field has been suddenly, and some might say apocalyptically, leveled. While it remains expensive to give people cameras and fly them places, nobody is inclined to pay for it because it would be easier to just call someone on the ground, or search Instagram for real-time eyewitness accounts, or build a combat map based on Google Maps traffic-tracking (as the Russian army terrifyingly may have done in Ukraine early last year). Journalists, who built an industry on pooling the resources and skills necessary to obtain scarce information, suddenly have to contend with more free and public information than any human being could consume in a single lifetime. As a result, the amateur journalist, or even simply the reader, has access to the exact same live updates and leaked data sets as the professional.
What, then, are the professionals here to do? The answer, I think, lies in the fact that such an information overload implies a much greater need for analysis. A niche has been created for a new type of reporter — call them investigative computer scientists, or data journalists — who do for databases what reporters do for sources: help them tell their stories. This is certainly not an easy task, and while it lends itself to the traditional skills of reporting (curiosity, pattern matching, and copious amounts of caffeine), it also blurs the lines between journalism, computer science, and the visual arts. In this column, I’ll bring you stories driven by data, from this week’s deep dive into war in the MCU, to an investigation into campaign finance in the US midterms, to an oddly specific analysis of PDX trees. Some of these stories will be beautiful, others just for fun, and some, I hope, will be thought-provoking, but they will all bring the art of data to the fore and, in doing so, offer a new perspective on the stories the Quest covers each week. I hope you’re excited to join me.
By Declan Bradley