The value of digital data

Editor’s Note: This is a chapter from Journalism After Snowden: The Future of Free Press in the Surveillance State, a forthcoming book from Columbia University Press. The book is part of the Journalism After Snowden initiative, a yearlong series of events and projects from the Tow Center for Digital Journalism in collaboration with CJR. The initiative is funded by The Tow Foundation and the John S. and James L. Knight Foundation.

Edward Snowden’s revelations about the conduct of the NSA don’t just tell us about the government’s past conduct. They tell us something about the future of political journalism. In light of the extraordinary pressure on New York Times reporter James Risen to reveal his sources, and the Obama Administration’s significant moves to restrict journalistic reporting of leaks, it’s clear the stories that arose from Snowden’s leak have moved journalistic coverage of the world’s governments, already a fraught endeavor, into a new and more contentious phase.

Before Snowden, we saw the distribution of military video and US State Department cables, leaked by Chelsea (then Bradley) Manning. That was an extraordinary occurrence, but one of such strangeness—the scale, the involvement of Julian Assange, Manning’s own military history—that it was impossible to know which aspects of that leak were singular occurrences and which indicated larger patterns.

Snowden, a far more knowledgeable and confident source than Manning, and holding far more significant material, has made some of those patterns visible. The leak of the NSA documents provides much information about political journalism in a networked age. The most important patterns are these: Individual sources have improved leverage, transnational news networks are becoming both essential and normal, and digital data is undermining older patterns of journalistic reputation.

Taken together, these changes disrupt the unstated bargain between governments and news outlets. In all but the most extraordinary cases, national news has been published in national outlets, with the borders of reporting, national interest, and national jurisdiction all lining up. After Snowden, that pattern is shredded. As journalistic outlets become more networked, the familiar geographic link between sources, reporters, publications, and subjects will weaken.

The open issue for the world’s investigative journalists is how far the world’s governments will go to restrict these networks. The threat of relatively unconstrained reporting of secrets has prompted extra-judicial attacks on publishing outlets, as with the suspension of credit card payments to Wikileaks following Congressional complaints. (Full disclosure: I am a supporter of Wikileaks, both as a philosophical matter and as a donor during the period in which its finances were first under attack. I am also a donor to ProPublica and The Guardian, in large part because of their role in preventing the US from limiting publication of the Snowden revelations.)

We are quite accustomed to autocratic governments like Saudi Arabia and Egypt hampering journalism, but with the rising threat of real transnational reporting, we are seeing authoritarian leaders in South Korea and Turkey push for control of media. Even governments with a constitutional commitment to freedom of speech and of the press, such as the UK and US, have attempted to create de facto restrictions on publishing where the law allows them no direct relief. The essential question is how journalists and publications can strengthen their ability to report important news in an age of increasing interference.

There have always been leaks and leakers. Any discussion of journalism in the US will eventually come around to Watergate and Deep Throat, the code name for Mark Felt, associate director of the FBI and leaker-in-chief. Likewise, digital data made leaking easier long before Snowden; the site Cryptome.org was set up in 1996 to do much of what Wikileaks also does, and Wikileaks itself was roiling national politics long before Manning ever showed up, as with its accusations of corruption by Daniel arap Moi in 2007.

The Manning case, though, was unusual: a massive leak from inside a secured network run by the richest country on earth, one seemingly well equipped to guard its own secrets. It concerned the United States, the world’s sole superpower and most important global actor. And Manning, visibly upset with US conduct and on disciplinary probation, was only allowed continued access because the wars in Iraq and Afghanistan increased the need for technical talent while decreasing the supply.

The cumulative effect was to make the revelations of 2010 seem as if they might be a one-off, rather than a new pattern. Many people commenting on the Manning leak believed that nothing of that magnitude would happen again. This assumption rested on the conviction that national governments and large firms would quickly find ways to limit access to their secrets by insiders who might be willing to leak that information.

The Snowden leak shows us that this organizational adaptation did not happen. The National Security Agency is among the best-funded and most competent groups of electronic spies in the world. It had three years after the example set by Manning to limit possible leaks, and it failed, spectacularly. Not only did the agency lose a huge trove of data, but officials could not initially identify who had leaked it and, if they are to be believed, still cannot use their own internal controls to discover which documents Snowden had in his possession when he left.

After Edward Snowden, we see how much power now lies with the leaker. (Barton Gellman / Getty Images)

If the NSA cannot secure its own documents, what hope is there for less competent institutions? All large institutions with secrets now face a serious threat to their current practices for making use of digital data (exactly as Assange predicted they would back in 2006). The value of freeing information from physical containers is that more people can see and use it simultaneously, at lower cost. This is a boon for almost every possible use of this data, but it is in tension with any desire to keep it secret.

This tension is fundamental. Sharing data widely is the principal source of risk to its secrecy, but making secret data harder to share also makes it harder to use, and thus less valuable. This dilemma grows more severe the more there is to be kept secret, because large stores of data require increasingly automated processes of indexing and linking, which in turn require reducing barriers between data stores, so as to “connect the dots.” And all this hoped-for dot-connecting requires scores of junior analysts and administrators just to manage basic operations.

From a bureaucratic point of view, there are three obvious solutions to this problem: immediate restrictions on system access for anyone skeptical about the mission; dramatic limits on the number of junior employees given access; and total internal surveillance. Acting on these solutions would indeed lower the number of leaks, but would leave an organization trying to use vast datasets with a skeleton crew of paranoid yes-men, hardly a recipe for effective organizational action.

Some bureaucracies will indeed subject themselves to dramatically increased degrees of internal paranoia over who is to have access to which pieces of data, but most won’t, and the ones that do will find that it hampers their effectiveness. Just as people write down their nominally secret passwords on Post-Its, organizations will re-open their databases to competent administrators and entry-level analysts, because they will have to if they want to make use of the information.

Bureaucracies are permanently vulnerable to a revolt of the clerks. The increased value of digital data comes almost entirely from its improved shareability, and if data is more shareable, there is a greater risk that it will be shared. In a digital world, it no longer takes a senior figure like Mark Felt to leak; it can be anyone who has access to the data. For all Snowden’s genius, he operated far from the levers of power within his organization.

What Snowden (and Manning) show is that in large bureaucracies, the scarcest resource is not access to data, but individual bravery. Brave sources are rare but not vanishingly so; a brave source can accomplish the delivery of information on a scale unimaginable even a decade ago.

One curiosity of the half-millennium since Gutenberg, and especially of those hundred years in which the telegraph, photograph, phonograph, telephone, cinema, radio, and television all appeared, is that for all the innovation, media remained relentlessly national, constrained by local economics and politics.

For physical media—books and newspapers, letters and photographs—international tariffs made much border crossing prohibitively expensive. The cost of building out the infrastructure for the telegraph and later the telephone had the same effect. Even radio and TV, transported as pure energy, first appeared when broadcast engineering was barely adequate to cover a whole city, much less cross national boundaries. Even border-spanning news organizations such as the BBC had to set themselves up country by country.

Through the end of the 20th century, leaks of any importance would be given to, and published by, the press of the nation they concerned. Profumo was reported in England, Watergate in the US, and so on. Even as entertainment became more global, the news (especially political news) remained nationally sourced, nationally published, and nationally consumed.

Here, too, there are historical precedents before Snowden. It is no exaggeration to say that the current pope got his mitre in part because of The Boston Globe’s coverage of child sexual abuse by priests. The Globe published its series on the horror of Father John Geoghan’s crimes in 2002, just far enough into the internet’s existence for the story to spread outside the US, sparking international scrutiny. Similarly, The Guardian’s correspondent in South Africa told me later in that decade that he had regarded his job as reporting on South Africa to the UK, but had recently discovered that his South African audience was now larger than his British one. The Guardian website had become a platform that allowed South Africans to read about themselves.

Those were 21st-century equivalents of the first English bibles being printed in Antwerp: a way of placing a single publisher out of the reach of the target nation’s government. What’s different today is the “multiple publishers” strategy that Assange improvised and Snowden extended, akin to insisting that every synagogue have two Torahs or every database store information in multiple locations. Having more than one copy of the leaked data and more than one publication working on the story makes the leak more effective.

After Manning, it was easy to believe that organizations like Wikileaks were the hinge on which any such leak would depend. In the aftermath of the State Department leak, Assange rather than Manning was presented as the central figure, not least because he was charismatic, brilliant, and odd—catnip for the press. Given his outsized presence, it was easy to believe that there had to be some organization between the leaker and the press to make any system of international distribution work.

After Snowden, we see how much power now lies with the leaker. Snowden demonstrated that the principal value Wikileaks had provided was not in receiving the source materials, but in coordinating a multi-national network of publishers. Snowden himself took on this function, contacting Laura Poitras and Glenn Greenwald directly.

The potential for a global news network has existed for a few decades, but its practical implementation is unfolding in this one. This normalization of trans-national reporting networks reduces the risk of what engineers call a “single point of failure.” As we saw with Bill Keller’s craven decision not to publish James Risen’s work on the NSA in 2004, neither the importance of a piece of political news nor its existence as a scoop is enough to guarantee that it will actually see the light of day. The global part is driven by the need for leakers to move their materials outside national jurisdictions. The network part is driven by the advantages of having more than one organization with a stake in publication.

The geographic spread of the information means that there is no one legal regime in which injunctions on publication can be served, while the balance of competition and collaboration between organizations removes the risk of an editor unilaterally killing newsworthy coverage. Now and for the foreseeable future, the likelihood that a leak will appear in a single publication, in the country in which it is most relevant, will be in inverse proportion to the leak’s importance.

These two changes—the heightened leverage of sources and the normalization of trans-national news networks—are threatening even to democratic states with constitutional protections for the press (whether de jure, as in the US, or de facto, as in the UK). Those governments always had significant extra-legal mechanisms for controlling leaks at their disposal, but empowered sources and transnational networks threaten those mechanisms.

This containment of journalistic outlets inside national borders resembled a version of the Prisoner’s Dilemma, a social science thought-experiment in which each of two people is given a strong incentive to pursue significant short-term gain at the other’s expense. At the same time, each participant has a weaker but longer-lasting incentive to create small but mutual, longer-term value. The key to the repeated Prisoner’s Dilemma is what Robert Axelrod, the theorist of the iterated game, calls “The Shadow of the Future.” The shadow of the future is what keeps people cooperating over the long term—in friendships, businesses, marriages, and other relationships—despite the temptations of short-term defection of all sorts.
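To make the logic concrete, here is a minimal sketch with illustrative payoff numbers (not drawn from Axelrod’s own work): a player weighs one large immediate gain from defection against a steady stream of smaller gains from continued cooperation.

```python
# Toy repeated Prisoner's Dilemma: when does the "shadow of the future"
# make steady cooperation worth more than a one-time defection?
# Payoff values are illustrative, not taken from Axelrod's tournaments.
T, R, P = 5, 3, 1  # Temptation (defecting on a cooperator), Reward (mutual cooperation), Punishment (mutual defection)

def discounted_total(first, later, weight, rounds=1000):
    """Payoff of `first` now plus `later` in every later round, discounted by `weight` per round."""
    return first + sum(later * weight ** t for t in range(1, rounds))

for weight in (0.3, 0.6, 0.9):  # how heavily each side counts future rounds
    cooperate = discounted_total(R, R, weight)  # cooperate in every round
    defect = discounted_total(T, P, weight)     # grab T once, then mutual defection forever
    print(f"future weight {weight}: cooperate={cooperate:.1f}, "
          f"defect={defect:.1f}, cooperation wins: {cooperate > defect}")
```

When the future carries little weight, defection wins; as the shadow lengthens, cooperation does.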

News outlets and governments exist in a version of the Prisoner’s Dilemma. Publications have a short-term incentive to publish everything they know, but a long-term incentive to retain access to sources inside the government. Governments have a short-term incentive to prevent news outlets from discovering or publishing anything, but a long-term incentive to be able to bargain for softening, delaying, or killing the stories they really don’t want to see in public (as happened with Keller).

So long as both institutions have a long time horizon, neither side gets all of what it wants, but neither side suffers the worst of what it fears, and the relationship bumps along, year after year. (There have been a few counter-examples: I.F. Stone did all his work for his weekly newsletter by researching government data, never interviewing politicians or civil servants. He reasoned that the quid pro quo of increased access but reduced ability to publish would end up creating more restrictions than it was worth.)

The shadow of the future has meant that even in nations with significant legal protections for free speech, the press’ behavior is considerably constrained by mutual long-term bargains with the government. Empowered leakers and transnational publication networks disrupt this relationship. A leaker with a single issue—the world should see what the State Department or the NSA is doing, to take the two obvious examples—has no regard for the shadow of the future, while publications outside the US will not be constrained by legal challenges, threatened loss of insider access, or appeals to patriotism.

There is one final pattern that the Snowden leaks make visible. In the middle of the 20th century, mainstream news both relied on and produced cultural consensus. With the erosion of the belief that mainstream media speaks to and for the general public in an unbiased way, the presumed lack of objectivity of any given news organization has become a central concern. Alongside this change, however, we are witnessing the spread of a new form of objective reporting: reporting done by objects.

There are, of course, precedents for object-based reporting: tape-recorded conversations in Nixon’s White House ended his presidency, as his foul-mouthed, petty vindictiveness became obvious to all. The heroic work of The Washington Post is the stuff of journalistic lore, but the mechanical nature of the tape recorders actually made them the most trusted reporters on the story.

As the quality and range of reporting by objects has increased, it has had the curious effect of making the partisan nature of both reporters and publications a less serious issue. If Mother Jones, predictably liberal, had only been able to report Mitt Romney’s remarks about the 47 percent because a bartender heard and repeated them, the story would have circulated among the magazine’s left-leaning readers, but no farther (as with most stories in that publication). That bartender recorded the conversation, however, and the fact of the recording meant Mother Jones’ reputation didn’t become a serious point of contention. Because people only had to trust the recording, not the publication, the veracity of the remarks was never seriously challenged.

This pattern of objective recording trumping partisan reputation is relatively new. Indeed, in the 47 percent story, otherwise sophisticated political observers like Jonathan Chait predicted that Romney’s remarks would have little real effect, because they didn’t understand that the existence of a recording simply neutralized much of the “out of context” and “he said, she said” posturing that usually follows. Mother Jones no longer had to be mainstream to create a mainstream story, provided its accuracy was vouched for by the bartender’s camera.

In Snowden’s case, many of the early revelations about the NSA, and especially the wholesale copying of data flowing through various telecom networks, had already been reported, but that reporting had surprisingly little effect. The facts of the matter weren’t enough to alter the public conversation. What did have an effect was seeing the documents themselves.

All inter-office PowerPoint decks are bad, but no one does them as poorly as the federal government. The slides describing the Prism program were unfakeably ugly, visibly made by insiders talking to insiders. As with Romney’s remark about the 47 percent, the NSA never made a serious attempt to deny the accuracy of the leak or cast aspersions on the source, the reporters, or the publications.

Like the Nixon tapes and the Romney video, the existence of the Snowden documents also gave Glenn Greenwald, one of the most liberal journalists working today, a bulwark against charges of partisan fabrication. Indeed, he didn’t just publish his work in The Guardian, a liberal UK-based paper; he took the data with him to a startup, The Intercept, believing (correctly) that the documents themselves would act as a kind of portable and surrogate reputation, disarming attempts by the government or partisans elsewhere to deny the accuracy of present or future stories generated from those documents.

In past leaks—the Pentagon Papers, Watergate—it took the combined force of leaked information and a mainstream publication to get the public’s attention, and mainstream publications were, almost by definition, the publications most invested in the shadow of the future. Meanwhile, more partisan publications of the 20th century were regarded with suspicion; even accurate reporting that appeared in them rarely went beyond niche audiences. After Snowden, the world’s governments are often denied even this defense. This creates a novel set of actors: an international partisan press that will be trusted by the broad public, so long as it traffics in documents that announce their own authenticity.

There will be more Snowden-style leaks, because the number of people with access to vital information has grown enormously and cannot easily be reduced. Even one-in-a-million odds of a leak start to look likely if a million people have access, as was the case with the State Department’s cables. So what should journalists and publications do to maximize their ability to report newsworthy stories and minimize government interference? Three broad skills are required.
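The arithmetic behind that claim is easy to check. A minimal sketch, where both the one-in-a-million figure and the head count are the essay’s hypotheticals rather than measured values:

```python
# Chance that at least one of n insiders leaks, if each has an independent
# per-person probability p of doing so. Both numbers are illustrative.
p = 1e-6        # hypothetical one-in-a-million chance per person
n = 1_000_000   # stand-in for "access held by an enormous number of people"
at_least_one_leak = 1 - (1 - p) ** n
print(f"{at_least_one_leak:.0%}")  # roughly 63%
```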

First and most importantly, reporters have to get good at encrypted communication. (It would be useful if news organizations began encrypting even routine communication, to avoid signaling to the governments they cover when something particularly important is happening, and to provide cover to sensitive sources.) Encryption is not an IT function; individual reporters have to become comfortable sending and receiving encrypted email, at a minimum. And, as was the case with both Manning and Snowden, it’s important to recognize—and to get the source to recognize—that encryption is no guarantee that a source won’t eventually be identified. It is a tool for buying time, not guaranteeing anonymity.
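In practice this usually means GPG/PGP keys for email. Purely as an illustration of the underlying public-key pattern, here is a minimal sketch using the PyNaCl library; the message and key handling are invented for the example and do not describe any newsroom’s actual setup.

```python
# Minimal public-key encryption sketch using PyNaCl (libsodium bindings).
# Illustrative only: real workflows add key verification, GPG/PGP tooling,
# and protection of metadata on top of this.
from nacl.public import PrivateKey, SealedBox

# The reporter generates a keypair once and publishes the public half.
reporter_key = PrivateKey.generate()
reporter_public = reporter_key.public_key

# A source encrypts to the reporter's public key; a SealedBox does not
# identify the sender.
message = b"Invented example message from a source."
ciphertext = SealedBox(reporter_public).encrypt(message)

# Only the holder of the matching private key can read the result.
assert SealedBox(reporter_key).decrypt(ciphertext) == message
```

Even a scheme like this protects only the content of a message, not the fact that two parties communicated, which is one reason encryption buys time rather than guaranteeing anonymity.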

Second, journalists and institutions in contact with leakers need to have a plan for involving other journalists or institutions located in a different jurisdiction. While the leaks that get the most attention are national in scale, we can expect additional leaks from inside businesses and local governments. It may be valuable to have a New Jersey newspaper holding vital documents about a sheriff in Colorado, to make sure the Colorado paper can’t be successfully pressured to withhold them. (This “doomsday switch” scenario seems to have been used by John McAfee, in his fight with the government of Belize, an indication that the pattern extends beyond journalism.)
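A minimal sketch of that arrangement, with an invented placeholder archive and invented file names standing in for partner outlets: encrypt the material once, give the ciphertext to partners in other jurisdictions, and hold the key separately, so that releasing the key later is all any holder needs in order to publish.

```python
# Sketch of the "copies in multiple jurisdictions" pattern: partners hold
# ciphertext they cannot read until the key is released. The archive and
# file names are placeholders, not a real workflow.
import nacl.secret
import nacl.utils

documents = b"...placeholder for the leaked archive..."

# Encrypt once with a random symmetric key.
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
ciphertext = nacl.secret.SecretBox(key).encrypt(documents)

# Hand identical ciphertext to partners in different jurisdictions; the key
# stays with the originating newsroom or a trusted third party.
for path in ("partner_newjersey.bin", "partner_london.bin", "partner_berlin.bin"):
    with open(path, "wb") as f:
        f.write(ciphertext)

# Releasing `key` later lets any partner decrypt and publish:
# nacl.secret.SecretBox(key).decrypt(ciphertext)
```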

And third, both journalists and publications should figure out to whom they might be useful as a third-party recipient of some other journalist’s or publication’s secrets. In moments of crisis (and important leaks tend to precipitate crises), those in need of backup will turn to people they already trust. If you are a journalist, editor, or publisher, ask yourself which other publications, anywhere in the world, would turn to you if they needed backup.

These leaks are far more threatening to secretive organizations when perpetrated by clerks instead of chiefs and distributed outside the bounds of local jurisdiction; they are also harder to question or deny. We are already seeing the world’s democracies behave like autocratic governments in the face of this threat; the Obama administration has become the greatest enemy of press freedom in a generation (a judgment recently made by James Risen, the man whose NSA story Bill Keller quashed).

Leaks will still be relatively rare. But because they can happen at large scale, across transnational networks, and provide documents the public finds trustworthy, they allow publications some relief from extra-legal constraints on publishing material in the public interest.

Brave sources are going to require brave journalists and brave publications. They are also going to require lots of technical expertise on encryption among reporters and lots of cooperation among sometime competitors. The job of publications is to air information of public concern, and that is increasingly going to mean taking steps to ensure that no one government can prevent publication. Nothing says “We won’t back down” like burning your boats on the beach.

Clay Shirky has a joint appointment at New York University, as a Distinguished Writer in Residence at the Arthur L. Carter Journalism Institute and as an assistant arts professor in the Interactive Telecommunications Program. He blogs at shirky.com/weblog. This story was published in the March/April 2015 issue of CJR with the headline, “Revolt of the clerks.”