Atlantic, The

What If Cameras Stopped Telling the Truth?

Cheap smartphones with cameras have brought the power to take documentary evidence to just about anyone, and the credibility of phone-shot video has held up in court and in the news. But a patent awarded to Apple in June hints at a future where invisible signals could alter the images that smartphone cameras capture -- or even disable smartphone cameras entirely.

Apple filed for the patent in 2011, proposing a smartphone camera that could respond to data streams encoded in invisible infrared signals. The signals could display additional information on the phone’s screen: If a user points his or her camera at a museum exhibit, for example, a transmitter placed nearby could tell the phone to show information about the object in the viewfinder. A different type of data stream, however, could prevent the phone from recording at all. Apple’s patent also proposes using infrared rays to force iPhone cameras to shut off at concerts, where video, photo, and audio recording is often prohibited.

Yes, smartphones are the scourge of the modern concert, but using remote camera-blocking technology to curb their use opens up a dangerous potential for abuse. What happens if someone else can use technology to enforce limits on how you use your smartphone camera, or to alter the images that you capture without your consent? In public spaces in the US, that would be illegal: Courts have generally ruled that the First Amendment protects people’s right to take pictures when they’re in a public area like a park, plaza, or street. Private spaces are a different story entirely.
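To make the mechanism concrete, here is a minimal sketch of the behavior the patent describes: a camera pipeline that checks each frame for an infrared-encoded command and either overlays information or halts recording. Everything here (the IRCommand and Frame names, the payload fields) is invented for illustration and corresponds to no real Apple API.

```python
# Hypothetical sketch of the patent's two command types: one that adds
# an informational overlay, one that remotely disables recording.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class IRCommand(Enum):
    SHOW_INFO = auto()       # e.g., a museum transmitter sends exhibit info
    DISABLE_RECORD = auto()  # e.g., a concert venue blocks capture

@dataclass
class Frame:
    pixels: bytes
    ir_payload: Optional[dict]  # decoded infrared data stream, if any

def display_overlay(text: str) -> None:
    print(f"[viewfinder] {text}")

def process_frame(frame: Frame, recording: bool) -> bool:
    """Return whether recording may continue after this frame."""
    if frame.ir_payload is None:
        return recording
    cmd = frame.ir_payload.get("command")
    if cmd is IRCommand.SHOW_INFO:
        # Overlay transmitter-supplied text on the viewfinder.
        display_overlay(frame.ir_payload.get("text", ""))
        return recording
    if cmd is IRCommand.DISABLE_RECORD:
        # The contested case: a remote signal forces capture off.
        return False
    return recording

# A frame carrying a venue's "stop recording" signal:
frame = Frame(pixels=b"", ir_payload={"command": IRCommand.DISABLE_RECORD})
assert process_frame(frame, recording=True) is False
```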

Foreign Hackers Target Thousands of Gmail Users Every Month

Since 2012, Google has been notifying Gmail customers when they come under attack from hackers who may be working for foreign governments. The company has long remained vague about the way it detects and identifies these hackers -- “we can’t reveal the tip-off,” the company tells users -- and about the number of notifications it routinely sends. Until now.

When these warnings were introduced, they appeared as thin red bars tacked to the top of users’ inboxes. But just a few months ago, Google redesigned the notifications to be considerably more in-your-face: Now, they take up the entire screen, announcing themselves with an angry red flag. “Government-backed hackers may be trying to steal your password,” the alert reads, advising users to enable two-factor authentication.

The new alert says that fewer than one in a thousand Gmail users are targeted by foreign hackers -- but for a product with more than a billion active users, that could still be a really big number. (0.1 percent of 1 billion is 1 million.) On July 11, Google offered its most precise figure yet: Senior Vice President Diane Greene said the company notifies 4,000 users each month of state-sponsored cyberattacks.

The New Editors of the Internet

[Commentary] In a small number of Silicon Valley conference rooms, decisions are being made about what people should and shouldn't see online -- without the accountability or culture that has long accompanied that responsibility.

This is a pivotal time for our communications ecosystem. As we cede control to governments and corporations -- and as they take it away from us -- we are risking a most fundamental liberty, the ability to freely speak and assemble. Let’s not trade our freedom for convenience.

[Gillmor teaches digital-media literacy and entrepreneurship at Arizona State University]

The Internet's Original Sin

[Commentary] I have come to believe that advertising is the original sin of the web. An ad-supported web has at least four downsides as a default business model.

First, while advertising without surveillance is possible, it’s hard to imagine online advertising without surveillance.

Second, not only does advertising lead to surveillance through the “investor storytime” mechanism, it creates incentives to produce and share content that generates pageviews and mouse clicks, but little thoughtful engagement.

Third, the advertising model tends to centralize the web. Advertisers are desperate to reach large audiences as the reach of any individual channel shrinks.

Finally, even attempts to mitigate advertising’s downsides have consequences. To compensate us for our experience of continual surveillance, many websites promise personalization of content to match our interests and tastes. By giving platforms information on our interests, we are, of course, generating more ad targeting information.

[Zuckerman is director of the Center for Civic Media at MIT and principal research scientist at MIT’s Media Lab]

Why Tech Still Hasn't Solved Education's Problems

Remember MOOCs, or massive open online courses? Now, as another school year lurches into gear, the companies behind them have a meek record to show for themselves.

Udacity tried replacing intro courses at San Jose State; it ended in failure. So why has the promised boom in educational technology failed to appear -- and why was the technology that did appear not very good?

Paul Franz, a language arts teacher in California, suggests that education is too complex to tackle by tech alone.

What Good Is All This Tech Diversity Data, Anyway?

[Commentary] The drumbeat of diversity data coming from tech companies like Google, LinkedIn, Facebook, and Twitter has been anticlimactic, not least because it shows what most people already expected: that leaders in technology are overwhelmingly hiring white men.

All the companies say they need to do more. Few are willing to talk about the issue beyond what they've released in charts and blog posts.

As important as it is to get diversity numbers on the record, if what we're interested in is changes to that record, it's worth asking: Does releasing the numbers alone catalyze change? We have some evidence on this question. The answer is no.

The Latest Snowden Leak Is Devastating to NSA Defenders

[Commentary] Consider the latest leak sourced to Edward Snowden from the perspective of his detractors. The National Security Agency's defenders would have us believe that Snowden is a thief and a criminal at best, and perhaps a traitorous Russian spy.

In their telling, the NSA carries out its mission lawfully, honorably, and without unduly compromising the privacy of innocents. For that reason, they regard Snowden's actions as a wrongheaded slur campaign premised on lies and exaggerations.

Snowden defenders see these leaked files as necessary to prove that the NSA does, in fact, massively violate the private lives of American citizens by collecting and storing content -- not "just" metadata -- when they communicate digitally. They’ll point out that Snowden turned these files over to journalists who promised to protect the privacy of affected individuals and followed through on that promise.

The NSA collects and stores the full content of extremely sensitive photographs, emails, chat transcripts, and other documents belonging to Americans, itself a violation of the Constitution -- but even if you disagree that it’s illegal, there’s no disputing the fact that the NSA has been proven incapable of safeguarding that data.

The danger is not that the data could leak at some point in the future. It has already been taken and given to reporters. The necessary reform is clear: Unable to safeguard this sensitive data, the NSA shouldn’t be allowed to collect and store it.

The Military Doesn't Want You to Quit Facebook and Twitter

Cornell University said the Facebook emotion study received no external funding, but it turns out that the university is currently receiving Defense Department money for some extremely similar-sounding research -- the analysis of social network posts for “sentiment,” i.e. how people are feeling, in the hopes of identifying social “tipping points.”

It’s the sort of work that the US military has been funding for years, most famously via the Open Source Indicators program, an Intelligence Advanced Research Projects Activity (IARPA) effort that looked at Twitter to predict social unrest.

Defense One recently caught up with Lt. Gen. Michael Flynn, the director of the Defense Intelligence Agency, who said the US military has “completely revamped” the way it collects intelligence around large, openly available data sources -- especially social media like Facebook.

“The information that we’re able to extract from social media -- it’s giving us insights that frankly we never had before,” he said. In other words, the head of one of the biggest US military intelligence agencies needs you on Facebook.

Former NSA Chief Clashes With ACLU Head In Debate

Is the National Security Agency keeping us safe? That was the question that MSNBC used to frame a debate at the Aspen Ideas Festival, which The Atlantic co-hosts with The Aspen Institute.

Speaking in defense of the signals intelligence agency were General Keith Alexander, former head of the National Security Agency; former Congresswoman Jane Harman; and former Solicitor General Neal Katyal.

Anthony Romero of the ACLU, academic Jeffrey Rosen, and former Congressman Mickey Edwards acknowledged the need for the NSA, but argued that it transgresses against our rights with unnecessary programs that violate the Constitution. The two teams also spent time arguing about Edward Snowden and whether his leaks were justified. By the end of the 90-minute session, the civil-libertarian team had handily beaten the national-security-state team in audience voting.

Romero was at his strongest when pressing the other team to explain why the American people shouldn’t have a right to privacy in their metadata, given how revealing it can be. He rejected the notion that the phone dragnet is permissible because, although the NSA keeps records of virtually every phone call made, it only searches that database under a narrow set of conditions.

The Test We Can -- and Should -- Run on Facebook

[Commentary] For a widely criticized study, the Facebook emotional contagion experiment managed to make at least one significant contribution: It has triggered the most far-reaching debate we’ve seen on the ethics of large-scale user experimentation -- not just in academic research, but in the technology sector at large.

Perhaps we could nudge that process with Silicon Valley’s preferred tool: an experiment. But this time, it would be an experiment we propose to run on Facebook and similar platforms. Rather than assuming Terms of Service are equivalent to informed consent, platforms should offer opt-in settings where users can choose to join experimental panels. If they don’t opt in, they aren’t forced to participate.

This could be similar to the array of privacy settings that already exist on these platforms. Platforms could even offer more granular options, to specify what kinds of research a user is prepared to participate in, from design and usability studies through to psychological and behavioral experiments.
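A minimal sketch of what such a setting could look like, assuming a simple per-category opt-in model; the category names and the ResearchConsent structure are invented for illustration and correspond to no real platform’s settings:

```python
# Hypothetical per-user research consent settings, defaulting to "no".
from dataclasses import dataclass

@dataclass
class ResearchConsent:
    design_usability: bool = False          # e.g., A/B tests of layouts
    psychological_behavioral: bool = False  # e.g., emotion studies

    def permits(self, study_kind: str) -> bool:
        # A study may include this user only if they opted in to its kind.
        return getattr(self, study_kind, False)

# Defaults mean no participation; users must opt in explicitly.
user = ResearchConsent()
assert not user.permits("psychological_behavioral")

user.design_usability = True
assert user.permits("design_usability")
```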

Of course, there is no easy technological solution to complex ethical issues, but this would be a significant gesture on the part of platforms toward less deception, more ethical research, and more agency for users.

[Crawford is a visiting professor at MIT’s Center for Civic Media, a principal researcher at Microsoft Research, and a senior fellow at NYU’s Information Law Institute]