Drawing The Line Against Data Abuse In Open Societies

Credits

Nathan Gardels is the editor-in-chief of Noema Magazine.

Algorithms and surveillance based on electronic data are penetrating all aspects of modern society, from medical diagnosis to the monitoring of domestic violence offenders to the tracking of immigrants awaiting asylum hearings. As with digital technology generally, the positive intent carries with it the collateral damage of misinformation and invasions of privacy. Where to draw the red lines that separate the benefits of these fast-arriving innovations from their abuse is becoming a central issue for all open societies.

In The WorldPost this week, we look at specific cases that illustrate the challenge of how and where to draw those lines.

Adam Hofmann discusses how algorithmic screening tools that are used to identify depression or prevent self-harm and suicide can misdiagnose mental illness — a label that could then stick with the person. He writes about our online behavior: “A series of emojis, words, actions or even inactions can communicate how you feel at a given moment and when collected over time, comprise your ‘socionome’ — a digital catalogue of your mental health that is similar to how your genome can provide a picture of your physical health.”

“As a physician,” he continues, “I see the risks and consequences of algorithmic overreach in mental health falling into two camps: there are the risks of ‘false positives,’ of labeling healthy people as being mentally ill, and the risks of ‘false negatives,’ of missing cases of mental illness that actively require our attention.”
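To see why Hofmann worries about false positives in particular, it helps to run the numbers. The short Python sketch below applies Bayes’ rule to a population-wide screen; the prevalence, sensitivity and specificity figures are hypothetical, chosen only to illustrate the arithmetic, not drawn from his piece. Even a fairly accurate algorithm, pointed at a population where the condition is rare, flags far more healthy people than ill ones.

```python
# Minimal sketch: positive predictive value of a population-wide
# mental health screen. All numbers are illustrative assumptions.

def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              specificity: float) -> float:
    """Probability that a flagged user is actually ill (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose 2% of users are depressed and the algorithm is 90% sensitive
# and 90% specific -- optimistic assumptions for social media data.
ppv = positive_predictive_value(prevalence=0.02,
                                sensitivity=0.90,
                                specificity=0.90)
print(f"Share of flagged users who are actually ill: {ppv:.1%}")  # ~15.5%
```

Under these assumptions, roughly five out of six flagged users would be healthy people wrongly labeled — exactly the “false positives” camp Hofmann describes.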

He concludes: “While the medical application of the socionome holds a great deal of potential, how data science influences medical diagnostics raises thorny issues. Doctors, after all, are not allowed to practice untested diagnostics on patients, never mind on large numbers of unsuspecting social media users, and then go on to share that data with others without their patients’ consent. If social media data is health data, and data scientists are using it to diagnose mental illness, then we must ask ourselves: shouldn’t we hold them to the same privacy and health standards as medical practitioners?”

Criminologists Edna Erez and Peter Ibarra look at court-ordered GPS monitoring of individuals accused of perpetrating domestic violence. “In response to lethal cases of separation assault,” the authors report, “a growing number of state legislatures have passed or are considering passing legislation that mandates or permits the electronic monitoring of domestic violence defendants — most recently with GPS tracking ankle monitors — to ensure they comply with judges’ stay-away, protection or restraining orders. While most defendants comply with such orders, a minority of alleged abusers with a long history of violence regard them as a mere ‘piece of paper.’ Requiring GPS monitoring is hence a way of giving the orders teeth.”
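In practice, giving a stay-away order teeth means drawing a geofence around the protected person or address and raising an alert when the monitored device crosses it. The Python sketch below is a minimal illustration of that logic; the coordinates, the 500-meter radius and the function names are my own assumptions, since commercial monitoring systems are proprietary.

```python
# Minimal geofence sketch: flag a GPS fix that breaches a stay-away radius.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def breaches_exclusion_zone(gps_fix, protected_location, radius_m=500):
    """Return True if the defendant's fix falls inside the stay-away radius."""
    return haversine_m(*gps_fix, *protected_location) < radius_m

# Hypothetical fix roughly 300 meters from a protected address.
if breaches_exclusion_zone((41.8785, -87.6298), (41.8812, -87.6298)):
    print("ALERT: stay-away order breached; notify the monitoring center")
```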

While such monitoring is largely effective and reassuring for victims, the authors note that it has a downside. “Given the perils of surveillance and the loss of privacy that such individuals face,” Erez and Ibarra ask, “should defendants who are accused of a crime, but have yet to be convicted of one, be placed under electronic monitoring?”

In the end, the authors conclude the benefits outweigh the risks to privacy. “Monitoring is no panacea for domestic violence. It does, however, level a playing field that is often stacked against victims, and it can afford them and their families peace of mind, freedom from abuse and harassment, and importantly, the fairer legal process they deserve.”

GPS monitoring of asylum applicants awaiting a hearing is less clear-cut, says the American Civil Liberties Union’s Ruthie Epstein. “We know almost nothing about the ways ICE handles the data from these devices,” she writes, referring to GPS tracking ankle bracelets, “whether it’s what information is collected, how long it’s retained by the government, who has access to that information or for what purposes it’s used.”

Epstein notes that ICE claims it is using ankle monitors as an alternative to detention. “Instead,” she argues, “it appears that it’s using the devices as an alternative form of detention that enables it to expand the total number of people under its supervision. Since 2004, BI Incorporated — a subsidiary of the GEO Group, the second-largest private prison company in the country — has run ICE’s ‘alternatives to detention’ program, which relies on ankle monitors. At its inception, the program received $14 million from Congress. By 2017, its budget had grown to $183 million, an increase of 13 times, which suggests that the use of ankle monitors has also increased.”

Epstein argues there are other alternatives, from bail to “humanitarian parole” to community-based case management services, that could replace the use of GPS monitoring. “Given the significant downsides, ankle monitors should never be used unless no other alternative short of actual detention or jail will ensure court appearance,” she concludes.

The core reality of the digital age is that where there is connectivity, there is surveillance.

Sorting out how this reality benefits society, where it infringes on personal liberty, whether by the state or by private corporations, and how to regulate it is part and parcel of the great technological transformation underway. The best way to weigh the pluses and minuses is to examine actual situations rather than reach for blanket solutions to problems this complex and multifaceted, which extend across so many realms of daily life.

This was produced by The WorldPost, a partnership of the Berggruen Institute and The Washington Post.