Net Neutrality is No More

Today, the FCC voted along party lines to repeal what is known as “net neutrality”. The UK media company The Independent has a short, easy-to-read article on what this means. Here is the most important part:

“Without Net Neutrality, cable and phone companies could carve the internet into fast and slow lanes,” warns Save the Internet, a coalition of organisations that have been calling for the preservation of the rules.

An ISP could slow down its competitors’ content or block political opinions it disagreed with. ISPs could charge extra fees to the few content companies that could afford to pay for preferential treatment – relegating everyone else to a slower tier of service.

This would destroy the open internet.

An ISP is an Internet Service Provider, such as Comcast or AT&T. Now that net neutrality is gone, Comcast (for example) could charge you more to stream music from iTunes than from Spotify. They could also charge Apple a certain price to be hosted at all, and block them if they don’t pay (much as AT&T refused to carry WTHR earlier this year). They could also make it harder to access a news site like CNN or Fox News, either to make more money, to promote a certain ideology, or because the two CEOs exchanged words at Mar-A-Lago.

Here is the full article:

Net neutrality repeal: What is it, and why will it make the internet much worse?

After Thursday, Comcast Will Pick Your Web Sites For You

The Federal Communications Commission will vote Thursday to eliminate regulations that stop an Internet service provider, such as Comcast, from slowing down your connection when you visit web sites or use apps that they don’t prefer. For example, if Comcast makes a deal with Apple to push iTunes, they can then throttle your data–that is, slow down your connection–to other music sites, such as Spotify. It would also be legal for someone like Comcast to give a web site like Fox News preferential treatment, so that if you accessed (say) CNN or the New York Times, you would have a slower connection.
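If you want to see how simple the fast-lane idea really is, here is a toy sketch in Python. To be clear, nothing below reflects how any actual ISP configures its network: the site names, speeds, and lanes are all invented for illustration.

```python
# Toy model of a post-net-neutrality ISP: each destination is assigned to a
# lane based on business deals. Every name and number here is made up.

PAID_FAST_LANE = {"itunes.example.com"}   # hypothetical: paid for priority
BLOCKED = {"rival-news.example.com"}      # hypothetical: refused to pay
FAST_MBPS = 100.0                         # full speed for paying partners
DEFAULT_MBPS = 5.0                        # the "slow lane" for everyone else

def effective_bandwidth(destination: str) -> float:
    """Return the speed (in Mbps) a subscriber would see for a given site."""
    if destination in BLOCKED:
        return 0.0                        # blocked outright
    if destination in PAID_FAST_LANE:
        return FAST_MBPS
    return DEFAULT_MBPS

for site in ("itunes.example.com", "spotify.example.com",
             "rival-news.example.com"):
    print(f"{site}: {effective_bandwidth(site)} Mbps")
```

The point of the sketch is only this: without the rules, which lane a site lands in becomes a business decision, not a technical one.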

Sweden has experimented with not enforcing net neutrality; you can read about its experience here:

Net Neutrality’s Holes in Europe May Offer Peek at Future in U.S.

Do You Trust This Face?

Back in my PhD program, I attended a conference on the role of trust in contemporary economies. This was in 2006: the subprime mortgage crisis (and the subsequent Great Recession) hadn’t yet happened. Enron, however, was still newsworthy, with former CEO Jeffrey Skilling just sentenced to 24 years in prison for overseeing a decade of fraud. A few years earlier, Skilling was quoted as saying his favorite book was The Selfish Gene, by the eminent biologist Richard Dawkins. His interpretation of Dawkins’s theory was juvenile, at best, but the implication was clear: in business, trust is a fool’s game.

At this conference, several economists–a profession not noted for displays of emotion–were incensed. They discussed how trust was vital when some actors have private knowledge–for example, labor markets, where potential workers know more about their skills than future employers do–and how Adam Smith’s Theory of Moral Sentiments (the other Smith classic) was in fact compatible with the invisible hand of Smith’s The Wealth of Nations. Mostly, though, they discussed the confounding fact that people trust each other far more often than they should, even when betrayal is, to use the economic term, in someone’s rational self-interest.

Think about it: how many times a day do you have the opportunity to gain some small benefit by betraying a trust? You might take longer breaks than allowed, or watch videos of adorable cats instead of working. You might steal someone else’s leftover Thai food (which does look delicious). You might back into someone’s car and not leave a note. For those with Machiavellian leanings, the gains can be substantial: how many times would simply not forwarding an e-mail put a coworker (and potential rival) in a difficult situation? The risk is minimal: at worst, you can say you never received it, knowing that because most people don’t expect that level of betrayal, you won’t be found out.

The point is not that people don’t do these things–they certainly do–but that most people don’t do them most of the time. That’s why emerging technologies that promise to ensure the trustworthiness of the people you interact with–your accountant, your mechanic, your babysitter–are so fascinating. These technologies mine publicly available data to build a “trust profile” of someone you might consider bringing into your life: information on criminal convictions and lawsuits, of course, but also social media posts and forum contributions. We all know not to livestream our joyride in the company Lexus–although again, people still do–but most of us don’t consider whether our postings about Donald Trump, global warming, or the New England Patriots might influence whether someone buys our old PlayStation, rents to us through Airbnb, or lets their children accept our Halloween candy.
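For the curious, here is a deliberately simplistic sketch of how such a “trust profile” might roll signals up into a single score. The signals, weights, and arithmetic are all my own invention; the real services use far more data and proprietary models.

```python
# Toy "trust profile": sum the weights of whichever public signals are
# present, then clip the result to [-1, 1]. All weights are invented.

SIGNAL_WEIGHTS = {
    "criminal_conviction": -0.6,
    "civil_lawsuit": -0.2,
    "verified_identity": 0.3,
    "positive_reviews": 0.4,
    "hostile_social_posts": -0.3,
}

def trust_score(signals: dict) -> float:
    """Combine the signals found for a person into one score in [-1, 1]."""
    raw = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return round(max(-1.0, min(1.0, raw)), 2)

applicant = {"verified_identity": True, "positive_reviews": True,
             "hostile_social_posts": True}
print(trust_score(applicant))  # 0.4: a good track record outweighs angry posts
```

Notice what the toy version makes obvious: someone, somewhere, chose those weights, and your angry post about the Patriots sits in the same ledger as a court record.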

The question of the hour is not whether people will adopt these technologies, but how they will respond when the algorithm contradicts their own intuition. Humans have evolved a pretty good system for detecting liars, unless the person lying is either very well practiced or so deficient in empathy that he can mimic the subtle behavioral cues we unconsciously rely on. That’s why we want to meet someone face-to-face before deciding to trust them: in just a few seconds, we can decide whether the relationship is worth the risk. And we’re usually right–but not always. As the algorithms get better, they’ll be more likely to be right than we are–but not always. What then?

Who do you trust? How data is helping us decide

Security Robot Has Comic Mishap, and the Internet Responds

Earlier in 2017, a security robot patrolling a Washington, D.C. technology park accidentally drove itself into a man-made pond. The robot–a K5 Autonomous Data Machine, manufactured by Knightscope–was patrolling at night and most likely failed to distinguish the steps leading into the retention pond from the surrounding walkway (a mistake I have also made on occasion).

The story quickly went viral, with humorous articles declaring the unfortunate robot had decided to put an end to its monotonous job once and for all: “Suicidal robot security guard drowns itself by driving into pond,” according to the UK’s Independent, while CNN.com reported the robot was in “critical condition” after nearly drowning. Other posters claimed that “Steps are our best defense against the Robopocalypse”, and that “…today was a win for the humans. robots: 0 humans: 1”.

Clearly, the rather pathetic image of the K5 floating face-down in the water (as near as it can be said to have a “face”) struck a deep chord with the Internet commentariat. And no wonder: research shows that, because modern technology is so sophisticated, we unconsciously relate to machines as if they were, in fact, people. When a machine doesn’t meet our expectations–say, by failing to print a document–we respond as if a cranky coworker is refusing to do his job. It can feel maddening, and–as with our coworkers–may escalate to the type of violence that researchers have named “computer rage.”

However, in this situation, most people weren’t angry at the malfunctioning robot: they were amused, of course, but also compassionate. “It’s ok security robot. It’s a stressful job, we’ve all been there,” writes SparkleOps. Workers at the technology park arranged a makeshift memorial, like those placed on the sidewalk following a tragic accident. The K5 Autonomous Data Machine may not have been alive, but its “passing” nevertheless evoked our collective need to honor, remember, and mourn the fallen. Self-aware machines are still only science fiction, but as we incorporate the technology we do have into all aspects of our lives, perhaps it’s time to consider them, if not alive, then at least fellow travelers.

Security robot ‘in critical condition’ after nearly drowning on job