Do You Trust This Face?

Back in my PhD program, I attended a conference on the role of trust in contemporary economies. This was in 2006: the subprime mortgage crisis (and the Great Recession that followed) hadn't yet happened. Enron, however, was still newsworthy, with former CEO Jeffrey Skilling just sentenced to 24 years in prison for overseeing a decade of fraud. A few years earlier, Skilling had been quoted as saying his favorite book was The Selfish Gene, by the eminent biologist Richard Dawkins. His interpretation of Dawkins's theory was juvenile, at best, but the implication was clear: in business, trust is a fool's game.

At this conference, several economists (members of a profession not noted for displays of emotion) were incensed. They discussed how trust is vital in markets where some actors have private knowledge, such as labor markets, where potential workers know more about their skills than future employers do. They noted that Adam Smith's The Theory of Moral Sentiments (the other Smith classic) is in fact compatible with the invisible hand of The Wealth of Nations. Mostly, though, they discussed the confounding fact that people trust each other far more often than they should, even when betrayal is, to use the economic term, in someone's rational self-interest.

Think about it: how many times a day do you have the opportunity to gain some small benefit by betraying a trust? You might take longer breaks than allowed, or watch videos of adorable cats instead of working. You might steal someone else's leftover Thai food (which looks delicious). You might back into someone's car and not leave a note. For those with Machiavellian leanings, the gains can be substantial: how many times would simply not forwarding an e-mail put a coworker (and potential rival) in a difficult situation? The risk is minimal: at worst, you can claim you never received it, knowing that most people don't expect that level of betrayal, so you won't be found out.

The point is not that people don't do these things (they certainly do) but that most people don't do them most of the time. That's why emerging technologies that promise to assess the trustworthiness of the people you interact with (your accountant, your mechanic, your babysitter) are so fascinating. These technologies mine publicly available data to build a "trust profile" of someone you might consider bringing into your life: information on criminal convictions and lawsuits, of course, but also social media posts and forum contributions. We all know not to livestream our joyride in the company Lexus (although again, people still do), but most of us don't consider whether our posts about Donald Trump, global warming, or the New England Patriots might influence whether someone buys our old PlayStation, rents to us through Airbnb, or lets their children accept our Halloween candy.
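
To make the idea concrete, here is a minimal sketch of how such a service might fold public signals into a single score. Everything here (the signal names, the weights, the logistic squashing) is invented for illustration; real services keep their scoring methods private.

```python
import math

# Hypothetical sketch of a "trust profile" scorer. Signal names, weights,
# and the scoring scale are invented for illustration only.
SIGNAL_WEIGHTS = {
    "criminal_convictions": -0.50,  # court records
    "civil_lawsuits":       -0.20,  # court records
    "hostile_posts":        -0.15,  # social media and forum activity
    "verified_reviews":      0.10,  # marketplace feedback
    "account_age_years":     0.05,  # longevity as a weak proxy for stability
}

def trust_score(signals: dict) -> float:
    """Weighted sum of public signals, squashed into [0, 1] by a logistic curve."""
    raw = sum(weight * signals.get(name, 0.0)
              for name, weight in SIGNAL_WEIGHTS.items())
    return 1 / (1 + math.exp(-raw))

# A dozen good reviews partially offset one conviction and a few hostile posts.
print(trust_score({
    "criminal_convictions": 1,
    "hostile_posts": 3,
    "verified_reviews": 12,
    "account_age_years": 8,
}))  # ~0.66 on a 0-to-1 scale
```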

The question of the hour is not whether people will adopt these technologies, but how they will respond when the algorithm contradicts their own intuition. Humans have evolved a pretty good system for detecting liars, unless the liar is either very well practiced or so deficient in empathy that they can mimic the subtle behavioral cues we unconsciously rely on. That's why we want to meet someone face-to-face before deciding to trust them: in just a few seconds, we can decide whether the relationship is worth the risk. And we're usually right, but not always. As the algorithms get better, they'll be right more often than we are, but not always. What then?

Who do you trust? How data is helping us decide

The Ethical Minefields of Technology

I wholeheartedly agree with the author that we need to seriously consider the ethical implications of all this new technology, especially when the window in which someone can opt out of adopting it keeps shrinking. Personally, I'm not interested in owning a self-driving car (I actually like the experience of driving), but eventually, I'm not going to have a choice.

https://blogs.scientificamerican.com/observations/the-ethical-minefields-of-technology/

Should We Let the Government Leverage Its Data to Enact Better Policy?

When I was in graduate school, we debated the merits of a centralized data store that policy makers could use to make better decisions; ultimately, we decided the risks to privacy outweighed the benefits.

Data collected by (ethical) businesses is de-identified, typically by assigning each case an arbitrary number. Government data isn't, although as the article below points out, it could be. The bigger concern is that, unlike private organizations, the government can detain, arrest, and even execute people. On the one hand, none of these things happens without due process; on the other, power corrupts, and (what is far more worrying) people make mistakes. Are we willing to accept that?
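
As a sketch of what that de-identification step looks like in practice: give each person a random but stable case number and drop the direct identifiers, keeping only the fields analysts need. The field names below are hypothetical.

```python
import uuid

# Hypothetical sketch of pseudonymization: replace direct identifiers with an
# arbitrary, stable case number. (Real de-identification also has to guard
# against re-identification via quasi-identifiers like ZIP code plus birth date.)
_case_numbers: dict = {}  # real identifier -> arbitrary case number

def deidentify(record: dict) -> dict:
    real_id = record["ssn"]  # hypothetical identifier field
    if real_id not in _case_numbers:
        _case_numbers[real_id] = uuid.uuid4().hex
    return {
        "case_id": _case_numbers[real_id],     # the arbitrary number
        "age_bracket": record["age_bracket"],  # keep only analysis fields
        "outcome": record["outcome"],
    }

print(deidentify({"ssn": "123-45-6789", "age_bracket": "35-44", "outcome": "approved"}))
```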

We might be, if it actually leads to better policy: having worked for the government, I can tell you that we routinely made decisions on what I'll call sparse information. Several times I had to request data from another state agency, and each time we had to draft an agreement specifying precisely what my agency could do with it. And because there's no data standardization across agencies, sometimes after going through all this I still couldn't merge the two data sets.

Which raises another issue: to make this work, each agency would have to use the same data semantics, file structure, and database application. Even in the ideal case, where everyone can agree on a common data dictionary, each agency’s ability to contribute data will be limited by its own architecture. And, as the second link makes clear, things are usually not ideal.
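
Here is a toy example of why those merges fail, and what a common data dictionary buys you. The agency field names and codes below are invented; the point is only that the same facts, encoded differently, can't be joined until both sides map onto shared semantics.

```python
# Hypothetical records for the same person, held by two agencies. The field
# names, formats, and codes are invented to illustrate the problem.
agency_a = {"SSN": "123456789", "DOB": "1980-07-04", "SEX": "F"}
agency_b = {"soc_sec_no": "123-45-6789", "birth_dt": "07/04/1980", "gender": "2"}

# A naive merge on shared field names finds nothing to join on.
print(set(agency_a) & set(agency_b))  # -> set()

# A common data dictionary maps each agency's fields onto shared semantics.
DATA_DICTIONARY = {
    "agency_a": {"SSN": "ssn", "DOB": "dob", "SEX": "sex"},
    "agency_b": {"soc_sec_no": "ssn", "birth_dt": "dob", "gender": "sex"},
}

def normalize(record: dict, agency: str) -> dict:
    out = {DATA_DICTIONARY[agency][field]: value for field, value in record.items()}
    out["ssn"] = out["ssn"].replace("-", "")  # one canonical format
    return out

# With shared names and one canonical SSN format, the records finally join.
a = normalize(agency_a, "agency_a")
b = normalize(agency_b, "agency_b")
print(a["ssn"] == b["ssn"])  # -> True
# (Date formats and sex codes still disagree; every field needs its own mapping.)
```

Notice that even after the field names line up, the date formats and sex codes still disagree: each agency's data is shaped by its own systems, and every one of those differences needs its own mapping.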

Let’s Use Government Data to Make Better Policy

Investigation Reveals a Military Payroll Rife With Glitches