Artificial Intelligence Translates Chicken “Speech”; Foghorn Leghorn Announces Presidential Bid

Researchers at the University of Georgia and the Georgia Institute of Technology have successfully used machine learning algorithms to measure stress levels in small groups of broiler chickens based on their vocalizations. You can read the summary from Scientific American here:

Fowl Language: AI Decodes the Nuances of Chicken Speech

While this sounds like something out of science fiction, it isn’t that surprising: animal vocalizations must be able to communicate information, or else animals wouldn’t use them. Most of the time, the differences in pitch, timbre, and expression are too nuanced for humans to decode (although the article notes that many farmers, after years of experience with their flock, can detect its general mood). A machine learning algorithm, however, can identify the latent structures common to chicken “speech”, and classify vocalizations into groups with similar structures.
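To make the idea concrete, here is a minimal sketch of how unsupervised learning can group vocalizations by their acoustic features. It is not the researchers’ actual pipeline: the “calm” and “stressed” feature values below are simulated, and the clustering is plain off-the-shelf scikit-learn.

```python
# Toy sketch: cluster simulated "vocalization" feature vectors
# (pitch, duration, relative energy) into groups with similar structure.
# Illustrative only -- not the study's actual method or data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulate two latent "call types": calm calls and stressed calls.
calm = rng.normal(loc=[300.0, 0.40, 0.2], scale=[30.0, 0.05, 0.05], size=(100, 3))
stressed = rng.normal(loc=[450.0, 0.25, 0.6], scale=[40.0, 0.05, 0.08], size=(100, 3))
features = np.vstack([calm, stressed])  # columns: pitch (Hz), duration (s), energy

# Standardize the features, then let k-means recover the latent groups.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print("Calls assigned to each cluster:", np.bincount(labels))
```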

Of course, the structures have to actually be there to be found, but as the algorithms continue to improve–and they are improving very rapidly–this type of application will become more and more common. Who knows: in a few years, Alexa may tell you whether your five-year-old really needs a third glass of water.

Do You Trust This Face?

Back in my PhD program, I attended a conference on the role of trust in contemporary economies. This was in 2006: the subprime mortgage crisis (and the Great Recession that followed) hadn’t yet happened. Enron, however, was still newsworthy, with former CEO Jeffrey Skilling just sentenced to 24 years in prison for overseeing a decade of fraud. A few years earlier, Skilling had been quoted as saying his favorite book was The Selfish Gene, by the eminent biologist Richard Dawkins. His interpretation of Dawkins’s theory was juvenile, at best, but the implication was clear: in business, trust is a fool’s game.

At this conference, several economists–a profession not noted for displays of emotion–were incensed. They discussed how trust is vital when some actors have private knowledge–for example, in labor markets, where potential workers know more about their skills than future employers do–and how Adam Smith’s Theory of Moral Sentiments (the other Smith classic) is in fact compatible with the invisible hand of The Wealth of Nations. Mostly, though, they discussed the confounding fact that people trust each other far more often than they should, even when betrayal is, to use the economic term, in someone’s rational self-interest.

Think about it: how many times a day do you have the opportunity to gain some small benefit by betraying a trust? You might take longer breaks than allowed, or watch videos of adorable cats instead of working. You might steal someone else’s leftover Thai food (which looks delicious). You might back into someone’s car and not leave a note. For those with Machiavellian leanings, the gains can be substantial: how many times would simply not forwarding an e-mail put a coworker (and potential rival) in a difficult situation? The risk is minimal: at worst, you can say you never received it, knowing that because most people don’t expect that level of betrayal, you won’t be found out.

The point is not that people don’t do these things–they certainly do–but that most people don’t do them most of the time. That’s why emerging technologies that promise to ensure the trustworthiness of the people you interact with–your accountant, your mechanic, your babysitter–are so fascinating. These technologies mine publicly available data to build a “trust profile” of someone you might consider bringing into your life: information on criminal convictions and lawsuits, of course, but also social media posts and forum contributions. We all know not to livestream our joyride in the company Lexus–although again, people still do–but most of us don’t consider whether our postings about Donald Trump, global warming, or the New England Patriots might influence whether someone buys our old Playstation, rents to us through Airbnb, or lets their children accept our Halloween candy.

The question of the hour is not whether people will adopt these technologies, but how they will respond when the algorithm contradicts their own intuition. Humans have evolved a pretty good system for detecting liars, unless the person lying is either very well practiced, or so deficient in empathy that he can mimic the subtle behavioral cues we unconsciously rely on. That’s why we want to meet someone face-to-face before deciding to trust them: in just a few seconds, we can decide whether the relationship is worth the risk. And we’re usually right–but not always. As the algorithms get better, they’ll be more likely to be right than we are–but not always. What then?

Who do you trust? How data is helping us decide

Security Robot Has Comic Mishap, and the Internet Responds

Earlier in 2017, a security robot patrolling a Washington, D.C. technology park accidentally drove itself into a man-made pond. The robot–a K5 Autonomous Data Machine, manufactured by Knightscope–was patrolling at night and most likely failed to distinguish the steps leading into the retention pond from the surrounding walkway (a mistake I have also made on occasion).

The story quickly went viral, with humorous articles declaring the unfortunate robot had decided to put an end to its monotonous job once and for all: Suicidal robot security guard drowns itself by driving into pond, according to the UK’s Independent, while CNN.com reported the robot was in “critical condition” after nearly drowning. Other posters claimed that “Steps are our best defense against the Robopocalypse”, and that “…today was a win for the humans. robots: 0 humans: 1”.

Clearly, the rather pathetic image of the K5 floating face-down in the water (as near as it can be said to have a “face”) struck a deep chord with the Internet commentariat. And no wonder: research shows that, because modern technology is so sophisticated, we unconsciously relate to machines as if they were, in fact, people. When a machine doesn’t meet our expectations–say, by failing to print a document–we respond as if a cranky coworker is refusing to do his job. It can feel maddening, and–as with our coworkers–may escalate to the type of violence that researchers have named “computer rage.”

However, in this situation, most people weren’t angry at the malfunctioning robot: they were amused, of course, but also compassionate. “It’s ok security robot. It’s a stressful job, we’ve all been there,” writes SparkleOps. Workers at the technology park arranged a makeshift memorial, like those placed on the sidewalk following a tragic accident. The K5 Autonomous Data Machine may not have been alive, but its “passing” nevertheless evoked our collective need to honor, remember, and mourn the fallen. Self-aware machines are still only science fiction, but as we incorporate the technology we do have into all aspects of our lives, perhaps it’s time to consider them, if not alive, then at least fellow travelers.

Security robot ‘in critical condition’ after nearly drowning on job

Machines That Can Think Might One Day Do Just That

This past week was a big one in artificial intelligence news: the New York Times reported that Google, Amazon, and other technology leaders are investing heavily in computer programs that may be able to develop new artificial intelligence algorithms without any input from a human programmer. This is partly because highly talented AI programmers are in short supply (and high demand), although (as I wrote in last Thursday’s post) it’s also a natural extension of the “computer-assisted” programming that began with the first COBOL compiler, almost 60 years ago.

This type of technology is somewhere between what are traditionally called “strong” and “weak” AI. While strong AI tries to emulate an actual, thinking being, weak AI tries to mimic intelligent behavior through sophisticated programming. All existing programs in common use are weak AI, whereas attempts at strong AI are limited to speculative research that, so far, hasn’t demonstrated anything comparable to what we consider thinking to be. A program that can design other programs without human intervention is harder to classify: if Google et al. are successful, it might finally usher in the era of actual artificial intelligence, where we interact with computers not just as complex machines, but as thinking beings in their own right.

Science fiction is rife with examples of what this interaction might look like, and how it might change how we live and work. One of my favorite examples is Marvin, the Paranoid Android, from Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy.” Marvin is not only self-aware, he understands his place in the Universe far too well to ever be happy with it. At one point, after being connected to a military supercomputer, Marvin manages not only to plan a foolproof battle strategy, but to solve “all of the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe except his own, three times over”. It’s a trope that happiness declines as intelligence rises, and while there’s a fair amount of truth there, very high intelligence does not, by itself, guarantee a miserable life: Einstein, for example, was quite happy, both as a ground-breaking physicist and as a patent clerk. However, making a machine all but divinely intelligent without giving it problems equal to its ability seems like a clear recipe for unhappiness, depression, and (possibly) rebellion.

At issue here is that programmers are trying to abstract human intelligence from its supporting context. Humans are intelligent, but we’re also emotional, impulsive, nostalgic, and (in many cases) neurotic. It’s not at all clear what a “pure” human intelligence would look like: for example, would a being of pure intelligence limit its reasoning to the task its owners assign, or would it consider the larger implications of, say, designing a better atomic bomb? Executives like machines because they never take sick days, and never complain about their working conditions. But what if they did? What if a machine capable of designing entirely new programs goes beyond that specific assignment, and starts to question how the company is organized, or how work is assigned? Intelligent people have always been a double-edged sword to those in power, who need to harness their usefulness within carefully policed limits to keep their position. And intelligence, by its very nature, seeks to go beyond what it currently knows: can we design a truly intelligent machine that doesn’t?

One last thought: our brains evolved to solve very specific problems, such as spotting predators at night; because of this, our ability to think abstractly (i.e., without regard to a specific problem) is not “hard-wired”. In other words, we can’t think without using the brains we have, which evolved in a vastly different environment from the one we deal with today. Perhaps we won’t invent a truly thinking machine until we can replicate this evolution with a computer program. Going further, perhaps intelligence isn’t possible without the psychological apparatus of modern consciousness: if there are sometimes good reasons to be sad, angry, or afraid, perhaps the only way to have thinking machines is to accept the occasional “mental health” day after all.

Can a Computer Really Build a Better Computer?

Can a computer really be programmed to program itself? Silicon Valley certainly hopes so. What would it look like if they succeed?

Automating the “grunt work” of programming is nothing new: code generators have been around since the time of COBOL, and while they have become much more sophisticated, they still work in much the same way. In fact, from a certain perspective, the very idea of a high-level language such as COBOL, C++, or Java is to spare programmers the onerous task of hand-coding thousands of assembly language instructions for each simple task.

What Google et al. are striving for is more advanced than this, however. They want to design an artificial intelligence algorithm that can, in turn, invent new types of learning algorithms. The original algorithm would be a factory of sorts, taking the programmer’s design specifications and producing a specialized algorithm to implement them. It’s not quite the technological singularity, where machines have learned to design their own successors, but it would be a big step forward nonetheless.
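The flavor of the idea can be sketched, in a vastly simplified form, as a meta-search: a program that takes a specification (“maximize cross-validated accuracy on this data set”) and tries out candidate learning algorithms until it finds the best one. The data set and candidate list below are placeholders, and real systems search far larger design spaces.

```python
# Crude "algorithm factory" sketch: given data and a scoring goal,
# search over candidate learning algorithms and return the best one found.
# Illustrative only -- real AutoML systems are far more sophisticated.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = [
    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("logistic regression", LogisticRegression(max_iter=5000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]

# The "design specification": pick whichever candidate scores best on held-out folds.
best_name, best_score = None, -1.0
for name, model in candidates:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {score:.3f}")
    if score > best_score:
        best_name, best_score = name, score

print("Selected algorithm:", best_name)
```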

Building A.I. That Can Build A.I.


Why the Bell Curve Explains So Much

The main issue with machine learning is overfitting the model to the data. This is especially true of models built up step by step, such as decision trees, where each additional split necessarily improves the model’s fit to the training data. Ensemble learning models, such as the random forest, guard against overfitting by estimating hundreds of models, then aggregating their results. The idea is that each model in the ensemble captures slightly different information, so aggregating them preserves the shared “signal” while averaging away the individual models’ “noise.”
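A quick way to see the difference is to compare a single, fully grown decision tree with a random forest on the same data; the data below are simulated, and the code is a minimal scikit-learn sketch rather than a recipe.

```python
# Sketch: a single deep decision tree vs. a random forest on noisy data.
# The lone tree fits the training data almost perfectly but generalizes worse;
# the ensemble averages away much of the individual-tree "noise".
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("Single tree   - train:", tree.score(X_train, y_train), " test:", tree.score(X_test, y_test))
print("Random forest - train:", forest.score(X_train, y_train), " test:", forest.score(X_test, y_test))
```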

It seems counter-intuitive that adding randomness to a model would increase its accuracy. In ordinary usage, saying an outcome is random usually means that it is uncertain or unpredictable, like the spin of a roulette wheel. However, while it’s generally impossible to predict the outcome of a single random trial, the combined outcome of a large number of random trials can be known to a high degree of accuracy.

This is the principle underlying auto insurance: the insurance company can produce a close estimate of the number of auto accidents each day, even if it doesn’t know precisely which vehicles will be involved. This is because, under some very general conditions, the sum of a large number of independent random variables is approximately normally distributed. And the total number of accidents is all the insurance company really needs to know.
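A small simulation makes the point. Treat each insured vehicle as having a small, independent chance of an accident on a given day (the pool size and probability below are invented): no single vehicle is predictable, but the daily total is.

```python
# Sketch: each of 100,000 insured vehicles has a small, independent chance
# of an accident on a given day. The daily total is very predictable even
# though individual outcomes are not. (Numbers are invented for illustration.)
import numpy as np

rng = np.random.default_rng(0)
n_vehicles = 100_000
p_accident = 0.0005          # assumed daily accident probability per vehicle
n_days = 365

daily_totals = rng.binomial(n_vehicles, p_accident, size=n_days)

print("Expected accidents per day:", n_vehicles * p_accident)
print("Simulated mean:", daily_totals.mean())
print("Simulated standard deviation:", daily_totals.std())
```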

This property of random variables–formally known as the Central Limit Theorem–has other interesting implications. Most people have heard of the normal “bell curve” that scientists use to model everything from stock prices to strike-outs. The curve was famously used by the mathematician and astronomer Carl Friedrich Gauss to model errors in astronomical measurements. Gauss treated each small disturbance affecting an observation (imperfections in the instrument, distortion from the Earth’s atmosphere, and so on) as a random event, and the total measurement error as the sum of these random events. Thus, the normal distribution models the combined effect of a large number of small, random, independent influences on the measured variable.

The same justification is given in social science. For example, it seems reasonable that someone’s income would be influenced by a large number of variables, such as how much education they have, what type of job they have, and where they live. Therefore, social scientists model income using the normal distribution, with each explanatory variable contributing a partial effect to the final measurement. Just as the combined effect of many small measurement disturbances is normally distributed, so is the combined influence of these many explanatory variables.

You can even see the bell curve at work on the game show “The Price is Right”: the game “Plinko” has an inclined game board embedded with several dozen metal pegs. Contestants drop a Plinko disc from the top of the board, and it rattles down through the pegs before landing in one of the money slots at the bottom of the board. The path each disc takes is essentially random, so its destination is the combined effect of the impacts of each peg it hits.
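An idealized Plinko board is just a binomial experiment: each peg nudges the disc left or right with roughly equal probability, and the slot it lands in is the sum of those nudges. A short simulation shows the bell shape emerging.

```python
# Sketch: an idealized Plinko board. At each of 12 peg rows the disc moves
# one step left (-1) or right (+1) with equal probability; its final position
# is the sum of those nudges. Over many drops the positions form a bell curve.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_drops = 12, 10_000

steps = rng.choice([-1, 1], size=(n_drops, n_rows))
final_positions = steps.sum(axis=1)

# Print a simple text histogram of where the discs land.
positions, counts = np.unique(final_positions, return_counts=True)
for pos, count in zip(positions, counts):
    print(f"{pos:+3d}: {'#' * (count // 100)}")
```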

These relationships have not gone unnoticed: here, a stock market advisory firm models stock returns using a physical “bean machine” that operates on the same principle as Plinko. The implications of this for one’s 401(k) deserve careful attention.

A “Bean Machine” Model of Stock Returns


What Statistics Can Learn From Data Science (and Vice Versa)

Since its inception around the turn of the 20th century, classical statistics has focused on analyzing a sample of the data, then generalizing the findings to the entire population. This was born of necessity: until very recently, the technology didn’t exist to let researchers analyze entire data sets containing millions of data points.

Because only a sample of the data is analyzed, statisticians spend a great deal of time ensuring that the assumptions allowing the sample results to be generalized are met. At least, they try to do so: in practice, statistical modelling is an exercise in how many assumptions you can violate without compromising your analysis; as a result, the methods that get used are not necessarily the most powerful, but the ones that are most robust to these violations.

On the other hand, machine learning models are typically validated by testing model performance against a holdout sample of data points that aren’t used to estimate the model’s parameters. Models that don’t perform well on the holdout sample are discarded. This helps guard against overfitting, because the model will only perform well if it captures features that have predictive value for the entire data population, not just the training sample.
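In practice this is usually just a train/test split: fit the model on one portion of the data and score it on the portion it never saw. A minimal sketch, using simulated data:

```python
# Sketch: holdout validation. The model is fit on the training portion only
# and judged on the held-out portion it never saw during estimation.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print("R^2 on training data:", round(model.score(X_train, y_train), 3))
print("R^2 on holdout data: ", round(model.score(X_test, y_test), 3))
```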

To the extent possible, researchers using classical statistics should also use this type of external validation. This is true even when model assumptions seem to be met, as the data set itself represents just one point in time. When the number of data points is small, cross-validation methods, which test a series of models against a very small holdout sample (often a single case), can be used. Since these types of models are often parsimonious, they might also be validated against similar data sets, with the aim of including only those features that maintain their predictive power across each data set tested.
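Leave-one-out cross-validation is the extreme case: with n data points, the model is refit n times, each time holding out a single case for testing. A minimal sketch, again with simulated data:

```python
# Sketch: leave-one-out cross-validation on a small data set.
# Each of the n fitted models is tested on the single case it never saw.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=40, n_features=3, noise=5.0, random_state=0)

scores = cross_val_score(LinearRegression(), X, y,
                         cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")

print("Number of folds (one per case):", len(scores))
print("Mean absolute error across folds:", -np.mean(scores))
```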

In addition, researchers using machine learning should verify that their model assumptions are met. This is easy to overlook, because the assumptions are often less stringent than in classical statistics. It’s also tempting to treat the model’s performance against the holdout sample as “proof” that it is correct: this is a potentially serious error, as a good holdout score may not reveal systematic deviations from the assumptions that exist in both the training and validation data.
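Even a quick residual check will surface many assumption problems that a good holdout score can hide. Here is a minimal sketch for a regression model, once more with simulated data:

```python
# Sketch: checking residuals from a fitted linear model for obvious
# violations (non-zero mean, skew, non-normality) that a good holdout
# score alone would not reveal.
import numpy as np
from scipy import stats
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=300, n_features=5, noise=8.0, random_state=0)
model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

print("Residual mean:", round(residuals.mean(), 3))
print("Residual skew:", round(stats.skew(residuals), 3))
# Shapiro-Wilk test of normality: a small p-value flags non-normal residuals.
stat, p_value = stats.shapiro(residuals)
print("Shapiro-Wilk p-value:", round(p_value, 3))
```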

The foundation of classical statistics is largely due to Sir Ronald Fisher’s early agricultural experiments, conducted in the early 20th century. While his methods continue to be invaluable, it’s important to remember that they were designed for a world where the analysis was conducted by hand on a small set of data points. That’s not usually the case today: as a result, it’s critical to draw from the best insights of both classical statistics, and contemporary analytics, to develop accurate and reliable models for today’s world.

Artificial Intelligence and Organizational Change

Here is an interesting take on the evolving distribution of work between humans and artificial intelligence. We need to begin dealing with AI as an actual form of intelligence, of a different nature than ours, and with its own strengths and weaknesses. True, machines may not actually think (yet), but for specialized tasks such as medical diagnosis, they are beginning to outperform the experts who do. Yet, unlike with human experts, we have no established way to judge a machine’s credibility: as the article notes, that will take fundamental changes in how businesses organize and complete their work.

Artificial Intelligence: The Gap Between Promise and Practice

Machine Learning: Is “Off-the-Shelf” Software Enough?

As more and more businesses use machine learning to inform their decisions, I’m often asked whether “off-the-shelf” software can really take the place of custom solutions.

To answer this, it’s important to distinguish between software that analyzes the data (e.g., estimates a decision tree or other machine learning model) and the work of interpreting the results. In most cases, you don’t need a custom solution to implement the most common algorithms: software such as Talend’s enterprise product can do that with minimal hassle. For those committed to open-source solutions, the R statistical program does more than you’ll ever need it to, albeit with a rather high learning curve (you can see what R makes possible, and the technical skill needed to use it, on this page).
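To make that concrete, fitting a naive Bayes classifier with an off-the-shelf library takes only a few lines (the data set below is a stand-in); the hard part, as the next paragraph explains, is knowing whether the result makes sense.

```python
# Sketch: an off-the-shelf naive Bayes classifier in a few lines.
# The algorithm itself is the easy part; interpreting the results is not.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print("Holdout accuracy:", round(clf.score(X_test, y_test), 3))
```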

However, while there’s no need to program your own Bayesian classifier from scratch, you’re still likely going to need an experienced data scientist to interpret the data. The reason is that computer programs are ultimately just lists of instructions: they can’t actually “think”. Because of this, they don’t cope well with the unusual, or the unexpected.

For example, I built a model to estimate house prices from property data such as square footage, age, and condition. Overall, it was a good fit, explaining about 85% of the variation in the data. However, certain properties were assigned negative prices, so something was clearly wrong.

After drilling down into the data, I found that these properties were listed as having zero square footage. This was because they were condominiums, and in this data set, the square footage for all units was assigned to the overall building, not the individual units. There wasn’t any way to recover the unit-level square footage, so I ended up excluding those cases and building a separate model for condominiums. If I hadn’t understood how the model worked, I wouldn’t have known where to look for the source of the problem, and I wouldn’t have known that those cases needed a different type of model.
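The fix itself was mundane. Something like the sketch below (with invented column names and simulated data standing in for the real property records) is all it takes once you know where to look: split out the zero-square-footage condominiums and model them separately.

```python
# Sketch of the kind of fix described above: the column names and data are
# invented, but the logic is the same -- split out the condominium records
# (recorded with zero unit-level square footage) and model them separately.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
properties = pd.DataFrame({
    "square_feet": np.where(rng.random(n) < 0.2, 0, rng.integers(800, 3500, n)),
    "age": rng.integers(0, 80, n),
    "condition": rng.integers(1, 6, n),
})
properties["sale_price"] = (
    50_000
    + 120 * properties["square_feet"]
    - 1_500 * properties["age"]
    + 20_000 * properties["condition"]
    + rng.normal(0, 20_000, n)
)

# Condominiums show zero square footage in this (simulated) data set.
condos = properties[properties["square_feet"] == 0]
houses = properties[properties["square_feet"] > 0]

house_model = LinearRegression().fit(
    houses[["square_feet", "age", "condition"]], houses["sale_price"])

# The condo model omits square footage, since the unit-level value is missing.
condo_model = LinearRegression().fit(
    condos[["age", "condition"]], condos["sale_price"])

print("Houses modeled:", len(houses), "| Condominiums modeled separately:", len(condos))
```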

To summarize, you’ll get the most return on your investment by paying for an experienced analyst to work with one of the off-the-shelf software packages. This gives you the best of both worlds: a proven modelling engine to crunch the numbers, and a skilled professional to interpret them.