The New, Improved Bitcoin (Now With Even Less Regulation)

It seems more and more firms are issuing custom currency to raise capital: think Bitcoin with even less regulation.

Is this a good idea for, well, anyone? Mt. Gox, once the largest Bitcoin exchange in the world, shut down in 2014 after losing over $400 million in customer assets. The name “Mt. Gox” is an abbreviation of “Magic: The Gathering Online Exchange”; the site was originally launched to help players of the fantasy card game trade cards with one another. Those cards can be pricey (one sought-after card, the Black Lotus, recently sold for $27,000), but running a currency exchange is an entirely different matter, and the site’s rather spectacular failure should surprise no one.

These initial coin offerings (I.C.O.s) have even more potential for abuse, since they are not only unregulated but designed by the issuing firm’s own finance department. Even in a marketplace seemingly founded on conflicting interests (how much do you trust your broker?), that seems problematic.

My advice for the average investor? Open an account with a discount brokerage, like the one with the cool baby, or the even-cooler Law And Order guy, and search their offerings for zero-commission mutual funds rated at four stars or better. Then decide how much you can realistically save every month, invest your age (as a percentage of the total) in a low-risk fund (such as a bond index fund), and put the rest in a higher-risk fund (a stock fund or stock index fund). If you update the balance between the low- and high-risk funds each year, then over time you’ll shift toward less risky assets more suitable for retirement income.
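To make the age-based rule concrete, here’s a minimal sketch in Python; the dollar amounts and fund labels are hypothetical placeholders, not recommendations.

```python
def allocate(age, monthly_savings):
    """Split a monthly contribution using the 'your age in bonds' rule of thumb.

    The low-risk share equals your age as a percentage; the rest goes to a
    higher-risk stock index fund. Re-run the split once a year to rebalance.
    """
    low_risk_share = min(age, 100) / 100.0   # e.g. age 35 -> 35% low-risk
    high_risk_share = 1.0 - low_risk_share   # remaining 65% higher-risk

    return {
        "bond_index_fund": round(monthly_savings * low_risk_share, 2),
        "stock_index_fund": round(monthly_savings * high_risk_share, 2),
    }

# A 35-year-old saving $400/month would put $140 in bonds and $260 in stocks.
print(allocate(35, 400))
```

The only moving part is the annual rebalance: as your age ticks up, the low-risk share ticks up with it.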

But don’t take my word for it: Peter Lynch, the legendary manager of Fidelity’s Magellan Fund, advises, “Never invest in any idea you can’t illustrate with a crayon.” Sage advice for a post-Great Recession world.

Initial Coin Offerings Horrify a Former S.E.C. Regulator

Happy Thanksgiving!

Happy Thanksgiving to you and yours! Here’s a short clip of a gentleman who was more than ready for the big meal:

Turkey! (Funny Game Show Answers)

KLAS: Quadax, SSI Group Earn Top Scores for Claims Management

“In this mature stage of the market, provider organizations’ revenue cycle departments generally perceive functional and accurate editing processes as a commodity,” the report stated. “What then distinguishes vendors in this high-performing market?”

Revenue cycle management leaders cited customer service and support as a top factor when identifying high-performing claims management systems and vendors. Interviewed healthcare organizations valued receiving responsive, timely, and proactive support.

No surprises here: as technology has become inseparable from the consumer experience, today’s customers demand that everything run smoothly. And, when it doesn’t, they demand top-tier customer service to get things back on track. Read the article and ask yourself, “Would our customers give our customer service such high marks?”

KLAS: Quadax, SSI Group Earn Top Scores for Claims Management

Why Don’t We All Have Twelve Fingers Now?

The other day, someone asked me why humans haven’t evolved extra fingers to make it easier to use our smartphones and other new technology. We spend so much time interacting with this technology that having extra fingers for texting, scrolling, and other tasks would be quite useful. Why, then, don’t we have them?

To answer this question, we need to understand what it takes for a species to evolve. A species evolves only if a subgroup with a distinctive set of traits reproduces faster than the rest of the species over a long period of time. When this happens, the subgroup’s traits spread throughout the population, until all new members of the species have them. Darwin proposed that this occurs when the traits give the subgroup a reproductive advantage: for example, a finch with a smaller, pointed beak can gather food that finches with larger beaks can’t access. Today, biologists accept this mechanism as the primary way a species evolves, although there are others as well, such as genetic drift, where random variations in genes lead to lasting changes.

Knowing this, we can list what needs to happen for us to evolve those useful extra fingers:

  1. Some people must be born with extra fingers
  2. These people must reproduce more successfully than people with just 10 fingers
  3. This reproductive advantage must last until the trait spreads throughout the population

The first requirement is clearly met: about one out of every 500 live births involves at least one extra finger. That’s not extremely rare, but it’s also not very common: for comparison, Game 7 of the 2017 World Series had an attendance of 54,124 people, of whom we’d expect roughly a hundred to have extra fingers (all else being equal). For the trait to spread, however, those people would need to reproduce more successfully than everyone else: that is, their offspring must have a better chance of reaching reproductive maturity and successfully reproducing themselves than the offspring of people without the extra fingers. And while extra fingers could make it easier to use technology, it’s unlikely that this difference alone is enough to make that happen, especially for the hundreds of thousands of years it would take for the trait to spread to the rest of us.
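As a quick sanity check on that figure, the expected-value arithmetic is straightforward (the 1-in-500 rate is the rough incidence quoted above):

```python
attendance = 54_124   # Game 7 of the 2017 World Series
rate = 1 / 500        # rough incidence of extra fingers at birth

# Expected number of attendees with at least one extra finger.
print(round(attendance * rate))   # ~108
```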

If not extra fingers, then, what types of changes are we likely to see in our lifetimes? Our behavior is shaped by culture much more than by biology, and culture can change significantly in relatively short periods of time. One such change is our increasing isolation from one another: the relative ease of travel, combined with cultural mandates to live as what 1960s counterculture would have called “our authentic selves,” has made people less willing to connect with others merely because of their accidental proximity. In today’s world, the ties of family and community are much less binding than at any point in recent history. Whatever one thinks of this, the impact will be as profound as walking upright, opposable thumbs, and brains that let us think one thing while claiming to believe another.

Machines That Can Think Might One Day Do Just That

This past week was a big one in artificial intelligence news: the New York Times reported that Google, Amazon, and other technology leaders are investing heavily in computer programs that may be able to develop new artificial intelligence algorithms without any input from a human programmer. This is partly because highly talented AI programmers are in short supply (and high demand), although (as I wrote in last Thursday’s post) it’s also a natural extension of the “computer-assisted” programming that began with the first COBOL compiler, almost 60 years ago.

This type of technology sits somewhere between what are traditionally called “strong” and “weak” AI. Strong AI tries to emulate an actual thinking being, while weak AI tries to mimic intelligent behavior through sophisticated programming. All existing programs in common use are weak AI, whereas attempts at strong AI are limited to speculative research that, so far, hasn’t demonstrated anything comparable to what we consider thinking to be. A program that can design other programs without human intervention is harder to classify; if Google et al. are successful, it might finally usher in the era of actual artificial intelligence, where we interact with computers not just as complex machines, but as thinking beings in their own right.

Science fiction is rife with examples of what this interaction might look like, and how it might change how we live and work. One of my favorites is Marvin, the Paranoid Android, from Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy.” Marvin is not only self-aware, he understands his place in the Universe far too well to ever be happy with it. At one point, after being connected to a military supercomputer, Marvin manages not only to plan a foolproof battle strategy, but to solve “all of the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe except his own, three times over”. It’s a trope that happiness declines as intelligence rises, and while there’s a fair amount of truth there, very high intelligence does not, by itself, guarantee a miserable life: Einstein, for example, was quite happy both as a ground-breaking physicist and as a patent clerk. However, making a machine all but divinely intelligent without giving it problems equal to its ability seems like a clear recipe for unhappiness, depression, and (possibly) rebellion.

At issue here is that programmers are trying to abstract human intelligence from its supporting context. Humans are intelligent, but we’re also emotional, impulsive, nostalgic, and (in many cases) neurotic. It’s not at all clear what a “pure” human intelligence would look like: for example, would a being of pure intelligence limit its reasoning to the task its owners assign, or would it consider the larger implications of, say, designing a better atomic bomb? Executives like machines because they never take sick days and never complain about their working conditions. But what if they did? What if a machine capable of designing entirely new programs goes beyond that specific assignment and starts to question how the company is organized, or how work is assigned? Intelligent people have always been a double-edged sword for those in power, who need to harness their usefulness within carefully policed limits to keep their position. And intelligence, by its very nature, seeks to go beyond what it currently knows: can we design a truly intelligent machine that doesn’t?

One last thought: our brains evolved to solve very specific problems, such as spotting predators at night; because of this, our ability to think abstractly (i.e., without regard to a specific problem) is not “hard-wired”. In other words, we can’t think without using the brains we have, which evolved in a vastly different environment from the one we deal with today. Perhaps we won’t invent a truly thinking machine until we can replicate this evolution with a computer program. Going further, perhaps intelligence isn’t possible without the psychological apparatus of modern consciousness: if there are sometimes good reasons to be sad, angry, or afraid, perhaps the only way to have thinking machines is to accept the occasional “mental health” day after all.

Can a Computer Really Build a Better Computer?

Can a computer really be programmed to program itself? Silicon Valley certainly hopes so. What would it look like if they succeed?

Automating the “grunt work” of programming is nothing new: code generators have been around since the time of COBOL, and while they have become much more sophisticated, they still work in much the same way. In fact, from a certain perspective, the very idea of a high-level language such as COBOL, C++, or Java is to spare programmers the onerous task of hand-coding thousands of assembly language instructions for each simple task.

What Google et al. are striving for is more advanced than this, however. They want to design an artificial intelligence algorithm that can, in turn, invent new types of learning algorithms. The original algorithm would be a factory of sorts, taking the programmer’s design specifications and producing a specialized algorithm to implement them. It’s not quite the technological singularity, where machines have learned to design their own successors, but it would be a big step forward nonetheless.
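To make the “factory” idea concrete, here is a toy sketch of my own (not Google’s actual system): the factory is just a search loop that, given a specification in the form of data and a quality measure, tries candidate model configurations and hands back the one that generalizes best on held-out data.

```python
import numpy as np

def algorithm_factory(x, y, candidate_degrees=range(1, 8), seed=0):
    """Toy 'factory': search over candidate model configurations (here, just
    polynomial degree) and return the one that performs best on held-out data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    split = int(0.8 * len(x))
    train, val = idx[:split], idx[split:]

    best_degree, best_error = None, float("inf")
    for degree in candidate_degrees:
        coeffs = np.polyfit(x[train], y[train], degree)              # fit candidate
        val_error = np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2)
        if val_error < best_error:
            best_degree, best_error = degree, val_error

    # "Design" the specialized model using the winning configuration.
    return np.poly1d(np.polyfit(x, y, best_degree)), best_degree

# Noisy cubic data: the factory should settle on a low-degree model.
x = np.linspace(-3, 3, 200)
y = x**3 - 2 * x + np.random.default_rng(1).normal(0, 2, x.size)
model, degree = algorithm_factory(x, y)
print("chosen degree:", degree)
```

Real systems search vastly richer spaces, such as neural network architectures and training schedules, but the shape of the loop (propose, evaluate, keep the best) is the same.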

Building A.I. That Can Build A.I.

Why the Bell Curve Explains So Much

One of the main issues in machine learning is over-fitting the model to the training data. This is especially true of greedily grown models, such as decision trees, where each additional split necessarily improves the model’s fit to the training data. Ensemble learning models, such as the random forest, guard against over-fitting by estimating hundreds of models and then aggregating their results. The idea is that each model in the ensemble contains slightly different information, so the aggregate separates more of the “signal” from the individual models’ “noise.”
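Here’s a minimal sketch of that contrast using scikit-learn (assuming it is available); the synthetic dataset and parameters are arbitrary choices for illustration, not a benchmark:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A deliberately noisy synthetic classification problem.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single, fully grown tree memorizes the training data...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while a forest averages hundreds of de-correlated trees.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

for name, model in [("tree", tree), ("forest", forest)]:
    print(name, "train:", round(model.score(X_train, y_train), 3),
          "test:", round(model.score(X_test, y_test), 3))
```

In runs like this, both models fit the training data almost perfectly, but the forest usually holds up noticeably better on the held-out test set.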

It seems counter-intuitive that adding randomness to a model would increase its accuracy. In ordinary usage, saying an outcome is random usually means that it is uncertain or unpredictable, like the spin of a roulette wheel. However, while it’s generally impossible to predict the outcome of a single random trial, the combined outcome of a large number of random trials can be known to a high degree of accuracy.

This is the principle underlying auto insurance: the insurance company can produce a close estimate of the number of auto accidents each day, even if it doesn’t know precisely which vehicles will be involved. This is because, under some very general conditions, the sum of a large number of independent random variables is approximately normally distributed. And the total number of accidents is all the insurance company really needs to know.
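A quick simulation illustrates the point (the fleet size and accident probability below are made-up numbers, purely for illustration): no single policyholder’s day is predictable, but the daily total hovers tightly around its expected value.

```python
import numpy as np

rng = np.random.default_rng(42)

policyholders = 100_000   # hypothetical insured drivers
p_accident = 0.0004       # hypothetical daily accident probability per driver
days = 365

# Each day's total is a sum of 100,000 independent yes/no outcomes.
daily_totals = rng.binomial(policyholders, p_accident, size=days)

print("expected per day:", policyholders * p_accident)   # 40.0
print("observed mean:   ", daily_totals.mean())
print("observed std:    ", daily_totals.std())            # close to sqrt(n*p*(1-p)), about 6.3
```

With roughly 40 expected accidents a day, the day-to-day totals vary by only about six, which is exactly the kind of stability the insurer is pricing on.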

This property of random variables, formally known as the Central Limit Theorem, has other interesting implications. Most people have heard of the normal “bell curve” that scientists use to model everything from stock returns to strike-outs. The curve is named for the mathematician and astronomer Carl Friedrich Gauss, who used it to model the errors in astronomical measurements. Each observation is disturbed by many small, independent effects, such as atmospheric distortion and imperfections in the instrument, and Gauss treated the total error as the sum of these random events. Thus, the normal distribution models the effect of a large number of random, independent events on the measured variable.
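For reference, the classical statement of the theorem (in standard textbook notation, which the article itself doesn’t use) is: for independent, identically distributed random variables $X_1, X_2, \ldots$ with mean $\mu$ and finite variance $\sigma^2$,

$$ \frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma\sqrt{n}} \;\xrightarrow{\,d\,}\; \mathcal{N}(0,1) \quad \text{as } n \to \infty, $$

that is, the standardized sum converges in distribution to the standard normal.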

The same justification is given in social science. For example, it seems reasonable that someone’s income would be influenced by a large number of variables, such as how much education they have, what type of job they hold, and where they live. Social scientists therefore model income using the normal distribution, with each explanatory variable contributing a partial effect to the final measurement. Just as the combined effect of many small measurement disturbances is normally distributed, so is the combined influence of the explanatory variables.

You can even see the bell curve at work on the game show “The Price is Right”: the game “Plinko” has an inclined game board embedded with several dozen metal pegs. Contestants drop a Plinko disc from the top of the board, and it rattles down through the pegs before landing in one of the money slots at the bottom of the board. The path each disc takes is essentially random, so its destination is the combined effect of the impacts of each peg it hits.
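The same mechanism is easy to simulate; here is a rough sketch of a Plinko-style board (the number of peg rows and discs are arbitrary choices): each disc is deflected left or right at every peg, and the landing slots fill out the familiar bell shape.

```python
import random
from collections import Counter

random.seed(0)
rows, discs = 12, 10_000   # arbitrary board size and number of drops

def drop_disc(rows):
    """Each peg deflects the disc left (0) or right (1) with equal chance;
    the landing slot is just the sum of those independent bounces."""
    return sum(random.randint(0, 1) for _ in range(rows))

counts = Counter(drop_disc(rows) for _ in range(discs))

# A crude text histogram: the middle slots collect the most discs.
for slot in range(rows + 1):
    print(f"{slot:2d} {'#' * (counts[slot] // 100)}")
```

Twelve rows of pegs give a binomial distribution of landing slots, which is already very close to the bell curve described above.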

These relationships have not gone unnoticed: here, a stock-market advisory firm models stock returns using a physical “bean machine” that works on the same principle as Plinko. The implications of this for one’s 401(k) deserve careful attention.

A “Bean Machine” Model of Stock Returns