In this issue:

  • Building an Incomplete Graph

  • Boy Band National Champions

  • Protein Folding

  • Golden Handcuffs

  • Amazon vs Wish Rolls On

Building an Incomplete Graph

Netflix’s interface is designed for browsing, but it tolerates search. And it tolerates it in an interesting way. Like any good consumer-facing search product, Netflix tries to guess with every keystroke what you’re looking for. At first, they’re matching text strings to titles, weighting by popularity and your taste. So type "M-A-R" and the app suggests Marlon, Marianne, Marco Polo, and Mary Poppins. Complete the search query ("G-I-N C-A-L-L") and every single text match disappears. Instead, Netflix recommends a documentary about the financial crisis, a documentary about greed, Moneyball, and The Social Network.

Netflix is obviously making the right decision here: at first, presume the user is searching for something Netflix has. Once it’s more likely that they’re searching for something Netflix doesn’t have, give them appropriate recommendations. If you’re in the mood for Margin Call, you might settle for The Social Network instead. Netflix’s first goal is to match users' needs exactly, but a more attainable variant on that is to minimize disappointment.

There’s another reason Netflix can implement this: they probably already had the capability as an internal tool before they turned it into a user-facing feature. Netflix’s browsing works because Netflix recommends movies based on prior viewing. There’s an implicit graph of movie-to-movie relationships, and it’s easier to populate that graph if it’s not restricted to the movies Netflix can stream at any particular time. Netflix knows what you would watch on Netflix if you could, both so it can figure out what else you’d like and so it knows what to bid for the streaming rights.
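
As a concrete illustration of that two-step behavior, here’s a minimal sketch, assuming an invented catalog and an invented similarity graph; none of the titles, data structures, or function names below reflect Netflix’s actual systems:

```python
# A toy sketch of the search fallback described above -- not Netflix's actual
# implementation. The catalog and the similarity graph are invented here.

# Titles the service can stream right now (illustrative).
streamable = ["Marco Polo", "Mary Poppins Returns", "Inside Job",
              "Moneyball", "The Social Network"]

# Implicit title-to-title graph: neighbors are what viewers of a given title
# tend to like, including entries for titles the service can't stream.
similar = {
    "Margin Call": ["Inside Job", "Moneyball", "The Social Network"],
}

def suggest(query: str, limit: int = 4) -> list[str]:
    """Prefix-match the streamable catalog first; if the query looks like a
    title we know about but don't carry, recommend its streamable neighbors."""
    q = query.lower()
    hits = [t for t in streamable if t.lower().startswith(q)]
    if hits:                                  # early keystrokes: assume we have it
        return hits[:limit]
    for title, neighbors in similar.items():  # full query: we know it, can't stream it
        if title.lower().startswith(q):
            return [n for n in neighbors if n in streamable][:limit]
    return []

print(suggest("mar"))          # -> ['Marco Polo', 'Mary Poppins Returns']
print(suggest("margin call"))  # -> ['Inside Job', 'Moneyball', 'The Social Network']
```

The specific lookup doesn’t matter; the point is that the graph has to include titles the service can’t stream, or the fallback has nothing to recommend.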

But this solves a deeper problem than slightly raising engagement by offering subscribers their second-best choice. It works around the simple heuristic most users of most products have: nobody likes to estimate probabilities, so people will tend to prefer a product with a near-100% chance of solving some specific need, rather than a high-but-below-100% chance. The better the product is at solving that need—the more likely Netflix is to show you exactly what you want to see, for example—the more jarring any exceptions are. In fact, a service can get good enough that the last few exceptions are net harmful.[1]

Other companies in compounding-advantage businesses find similar ways to supply the harder-to-attain half of the network effect:

  • DoorDash infamously listed restaurants that hadn’t signed up for its service, and accepted negative gross margin orders to keep customers happy. Their implicit model might have been: a given customer expects certain kinds of food from a delivery service, and if those foods aren’t available they’ll check GrubHub. Better to accept some leakage from pizza arbitrage early on than to lose expensive-to-acquire customers to a lack of pizza.

  • Uber prioritized city launches by looking at which cities app users tried to order cars in. This was, statistically, where there was demand. And it was literally where there was demand, too, since they could market to those users.

  • I mentioned Affirm’s virtual card in my writeup of their S-1. The virtual card lets Affirm’s borrowers borrow to make purchases from merchants who don’t offer Affirm yet. And it means that Affirm’s sales team has a very easy case to make that the merchant in question should be using Affirm.

  • Early Airbnb had a shortage of properties, and an especially acute shortage of properties whose hosts were good photographers. So they hired professional photographers to shoot better photos.

  • Facebook maintains "ghost profiles" of people they know exist, but who aren’t on Facebook. When those people finally sign up, Facebook can tag photos, suggest friends, and prompt interactions from those friends.

  • Some network effects are not about binary membership, but about level of usage. A credit card is not exciting to a merchant whose customers prefer cash. Credit cards have variable loyalty point structures, and can tweak these offerings either to better-monetize current merchants or to become indispensable to new ones.

  • Automated bidding tools on auction-based platforms may be an example of this, too. Ads monetize much better when there’s high bidder density—multiple bidders who have unit economics in the same ballpark as the current high bidder, so that bidder is forced to pay something close to the value they’re getting. Any tool that makes it easier for advertiser A to cut in on advertiser B’s arbitrage has an outsize impact on revenue.
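
To make the bidder-density point concrete, here’s a stylized second-price auction with invented numbers; real ad auctions layer on reserve prices, quality scores, and budget pacing, but the clearing-price mechanic is similar:

```python
# A stylized second-price auction, to illustrate why bidder density matters
# for revenue. All bid values are invented.

def second_price_clearing(bids: list[float]) -> float:
    """The winner pays the runner-up's bid (or zero if unopposed)."""
    ranked = sorted(bids, reverse=True)
    return ranked[1] if len(ranked) > 1 else 0.0

# Advertiser A values a click at $5.00 and bids it.
sparse = [5.00, 0.50]         # only a low-value competitor: A pays $0.50
dense  = [5.00, 4.60, 0.50]   # an automated bidder with similar unit economics
                              # cuts in: A now pays $4.60

print(second_price_clearing(sparse))  # 0.5
print(second_price_clearing(dense))   # 4.6
```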

What these options have in common is that they’re suboptimal businesses relative to the core product, but better than nothing at all, and they help shore up whichever side of a two-sided network is lagging at any moment. They also offer a synthesis between two claims about businesses that depend on some form of network effect:

  1. The pollyannaish claim is that network effects are great, because once the business starts growing, it will grow indefinitely. Past a certain point, success is almost automatic!

  2. The cynical claim is that network effects are terrible, because every business starts from zero but these businesses start from two different kinds of zero, and their success is limited by whichever one is harder to improve. Getting to the point where success looks automatic is hard, sometimes impossible!

The usual story of any long-term compounding process is that some of it was incredibly easy, and some of it was painful. The trick is to find which elements of the easy-compounding side can be applied to accelerate the hard-to-compound side. One result of this is that mature companies with network effects will sometimes see worse incremental margins. The first 99% of the network effect is hard, but filling in the last 1% is harder still. But that’s more an artifact of accounting than underlying economics. Expensive efforts to fill in the last gaps in the network graph lead to more usage and better retention in the core product. Mapping software to reality is always harder than it looks, but the last few steps of that process are more valuable than they seem.

[1] To take a morbid example that will hopefully be relevant sometime soon: Human drivers in the US have roughly 1.18 fatalities per hundred million vehicle miles. A widely available self-driving car that caused, say, 2 fatalities per hundred million vehicle miles would look perfectly safe to anyone who used it—that’s close enough to human-level performance that it would seem like magic—but would, of course, cause more deaths per mile driven. A world where that self-driving product is widely available would be a statistically more dangerous world, although the average passenger might perceive it as safer.
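
For scale, a quick back-of-the-envelope using the rates above; the per-mile figures come from the footnote, while the annual US mileage total is an assumption added here (roughly the pre-pandemic figure):

```python
# Back-of-the-envelope arithmetic for the footnote's numbers.
human_rate = 1.18 / 1e8      # fatalities per vehicle mile (from the text)
robot_rate = 2.00 / 1e8      # hypothetical self-driving rate (from the text)
annual_vmt = 3.2e12          # assumed US vehicle miles traveled per year

print(robot_rate / human_rate)   # ~1.69x the human fatality rate
print(human_rate * annual_vmt)   # 37,760 deaths/year at human rates
print(robot_rate * annual_vmt)   # 64,000 deaths/year if every mile shifted over
```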

Elsewhere

Boy Band National Champions

One of the purposes of government spending is to coddle companies that do something strategically valuable but not economically rewarding. Policy banks try to keep countries somewhat self-sufficient in low-return industries where consistent access is important, and to give exporters a leg up. This applies to a variety of industries, from agriculture (all else equal, it’s better for a country to feed itself) to aerospace.

And now it matters for pop music, too. In my writeup of Big Hit Entertainment, parent company of the boy band BTS, I noted that the band’s members were subject to mandatory military service requirements. Since each member of BTS represents over 10% of Big Hit’s revenue, this is a meaningful risk. South Korea has responded by revising its conscription laws ($, Nikkei) to give them a few more years. Pop culture is an important part of soft power; it’s very helpful for the US that overseas leaders love American political dramas, and the growth of Thai cuisine abroad was partly subsidized by the government of Thailand. Governments, like everyone else, are subject to the endowment effect: once they have a popular export, they’ll do a lot to keep it.

Protein Folding

Many years ago, I had a friend in school who was really into chess until he got into programming. After a while, he realized that chess would soon be close to a solved problem, so he switched to Go. The possibility space in Go was too large for brute-force search, and at the time (the early 2000s) the best AIs were only as good as mediocre players. Go seemed like a safe bet.

DeepMind’s AlphaGo changed that, becoming the first program to beat a professional human player in a standard game in late 2015, and then beating Lee Sedol five months later. Go switched from "not something AI will be able to solve on any known timeline" to "not something humans will be the best at ever again" over a few months.

DeepMind’s latest effort is a bit more practical—unless shifting human capital from board games to other domains is a big deal. It has accelerated protein structure prediction from a multi-month process to one that takes hours. Protein folding, like Go, has long been on the list of problems that are hypothetically solvable by computers, but only given limitless time. There are roughly 10^360 plausible expert-level Go games, and perhaps 10^300 possible ways to fold a protein. It’s striking that in both cases the number of possibilities is effectively unlimited, but technically finite; surprising AI advances seem to be more common in fields where the number of possibilities to be explored rounds up to infinity.
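
To put "effectively unlimited, but technically finite" in perspective, here’s a rough order-of-magnitude check; the conformation count comes from the paragraph above, while the sampling rate and age-of-the-universe figure are assumptions added for scale:

```python
# Why "technically finite" still rounds up to infinity for exhaustive search.
conformations = 10 ** 300        # possible foldings, per the text
samples_per_second = 10 ** 12    # assumed (generous) brute-force evaluation rate
age_of_universe_s = 4.35e17      # seconds, approximate

seconds_needed = conformations // samples_per_second   # 10^288 seconds
print(seconds_needed > age_of_universe_s)              # True, by ~270 orders of magnitude
```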

(For more on the details of protein folding, I recommend this writeup.)

Golden Handcuffs

Applied Divinity Studies has an exploration of a very important question in tech company economics: why don’t well-paid employees of big tech companies save up their money for a few years and then quit? The question matters because, relative to other sectors, engineers and product managers are fantastically overpaid, but relative to the economic output of their work at Google, Facebook, Amazon, and Microsoft, they’re surprisingly underpaid. If they start responding appropriately to the opportunity cost of their time, the market-clearing wage will rise, and the wonderful economics of mature tech companies will accrue to their employees rather than their shareholders. (This happens to many different industries, including autos, airlines, investment banks, and hedge funds.) One theory: the behavior is driven by norms, and some of those norms stem from visas that prevent employees from quitting. For some employees, leaving Google means leaving the country. Employees who aren’t subject to those constraints still see brilliant engineers who could follow their passion but spend their time optimizing ad products instead. So sticking around remains the default.

It’s very interesting to speculate on a) how to avoid this, and b) which companies do avoid this. Big tech companies are at a local maximum, where they’re selecting for skill but increasingly against ambition and variance. If they want to select for variance, too, that means making it easier to quit—but since every growing tech company ends up constrained by hiring, that’s a very damaging decision to make. The other local maximum is to set an even higher bar for new hires, selecting for skill and high variance and high ambition, and then hope that a small number of employees at maximum productivity can outperform orders of magnitude more people operating at lower intensity.

Amazon vs Wish Rolls On

I’ve pointed out a few times, most recently last week, that following the Wish S-1, Amazon stepped up its PR efforts against counterfeiting. This is already paying off: Wish is being investigated in France for selling counterfeit goods. Normally, this would be an implausibly quick timeline for government action, but the French civil service is fairly peppy.