In this issue:
Brad Stone's Amazon Unbound is a follow-up to his 2013 book The Everything Store, and it's instructive to read the two in parallel. What's clear from Unbound is that a book about what Amazon does today is really a book about what Amazon was working on five to ten years ago. This is a deliberate part of how Amazon is engineered. From the 2006 shareholder letter:
I remember how excited we were in 1996 as we crossed $10 million in book sales. It wasn’t hard to be excited—we had grown to $10 million from zero. Today, when a new business inside Amazon grows to $10 million, the overall company is growing from $10 billion to $10.01 billion. It would be easy for the senior executives who run our established billion dollar businesses to scoff. But they don’t. They watch the growth rates of the emerging businesses and send emails of congratulations. That’s pretty cool, and we’re proud it’s a part of our culture.
The updated numbers: a $10m business added to Amazon's $420 billion revenue run-rate increases the company's size by 0.0024%. Do the congratulatory emails still get sent?
Perhaps. (Knowing Amazon, there is probably by this point a scalable, repeatable process allowing SVPs to blast out congratulatory emails at inhuman speed.) The thinking at Amazon is that they want to lean into the "explore" phase for as long as possible, to maximize the opportunity set as businesses mature into "exploit." The year after Amazon's book business hit that $10m threshold, it grew 838%, and while growth has certainly slowed since then, it turns out the online book business had a long runway ahead of it.
One way to frame the Amazon bet is that it's building the most diversified portfolio possible of businesses that 1) have a known high growth rate, and 2) have an unknown duration. All it takes is a handful of new businesses that can maintain above-average growth for an extended period to keep the entire company growing fast, and to ensure that there's no predictable date at which Amazon's growth slows. Since the main input to the net present value of a growth company is the terminal period, when growth finally slows to below the cost of capital, pushing this out has an outsized effect on Amazon's valuation.
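The valuation math behind that terminal-period point can be sketched with a toy two-stage DCF. Everything here is a made-up illustration (a $100 starting cash flow, 20% growth, an 8% discount rate, 2% terminal growth), not a model of Amazon's actual financials:

```python
def npv_two_stage(cash_flow, growth, discount, terminal_growth, years_of_growth):
    """PV of a cash flow stream that grows at `growth` for `years_of_growth`
    years, then at `terminal_growth` forever (Gordon growth terminal value)."""
    npv = 0.0
    cf = cash_flow
    for t in range(1, years_of_growth + 1):
        cf *= 1 + growth  # next year's cash flow
        npv += cf / (1 + discount) ** t
    # Terminal value: a perpetuity growing at terminal_growth, discounted back
    terminal = cf * (1 + terminal_growth) / (discount - terminal_growth)
    npv += terminal / (1 + discount) ** years_of_growth
    return npv

# Illustrative inputs only
base = npv_two_stage(100, growth=0.20, discount=0.08,
                     terminal_growth=0.02, years_of_growth=10)
extended = npv_two_stage(100, growth=0.20, discount=0.08,
                         terminal_growth=0.02, years_of_growth=15)
```

Because growth exceeds the discount rate during the high-growth phase, every year the terminal period gets pushed out compounds the present value: in this toy setup, going from 10 to 15 years of high growth adds roughly 80% to the valuation.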
Part of Amazon's capital allocation strategy is that it's aiming for an information advantage, maximizing bits generated per dollar of capex and opex. Implicitly, this means that most of the information Amazon needs is in the minds of its customers, and that giving those customers more ways to spend money gives them more ways to leak data. Growing an already-successful business is a low-information activity; the same metronomic growth model that makes a company attractive to private equity or Berkshire Hathaway ("It’s pleasant to go to bed every night knowing there are 2.5 billion males in the world who have to shave in the morning. A lot of the world is using the same blade King Gillette invented almost 100 years ago") means that growth is uninformative. In fact, turning that around, if the base case has minimal new information—for Gillette, a look at birthrates gives them a decade-and-a-half head start on predicting their addressable market—then the only kind of new information is bad news.
Amazon's capture-all-possible-forms-of-upside model isn't just expressed in the products they start, but in how they do business: I've noted before that Amazon's cargo deals with airlines tend to include a warrant giving Amazon the right to buy equity in the airline, whose terms are based on the volume of shipping they do. Amazon can't know exactly how much air freight demand will change over time, but they can structure their business portfolio so they get upside if growth outpaces their ability to buy planes themselves.
And it's implied in any long-term model of Amazon's growth. Build a twenty-year model of Procter & Gamble, Con Ed, or Kraft Heinz, and you have a pretty good idea of what they'll be doing to produce whatever revenue you pencilled in for 2040. But for Amazon, a big chunk of the 2040 number comes from businesses that you haven't heard of yet, that don't exist yet, or that exist but are not on the radar as growth drivers even for Amazon's senior leadership.
It's interesting to contrast this with Alphabet's capital allocation strategy. Alphabet, like Amazon, uses the strength and ubiquity of its core business to spin up ancillary products that throw off additional data and amortize the company's fixed costs. But the Alphabet tendency is to target new businesses that are known to have large potential from the beginning. They're notorious for launching and then killing new products, and there are two overlapping explanations for this:
Launching a new product is a good way to get promoted, but getting promoted means having less responsibility for what happens to that product. This theory was in broader circulation a few years ago, and the company has apparently responded by weakening the perverse incentive to launch soon-to-be-orphaned products.
Alphabet targets products that, from the beginning, look like they could be meaningful businesses on their own. It's not completely clear whether the long-term goal of "Alphabet" is to have 26 separate companies that are each as important as G(oogle), Y(ouTube), M(aps), and A(ndroid) are to the parent company, but that seems to be the rough idea. This allows the company to do something that only a tiny number of organizations can do: launch a product that, from day one, can support a billion or more users. If that product has network effects, the Alphabet version will immediately saturate the network. If it has any other economies of scale, it'll scale into them fast. If it turns out to be a not-especially-impressive product, then it probably got far too much investment, far too early, and will have to be restructured or shut down.
So you can crystallize this comparison by treating Alphabet as a company that strategizes first, but constrains its strategies by requiring excellent execution at high scale. Amazon's approach is to strategize less, but assume consistently great execution at any scale. The uncertainty comes down to what management is best at: if they're good at predicting the future, the Alphabet model wins, by making that future suddenly happen. If management is better at rapidly adapting to the present, the Amazon approach is better. This doesn't prevent Google from iterating, of course, and the evolution of products like search and YouTube is a testament to the company's willingness to continuously test and roll out new features. And it doesn't stop Amazon from being futuristic; a speaker-shaped computer that obeys voice commands was science fiction until it was a big product (and the Kindle was literally science fiction first). But it means that Amazon's model is mostly structured to find new sources of information and produce effective tactical responses, leading to a blobby business with many growth areas. Alphabet's approach is to consume lots of data, process it into specific new products, and then launch them, leading to a spikier kind of company.
Disclosure: I am long Amazon.
In a nice collision of two supply chain narratives from the last few months, an American subsidiary of Toyota has been hacked, with customer data released ($, Nikkei). Toyota has mostly dodged the chip shortage problem due to its extensive stockpiles (it's partially suspended production in a few places, but has mostly been able to operate normally). The chip shortage, ransomware, and the early days of Covid have all demonstrated the fragility of complex supply chains. A shortage of a single critical component, or a disruption in operations for one hacking victim, can shut down production everywhere else. This particular hack did not cause Toyota to lose any production days, but it does illustrate the broad problem. A long supply chain is a big attack surface, for disasters both natural and man-made.
Building Better Metrics
LIBOR was an interesting exercise in postmodern finance. Originally, it was just a measure of real-world financial transactions: at what price was money being lent overnight between London banks? Then it turned into a benchmark, on which many trillions of dollars worth of financial assets and derivatives were based. Meanwhile, the actual loans themselves faded away; this Economist piece ($) notes that "An interbank lending market with daily transactions of around $500m now underpins contracts pegged to dollar libor that are worth about 450,000 times that."
This gets at a hard problem in coming up with metrics for the price of money. You can have a metric that's based on a broad set of transactions, but the broader it is, the more it represents a combination of the demand for money and concerns about credit. (This was a sort of predecessor to the LIBOR manipulation scandal: the quoted rate was artificially low during the crisis, in part because there wasn't much in the way of interbank lending going on at the worst of it, so investors had to use other indicators.) A narrower, more theoretically pure gauge is easier to manipulate, and more prone to selection effects. If it's only looking at the most creditworthy institutions, it ignores the possibility that the price of short-term loans is affected by credit risk.
Floating interest rates are a useful tool because many financial institutions fund themselves with short-term liabilities, and need the rates on their loans to match this. But the benchmarks are all imperfect. Money is perfectly fungible by design, but as soon as it turns from cash on hand to a short-term debt, it becomes non-fungible courtesy of the individual quirks of the borrower.
CJR ran a study showing that, for a sample of outbound links in New York Times articles, roughly 5% of links die out every year. At one level, it's impressive to think that there are webpages that have displayed the same information for two decades straight, but at another level it's disappointing; it means the entire corpus of information represented by the Internet is dying off (or moving around) at a steady pace. There are workarounds, like the one Wikipedia uses, where they automatically add pages to Archive.org after they're linked. Instead of "the Internet" as a set of static pages, there are timestamped subsets of the Internet as it existed on any given day, but, as of yet, no comprehensive record. Google might be able to solve this in a more thorough way, since they have an index dating back to 2001, and have presumably been storing more thorough data since then. But offering this as a service is challenging for them; the people who will notice it first will be the ones who deleted content because they didn't want it to be available, whereas today Google can simply point users to the best live result without mentioning that a better one is no longer available.
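The compounding implied by that decay rate is worth spelling out. A minimal sketch, assuming the 5% annual death rate from the CJR sample holds constant:

```python
import math

ANNUAL_DECAY = 0.05  # ~5% of outbound links die each year (CJR/NYT sample)

def surviving_fraction(years, annual_decay=ANNUAL_DECAY):
    """Fraction of links still alive after `years`, at a constant decay rate."""
    return (1 - annual_decay) ** years

# Years until half of all links are dead, under the same assumption
half_life = math.log(0.5) / math.log(1 - ANNUAL_DECAY)  # ~13.5 years
```

At that rate, only about 36% of links survive two decades, which matches the intuition above: pages that have displayed the same information for twenty years are a shrinking minority.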
Digital Gucci purses on Roblox changed hands for up to $4,000, a significant premium to the offline equivalent. Virtual luxury goods have some very compelling economics:
It's easy to track transactions and see the optimal amount to sell, and it's trivial to sell more once the price is well-known.
Detecting counterfeits is straightforward, both for the brand owner and for everyday users. Roblox can probably implement some kind of edit distance-based IP protection.
The set of people who can be signaled to is both larger and more controllable.
The risk of theft is lower, or at least controllable.
Roblox's economic model benefits from keeping currency in circulation on the app, since Robux purchases generate float which only gets redeemed when players quit.
N=1 for this particular sale, but it's an interesting avenue both for brand owners who worry about getting outflanked and for platform operators who want to translate their popular products into expensive ones.
India's Internet regulator has ordered social media companies to take down references to the "Indian variant" of Covid, while a Russian court has ordered YouTube to reinstate a conservative Russian channel that was taken down last July. Inside the US media narrative, it's easy to see the openness of social media as a force for good, and to see their moderation decisions in a similar context. But to many countries' leadership, the Arab Spring's popularity on Twitter looked like an existential risk, not a positive development for human rights. Censoring local media is comparatively easy, and less newsworthy outside the country (although that varies depending on how egregiously it's enforced), but for states to control what content their citizens see through social media, they need some level of cooperation from the platforms themselves. Privatizing protocols is lucrative—Twitter could have been an RFC or an open-source product instead of a company, but the for-profit version of the protocol had access to more financial resources. But those benefits come at a cost; governments feel entitled to regulate the commons, and those regulations rarely reflect the norms of social media companies' home countries.
What Exactly is Inflation?
This piece is a good meditation on the question of what we mean by "inflation." Aggregates are a good way to see general trends, especially over long periods; it's useful to know that, in the long run, dollars lose some of their purchasing power each year. But in the short term, "inflation" is a phenomenon that people don't directly interact with, because a) consumption baskets vary, b) prices vary by location, and c) willingness to substitute also varies a lot. Last month's CPI, for example, represents a meaningful change in the standard of living of anyone who spends a lot of time traveling in the US on their own dime. For someone who doesn't travel much, though, a spike in airline ticket and hotel prices isn't news at all. The post has some interesting thought experiments about how to slice inflation numbers more usefully. As the US gets more polarized, this will become more important: the red state and blue state cost of living indices will not be identical, and future policy adjustments may lead to a different CPI for city-dwelling college graduates than for everyone else.
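The basket-variation point can be made concrete with a toy calculation. All category weights and price changes below are invented for illustration; they are not real CPI data:

```python
# Hypothetical one-year price changes by category
price_changes = {"airfare": 0.10, "hotels": 0.09, "groceries": 0.02, "rent": 0.03}

# Two hypothetical consumption baskets; weights in each sum to 1
frequent_traveler = {"airfare": 0.20, "hotels": 0.15, "groceries": 0.25, "rent": 0.40}
homebody = {"airfare": 0.00, "hotels": 0.00, "groceries": 0.40, "rent": 0.60}

def personal_inflation(basket, changes):
    """Basket-weighted average price change: each household's own 'CPI'."""
    return sum(weight * changes[item] for item, weight in basket.items())
```

Run against the same price changes, the traveler's basket inflates about 5.1% while the homebody's inflates about 2.6%: one aggregate number, two very different lived experiences.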