Play Money And Reputation Systems

Mantic Monday 2/21/22

For now, US-based prediction markets can’t use real money without clearing near-impossible regulatory hurdles. So smaller and more innovative projects will have to stick with some kind of play money or reputation-based system.

I used to be really skeptical here, but Metaculus and Manifold have softened my stance. So let’s look closer at how and whether these kinds of systems work.

Any play money or reputation system has to confront two big design decisions:

  1. Should you reward absolute accuracy, relative accuracy, or some combination of both?

  2. Should your scoring be zero-sum, positive-sum, or negative-sum?

Relative Vs. Absolute Accuracy

As far as I know, nobody suggests rewarding only absolute accuracy; the debate is between relative accuracy vs. some combination of both. Why? If you rewarded only absolute accuracy, it would be trivially easy to make money predicting 99.999% on "will the sun rise tomorrow" style questions.

Manifold only rewards relative accuracy; you have to bet with some other specific person, and you only make money insofar as you’re better than them. All real-money prediction markets are also like this, and Manifold is straightforwardly imitating this straightforward design.

Metaculus has a weird system combining absolute and relative accuracy: all predictions are treated as a combination of "bets with the house" on absolute accuracy, plus bets against other predictors on relative accuracy. Why? As a kind of market-making function; even if nobody else has yet predicted, it’s still worth entering a market for the absolute accuracy points. This works, but has a lot of complicated consequences we’ll discuss more below.

(Manifold solves the same problem by having market makers be a specific user who wants the market to exist, and making that person ante up money at a specific starting price to make that happen. This seems a lot more straightforward and frees them from the complicated consequences.)
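Metaculus' actual formula isn't given here, but the combined absolute-plus-relative idea described above can be illustrated with a toy sketch. Everything below (the function name, the 50% "house" prior, the crowd-size weighting) is made up for illustration, not Metaculus' real scoring:

```python
import math

def toy_combined_score(p, outcome, crowd_p, n_forecasters):
    """Toy sketch of combining a 'bet with the house' (absolute accuracy)
    with a 'bet against other predictors' (relative accuracy).
    Not Metaculus' real formula - an illustration only."""
    p_obs = p if outcome else 1 - p
    crowd_obs = crowd_p if outcome else 1 - crowd_p
    # Absolute part: log score versus a maximally ignorant 50% prior
    absolute = math.log2(p_obs) - math.log2(0.5)
    # Relative part: log score versus the crowd's forecast
    relative = math.log2(p_obs) - math.log2(crowd_obs)
    # Give the absolute part less weight as the crowd grows
    w = 1 / (1 + n_forecasters)
    return w * absolute + (1 - w) * relative

# Even a lone forecaster with nobody to beat earns points for being right -
# which is the market-making function described above:
print(toy_combined_score(0.99, True, crowd_p=0.99, n_forecasters=1))
```

The point of the sketch is the incentive structure: a correct forecast scores positive even when the relative component is zero, so there is always a reason to enter an empty market.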

Zero Vs. Positive Sum

As far as I know, nobody suggests negative-sum markets; the debate is between zero vs. positive-sum. Technically markets with transaction costs can be negative-sum, but nobody is happy about this, just accepts it as a necessary evil.

Zero-sum is a straightforward choice that imitates real-money markets. Two forecasters bet, and whatever Forecaster A wins, Forecaster B must lose. This is nice because it produces numbers with clear meanings: if you have a positive number, you are on average better than other forecasters; the more positive, the better.

Positive-sum means that the house always loses; on average, you make money every time you bet. Metaculus is infamous for this; see eg this question on Ukraine:

If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does Metaculus allow this? They want to incentivize people to forecast. If it’s zero-sum, you’re as likely to lose points by forecasting a question as gain them. In fact, if you’re not the smart money, you’re more likely to lose, much as normal people should try to avoid competing against Wall Street traders when picking stocks. Since Metaculus wants to harness the wisdom of crowds, and you need lots of people to make a crowd, they incentivize you with a better than 50-50 chance (sometimes a guaranteed chance) of getting points.
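To make the incentive concrete: with the payouts quoted above (+58 if yes, +32 if no), entering the question has positive expected points no matter what probability you assign to an invasion. A quick sketch:

```python
def expected_points(p_invasion, win_if_yes=58, win_if_no=32):
    """Expected points for entering the Ukraine question above,
    given your own probability of invasion."""
    return p_invasion * win_if_yes + (1 - p_invasion) * win_if_no

# Guaranteed at least +32 no matter what happens - the house always loses
print(min(expected_points(p / 100) for p in range(101)))  # -> 32.0
```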

The disadvantage of this is that it makes points less meaningful; just because someone has a positive number of points, doesn’t mean they’re above average or have ever won a bet with anybody else.

Reputation Systems Aren’t About Reputation

I want to harp for a little longer on why this might be bad.

Suppose Susan is a brilliant superforecaster. She spends an hour researching every question in depth, at the end of which she is always right.

Suppose Randy guesses basically randomly. Or fine, maybe he’s slightly better than random, he has gut feelings, if the question is "will Russia invade Brazil?" he knows that won’t happen and says some very low number. But it’s not like he’s thinking super-hard. Maybe it takes Randy ten seconds to get a gut feeling and type in the relevant number.

In a zero-sum system, Susans (almost) always beats Randys. Susans end up with lots of points, Randys end up with few or negative points, the system works.

In a positive-sum system, in the hour it takes Susan to produce one brilliant forecast, Randy has clicked on 360 different questions. Who ends up with more points? It depends on whether your system rewards a brilliant answer 360x more than the baseline it rewards any answer at all. The above Ukraine question on Metaculus rewards a maximally correct answer 4x more than a lazy answer intended to most efficiently reap the free points - ~200 vs. ~50. So assuming an unlimited number of questions and both people investing the same amount of time, Randy would end up with about a 90x higher reputation than Susan.
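The arithmetic behind that 90x figure, using the post's numbers (~50 points for a lazy answer, ~200 for a maximally correct one, one hour per Susan-forecast, ten seconds per Randy-guess):

```python
# Back-of-envelope for Susan vs. Randy, using the post's numbers.
lazy_points = 50        # free points for any quick answer
brilliant_points = 200  # reward for a maximally correct answer

randy_questions_per_hour = 3600 // 10  # ten seconds each -> 360 questions
randy_hourly = randy_questions_per_hour * lazy_points  # 18,000 points/hour
susan_hourly = 1 * brilliant_points                    # 200 points/hour

print(randy_hourly / susan_hourly)  # -> 90.0
```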

Metaculus addresses this issue by . . . totally failing to address this issue and just accepting the consequences. It doesn’t seem so bad for them; their leaderboard contains many people who I know from other contexts to be genuinely excellent forecasters. But it turns a lot of people off from them.

More important, it lampshades an important quality of "reputational" systems: so far, none of them actually produce any kind of a reputation. By this I mean something like: if I claim "I have an IQ of 160" or "I can bench press 300 lbs", people might be impressed by me. If I say "I’m a superforecaster in the Good Judgment Project", the small number of people who know and care what that is will be impressed. I’ve heard people claim all of these things, but I have never heard anyone casually drop their Metaculus score in conversation, even in the weird heavily-selected circles where everyone knows about Metaculus and agrees it is good.

(I’m a relatively well-known blogger who writes a lot of things that may or may not be true, and I’m known to use Metaculus, and nobody has ever asked me my Metaculus score before deciding how much to trust me!)

I think this is partly because everyone understands that Metaculus scores are some combination of how good a forecaster I am, how much meta-gaming I do, and how much time I put into grinding Metaculus questions. But then, what’s the point? Your incentive for playing Metaculus is supposed to be getting a good reputation, but in fact this has no benefits, not even bragging rights!

I can’t deny that this system does, somehow, work. A lot of people use Metaculus (sometimes including me), and I would actually respect someone more if I knew they were on the leaderboard (probably through some assumption that Metaculans seem nice and honest, and even though the Randy strategy is easy, nobody cares enough to do it).

Still, part of me wishes that reputation systems could actually give someone a good reputation - that the big Wall Street firms would consider guaranteeing interviews to people on the leaderboards, or something like that. But right now they’re just not good enough to survive having any real-world consequences.

Play Money Systems: Better Than They Sound?

So what about zero-sum, relative-accuracy play money systems? This is the strategy used by Manifold, plus some of the real-money prediction markets that offer play money to Americans (like Futuur). It’s straightforward and it simulates a real prediction market closely. What could go wrong?

First question: why would anybody want play money? The obvious answer is that it’s a reputation system in disguise - the amount of play money you accumulate is a proxy for how good a forecaster you are - and an accurate one, unlike Metaculus’ reputation. This is mostly true, but with some complications. Manifold lets you buy their play money for real money, which in theory would destroy any reputational value. But they solve this by actually reputationalizing play money profits, which works:

For example, I am now impressed by/concerned by/suspicious of Robert McIntyre. What are you doing?

A second potential reason people might want play money: on Manifold, you can use it to open your own questions, asking the market for information on a topic presumably of interest to you.

(This would be very straightforward if you were subsidizing the market, and the site encourages you to think of it as a subsidy - but is it? You bet your starting ante at some specific level. And usually you as market maker have more insight into the question than anyone else. Half the time it’s on your own personal life; the other half of the time it’s on some broader question which is selected for being something you care about a lot. Far from being a subsidy - money which it is easy for other people to get - this feels like smart money - money that other people should be scared to bet against. So how does this open the market at all? I’m not sure, and am willing to entertain the possibility that it doesn’t, that the system only holds together because everyone is having fun and nobody cares about the incentives, and that an ante of $1 would work just as well.)

Any broader problem with this system?

I mentioned this last week, but let’s look at it again. This is inexcusably wrong: there’s no way this guy (a wrestler with no political experience who hasn’t even announced he’s running) has a 9% chance of becoming President. Why is nobody correcting it? Because you’d have to tie up your limited supply of play money for 2.5 years to make a tiny profit: the site tells me that if I put in an average person’s entire starting allocation (M$1000), I’d only push the chance down to 2% (still not low enough!) and only make a $35 profit in 2.5 years (a ~1% rate of return) when time proved me right.
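Annualizing the figures above shows how weak the incentive is - a minimal sketch, taking the site's numbers (M$1000 staked, M$35 profit, 2.5 years) at face value:

```python
stake, profit, years = 1000, 35, 2.5

total_return = profit / stake                      # 3.5% over the whole period
annualized = (1 + total_return) ** (1 / years) - 1
print(f"{annualized:.1%}")  # roughly 1.4% per year
```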

My conditional prediction market experiment seems to be failing for the same reason:

I posted about six books I was considering reviewing, and asked people to bet on which ones would get lots of "likes". Only 44% of my book reviews get more than 125 likes, but every book I proposed is at >44% right now. Many are much higher - like this one, about a dry scholarly textbook explaining a famously incomprehensible form of psychoanalysis. I think all these markets are mispriced.

My guess is that people are using this as a way of voting for books they want me to review. They buy "yes" on books they like, but don’t buy "no" on books they don’t like, because that would be against the imaginary rules for the voting that they are falsely imagining this to be. Ideally, actual prediction market players would take these people’s money and drive the markets back down to the base rate. That’s not happening here, and my guess about why is: it’s a small return on a one-year-long market that might never actually trigger (if I don’t review the book, the conditions for the conditional prediction market aren’t met, and it resolves N/A). Nobody wants to lock up their limited play money for this.

Metaculus, for all their system’s problems, would get this one exactly right; since you’re incentivized to predict on every question with no limiting factors, lots of people would bet on this one; since the optimal strategy is to bet your true belief, everyone would bet something very low, and the probability would end up very low.

What to do? In the Manifold Discord, I recommended offering a per market interest-free loan of M$10, usable for a bet in that market only. Since it’s a loan, you don’t get free reputation by participating in as many markets as possible; if you’re not actually applying market-beating levels of work, you’ll only break even; if you’re worse than the market, you’ll lose money.

Still, if I could take out an interest-free M$10 loan on this market, I would. I’d bet NO, and in 2.5 years, I’d make a total of M$1 worth of easy money. If all two hundred-ish Manifold users did this, that would push the probability down to 1%, which is close enough to the real value.

Loans are complicated. For one thing, you’d have to prevent me from taking out the market-specific loan on this, selling my position immediately, and then reinvesting it into some flashier shorter-term question. For another, you’d either need a system of margin calls, or just accept that some people will go below M$0 sometimes (sure, let them go below M$0, so what?). Still, I think this would solve a lot of mispricings. If it didn’t, the administrators could fiddle with the size of the loan until it did.
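A minimal sketch of how the per-market loan might be accounted for, under the assumptions above (interest-free M$10 per user per market, repaid out of proceeds at resolution, balances allowed to go negative; the class, names, and payout convention are all made up for illustration):

```python
class MarketLoan:
    """Toy ledger for the per-market loan idea described above."""
    LOAN = 10  # M$ loaned per user, usable only in this market

    def __init__(self):
        self.positions = {}  # user -> (side, stake)

    def bet_with_loan(self, user, side):
        # The whole loan goes into a position on one side
        self.positions[user] = (side, self.LOAN)

    def resolve(self, outcome, payout_per_dollar):
        """Net profit per user after repaying the loan (may be negative)."""
        net = {}
        for user, (side, stake) in self.positions.items():
            gross = stake * payout_per_dollar if side == outcome else 0
            net[user] = gross - self.LOAN
        return net

ledger = MarketLoan()
ledger.bet_with_loan("alice", "NO")
# Betting NO at a 9% YES price pays roughly 1/0.91 per dollar if NO resolves:
print(ledger.resolve("NO", 1 / 0.91))  # ~M$1 profit, matching the estimate above
```

Since the loan is repaid before any profit is counted, merely participating earns nothing; only beating the market does.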

You could also experiment with a mechanism where market makers’ ante funds the loans, ie if you ante M$100 for a one year market, you’re promising to loan the first ten people who enter M$10 each to bet against each other with. I don’t know how to do that in a way which doesn’t reward people who show up early, which is undesirable since it makes the reputation system less valid.

I think the "play money has value because you can use it to subsidize play money prediction markets which have value because people want play money so they can subsidize play money prediction markets which…" loop is clever and could potentially work. So far Manifold has been running off of fun and early goodwill; I look forward to seeing how they solve these difficult problems as they try to scale past that level.


Typo thread

"If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does Manifold allow this?" - This should say "Metaculus".

You're right, thank you.

Recent prediction market convert here, and they do things I really, really like. They teach people to forecast, they use the wisdom of crowds super efficiently, they tie consequences to being correct or incorrect, and they incentivize participation. It’s like turning on a searchlight and aiming it into the future.

Question: Does anyone have a long term strategy on how to make this stick to decision makers? That’s my main point of curiosity. Is the hope that they just get so efficient they can’t be ignored? I’m sure this can make money but I had hoped something like this (I have my own weird scheme I’m super into just like I’m sure everyone here does) could be a civilization’s sense organ.

Right now it seems like the plan is to make really good eyeballs and figure out how to hook them up to the brain later? Is that right? Genuine curiosity.

Mostly agree with this; right now the problem is scaling the technology and making sure it works well, after that decision-makers will either notice on their own or we can pressure them into it. Though if you're not familiar with Philip Tetlock's work with IARPA that would be a good place to start as an example of government/forecasting interfaces.

Thanks Scott. Buying one of his books now. Been reading the Hobbit to my son but at two months, I bet he doesn’t notice the substitution.

We have talked to a few managers who want a private version of Manifold for internal company use. (And of course, we use it to guide our own decisions eg ) The discussions are still really early, but I think this is one way Manifold can have a huge impact: by pushing forward this use of futarchy and decision markets in real-world use cases!

If a private Manifold instance sounds like something you'd want for your team -- get in contact at [email protected]!

What about Kalshi?

Kalshi went through the "near-impossible regulatory hurdles" and allows real money for US citizens. As far as I know, it's the only one to do so.

Yeah still a bit confused by Scott's persistent negative stance towards the first company to be able to create legalized prediction markets ever...

Does he have a negative stance toward it? I recall him being quite excited about it.

They're real money, which makes them irrelevant to this discussion of play money and reputation systems.

they're fantastic if you want to predict uninteresting things that will resolve in a week, like the weather in New York

Ooooh I like the per-market loan system. They should at least try it and see what happens!

Yeah, I like the idea a lot too, as a way to lower the barrier to trading and encourage activity, without causing significant inflation or giving an edge to people who trade on everything (a la Metaculus).

As a bonus, it basically allows for 10 "free" comments!

Hey there, I work for Metaculus and wanted to share my perspective on Scott's points about reputation and about how Metaculus incentivizes predictions. Tournaments have a different scoring mechanism than the rest of the platform, because there are cash prizes at stake. If someone is highly-ranked on a tournament leaderboard and wins prize money, it's because they outperformed other forecasters and contributed a lot of information with their forecasts.

I was thinking some about the issue with betting on conditional markets that may well never trigger, particularly in the context of Scott's which book should I review markets, or similar. And I think at least a partial solution is the following:

If there are multiple conditional markets whose conditions for triggering are mutually exclusive, you should be allowed to use the same dollar to bet in as many of those markets as you choose.

Absolute accuracy is usually represented by Brier scores, and Brier scores suck, because they don't use logarithms, so they can't appreciate the huge difference between 1% and 0.01%. I have an idea to construct a better formula: instead of just squaring (Ft - Ot), apply the transformation g(x) = -lg(1 - x), where x = abs(Ft - Ot). So your penalty is ~zero if your probability estimate is close to correct, but your penalty goes to infinity as your confidence in the wrong outcome goes to 1.
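The commenter's proposal can be sketched directly (interpreting "lg" as log base 10 here; the choice of base rescales the penalty but doesn't change any rankings):

```python
import math

def brier_term(forecast, outcome):
    # The standard squared-error term
    return (forecast - outcome) ** 2

def log_penalty(forecast, outcome):
    # The proposed g(x) = -lg(1 - x), with x = |forecast - outcome|
    x = abs(forecast - outcome)
    return -math.log10(1 - x)

# Brier barely distinguishes 1% from 0.01% on an event that happens...
print(brier_term(0.01, 1), brier_term(0.0001, 1))    # ~0.98 vs. ~0.9998
# ...while the log penalty doubles, and heads to infinity as confidence
# in the wrong outcome approaches 1:
print(log_penalty(0.01, 1), log_penalty(0.0001, 1))  # ~2 vs. ~4
```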

Predictit is very negative-sum due to fees, and it works fine.

Positive sum sucks because it incentivizes spamming the most predictions without regard for accuracy.

Positive sum combined with bad formulas (see above) is what lets people get away with predicting 99% on questions that should be 96%. On a real money negative-sum market, you're not going to make much return by doing that. Predictit's incentives are such that 99.9% certainties often trade at 97 cents. I think negative-sum tends to under-estimate probabilities of 99% events, while positive-sum can over-estimate probabilities, but doesn't have to, if they fix the formulae (see above).

I have never seen a prediction market even try to distinguish between 0.1% and 0.01%. I agree this would be a useful ability to cultivate.

Betfair Exchange goes up to 1000, ie predicting 0.1% probability. Weirdly, it only goes down to 1.01 (99% probability). This is obviously meaningless when there are only two possible outcomes, but when there are more than two, then it means that very unlikely possibilities can be meaningfully predicted - while if one option has captured 99% of the probability space, then the market is effectively over anyway.

I mean you could use any proper scoring rule, right? Isn't what you're suggesting just the log score? Or am I missing something?

Proper just means the scoring rule is optimized by predicting the true probability of the event. Proper doesn't say anything about fairly penalizing incorrect probabilities.

Here are three scenarios:

Adam predicts 99.9% on 10 events, and 9 of them happen.

Bob predicts 50% on 10 events, and 5 of them happen

Carl predicts 90% on 10 events, and 8 of them happen

We can probably agree that Adam is the worst predictor in this bunch, then Carl, then Bob.

But according to Brier scores, Adam is the best.

If you divide Brier score by a reference Brier score, the ranking is correct, but the distance between Adam and Carl is smaller than the distance between Carl and Bob. This still feels wrong.

If you use logarithmic error term and divide the score by the reference logarithmic score, then you get the correct ranking plus correct distances. Bob gets 1, Carl gets 1.08, and Adam gets 2.12

I made a spreadsheet to demonstrate all these examples:
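The same examples can be checked in a few lines (assuming the commenter's setup: average score over 10 events, with the reference score defined as predicting the observed base rate on every event; there are small rounding differences from the quoted 1.08 and 2.12):

```python
import math

def brier(p, hits, n=10):
    # Mean squared error over n events, `hits` of which happened
    return (hits * (1 - p) ** 2 + (n - hits) * p ** 2) / n

def log_score(p, hits, n=10):
    # Mean negative log-likelihood over the same events
    return -(hits * math.log(p) + (n - hits) * math.log(1 - p)) / n

for name, p, hits in [("Adam", 0.999, 9), ("Bob", 0.5, 5), ("Carl", 0.9, 8)]:
    base_rate = hits / 10
    normalized = log_score(p, hits) / log_score(base_rate, hits)
    print(name, round(brier(p, hits), 4), round(normalized, 2))

# Adam 0.0998 2.13  <- lowest (i.e. "best") raw Brier despite being worst
# Bob  0.25   1.0
# Carl 0.17   1.09
```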

Now that regulators have given the green light to some real-money prediction markets in the U.S., do we think they'll start to incorporate that into policy decisions? I feel like a fully-regulated prediction market is the only sustainable way to create real, skin-in-the-game based information signals.

Alternatively/in addition to initial loans, is there a reason why these markets don't do some kind of dynamic margin depending on the implied probabilities of the market?

If I've bought 100 contracts at $0.90, then with ~90% probability I expect a $100 payout and a $10 profit. Given that my odds are so high, the market provider probably doesn't need me to put up $90 in collateral, right? At 'reasonable' 5x leverage I'd put up ~$20, at horrifying-crypto-exchange 50x leverage I'd put up $2.

I can see one issue, which is that if collateral requirements for long positions go down as the price goes up, then this makes it easier to go *even longer*; if the 'correct' price is lower, then the increased (or at least not-decreased) collateral requirements for short positions make it harder for the market to correct.
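For concreteness, the collateral arithmetic in the example above (assumed round numbers, ignoring the short-side question raised in the last paragraph):

```python
# Collateral for 100 contracts bought at $0.90, at various leverage levels
contracts, price = 100, 0.90
notional = contracts * price  # $90 tied up with no leverage

for leverage in (1, 5, 50):
    print(leverage, round(notional / leverage, 2))
# 1 -> 90.0, 5 -> 18.0 (the ~$20 above), 50 -> 1.8 (the ~$2 above)
```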

One reason why your book review markets are showing probabilities significantly higher than 44% for getting 125 likes is that Yes bettors are incentivized to add more likes.

Anyone who bet Yes would very likely add a like to such a post, and maybe get friends and family to as well (or create fake accounts to add likes, if they really want to win play-money).

This is the problem with using metrics — they often stop being useful when you condition action on them.

One solution is to loop in human judgment: Will Scott believe that this book review was a relative success 1 month after the article was published?

Or, on a scale from 0 to 100, how valuable will Scott judge this book review to be one month after posting?

Or, if you want to be really free-form, you can use our new free-response markets: What will the reader reaction be to this book review? Users would then submit text answers and bet on which answer best describes how the book review was received, and then Scott would choose one winner (or multiple winners). This kind of market could give qualitative descriptions that binary and scalar markets cannot.

This is only very remotely on topic, but one of the reasons I did not participate in any of the book review markets is that I think "number of likes" is not a very good measure of reader satisfaction. I'd say that the number of comments is a much better proxy.

Yes, I noticed that the Sadly Porn review had relatively few likes, but provoked a lot of discussion, and got many subreddit upvotes. I still don't really understand the dynamics here and one day I might try to track different posts' Substack-likes-to-Reddit-upvotes ratio and see if any patterns fall out.

Might be relevant to note that I was not aware there was an ability to "like" something on Substack until today.

Typo: If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does *Manifold* --> *Metaculus* allow this? They want to incentivize people to forecast.

We are also still thinking about how to get better predictions for long term markets, where you note that incentives are not-so-good, like for Dwayne Johnson's presidential bid.

We talked to Robin Hanson today, and he suggested creating 3 parallel currencies which are used for short term (<1 month), medium term (1 month - 1 year) and long term (1 year+) questions. He says the shorter term currencies would be able to trade in the longer term markets, but not vice versa.

I quite like this solution and think it would work. It's another example of a zero-sum reputation solution.

I sorta think that the loan thing solves for the long-term bets too - if I can bet without tying up any money, I don't need other currencies, as long as I'm willing to bet small.

Yeah, the loan thing is a good idea as well!

Real-money markets have the same issues with long-term markets. Professional bettors would rather bet on a short term market than lock their money up for any extended period.

The only time you get lots of money in a long-term market is when there is an option whose probability is rising fast. For instance, anyone who predicted that Obama would win the 2008 election prior to the 2004 Convention speech would make a huge profit, as the first probability surge came from that speech.

The only alternative I could think of would be paying interest to locked-up bets, but you'd either have to pay out at the end (which still locks the money up, but does increase the ROI), or you pay out interest on an unsettled prediction (based on the current market value of the prediction) and that incentivises things like "I will flip a coin in 2032, will it come up heads?" as that will stay at 50-50 until resolved and you can just grab the interest.

One thing that helps with real-money markets is the ability to have "trading bets", where you can cash out of the market at the current price. You get people effectively betting on /predicting the future market price rather than the actual result.

A typical example is that Presidential prediction markets are affected strongly on price by the results of primaries, so if there isn't a market on a specific primary, you can predict the primary by betting on the overall Presidential just before the primary and then take your profit after the winner's prices rise. But you can only do that if you can take two bets, on both sides of the question, and resolve those into a profit and get you out of the market.

One fairly obvious wrinkle to add to your loan strategy (which I love!) is that you can't transfer it without first paying off the loan. So if you get a M$10 loan and take a position, you can only make money by selling it for more than M$10. There would be some margin to be had there for folks who trawled around looking for questions and taking mispriced positions, but that would be highly useful! Also, you could fairly easily incorporate that into the ranking: a given user could have at least 3 distinct values by which they could be sorted in a leaderboard: current balance, current gain (abs | %age) over money put in, and a weird confidence interval-looking thing that showed how much they currently owe on loans and the value of their positions.

It's probably necessary if you have this option to make every user pay at least a little to get in - otherwise someone could make a bunch of sockpuppets, each of which would make a large number of stupid bets against their main, making it rich.

Yes, I like your analysis of the loans. We'd be somewhat concerned about people printing money with fake accounts, although they can kind of do that today by creating many new accounts that start with M$ 1000.

But overall it's an intriguing idea, and the incentives are pretty good since you can still lose money eventually if you bet the wrong way. Maybe Scott is onto something here.

I'd be tempted to get rid of the starting money (or at least tie it to something expensive-ish to make, like a CC, phone number, etc.), and then gate the loans with an actual small payment.

Regarding the leaderboard, I think it's important to remember that assessing a forecaster's performance based on a single number has the same problems as assessing an investor's performance based only on the amount of money made. You fall victim to a number of issues, like "did they make all of their money on a single high-payoff prediction that was mostly luck?" or "Were they good only for a brief period of time when there were markets on a specific political event?" or "are they a very high variance forecaster and they just happen to be on a lucky streak?"

You need more sophisticated analysis to answer how good a forecaster is: things like a Sharpe ratio of their profits, or a graph of their winnings over time to show when they were active or not, would be a start.

For less formal analysis, Kaggle's rating system might provide a good starting point.

One way we disaggregate performance is by category. You can see your performance for different subsets of markets by topic. E.g. the leaderboards for just the ACX 2022 prediction markets:

> they solve this by actually reputationalizing play money profits, which works

It partially works. The problem is that profits are also a function of the money you have. Suppose I have $1 and I know for certain that a market currently at 1% odds will actually come true. At best, I can end up with ~$100. If I have $1000 to start with, maybe there isn't enough liquidity for me to wind up with $100,000, but maybe there's enough liquidity for me to wind up with $3000.

Buying more money allows you to multiply any winnings, so it's not a good judge of how good of a predictor you are. I think a better judge would be to normalize your profit by the amount you've bought, but that unfortunately takes away Manifold's business model.
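The commenter's point can be made concrete with a sketch that ignores the liquidity limits they mention (all numbers hypothetical): raw profit scales with bankroll, while return-on-buy-in does not.

```python
def profit(bankroll, price=0.01, payout=1.0):
    # Buy shares at `price`; each pays `payout` if the market resolves YES.
    # Ignores liquidity limits and price impact.
    shares = bankroll / price
    return shares * payout - bankroll

for bankroll in (1, 1000):
    p = profit(bankroll)
    print(bankroll, round(p), round(p / bankroll))
# 1    -> profit    99, return 99x
# 1000 -> profit 99000, same 99x return
```

So a leaderboard sorted by raw profit partly measures bankroll (including purchased M$), while profit normalized by buy-in isolates the skill component.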

Yes -- Scott called out Robert McIntyre as #1 on the leaderboard, and (I hope he'd be okay with me saying that) he's purchased M$ before. This does make it easier to post higher total profits on the leaderboard, assuming he's 51% right or better.

This is something that's actually quite tricky to get right, and we'd welcome thoughts that others have on how to set up leaderboards that are meaningful and accurate. Specifically, we're hoping to run a prediction tournament for cash prizes sometime in the future, and we'd want to know how to fairly set that up (eg limit everyone to the same amount of initial buy-in?)

On the use of positive sum markets to incentivize voting -- there's probably some ideal tradeoff point between number of bids people make vs the accuracy of each guess (i.e. where they stand on the Susan-to-Randy spectrum) which maximizes the amount of real information entering the market. I don't know how Metaculus calculates payouts, but there's presumably also some parameter (or set of parameters) controlling how positive-sum the markets tend to be. This obviously gives Metaculus the ability to tune how generous payouts are in order to optimize user behavior (assuming that people, on average, resemble rational actors enough to vote more readily when payouts are more generous and vice versa.) Of note, they *wouldn't* have this ability if Metaculus bids were denominated in dollars or whatever, since then positive-sum markets simply drive them broke. So perhaps there is a way in which play money markets can be epistemically superior after all.

Of course, this same flexibility could be joined to the benefits of a real-money market by having bids be made in play money which can be exchanged for real money afterwards, at some rate related to the generosity of the payout system in a way that keeps the market operators financially afloat.

this is exactly what metaculus does. the ratio of "absolute points" to "relative points" approaches 0 as the number of forecasters increases.

It's worth pointing out that real-money markets are negative-sum (the market takes a vig).

Also, real-money markets need to get a certain amount of attention (a few thousand dollars) before they are making any sort of useful prediction, which heavily restricts the non-sports markets that are created.

There is far more money on a first round tennis match in an obscure tournament than on any but the biggest non-sporting markets.

This is partly because people like getting results quickly, and sports generate large numbers of results in a short period of time, where most non-sporting predictions require locking up money for long periods of time. In many cases, you have to get in relatively early, as a great many predictions become near-certainties (99-1 propositions) for quite a long time before they are finally resolved.

If you have to lock up money for a long time if you want to be making a meaningful prediction, then the ROI is much worse than sports - but the predictive power of the market would be much more valuable. I think this is a hard problem to solve; if you could earn a lot more money by studying the 30th-50th ranked tennis players and predicting their early round results when they play each other, then many smart superpredictors would be incentivised to do that rather than providing information useful to society.

I have a few observations about Manifold, which you may need to take with a grain of salt because I did manage to lose quite a bit of play money over the weekend.

Assuming I haven't misunderstood their Technical Guide, placing bets on Manifold is, in fact, (slightly) negative-sum. Of the bet pool, 4% goes to the market creator as a 'commission' and 1% is "burned" as a 'platform fee'.

In addition to not wanting to tie up their money for long periods, another reason for people not to correct markets is elucidated here: essentially, because your payout depends on the state of the market at resolution, not only on the state when you place your bid, you get less expected profit if the market moves in the correct direction as people get more information near when trading closes.

You're right on both counts! The platform fee is our attempt at fighting inflation (from users joining or buying M$), and also something we're testing for an eventual, hypothetical crypto use case.

We also take Kevin's criticisms fairly seriously, and are considering alternative market-making mechanisms such as LMSR; see the discussion here:

Note: 1% and 4% fees are on trader profits, not the bet pool. It is still technically negative sum though!

Sports books try to set a "line" so that half the bets are on one side and half on the other. Over time, the line changes to keep things that way, so it is a kind of prediction market. Since the book always takes a cut, the market is negative-sum for the bettors.

I am very interested to see how positive-sum systems resolve the issue of Randy-strategist bettors, because of the implications for what I think is the most interesting model: positive-sum for-profit prediction markets. The idea is simple: by offering a game where the house always loses, you incentivize people to forecast. Your actual business model will be making money off of having some knowledge of the future, as generated by the wisdom of crowds.

© 2022 Scott Alexander