It seems to me that betting so heavily on FTX and SBF was an avoidable failure. So what could we have done ex ante to avoid it?
You have to suggest things we could have actually done with the information we had. Some examples of information we had:
First, the best counterargument:
Then again, if we think we are better at spotting x-risks than these people, maybe this should make us update towards being worse at predicting things.
Also I know there is a temptation to wait until the dust settles, but I don't think that's right. We are a community with useful information-gathering technology. We are capable of discussing here.
Things we knew at the time
We knew that about half of Alameda left at one time. I'm pretty sure many of them are EAs or know EAs, and they would have had some sense of this.
We knew that SBF's wealth was a very high proportion of effective altruism's total wealth. And we ought to have known that something that took him down would be catastrophic to us.
This was Charles Dillon's take, but he tweets behind a locked account and gave me permission to tweet it.
Peter Wildeford noted the possible reputational risk 6 months ago:
We knew that corruption is possible and that large institutions need to work hard to avoid being coopted by bad actors.
Many people found crypto distasteful or felt that crypto could be a scam.
FTX's Chief Compliance Officer, Daniel S. Friedberg, had behaved fraudulently in the past. This is from August 2021.
In 2013, an audio recording surfaced that made mincemeat of UB’s original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill.
Discourage EA billionaires from being public advocates for EA since this makes them the face of EA and increases reputational risk. (Musk tweeted about EA once and seems like the face of EA to some people!)
I’ve seen numerous articles about SBF’s EA involvement and very few about Dustin Moskovitz, so I think Dustin Moskovitz made the right call here.
I tend to agree. That being said, it is also important to recognise that Musk being public about EA will eventually push people towards learning more about EA, which is good.
I don't think Musk talking about EA is clearly net-positive, lots of people are clearly put off of EA by the association.
We unreasonably narrowed our probability space. As Violet Hour says below, we somehow discounted this whole hypothesis without even considering it. Even a 1% chance of this massive scandal was worth someone thinking about it full-time. And we dropped the ball.
Upvoted, but I disagree with this framing.
I don't think our primary problem was with flawed probability assignments over some set of explicitly considered hypotheses. If I were to continue with the probabilistic metaphor, I'd be much more tempted to say that we erred in our formation of the practical hypothesis space — that is, the finite set of hypotheses that the EA community considered to be salient and worthy of extended discussion.
Afaik (and at least in my personal experience), very few EAs seemed to think the potential malfeasance of FTX was an important topic to discuss. Because the topics weren't salient, few people bothered to assign explicit probabilities. To me, the fact that we weren't focusing on the right topics, in some nebulous sense, is more concerning than the ways in which we erred in assigning probabilities to claims within our practical hypothesis space.
Nathan (or anyone who agrees with the comment above), can you even list all the things that, if they have even a 1% chance of failing, would be "worth someone thinking about full-time"?
Can you find at least 1 (or 3?) people who'd be competent at looking at these topics, and that you'd realistically remove from their current roles to look at the things-that-might-fail?
Saying you'd look specifically at the FTX-fraud possibility is hindsight bias. In reality, we'd have to look at many similar topics in advance, without knowing which of them is going to fail. Will you bite the bullet?
Fraud is actually a high-probability event. Over the last 20 years, with similar cases like Enron in 2001 and Lehman Brothers in 2008, traditional institutions (e.g. publicly held corporations or banks) have coped with this through governance boards and internal audit departments.
I think newer industries like crypto, AI, and space are slower to adopt these concepts, as their pioneers set up management teams that are unaware of this fraudulent history and of how corrupting the ability to acquire and hide or misappropriate money can be. I can understand why: startups won't spend extra money on governance and reviews, since these have no direct impact on the product or service, and as they get bigger they fail to realize the need to review the controls that worked when they were still small.
I would put fraud risk as high as 10%, and higher still if internal controls are weak or non-existent. Who checks SBF, or any of the funders? That would be a great thing to start reviewing going forward, in my opinion. The cost of any scandal is too large: it affects the EA community's ability to deliver its objectives.
All the best,
Yes, as Scott Alexander put it in the context of Covid, a failure, but not of prediction.
And if the risk was 10%, shouldn’t that have been the headline. "TEN PERCENT CHANCE THAT THERE IS ABOUT TO BE A PANDEMIC THAT DEVASTATES THE GLOBAL ECONOMY, KILLS HUNDREDS OF THOUSANDS OF PEOPLE, AND PREVENTS YOU FROM LEAVING YOUR HOUSE FOR MONTHS"? Isn’t that a better headline than Coronavirus panic sells as alarmist information spreads on social media? But that’s the headline you could have written if your odds were ten percent!
There have been calls for a proper whistleblowing system within EA. I guess if I'd had someone I could report to as there is for community issues, then perhaps I would have.
I made a prediction market, did a little investigating, but I didn't really see a way to do things with the little suspicions I had. That seems pretty damning.
This comment seems to support the idea that a whistleblowing system would've helped: https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1?commentId=NbevNWixq3bJMEW7b
The whistleblower system is a good idea. But internal audit reviews, conducted under the International Standards for the Professional Practice of Internal Auditing by an independent EA internal audit team, are still the best solution for ensuring the quality of EA procedures across all EA organizations.
I don't know how much effort it would be to set up a "proper" community-wide whistleblowing system across many organisations, but perhaps some low-hanging fruit would be CEA's community health team having a separate anonymous form specifically for anonymous suspicions of this kind that wouldn't require legal protection (as in your case? because you're not an FTX employee?).
Maybe the team already has an anonymous form but I couldn't find it from a quick look at the CEA website, so maybe it should be more prominent, and in any case I think it may be worth encouraging sharing this kind of info more explicitly now.
Make it clearer that EA is not just classical utilitarianism and that EA abides by moral rules like not breaking the law, not lying and not carrying out violence.
(Especially with short AI timelines and the close to infinite EV of x-risk mitigation, classical utilitarianism will sometimes endorse breaking some moral rules, but EA shouldn’t!)
You keep saying that classical utilitarianism combined with short timelines condones crime, but I don't think this is the case at all.
The standard utilitarian argument for adhering to various commonsense moral norms, such as norms against lying, stealing and killing, is that violating these norms would have disastrous consequences (much worse than you naively think), damaging your reputation and, in turn, your future ability to do good in the world. A moral perspective, such as the total view, for which the value at stake is much higher than previously believed, doesn't increase the utilitarian incentives for breaking such moral norms. Although the goodness you can realize by violating these norms is now much greater, the reputational costs are correspondingly large. As Hubinger reminds us in a recent post, "credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon." Thinking that you have a license to disregard these principles because the long-term future has astronomical value fails to appreciate that endangering your perceived trustworthiness will seriously compromise your ability to protect that valuable future.
EA is not just classical utilitarianism
classical utilitarianism will sometimes endorse breaking some moral rules
Classical utilitarianism endorses maximising total wellbeing. Whether this involves breaking "moral rules" or not is, in my view, an empirical question. For the vast majority of cases, not breaking "moral rules" leads to more total wellbeing than breaking them, but we should be open to the contrary. There are many examples of people breaking "moral rules" in some sense (e.g. lying) which are widely recognised as good deeds (widespread fraud is obviously not one such example).
A coordination mechanism around rumours.
I now hear that there were many rumours of bad behaviour at Alameda. If there are similar rumours around another EA org or an individual, how will we ensure they are taken seriously?
Had we spent $10M investigating stuff like this and it had found only this one case, it would have been worth it (ignoring second-order effects, which are hard to judge).
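To make the arithmetic behind this claim explicit, here is a rough expected-value sketch. Every number except the $10M investigation cost is an illustrative assumption I'm adding, not a figure from the thread:

```python
# Back-of-the-envelope expected-value check for funding an investigation.
# All probabilities and loss figures are illustrative placeholders.

p_scandal = 0.01            # assumed prior probability of a catastrophic funder scandal
loss_if_scandal = 5e9       # assumed loss (pledged funding + reputational damage), USD
p_detection = 0.5           # assumed chance a funded investigation catches it in time
investigation_cost = 10e6   # the $10M figure from the comment above

expected_benefit = p_scandal * p_detection * loss_if_scandal
net_ev = expected_benefit - investigation_cost

print(f"Expected benefit: ${expected_benefit:,.0f}")
print(f"Net EV of investigating: ${net_ev:,.0f}")
```

Under these (made-up) assumptions the investigation clears its cost with room to spare; the point is only that plausible inputs can make even an expensive investigation positive-EV.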
A good policy change should be at least general enough to prevent Ben Delo-style catastrophes too (a criminal crypto EA billionaire from a few years back.) Let's not overfit to this data point!
One takeaway I'm tossing about is adopting a much more skeptical posture towards the ultra-wealthy, by which I mean always having your guard up, even if they seem squeaky-clean (so far). With the ultra-wealthy/ultra-powerful, don't start from a position of trust and adjust downwards when you hear bad stories; do the opposite.
It can’t be right to cut all ties to the super-rich, for all the classic reasons (eg the feminist movement was sped up a lot by the funding which created the pill). Hundreds of thousands of people would die if we did this. Ambitious attempts to fix the world’s pressing problems depends on having resources.
But there are plenty of moderate options, too.
Here are three ways I think learning to mistrust the ultra-wealthy could help matters.
A general posture of skepticism towards power might have helped whenever someone raised qualms about FTX, or SBF himself, such that the warnings wouldn't have fallen on deaf ears and could be pieced together over time.
It would also motivate infrastructure to protect EA against SBFs and Delos: for example, ensuring good governance, and whistleblowing mechanisms, or asking billionaires to donate their wealth to separate entities now in advance of grants being allocated, so we know they're for real / the commitment is good. (Each of these mechanisms might not be a good idea, I don't know, I just mean that the general category probably has useful things in it.)
It would also imply 'stop valorizing them'. If we hadn't got to the point where I have a sticker by my bed reading 'What Would SBF Do?' (from EAG SF 2022*), EAs like me could hold our heads higher (I should probably remove that). Other ways to valorize billionaires include inviting them to events, podcasts, parties, working for them, and doing free PR for them. Let's think twice about that next time.
Not saying 'stop talking to Vitalik', but: the alliance with billionaires has been very cosy, and perhaps it should cool, or become uneasy.
The above is strongly expressed, but I'm mainly trying this view on for size.
*produced by a random attendee rather than the EAG team
My current understanding is that Ben Delo was convicted over failure to implement a particular regulation that strikes me as not particularly tied to the goodness of humanity; I know very little about him one way or the other, but I do not currently have him in my "poor moral standing" column on the basis of what I've heard.
Obvious potential conflicts of interest are obvious: MIRI received OP funding at one point that was passed through by way of Ben Delo. I don't recall meeting or speaking with Ben Delo at any length, and I am not making a positive endorsement of him.
Hedge on reputation
Even if we took SBF's money at the time we didn't need to double down by connecting ourselves to his reputation. He was just a man. In this way we reinvested capital in something we were already overinvested in.
(This is similar to freedom and utility's point)
We were narrow-minded in the kinds of catastrophic risks to look at and so some high-probability risks were ignored because they weren't the right type.
I sense that far more people are happy to take on the burden of learning about AI risk than about financial risk. I've heard many people trying to upskill in AI. I've never heard someone say they wanted to learn more about FTX's finances to avoid a catastrophic meta-risk.
Well, obvious thought in hindsight #1: Manifold scandal prediction markets with anonymous trading and rolling expiration dates.
I don't know how else you'd aggregate a lot of possible opinions that people possibly might not be incentivized to say openly, into a useful probability estimate that anyone could actually act on.
I have been suggesting this for like a year, but everyone I talked to told me it would cause problems so I wasn't confident enough to do it unilaterally.
That's no longer the case.
Did people in the general EA polysphere know about the FTX polycule environment, and in particular that the CEOs of FTX and Alameda were sleeping together on a regular basis? If so, this probably should have raised a lot of people's estimates of financial misconduct. I wouldn't expect EAs to be especially savvy as to all the weird arbitrage trades in the crypto space and how they could go wrong, so missing that is very forgivable, but I would expect high-level EAs to be extremely aware of who is fucking who.
Normally I would downvote a claim without evidence. But if this is true, it would help explain why FTX bailed out Alameda using customer funds. And it would have important implications for how EA views conflicts of interest from romantic relationships in other areas.
AI Safety has plenty of romantic relationships between important people. To name just one relationship that has been made fully public and transparent, Holden Karnofsky is married to Daniella Amodei, President of Anthropic and sister to Dario Amodei, an advisor to OpenPhil on AI Safety. I think... (read more)
Downvoted for several reasons: because I would expect colleagues in any work environment to hook up, because I think it's very unkind to assume sexual relations in the workplace are indicative of a problem, and because I'm against outing people's sex lives unless directly relevant to a scandal. And finally, because it seems unnecessary to mention polyamory when talking about two people hooking up.
(Retracted after more consideration. I still disagree with the wording of the comment I responded to but can now see it points towards a real problem)
This would be reportable and disqualifying in a lot of industries. For instance, if a prosecutor is having (or recently had) a sexual relationship with a defense attorney, it would be wildly inappropriate for them to be on the same case. It is impossible to maintain objectivity in that kind of dual relationship. A foundation employee should not be evaluating a grant proposal from someone they are sleeping with. And so on. So one doesn't even have to consider the possibility of fraud to realize how improper this would have been if true.
To generalize your earlier comment, and probably to sound older than I am, it sounds like FTX/Alameda was not at all run in a professionally appropriate or "grown up" manner. How aware was the community of that characteristic more generally?
I sense there wasn't an FTX polycule. Though disagree-vote if you think there was.
I do not know whether there was a polycule environment in FTX. However, if there was and such environment is materially correlated with financial misconduct, I think it is legitimate to use that as evidence for financial misconduct.
Set a higher bar for what good business practices are
By lionising someone involved in ponzi schemes and who handled money they should have known came from crime, we sent a credible signal that it was okay to break norms of morality as long as one didn't get caught.
It should not be entirely surprising they went on to commit fraud.
What if there were a norm in EA of not accepting large amounts of funding unless a third-party auditor of some sort has done a thorough review of the funder's finances and found them to be above-board? Obviously there are lots of variables in this proposal, but I think something like this is plausibly good, and I would be interested to hear pushback.
I don't know much about how this all works but how relevant do you think this point is?
If Sequoia Capital can get fooled - presumably after more due diligence and apparent access to books than you could possibly have gotten while dealing with the charitable arm of FTX FF that was itself almost certainly in the dark - then there is no reasonable way you could have known.
[Edit: I don't think the OP had included the Eliezer tweet in the question when I originally posted this. My point is basically already covered in the OP now.]
It's a relevant point but I think we can reasonably expect EA leadership to do better at vetting megadonors than Sequoia due to (a) more context on the situation, e.g. EAs should have known more about SBF's past than Sequoia and/or could have found it out more easily via social and professional connections (b) more incentive to avoid downside risks, e.g. the SBF blowup matters a lot more for EA's reputation than Sequoia's.
To be clear, this does not apply to charities receiving money from FTXFF, that is a separate question from EA leadership.
I disagree with this. I think we should receive money from basically arbitrary sources, but I think that money should not come with associated status and reputation from within the community. If an old mafia boss wants to buy malaria nets, I think it's much better if they can than if they cannot.
I think the key thing that went wrong was that in addition to Sam giving us money and receiving charitable efforts in return, he also received a lot of status and in many ways became one of the central faces of the EA community, and I think that was quite bad. I think we should have pushed back hard when Sam started being heavily associated with EA (and e.g. I think we should have not invited him to things like coordination forum, or had him speak at lots of EA events, etc.)
I have been quietly thinking "this is crypto money and it could vanish anytime." But I never said it out loud, because I knew people like Eliezer would say the kind of thing Eliezer said in the tweet above: "you're no expert, people way deeper into this stuff than you are putting their life savings in FTX, trust the market." It's a strangely inconsistent point of view from Eliezer in particular, who's expressed that his "faith has been shaken" in the EMH.
What Eliezer's ignoring in his tweet here is that the people who were skeptical of FTX, or crypto generally, mostly just didn't invest, and thus had no particular incentive to scrutinize FTX for wrongdoing. As it turns out, the only people looking closely enough at FTX were their rivals, who may have been doing this strategically in order to exploit vulnerabilities, and thus were incentivized not to spread this information until they were ready to trigger catastrophe. If there's money in scrutinizing a company, there's no money in releasing that information until after you've profited from it.
In my opinion, we need dedicated risk management for the EA community. The express purpose of risk management would be to start with the assumption that markets are not efficient, to brainstorm all the hazards we might face, without a requirement to be rigorously quantitative, to try and prioritize them according to severity and risk, and figure out strategies to mitigate these risks. And to be rude about it.
I think this does point to a serious failure mode within EA. Deference to leadership + insistence on quantitative models + norms of collegiality + lack of formal risk assessment + altruistic focus on other people's problems -> systemic risk of being catastrophically blindsided more than once.
I have been quietly thinking "this is crypto money and it could vanish anytime."
Yeah but that's different from fraud. I think a 99% crypto crash would've been a lot easier to handle. (While still being very disruptive for EA.)
I feel like the OP ("How could we have avoided this?") is less about the collapse of pledged funding and more about fraud.
Let me answer prospectively:
To vet charity destinations, we have GiveWell. Individual donors don't need to do their own research; they can consult GiveWell's recommendations, which include extensive, publicly documented deep dives into those organizations that make fraud overwhelmingly unlikely.
This experience shows that we need something analogous to vet charity sources, ReceiveWell, say. Individual recipients shouldn't need to do their own research; they would consult ReceiveWell's ratings of the funders in question, which would include extensive, publicly documented deep dives into the sources and risks associated with each of them.
Make it clearer that earning to give shouldn’t involve straightforward EV maximisation because charities / NGOs benefit from certainty and stability, and you shouldn’t make decisions which have a high risk of losing money which has already been promised to charities even if the decision is positive EV.
I agree certainty and stability are quite relevant, but these can be integrated into expected value thinking. So I would say "straightforward money [not EV] maximisation", as "value" should account for the risk of the assets.
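One way to make "value should account for risk" concrete is to run the numbers with a concave utility function. Here is a minimal sketch; the log-utility choice and all payoff figures are my own illustrative assumptions:

```python
import math

bankroll = 100.0  # promised funding, in $M (illustrative)

# A risky bet: 50% chance the funds triple, 50% chance they fall to $20M.
p_win, win, lose = 0.5, 300.0, 20.0

# In raw money terms, the bet has positive expected value (160 > 100).
expected_money = p_win * win + (1 - p_win) * lose

# With concave (log) utility — one way to model a charity valuing stability —
# the same bet has lower expected utility than holding the bankroll.
eu_bet = p_win * math.log(win) + (1 - p_win) * math.log(lose)
eu_hold = math.log(bankroll)

print(expected_money > bankroll)  # straightforward money maximisation says "take it"
print(eu_bet < eu_hold)           # risk-adjusted value says "decline"
```

The point is that "integrate stability into expected value thinking" isn't hand-waving: a concave utility function mechanically converts a positive-EV gamble into a negative-expected-utility one.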
Make it clearer that EA is not just about maximising expected value since maximising EV will sometimes involve dishonesty and illegality.
I agree EA is not just about maximising expected value, but I think that is a great principle. Connecting it to dishonesty and illegality seems pretty bad. Moreover, there are some rare examples where illegal actions are recognised as good, and one should be open to such cases.
I've commented on a separate post here.
In short: I'm not sure EA could have prevented or predicted this particular event with FTX blowing up.
However, EA did know that its funding sources were very undiversified and volatile, and could have thought more about the risks of an expected source of funding drying up and how to mitigate those risks.
Have a place to store concerns about stuff like this publicly.
There were lots of people with different information. We could have put it all in the FTX forum article.
Encourage more EAs to form for-profit start-ups and aim to diversify donors - lots of multi-millionaires > 2 billionaires
I think I'd aim for just a lot more billionaires but yes, your point stands.
There was no avoiding of this.
The only way would've been with:
- A professional third-party audit, which requires
- Regulatory norms for avoiding frauds such as these.
- Forbidding SBF from associating himself with EA.
The first option is a non-starter because there are no such crypto regulations.
I don't know about the finances of FTX, but I do know about audits. Auditors work based on norms, and those norms get better at predicting new disasters by learning from previous disasters (such as Enron). The same is true of air travel, one of "the safest ways of travel".
The second option is also a non-starter because of ... ¿The First Amendment?
Therefore, we couldn't have "avoided" ... What exactly?
- ... SBF's fraud? or
- ... SBF harming the EA "brand"?
If what we want is to avoid future fraud in the Crypto community, then the goal of the Crypto community should be to replicate the air travel model for air safety:
- Strong (and voluntary) inter-institutional cooperation, and
- A post-mortem of every single disaster in order to incorporate not regulation but "best practices".
However, if the goal is to avoid harming the EA "brand", then there's a profession for that. It's called "Public Relations".
PR is also the reason why big companies have rules that prevent them from (publicly) doing business with people suspected of illegal activities. ("Caesar's wife must not only be pure, but also be free of any suspicion of impurity.")
For example, EA institutions could from now on:
- Copy GiveDirectly's approach and avoid any single donor from representing more than 50% of their income.
- Perhaps increase or decrease that percentage, depending on the impact in SBF's supported charities.
- Reject money that comes from Tax Havens.
- FTX was a business based mainly in The Bahamas.
- I don't know the quality of the Bahamian standard for financial audits. In fact, I don't even know if they demand financial audits at all... but I know that The Bahamas is sometimes classed as a Tax Haven, and it is more likely that we will find criminals and frauds with money in Tax Havens than outside of them.
- Incentivize their own supported charities to reject dependence on a single donor, and to reject money that comes from Tax Havens.
- Perhaps also...
- ... Campaign against Tax Havens?
- Tax Havens crowd out money that would otherwise be given to tax-deductible charities, and therefore to EA Charities.
- There is an economic benefit to some of the citizens of the Tax Haven countries, but when weighted against the criminal conduct that they enable... are they truly more good than bad?
- ... Create a certification for NGOs to be considered "EA"?
- Most people know that some causes (Malaria treatments, Deworming...) are well-known EA causes.
- They are causes that attract millions of dollars in funding.
- Since there is no certification for NGOs to use the name "EA", a fraudster-in-waiting can just
- Start a new NGO tomorrow.
- "Brand" itself as an EA charity
- See the donations begin to pour in, and
- Commit fraud in a new manner that avoids existing regulation
- Give the news cycle an exciting new story, and the EA community another sleepless night.
In fact, fraud in NGOs happens all the time. One of the reasons the Against Malaria Foundation had trouble implementing its first GiveWell-recommended distributions is that it was too stringent about anti-corruption requirements for governments.
It's in the direct interest of the EA community to minimize the number of fraudulent NGOs, and especially the number of EA-branded fraudulent NGOs.
My guess is that the only way alarms would've rung for FTX investors was by realizing that there was a "back door" through which somebody could just steal all the money.
Would a financial auditor have looked into that, the programming aspect of the business?
I doubt it, unless specific norms were in place to look for just that.
In fact, at the same time we're discussing this, a tragic air travel accident happened at a Dallas airshow. Our efforts might be more "effective" if we stopped discussing SBF and instead discussed airshow safety and its morality.
The second option is also a non-starter because of ... ¿The First Amendment?
Disagree on two counts. Firstly, EA is a trademark belonging to CEA. Secondly, the association with SBF was bilateral - EA lauded SBF, not just vice versa.
We can take billionaires’ money without palling up.
We could cultivate for-profit entrepreneurship in fields with clearer social benefit. I've seen arguments that people can make a bigger impact by focusing on social impact or profitability rather than trying to do both, and I think that's true on the margin for most people, but embracing crypto may have overcorrected.
A few thoughts on how to avoid this:
- Focus on rule utilitarianism, and stop normalizing unilateral actions that have a chance of burning the commons, like lying or stealing. I wouldn't go as far as freedomandutility would, but we need to be clear that lying and stealing are usually bad and shouldn't be normal.
Speaking of that, that leads to my next suggestion:
- We need to have a whistleblower hotline.
Specifically, we need to make sure that we catch unethical behavior early, before things get big. Here's the thing: a lot of big failures are preceded by smaller failures, as Dan Luu says.
Here's a link:
We really need to move beyond the model of EA posts being a whistleblower's main place, even in the best case scenario.
And 3. We should be much more risk-averse about actions that involve norm violations, because they usually have negative expected value. One of SBF's flaws (and partly EA's) was not realizing what the space of outcomes could be.
It is important to distinguish different types of risk-taking here: (1) risk-taking that promises high payoffs but with a high chance of the bet falling to zero, without violating commonsense ethical norms; (2) risk-taking in the sense of being willing to secretly violate ethical norms to get more money. One flaw in SBF's thinking seemed to be that risk-neutral altruists should take big risks because the returns can only fall to zero. In fact, the returns can go negative: e.g. all the people he has stiffed, and all of the damage he has done to EA.
Yeah, the idea that everything would fall to zero only makes sense if we take a very narrow view of who was harmed. Once we include depositors, EAs and more, it turned negative value.
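The difference between the two accountings can be shown with toy numbers (all invented for illustration):

```python
# Toy comparison of the same bet under two accountings (all numbers invented).
p_win = 0.25
payoff_win = 100.0   # good done if the gamble pays off (arbitrary units)

# Naive view: the worst case "falls to zero", so the bet looks worth taking.
ev_naive = p_win * payoff_win + (1 - p_win) * 0.0

# Fuller view: the bust branch carries real harms — stiffed depositors,
# damage to EA's reputation, chilled future giving.
harm_if_bust = -40.0
ev_full = p_win * payoff_win + (1 - p_win) * harm_if_bust

print(ev_naive, ev_full)  # 25.0 -5.0
```

Same bet, same probabilities: whether it looks positive-EV depends entirely on whether the downside is floored at zero or allowed to go negative.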
If we had a failure of "this was predictable", then I'd point some of my blame at how EA hires people, and specifically our tendency not to hire very senior people who can do such very complicated things.
In any case, I think this is a mistake we are making as a movement.
I expect the best output we could reasonably hope for from any improved detection system would be relatively modest. For example: "several community members have come forward with specific allegations of past serious misbehavior by megadonor X, and so we estimate that there is a 20% chance that X's company will be revealed as (or end up committing) massive fraud in the next ten years." If someone has strongly probative evidence of fraud, that person should not be going to an outfit set up by the EA community with that information . . . they should be going to the appropriate authorities.
Let's say a detection system had discerned a 20% chance of significant fraud by SBF -- this seems at least several times better than the performance of organizations with better access to FTX's internal accounting and lots of resources/motivation. What then? Does the community turn down any FTX-related money, even though there is an 80% chance there is nothing scandalous about FTX? How does that get communicated in a decentralized community where everyone makes their own decisions about who to accept funding from?
And how is that communicated -- especially in a PR/optics fashion -- in a way that doesn't create a serious risk of slander/libel liability? "We think Megadonor X poses an unacceptable risk of causing grave reputational harm to the community" sure sounds like an opinion based on undisclosed facts, which is a potentially slanderous form of opinion even in the free-speech friendly USA.
It was widely known that crypto-linked assets are inherently volatile and can disappear in a flash, so while better intel on SBF would have better informed the odds of catastrophic funding loss it was not necessary to understand that this risk existed.
All that is to say that the better approach might be more focused on what healthcare workers would call universal precautions than on attempting to identify the higher-risk individuals. Wear gloves with all patients. Always "hedge on reputation" as Nathan put it below.