Epistemics (Part 7: Authority)

EA mistakes value-alignment and seniority for expertise and neglects the value of impartial peer-review. Many EA positions perceived as “solid” are derived from informal shallow-dive blogposts by prominent EAs with little to no relevant training, and clash with consensus positions in the relevant scientific communities. Expertise is appealed to unevenly to justify pre-decided positions … This is a worrying tendency, given that these works commonly do not engage with major areas of scholarship on the topics that they focus on, ignore work attempting to answer similar questions, nor consult with relevant experts, and in many instances use methods and/or come to conclusions that would be considered fringe within the relevant fields.

ConcernedEAs, “Doing EA Better”


1. Introduction

This is Part 7 in my series on epistemics: practices that shape knowledge, belief and opinion within a community. In this series, I focus on areas where community epistemics could be productively improved.

Part 1 introduced the series and briefly discussed the role of funding, publication practices, expertise and deference within the effective altruist ecosystem.

Part 2 discussed the role of examples within discourse by effective altruists, focusing on the cases of Aum Shinrikyo and the Biological Weapons Convention.

Part 3 looked at the role of peer review within the effective altruism movement.

Part 4 looked at the declining role of cost-effectiveness analysis within the effective altruism movement. Part 5 continued that discussion by explaining the value of cost-effectiveness analysis.

Part 6 looked at instances of extraordinary claims being made on the basis of less than extraordinary evidence.

Today’s post looks at the role of authority within the effective altruist ecosystem. While short-termist effective altruists have often shown significant respect for scientific, scholarly, and journalistic authority, the turn to longtermism has brought with it an increasing disregard for these forms of authority when they threaten contentions or activities within the movement.

I structure my discussion around attitudes towards scientific authority (Section 2), scholarly authority (Section 3) and journalistic authority (Section 4). Section 5 concludes.

2. Scientific authority

Many effective altruists and rationalists hold positions that are strongly rejected by the scientific community. Most obviously, we have seen in my series on Human biodiversity and Belonging that a number of community members platform, and often outright endorse or propound forms of race science that are roundly rejected by scientific experts. Often, as in the case of Manifest, they invite pundits, bloggers, and other non-expert speakers to lend a shred of legitimacy to arguments for which they can find few if any legitimate scientific backers.

Likewise, we saw in my series on Biorisk that biosecurity experts are resoundingly skeptical of claimed levels of existential risk. We saw that effective altruists have produced little in the way of credible evidence to support their estimates, and that many of the arguments provided largely repeat facts familiar to the expert community.

There are other places in which effective altruists have gone to war with science. Most prominently, the self-styled decision theorist Eliezer Yudkowsky has spent years pushing versions of “functional” and “timeless” decision theory, which have had a great impact on the rationalist and effective altruist communities. Academic decision theorists have largely struggled to make heads or tails of Yudkowsky’s writings on the subject, and have generally suggested that Yudkowsky might want to finish high school before going to war with an entire academic field.

For example, Dave Chalmers sent Yudkowsky’s work to five academic decision theorists. The results were not encouraging. Chalmers writes:

I sent it to five of the world’s leading decision theorists. Those who I heard back from clearly hadn’t grasped the main idea. Given the people involved, I think this indicates that the paper isn’t a sufficiently clear statement.

Probably the most sympathetic response comes from the excellent decision theorist Kenny Easwaran (though Easwaran is at best lukewarm, and is also a kinder soul than I):

There are definitely some people who are paying some attention to it. It’s still not a mainstream view, but I believe among analytic philosophers working in decision theory, people who got their PhD more than about ten years ago tend to be causal decision theorists (with a few evidential decision theorists mixed in), while people who got their PhD in the last ten years or so are more likely to accept some kind of one-boxing response.

I think the Yudkowski/MIRI approach is still pretty unclear to lots of analytic philosophers (partly because it involves a bunch of unspoken presuppositions about agents being identified with Turing machines, and partly because the most natural explanation relies on some obscure intuitions about which things would be true if some logically impossible things were true).

It’s hard to tell how much is completely new, and how much is reinvention of various other ideas (also unclear and unpopular) by Hofstadter (as mentioned), or by Ned McClennen (under the term “resolute choice”) and David Gauthier. And even in conversation with various MIRI people, I still haven’t gotten a clear answer as to what it says one should do when faced with a clear example of a predictor that is empirically fairly reliable at predicting, but where we have no idea of the mechanism for how it does the prediction.

There has, to be fair, been one paper published on a version of the theory in a leading journal, but given the incredible investment of resources into promoting the theory, I think it is safe to say that the reception has been considerably cooler than one would expect for a better-constructed and better-supported view.
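For readers unfamiliar with the one-boxing dispute that Easwaran mentions, the toy simulation below may help to fix ideas. It is my own illustration, using a hypothetical 90% accurate predictor and made-up payoffs; it is not a rendering of functional or timeless decision theory, only of the familiar Newcomb setup in which agents who reliably one-box tend to walk away richer, while causal decision theorists reason that taking both boxes dominates once the prediction has been made.

```python
# Toy Newcomb-style simulation (illustrative only; the payoffs and the 90%
# predictor accuracy are made-up assumptions, not anyone's actual model).
import random

def newcomb_payoff(strategy, predictor_accuracy=0.9):
    """Payoff for 'one-box' or 'two-box' against a predictor that
    anticipates the agent's actual choice with the given accuracy."""
    if strategy == "one-box":
        predicted_one_box = random.random() < predictor_accuracy
    else:
        predicted_one_box = random.random() > predictor_accuracy
    opaque_box = 1_000_000 if predicted_one_box else 0  # filled only if one-boxing was predicted
    transparent_box = 1_000                              # always contains $1,000
    return opaque_box if strategy == "one-box" else opaque_box + transparent_box

def average_payoff(strategy, trials=100_000):
    return sum(newcomb_payoff(strategy) for _ in range(trials)) / trials

random.seed(0)
print("one-box :", round(average_payoff("one-box")))  # roughly 900,000
print("two-box :", round(average_payoff("two-box")))  # roughly 101,000
```

Nothing in this toy example settles the academic dispute; it only illustrates why the disagreement between causal decision theorists and one-boxers that Easwaran describes is substantive, and why clarity about the underlying assumptions matters.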

We also saw in my series on epistemics that sometimes, effective altruists and especially rationalists go far beyond the scientific consensus. We saw, for example, in Part 6 of that series that Robin Hanson argues for moderate or high credence in the genuineness of UFO sightings, and that Eliezer Yudkowsky took his perusal of economics blogs as sufficient evidence to claim that the Bank of Japan was leaving trillions of dollars on the table due to poor monetary policy. We also saw that even the blogger whom Yudkowsky was reading threw Yudkowsky under the bus here.

There are, to be fair, domains within which effective altruists show consistent and strong respect for scientific authority. This is especially true within short-termist work such as global health and development where effective altruists have been, if anything, criticized for showing too much deference to RCTs and other mainstays of scientific rigor. But this does not change the fact that there are many places in which effective altruists and rationalists, often of a longtermist bent, show insufficient respect for scientific authority.

3. Scholarly authority

Alongside challenges to scientific authority, there are more general challenges to scholarly authority within the movement. We saw these challenges most directly in Part 3 of my series on epistemics, which focuses on the role of peer review within effective altruism.

That post opened with a complaint by a group of concerned effective altruists, who wrote:

EA shows a pattern of prioritising non-peer-reviewed publications – often shallow-dive blogposts – by prominent EAs with little to no relevant expertise … This is a worrying tendency, given that these works commonly do not engage with major areas of scholarship on the topics that they focus on, ignore work attempting to answer similar questions, nor consult with relevant experts, and in many instances use methods and/or come to conclusions that would be considered fringe within the relevant fields. … The fact remains that these posts are simply no substitute for rigorous studies subject to peer review (or genuinely equivalent processes) by domain-experts external to the EA community.

We saw that although some researchers and institutions within the effective altruist ecosystem show an admirable dedication to producing high-quality, peer-reviewed scholarship, many fall considerably short of this bar. And we saw that precisely this concern lies behind skepticism about the cost-effectiveness of research destined to end its life as a forum post or an online report on a charitable foundation’s website.

For example, we saw that a critique of the AI research organization Conjecture (which has received a number of longtermist grants) complained that:

We believe most of Conjecture’s publicly available research to date is low-quality … We think the bar of a workshop research paper is appropriate because it has a lower bar for novelty while still having it’s own technical research. We don’t think Conjecture’s research (combined) would meet this bar.

and suggested:

We recommend Conjecture focus more on developing empirically testable theories, and also suggest they introduce an internal peer-review process to evaluate the rigor of work prior to publicly disseminating their results.

For their part, Conjecture’s leadership declared that the document was a “hit piece” and that they would not engage in further defense of their work.

I was heartened to receive a number of comments on my post about peer review from effective altruists who were likewise concerned about the diminished role of this essential scholarly institution within the movement.

There are also, to be fair, important and reasonable forms of moderate skepticism about some institutional features of academia. For example, Holden Karnofsky expressed the view on the 80,000 Hours podcast that some intellectually important topics may not be a good fit for the methods and institutions of academia. And a recent LessWrong discussion of “Some (problematic) aesthetics of what constitutes good work in academia” rightly notes that academics often optimize for intellectual criteria such as novelty and rigor over practical criteria such as societal importance.

As someone trying (and often failing) to publish papers on AI safety, I can sympathize with this critique. Journal editors who believe that it would carry great societal benefit to respond to arguments about existential risk from artificial agents do not hesitate to reject a paper on these topics if they believe that the intellectual merit of the discussion is limited. They are just doing their jobs, though sometimes I wish their jobs were conceived in a broader way.

But many within the effective altruist and rationalist movements increasingly exhibit broad skepticism about academia and academic research. For example, a reasonably well-liked LessWrong discussion concludes:

The reason science sucks so much these days is that mainstream science has been “captured” to serve as the intent-obscuring bureaucracy of a set of major organizations … “science” is now dominated by the group with the lowest level of development, the Clueless, which can only bode poorly. This is in line with my more general feeling that we should expect most scientific progress to occur outside the academy.

Likewise, in a recent EA Forum discussion of my paper “Against the singularity hypothesis,” the director of Epoch AI showed up to offer as counter-evidence a report on the economic singularity produced by Epoch. I suggested that academic economists are resoundingly skeptical of the economic singularity, and that if they wished to advance the state of research on this question it would probably be a good idea to hire researchers with doctoral degrees in economics and to publish reports in reputable journals instead of on the organization’s website. I was told, in response, that Epoch would not be making sustained attempts to engage with academic economists, because the economists would never believe them anyway.

I brought up this example in a presentation of the same paper to the AI safety reading group (start around 1:05:00). What followed was several minutes of a group member ardently insisting that academic research has little value or reliability.

A more moderate form of this kind of attitude is implicit in much discourse in the effective altruist and rationalist communities. For example, there was recently a discussion on the EA Forum about a paper by my former colleague at the Global Priorities Institute, Teru Thomas, entitled “Dispelling the anthropic shadow.” Thomas is not simply an Oxford don, but a well-known super-genius and perfectionist: his papers are impeccable, if not field-defining, and reflect hundreds if not thousands of hours of careful thought.

Oliver Habryka, the director of Lightcone Infrastructure (parent company of LessWrong), showed up to complain:

Is this article different than these other existing critiques of Anthropic shadow arguments? 

https://forum.effectivealtruism.org/posts/A47EWTS6oBKLqxBpw/against-anthropic-shadow

https://www.lesswrong.com/posts/EScmxJAHeJY5cjzAj/ssa-rejects-anthropic-shadow-too

They both are a lot clearer to me (in-particular, they both make their assumptions about anthropic reasoning explicit), though that might just be a preference for the LessWrong/EA Forum style instead of academic philosophy.

And Scott Alexander wrote:

I’m having trouble understanding this. The part that comes closest to making sense to me is this summary:

The fact that life has survived so long is evidence that the rate of potentially omnicidal events is low…[this and the anthropic shadow effect] cancel out, so that overall the historical record provides evidence for a true rate close to the observed rate.

Are they just applying https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal to anthropic shadow without using any of the relevant terms, or is it something else I can’t quite get?

Both authors evince little evidence of having read and understood the paper in question. They suggest, apparently in earnest, that it is exceeded by forum posts or existing arguments as summarized by Wikipedia articles.

To his credit, Alexander (though not Habryka) does listen patiently to an explanation of the paper and acknowledge that there is a contribution not captured by his favorite forum posts. This is certainly a way of mitigating the damage, though it is disheartening to see this kind of casual skepticism and limited engagement with serious scholarship from some of the most influential individuals within the rationalist community. One would also hope that those who see themselves as at times engaging critically with academic research would show a greater ability to read and understand it before responding.

4. Journalistic authority

There is no requirement to trust journalists on all matters. Many journalists write rather quickly. Some write for low-quality publications. Others have ventured substantially outside of their area of expertise. Some have few morals, low standards, and a sensationalist approach better suited for tabloids than for serious publications.

However, even today there remains a market for serious, deeply researched journalism at the highest levels, carried out by journalists with substantial expertise in the subject on which they are reporting. This type of journalism deserves to be taken seriously.

Effective altruists often do not take reputable journalism seriously when that journalism is critical of the effective altruism movement. This is particularly true when the reporting in question touches on racism, sexism, and other behavioral problems within the movement.

In Part 2 of my series on Human Biodiversity, we discussed the platforming of scientific racists at Manifest 2023 and Manifest 2024. We saw, in particular, that the latest round of criticism was driven by reporting at The Guardian by Jason Wilson and Ali Winston.

Jason Wilson is an investigative journalist who has repeatedly covered the far right for The Guardian. This year alone, Wilson has penned dozens of articles on far-right influence in politics, tech, academia, the military, and other key societal institutions. One would think that Wilson would be treated as an expert on the far right, and certainly as an expert on many of the figures invited to Manifest, whom he has covered extensively.

We saw that the reaction of effective altruists was rather different. I wrote:

A leading effective altruist blogger calls the Guardian article a “ridiculous hit piece,” which links to the “dishonest New York Times smear piece criticizing Scott” Alexander.

One EA-adjacent reader penned an article for no less than Quillette in defense of Manifest 2024. This article called the Guardian article an exercise in “truly lamentable journalism, replete with factual errors, misrepresentations of key people and ideas, and a general attitude of haughty contempt that seeks to denounce rather than understand and explain.” It complained of the article’s “shabbiness” and held that “the authors seem to be entirely unfamiliar with the subculture about which they are writing” …

Oliver Habryka, a leading rationalist and head of Lightcone Infrastructure (which owns the campus that hosted Manifest 2024), complained that “A recent Guardian article about events hosted at our conference venue Lighthaven is full of simple factual inaccuracies.”

Echoing Habryka, journalist Kelsey Piper complains that “Wow. The wacky Guardian piece is badly wrong about many things, but mostly things for which I wouldn’t expect a correction… but this is way past that.”

Here effective altruists and their allies seized on a few genuine, though non-essential factual errors in the piece to ignore its message, painting reporting in a leading international publication by a leading expert as a simple hit piece. We saw in Part 2 of my series on belonging that Wilson and Winston were largely correct in their core concerns about Manifest.

In Part 4 of my series on Belonging, we discussed a TIME Magazine investigation into sexual harassment in the effective altruism movement. The author, Charlotte Alter, is a senior correspondent at TIME who has penned at least two deeply reported articles on developments within the movement. Her investigative reporting on sexual harassment drew on over thirty interviews over a long period, and covered a number of credible incidents of harassment and abuse.

We saw in my post that not everyone took this type of deeply-researched investigative journalism seriously. We saw, for example, that Eliezer Yudkowsky objected even to the idea of trying to steelman the article, writing that:

Trying to “steelman” the work of an experienced adversary who relies on, and is exploiting, your tendency to undercompensate and not realize how distorted these things actually are … seems like a mistake.

When I objected that the article was written by an internationally renowned journalist in a leading publication, Yudkowsky took aim at the journalistic profession in general, suggesting that he had had better experiences with random bloggers than with professional journalists:

I’ve had worse experiences with coverage from professional journalists than I have from random bloggers.  My standard reply to a journalist who contacts me by email is “If you have something you actually want to know or understand, I will answer off-the-record; I am not providing any on-the-record quotes after past bad experiences.”  Few ever follow up with actual questions.

Yudkowsky was not alone in this reaction. This discussion took place on an EA Forum thread by Aella, entitled “People will sometimes just lie about you,” which suggested that the article was a point-scoring exercise on behalf of the woke left, and that those who spoke to Alter had uncharitably misinterpreted their experiences and spoken uncharitably to a journalist.

I had a lot of skepticism of the recent TIME article claiming that EA is a hotbed for sexual harassment, I think in large part because of those experiences I’ve had. We’re dealing with something high visibility (EA), where the most popular political coalition in journalism (people on the left side of the political aisle) can score points by hating you (insufficiently woke), and that is politically controversial (polyamory, weird nerds, SBF). It seems obvious to me that the odds of having some people with personal experience in the community who also regularly uncharitably misinterpret interactions, and uncharitably speak to a journalist (with both political and financial incentives to be uncharitable), are very high.

As I write, Aella’s post enjoys over 300 karma. The second most-upvoted comment on that post expresses strong support for Aella’s remarks:

I am at best 1/1000th as “famous” as the OP, but the first ten paragraphs ring ABSOLUTELY TRUE from my own personal experience, and generic credulousness on the part of people who are willing to entertain ludicrous falsehoods without any sort of skepticism has done me a lot of damage.

Another prominent comment echoes this line and attempts to construct a biological explanation:

I strongly endorse your central point that in modern social media culture, our normal intuitions and heuristics about how to assess people’s moral traits get completely derailed by the power of bad actors such as stalkers, grudge-holders, mentally ill people, and unscrupulous journalists.

From an biological perspective, there’s probably some deep ‘evolutionary mismatch’ at work here. We evolved for many millennia in small-scale tribal societies including a few dozen to a few hundred people. In such societies, if some person X is subject to serious accusations of unethical misconduct by several independent people, it’s prudent to update one’s view of X in the direction of X being a bad person, psychopath, liar, harasser, or whatever. The reputational heuristic ‘where there’s smoke, there’s fire’ can be reasonable in small-scale contexts. If 5 people out of 50 in one’s clan report seriously negative interactions with person X, that’s a pretty high hit rate of grievances.

However, in modern partisan social media culture, where someone can have many thousands or millions of followers, including hundreds to thousands of ideologically motivated hostile critics, this heuristic breaks down. Anybody who is sufficiently famous, notorious, or influential, and who is even marginally controversial, will attract a virtually limitless quantity of negativity. It becomes easy to find not just 5 people who hate person X, but 50, or 500 — and our Pleistocene social brains interpret that as dispositive evidence that the person X must be worthy of hatred.

Then it becomes easy for journalists, bloggers, or random social media obsessives to compile all this negativity into a carefully curated grievance-bundle that makes someone — or some group, organization, subculture, or movement– sound very bad indeed. 

Here we see a willingness to privilege ad-hoc and implausible explanations of journalistic behavior over the best-supported hypothesis: that reputable journalists are concerned about behavioral problems within effective altruism because they have good evidence that these problems exist.

I am, to soften this point, pleased that much of the worst distrust directed at journalists is confined to reporting on incidents of racism and sexual misconduct. To some extent, I think that distrust of reputable journalism within the movement may be confined to a few subject areas, rather than reflecting the more general skepticism about expertise discussed above.

5. Taking stock

Many effective altruists are excellently educated and have a healthy respect for legitimate authority in most walks of life. This continues to hold, as we have seen, throughout many areas of short-termist effective altruism.

However, particularly within the longtermist movement, we saw that effective altruists are often willing to challenge highly reputable authorities when those sources of authority conflict with doctrines held by the movement or challenge the movement’s behaviors.

Regarding scientific authority, we saw that there is a strong prevalence of repudiated race science, views about existential biorisk that are roundly rejected by biosecurity experts, an outsized investment in a poorly-formulated and generally-disliked approach to decision theory pushed largely by nonspecialists, and some investment in further claims about aliens and monetary policy that go substantially beyond the pale.

Regarding scholarly authority, we saw that there is increasing hostility towards peer review, growing unwillingness to submit research outputs to peer-reviewed scholarly publications, skepticism of academic communities which disagree with key contentions of the movement, and an oddly touch-and-go style of engagement with reputable publications shown by leading figures within the rationalist movement.

Regarding journalistic authority, we saw that although there is often a healthy respect for journalistic authority, this often goes out the window when journalists criticize behaviors within the effective altruist movement. We saw, in particular, that reliable reporting on sexual harassment, sexual abuse and racism within the movement was dismissed as “wacky” and a “smear piece,” the work of an “experienced adversary” and a point-scoring exercise on behalf of the woke left. We saw a leading figure within the movement evince more trust in the reliability of random bloggers than the reliability of journalists, as well as a willingness by some community members to construct inventive biological explanations for reporting behaviors more naturally explained by truth-seeking motives.

This is not to say that there is no regard for legitimate authority within the effective altruism movement, or that all claims to authority should be taken at face value. But I would like to see greater regard for legitimate scientific, scholarly and journalistic authorities, many of whom belong to communities that have worked for decades or centuries on the same problems that effective altruists tackle today.

Comments

8 responses to “Epistemics (Part 7: Authority)”

  1. Bob Jacobs

    Thanks David, though I’m surprised you didn’t mention the EA people page: https://forum.effectivealtruism.org/topics/people
    It has been made less prominent recently (it used to be displayed very prominently), but I always like to show this to people because it’s a very concrete list.
    When you say things like “controversial figure X is a prominent figure in the Y-movement” members of the Y-movement can just pretend that isn’t the case and it becomes a he-said-she-said situation. With this list I can say “controversial figure X is prominent in the EA-movement, check this link on their own website”.

    This list gives a good sense of who the EA movement considers an authority. You can see that 9 out of 10 are white and 9 out of 10 are male. You can also infer some other trends from this, such as political leanings and which disciplines are respected (see my post on this subject). I will let people decide for themselves whether they consider everyone on this list the pinnacle of moral and epistemic virtue, but I personally don’t think all of them are and think there are much better effective altruists out there.
    I deleted Elon Musk and Sam Bankman-Fried from this list with a comment explaining why, and it got downvoted so much that I had to ask a friend to upvote it. And this was *after* the FTX scandal! Suffice it to say, the fact that they were displayed, that no one took them down, and that I got downvoted for taking them down was another display of a profound disagreement I had with the community about who should be considered a model EA, and was one of the reasons I left the movement.

    1. David Thorstad

      Thanks Bob!

      I wasn’t aware of this list. It is definitely a helpful resource, and I will be sure to take a look at it in the future.

      You are right to stress that one way to reflect on this list is to think about the demographics represented here. I share your concern that they are very homogenous – indeed, much more so than the EA community itself, as reflected by community surveys.

      I think it would definitely be interesting to look more at other things we can infer, for example about disciplinary affiliations and political leanings. Do you want to link to your post below so readers can find it?

        1. Bob Jacobs

        Sure! It’s not about political affiliation (I’m too conflict-averse to wade into that quagmire) but it is about disciplinary affiliation, specifically EA’s preoccupation with the economics profession to the exclusion of other disciplines: https://bobjacobs.substack.com/p/the-ea-community-inherits-the-problems

  2. titotal23

    Honestly, you’ve barely scratched the surface in terms of the weird anti-expert trends in the rationalist movement, and especially from Yudkowsky.

    For example, Yudkowsky is a diehard supporter of the many-worlds interpretation of quantum mechanics, and treats the fact that scientists aren’t all on board as a reason that you have to “break your allegiance to science”: https://www.lesswrong.com/posts/viPPjojmChxLGPE2v/the-dilemma-science-or-bayes. While MWI is a reasonably popular theory, Eliezer has greatly exaggerated its benefits and swept its flaws under the rug, while making bad mathematical errors in his writeups: https://titotal.substack.com/p/an-intro-to-quantum-physics-fixing.

    And of course, he’s a leading promoter of AI foom theory and AI doomerism, something that even leading AI-x-risk-concerned computer scientists like Hinton don’t endorse.

    He’s waded into arguments about consciousness, declaring that Chalmers’ arguments about p-zombies must be insane and wrong. Granted, the topic is controversial in philosophy, but it doesn’t sound like many actual philosophers have bought in.

    Yud insists that Drexlerian nanotech is obviously possible, and at one point declared that it would arrive in 2010. That never happened (in fact, mechanosynthesis research stalled out around 2010 and has never really recovered), and nanoscientists these days are pretty skeptical of the technology.

    He’s also taken weird stances on things like nutrition (endorsing Gary Taubes), more consciousness (declaring that it’s impossible that ChatGPT is more likely to be sentient than a chicken), cryonics (saying that if you don’t sign up with Alcor you’re a horrible person), I think I’ve seen him support lab leak theory even recently, probability theory, epistemology, etc etc etc.

    The fact that most experts don’t agree with him on these out-there stances is taken as proof that the world is dumb and crazy and only the rationalists and their friends are worth trusting, rather than the actual Bayesian position that they are a bunch of weirdos who are in over their heads.

    1. David Thorstad

      Thanks titotal! It’s good to hear from you, and I hope the tone and content of your comments will go some way towards helping readers understand how the behaviors discussed in this post resonate with others.

      Like you, I do not tend to have an especially positive view of Yudkowsky’s epistemics. You’re quite right to stress that the incident with the Bank of Japan is just one of many instances in which Yudkowsky seems to fall epistemically short of the mark. This list is a valuable resource.

      I especially liked your point about nanotech. It is easy to forget that in the past, it was not AI but rather nanobots that were supposed to kill us all, and many (including Yudkowsky) actually wanted to build superintelligent AI in order to stop the nanobots. Yudkowsky went so far as to found the Singularity Institute (later MIRI) with the express purpose of creating a superintelligent machine.

      As you know, I think that existential threats from nanotechnology are one of the clearest examples of past estimates of existential risk that seem to have been highly inflated and based on rather insufficient evidence. I don’t think most readers will be aware of the history and science here, and I would like for more people to read and update on this evidence. I have found your post on this subject extremely helpful. Do you want to link it for reference?

    2. Bob Jacobs

      > I think I’ve seen him support lab leak theory
      I don’t know if he still believes it, but here’s a source for him believing it in May 2021: https://www.facebook.com/yudkowsky/posts/10159653334879228.
      This was well after scientists had determined that the wet market was the location where the virus jumped to humans (most likely more than once), that the genome of the virus was consistent with natural evolution, and that a virus escaping from a lab would not behave the way that SARS-CoV-2 did. Studies on this had already been published by early 2020: https://www.nature.com/articles/s41591-020-0820-9.

  3. Bob Jacobs

    > In Part 2 of my series on Human Biodiversity, we discussed the platforming of scientific racists at Manifest 2023 and Manifest 2024.

    In Part 2 you established a number of links between EA and Manifold:
    > Readers might, with some justification, ask what all of this has to do with effective altruism. […] Let us consider a few of the many ways in which Manifold, Manifund and Manifest were linked to effective altruist grants, organizations, personnel, and projects. […] Manifest 2023 was advertised by the organizers at least twice at post-length on the EA Forum, as was Manifest 2024. Manifold and Manifund are frequent topics of discussion on the EA Forum […]

    But you missed a big one. One of the “core topics” of the EA forum, on the same level as “Global Health” and “Animal Welfare”, is “Forecasting”. If you click on a core topic page you’ll see in the infobox what the core topic is about, with links to those subtopics. Furthermore, if you click on “wiki” you’ll get a written explanation of the core topic. With the “Forecasting” core topic we can see “manifold markets” right there in the infobox, and if you click on “wiki” you can see Manifold has its own subheader where you can read this:
    > Manifold.Markets is a prediction market platform that uses play money. It is noteworthy for its ease of use, great UI and the fact that the market creator decides how the market resolves.
    If EAs want to say they have nothing to do with Manifold, I suggest they stop having an advertisement for Manifold on one of their core topic pages.

    1. David Thorstad

      Thanks Bob!

      I appreciate you pointing this out. I think that this could potentially be a good place to reduce mentions of Manifold and would definitely be an excellent place to tone down the praise and inform readers of relevant concerns.
