ADITI & SHREYA: Effective Altruists are highly motivated to act based on reason… We imagine our ethical reasoning works something like this: Encounter a moral question → evaluate the evidence and arguments → reach a conclusion → act accordingly (or not, depending on how demanding the action is).
This is how it works with most EA cause areas: global health, AI safety, biosecurity. We evaluate them impartially, and demandingness comes at the end — a practical consideration that might limit what we’re obligated to do, even when we agree something is wrong.
Factory farming is a different type of cause area, because many people considering the issue are themselves directly contributing to it. The moral patients we are considering are frequently on our plates.
So with factory farming and diet change, I think our ethical reasoning can run backwards: Topic comes up → subconscious recognition that accepting this requires personal change → motivated reasoning to reject or minimise the ethics → conscious experience of having “thought it through.” The demandingness shapes the reasoning from the beginning, not just at the end…
EAs are unusually good at updating beliefs and changing behaviour based on evidence. We’re vigilant about psychological biases that distort our reasoning: scope insensitivity, the identifiable victim effect, present bias. We know these lead to misallocating resources.
But we give surprisingly little attention to cognitive dissonance when evaluating factory farming, despite it being one of the most powerful and well-documented biases.
The test isn’t whether you can reason impartially about psychologically distant causes (global health, longtermism, digital minds, wild animal suffering). The test is whether you can reason impartially about issues where you are currently part of the harm…
Animal exploitation is the issue where EAs have a conflict of interest. You benefit from it multiple times per day. Changing requires initial effort. Why would your reasoning be less corrupted than the factory owner’s, when the psychological mechanisms are identical?
This doesn’t mean you can’t recognise that farmed animals suffer. But eating meat multiple times per day makes it highly unlikely that you perceive the intensity and urgency of that suffering accurately. Research shows that regular meat consumers underestimate animal minds and suffering, and avoid information that would increase moral discomfort [1].
Cognitive dissonance affects even the most rational people. If you’re trying to effectively prioritise between cause areas — comparing farmed animal welfare to AI safety or global health — you need clarity about how much animal suffering actually matters. That clarity is very hard to achieve when you’re actively participating in and benefitting from avoidable animal suffering…
Even when EAs use sentience probabilities, welfare points, and cost-effectiveness analyses to quantify the suffering of farmed animals, cognitive dissonance can still shape how we act on that information. The numbers themselves don’t lie, but I would argue that our behaviour in response to them (whether to donate differently or change our diets) is still susceptible to motivated reasoning and rationalisation…