By Jake Rothschild & Andrew Lewis


One of my closest friends and I fall on opposite ends of the political spectrum (I won’t say which side I’m on to avoid alienating half of the readers). And, though we often try to debate politics, we almost always fail. This is not because either of us are uninformed, or too rancorous to respect each other’s opinions. On the contrary, we respect each other completely. Instead, it is because we live in entirely different informational worlds — such that our debates devolve from points of morality and economics to arguing about whose facts are, well, facts.

Our interactions seem to mirror the broader state of political discourse in this age of polarization, from friends at a dinner table to the highest levels of government. And, as I believe is the case for my friend and me, this problem does not exist because either side is less intelligent or willfully incorrect (fake news allegations notwithstanding). Rather, this fundamental incongruity exists because nearly all people, no matter how intelligent, form their arguments in a manner that is cognitively economical, but logically irrational. That is, because of the costs of consuming information that disagrees with our beliefs (social strife, cognitive dissonance, emotional upset, etc.), it is much easier to avoid it altogether. The result of this method of opinion creation is that we interpret all information in a biased manner, and are almost entirely unable (or unwilling) to change our minds about important issues. If you desire to maintain accurate positions, or to convince someone of a viewpoint you believe is correct, it is essential to appeal to their logical rationality, and bypass this cognitive incentive.

To further explore this issue, we must define a few terms. Cognitively economical behavior can, rather broadly, be thought of as acting in one’s own psychological self-interest. If changing my opinion about an issue will cause me to lose face with my social circle, the costs of doing so (and thus, the cognitive strain) are high. In such a case, I may preserve both my cognitive capacity and my social standing by maintaining an opinion that is, logically speaking, a bad one. Conversely, logical rationality is just that: the pursuit of logic, even at the expense of my own pre-existing dogmas or peer pressures. Logical rationality is the formulation of a worldview (of both opinion and facts) using only logical deductions from empirical evidence.

In situations in which you have high efficacy (i.e., an ability to influence the outcome of some event), logical rationality and cognitive economy go hand in hand. For example, if you run a business, having an accurate view of the future of the market is best achieved through logical rationality, and is necessary for making economically rational choices. However, in low-efficacy situations, like the political opinions one holds, one need not always be logically rational. The cost of being logical is significant: it involves an investment of time and mental resources. And, as Kirkebøen et al. show, changing one’s mind is also simply an unpleasant experience. Moreover, given how we often create these initial opinions, the new position can be detrimental to our social well-being. Meanwhile, the reward is essentially negligible for the vast majority of people. Unless you are one of a select few, your ability to impact the world politically is very small.

What this means is that for issues in which you have low efficacy, you have almost no incentive to absorb facts and construct opinions in a logical, deductive manner. Instead, as several studies have shown, we interpret facts and form beliefs largely through motivated reasoning: we create our opinions and decide which facts are accurate largely inductively, or, even worse, avoid information entirely. Our starting principles are typically a combination of our other opinions, social status, and emotions. We then decide who is right and wrong, good and bad, and so on, based on what is cognitively economical, not logically rational.

If you don’t care about having accurate opinions, and are intent on preserving your worldview, please feel free to stop reading this article immediately. However, if you are like most people, and do care about informing your opinions with the best available evidence, there are ways to improve your rationality and rhetoric. You will not be correct about everything, and you will still likely fail to change the minds of others most of the time. However, by understanding how opinions are formed, you will be able to take small steps toward becoming more accurate in your worldview and improving your discourse with others.

How to change your mind

While there has been a lot of research into de-biasing decision-making processes, there is a dearth of academic research about how best to change your mind. There is no scientifically agreed upon method for how to best scrutinize the opinions we hold. However, there are certain steps that are suggested by people who make a career thinking about rational thinking. While these steps have not been proven effective empirically, they at least provide a good outline as to how we might improve our thinking.

To provide a more concrete example of these concepts, imagine Mark, a liberal student at a liberal university with liberal parents. Mark has been told by his parents, friends, and news sources (that he agrees with about other issues) that raising taxes on the wealthy is good. He is well-informed, but not knowledgeable enough about tax policy to know whether this opinion is correct in an objective way. He has read more articles favoring a higher tax policy, and retained more of the facts from such essays.

1: Minimize identity

The way we form our identities merits its own article, and has been an important area of study in the psychological and behavioral sciences. I will define identity as the ideas we have about ourselves that are too crucial to change under normal circumstances. This is a valuable human trait; for example, it is important that people with children consider themselves parents. However, when your identity stretches to encompass ideas that are less crucial and more ambiguous, it becomes counterproductive. The more you consider an issue part of your identity, the more painful it will be to change it, increasing the cost and worsening the chances of logical evaluation.

To reduce the scope of your identity, try to reconsider the words you use to describe yourself. When you think of yourself as some type of person, all of the ideas of that type become a part of your identity. By reducing these words (e.g., libertarian, vegan, patriot, cynic), you reduce the number of issues that you feel forced to defend irrationally. When arguing, it is all but impossible to get another person to reconsider her identity. However, there is still a lesson here about persuasion. When debating, it is crucial to steer the other person away from thinking about an issue in the context of her identity. This includes discussions of political figures, parties, and events, as these will trigger self-preservation and result in defensive thinking. This is not an easy task, as so much of political opinion is simply identity. With that said, the more you can separate a person’s thinking from her identity, the more logic you can expect from her.

For Mark, the first step towards possibly changing his mind would be to stop defining himself as a liberal. As long as he considers himself a liberal, he must treat all evidence against liberal positions as a personal attack, and do his best to defend himself (i.e., to also defend these positions). If instead Mark thinks of himself as a person who often agrees with tenets of the liberal ideology, evidence against specific liberal opinions will no longer constitute a personal attack. He may very well continue to hold those opinions, but he will be able to consider counter-evidence, rather than immediately rejecting it.

2: Find the sticking point

It is self-evident that most issues have trade-offs. Resources are limited, and if resources are used to help some people, it is likely that some other people will lose something (this does not mean that all issues are zero-sum games, as people often believe). Beyond that, while ideologies often dictate the ends towards which one should work, those who share perspectives can still disagree about the means to those ends. Yet, because opinions tend to fall in line with an entire political platform, many people hold entirely positive or entirely negative opinions about a given issue, without considering the trade-offs. For example, as single-payer healthcare becomes a mainstay of the liberal agenda, liberals now support the dual views that healthcare is a human right, and that universal healthcare is best achieved in a single system. Similarly, in justifying their anti-immigration beliefs, conservatives hold the view that immigration is both bad for the economy and bad for national security. In our effort to align ourselves in full with an ideology, we uncritically accept beliefs that, in isolation, we might reject. Put another way, we don’t consider the trade-offs.

To find out what is truly important, ask yourself what you would support given certain facts (even if these facts are unknowable). As a liberal advocate of universal healthcare, would you support a single-payer system if it was inefficient, but the only way to provide coverage for all? Would your anti-immigration uncle continue to hold his position if it was the case that immigrants commit fewer crimes than other citizens? (Side note: this is indeed the case.) By finding the part of a position that is too integral to give up, you can isolate your reasons for supporting a policy, and begin to find the evidence that matters.

Most liberals believe that tax rates should be higher such that less fortunate people can receive more social services. They also believe that higher tax rates are good (or at least neutral) for the economy, two essentially unrelated positions. The next step would be for Mark to ask himself whether he would support higher taxes for the rich even if doing so would harm the economy. If the answer is yes (which I suspect it would be for many liberals), then his position is not what he thought it was. Rather than a tax-policy opinion, it is an opinion about inequality.

3: Make predictions

We live in a noisy, noisy world. If you want to, you can find a study confirming or disconfirming just about anything, especially for less tangible issues, as political issues often are. And when information is distributed and processed in a tribal manner, as is so often the case today, it becomes even more likely that you will only confirm what you think you know. Finally, information can be framed such that it appears to favor almost anything, a factor especially crucial for complex policies and outcomes.

For this reason, if you want to seriously evaluate your opinions on a topic, it is necessary that you make predictions. When information is cherry-picked from an infinite number of possible scenarios, it is possible to support any position. By picking the scenarios in advance, you ensure that you are not selecting for situations in which you will be proven right. If your ideas have merit, they should be applicable to future events. If you aren’t confident enough in your opinions to make predictions, that in itself suggests that you should reconsider your opinions.

Let’s assume that Mark discerned that his real reason for supporting a progressive tax policy was to aid the poor, not to stimulate economic growth. He should then attempt to make predictions about what will happen in places in which progressive tax policies are put in place. These predictions should be specific and quantifiable, such that it is clear whether they are correct. He can then measure the success of his predictions, which should give some insight into the validity of his ideas.
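For readers who want to make this concrete, one common way to score probabilistic predictions (borrowed from the forecasting literature, not from this article) is the Brier score: the mean squared difference between your stated probabilities and what actually happened. The predictions below are hypothetical examples of the kind Mark might record in advance.

```python
# A minimal sketch of tracking predictions. Each entry pairs a stated
# probability with the actual outcome (1 = it happened, 0 = it did not).
# Lower Brier scores are better; always guessing 50/50 scores 0.25.

def brier_score(predictions):
    """predictions: list of (probability, outcome) pairs."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical predictions Mark might have recorded in advance:
marks_predictions = [
    (0.8, 1),  # "The new top-bracket tax will raise revenue" -- it did
    (0.7, 0),  # "The poverty rate will fall within two years" -- it did not
    (0.9, 1),  # "No measurable out-migration of high earners" -- correct
]

print(round(brier_score(marks_predictions), 3))  # prints 0.18
```

A long run of scores well above 0.25 would suggest that Mark's model of tax policy is worse than a coin flip, which is exactly the kind of signal the re-evaluation step below depends on.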

4: Actively re-evaluate

The last step is to actively reconsider your opinions with these new ways of thinking. Once you have minimized your identity such that you are able to look at ideas objectively, found the true reasons you believe what you believe, and tested whether your opinions predict outcomes accurately, how strongly do you still believe your original idea? This should be an ongoing and deliberate process; one failed prediction or logical fallacy is not nearly enough to prove that your idea is wrong. What a series of failed predictions and logical errors would hopefully do is give you the lack of confidence necessary to once again consider the facts and arguments from the other side in a more objective manner.

Our opinions, emotions, and identities are so deeply ingrained that we will still likely change our minds about very few issues, and persuade others of even fewer. However, I still argue that pursuing logical evaluation and persuasion is important. Right now, people are so unable to evaluate information accurately that their opinions are based on completely irrelevant factors. In a world in which opinions are so tribalistic (and I say this as a full-blown member of such a political tribe), small steps like these can collectively make a big impact.



Golman, R., Hagmann, D., & Loewenstein, G. (2016). Information avoidance.

Hart, P. S., & Nisbet, E. C. (2012). Boomerang effects in science communication: How motivated reasoning and identity cues amplify opinion polarization about climate mitigation policies. Communication Research, 39(6), 701-723.

Kirkebøen, G., Vasaasen, E., & Halvor Teigen, K. (2013). Revisions and regret: The cost of changing your mind. Journal of Behavioral Decision Making, 26(1), 1-12.

Kunda, Z. (1990). The case for motivated reasoning. Psychological bulletin, 108(3), 480.

Milkman, K. L., Chugh, D., & Bazerman, M. H. (2009). How can decision making be improved?. Perspectives on psychological science, 4(4), 379-383.

Meegan, D. V. (2010). Zero-sum bias: perceived competition despite unlimited resources. Frontiers in psychology, 1.

Schwartz, N. D. (2017, April 8). Boom or Bust: Stark Partisan Divide on How Consumers View Economy. The New York Times.

Slothuus, R., & De Vreese, C. H. (2010). Political parties, motivated reasoning, and issue framing effects. The Journal of Politics, 72(3), 630-645.

Vignoles, V. L., Regalia, C., Manzi, C., Golledge, J., & Scabini, E. (2006). Beyond self-esteem: influence of multiple motives on identity construction. Journal of personality and social psychology, 90(2), 308.

JAKE ROTHSCHILD is an undergraduate at Carnegie Mellon University in the Quantitative Social Science Scholars program, majoring in Behavioral Economics, Policy, and Organizations. He is a research assistant to professors in behavioral economics in the SDS department at Carnegie Mellon. He is interested in how behavioral insights can be used to improve policy outcomes. Jake is a research assistant at The Decision Lab. ANDREW LEWIS is a master's student in comparative social policy at the University of Oxford. He is a graduate of Carnegie Mellon University, where he studied public policy and behavioral economics, and was a research and teaching assistant to Dr. George Loewenstein. At CMU, he worked as a researcher at the BEDR Policy Lab, conducting experiments on topics ranging from confirmation bias in voting beliefs to the efficacy of various incentives for inducing pro-social behavior. Andrew is particularly interested in the behavioral and psychological effects of poverty and inequality, and how folk conceptions of fairness shape attitudes towards these phenomena. His writing has been published and republished at The American Interest, Marginal Revolution, Arts & Letters Daily, Real Clear Policy, and his hometown paper, the Pittsburgh Post-Gazette. He is an Editor-in-chief at The Decision Lab.