What if your next Google search helped reinforce a false belief—without you even realizing it? In a sweeping set of 21 studies involving thousands of participants, cognitive scientist Eugina Leung and her research team uncovered a subtle but powerful psychological phenomenon: the very act of typing a question into a search engine can entrench existing biases, even when people believe they’re being objective. As detailed in a recent study and press release from Tulane University, the “narrow search effect” reveals how our unconscious framing of queries, paired with relevance-focused algorithms, creates self-constructed echo chambers.
Leung’s findings go further: even when users weren’t consciously seeking confirming evidence, their search behavior reflected implicit biases—and the results they got back amplified them. The good news? The study found that relatively simple tweaks to search algorithms—such as offering balanced information by default or introducing a “Search Broadly” option—can nudge users toward a more complete understanding of complex topics without sacrificing relevance or usability.
In this Q&A, Leung breaks down the cognitive traps baked into everyday search behavior, the promise of algorithmic interventions, and what her team’s findings mean for the design of search and AI tools like Google and ChatGPT in an age of polarization and generative search.

Your research found that people often search for information in ways that end up confirming what they already believe—even when they don’t mean to. Can you walk us through how something as simple as the words we choose to type into a search bar can shape what we end up believing?
Absolutely. It’s a two-part process that creates a powerful feedback loop.
First, our existing beliefs, even subtle ones, unconsciously shape the questions we ask. If you have a hunch that caffeine is unhealthy, you’re more likely to search for something like “dangers of caffeine” or “risks of coffee” rather than a neutral term. We saw this consistently in our studies. For example, after the 2020 election, Google search data showed that people in states with more Republican voters were far more likely to search for “Trump win/won,” while those in Democratic-leaning states were more likely to search for “Biden win/won.”
Second, search engines like Google or ChatGPT are engineered to deliver the most relevant results for the specific words you used. So, a search for “dangers of caffeine” will return a list of articles about its negative effects. The algorithm is doing its job perfectly, but the result is a narrow slice of information that matches the bias in your original query.
When you combine these two things (our tendency to ask biased questions and the algorithm’s focus on relevance), you end up in an echo chamber of your own making. The information you receive reinforces your initial hunch, even if a broader set of facts would tell a more complicated story.
One of the most striking parts of the study is that people weren’t trying to be biased, but their search behavior still leaned in that direction. What does this say about how automatic or unconscious our thinking can be when we look things up online?
That’s one of the most important takeaways from our work. It shows that this isn’t about people deliberately trying to prove themselves right. It’s a cognitive shortcut: an automatic, unconscious process.
In one of our studies, we specifically asked people if they were trying to find information to confirm their beliefs, and only a tiny fraction (less than 10%) said yes. When we excluded those people from our analysis, the effect remained just as strong. This tells us that the “narrow search effect” is a modern, digital manifestation of a long-documented psychological phenomenon called confirmation bias. Specifically, it’s a type of bias in how we formulate questions. We naturally frame our queries in a way that is likely to support our initial hypothesis, without even realizing we’re doing it.
So, when you search online, you might think you’re on a neutral fact-finding mission, but your brain’s automatic wiring is subtly steering the ship. This highlights how our deep-seated cognitive patterns interact with technology in ways we don’t expect.
You tested a few different ways to help people see more balanced information. Why were changes to the search engine’s algorithm—like giving users a mix of viewpoints—more effective than simply telling people to be more open-minded or to search again?
This was a key question for us, and the results were very clear. Simply telling people to be more open-minded or prompting them to do a second search had a very limited effect. These nudges helped a little, but they weren’t enough to overcome the powerful default of narrow searching. It’s hard to fix a deep-seated cognitive habit just by telling someone to “think better.”
In contrast, changing the information environment itself by tweaking the algorithm was far more effective. When we used our custom-built search engines to automatically provide broader, more balanced results—for instance, by mixing in results for “caffeine pros and cons” even when a user searched for “caffeine risks”—people’s final beliefs were significantly less biased.
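For readers curious what such a tweak might look like in practice, here is a minimal Python sketch of the general idea: pair the user’s query with a more balanced companion query and interleave the two result lists. The broaden_query heuristic and the fetch_results placeholder are illustrative assumptions; the study’s custom search engines aren’t described at the code level, so this shows only the shape of the intervention, not the team’s implementation.

```python
# Illustrative sketch only: broaden_query and fetch_results are
# hypothetical stand-ins, not the study's actual search engine.
from itertools import chain, zip_longest

def broaden_query(query: str) -> str:
    """Derive a more balanced companion query (a toy heuristic)."""
    slanted = {"risks", "dangers", "benefits", "pros", "cons", "of"}
    topic = " ".join(w for w in query.split() if w.lower() not in slanted)
    return f"{topic} pros and cons"

def fetch_results(query: str) -> list[str]:
    # Placeholder: a real engine would call its retrieval/ranking stack here.
    return [f"result {i} for '{query}'" for i in range(1, 4)]

def balanced_search(query: str) -> list[str]:
    """Interleave results for the user's query with results for a broader one,
    so both framings appear near the top of the list."""
    narrow = fetch_results(query)
    broad = fetch_results(broaden_query(query))
    merged = chain.from_iterable(zip_longest(narrow, broad))
    return [r for r in merged if r is not None]

print(balanced_search("risks of caffeine"))
# Alternates results for "risks of caffeine" and "caffeine pros and cons".
```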
The reason this works better is that it removes the burden from the user. It makes receiving balanced information the easy, default option. Critically, we also found that people rated these broader search results as just as useful and relevant. This means we can build better, less-biasing search tools without sacrificing the user experience. The most effective solution isn’t to try to change human nature, but to design technology that accounts for it.
You looked at topics ranging from caffeine and crime to COVID-19 and nuclear energy. Were there any subjects where people were especially resistant to changing their minds, even when shown different perspectives?
Across the wide range of topics we tested (including health, finance, and social issues), the pattern was generally consistent. The “narrow search effect” appeared in all of them, and the interventions to broaden search results successfully encouraged people to update their beliefs in all those areas. This suggests that for many everyday topics, people’s beliefs are actually quite malleable; they aren’t deeply entrenched, and people are open to new information when it’s presented to them.
That said, our research suggests there are likely limits. We would expect that for highly polarized, identity-defining topics—issues that are central to someone’s political or social identity—people might be more resistant. In those cases, another form of bias called “motivated reasoning” might kick in, where people actively discredit information that contradicts their core beliefs.
However, our findings show that for many topics people search for day-to-day, the main barrier to belief-updating isn’t stubbornness, but rather the unintentional echo chamber created by how we search.
You mention a ‘Search Broadly’ button as a possible feature to help people break out of their echo chambers. What would that actually look like on a platform like Google or ChatGPT, and how would it work differently from what we have now?
Think of it as the conceptual opposite of Google’s old “I’m Feeling Lucky” button, which skips the results page entirely and takes you straight to a single, maximally narrow result.
On a platform like Google: You would type in your query, for example, “risks of nuclear energy,” and next to the main search button, you’d see a second option: “Search Broadly.” If you clicked it, the algorithm would run your query but also automatically run related but more balanced queries like “pros and cons of nuclear energy” or “nuclear energy safety and benefits.” It would then present you with a mixed set of results that includes both perspectives, giving you a fuller picture right from the start.
On a platform like ChatGPT: It would work by changing the AI’s instructions. When you ask a question and hit “Search Broadly,” the AI would be prompted not just to give the most direct answer, but to provide a balanced viewpoint, covering pros and cons, different perspectives, and a more comprehensive overview. Instead of just a list of risks, you’d get a synthesized paragraph explaining the risks, the benefits, and the ongoing debate.
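As a rough illustration of how a “Search Broadly” toggle might be wired into a chat AI, the sketch below simply swaps in a system prompt that asks for balanced coverage. The prompt wording and the call_llm placeholder are our own assumptions made for the sake of the example; they are not the instructions used in the study or by any platform.

```python
# Illustrative only: prompt text and call_llm are assumptions,
# not the study's chatbot instructions or any platform's API.

BROAD_SYSTEM_PROMPT = (
    "Give a balanced overview of the user's question: cover the main pros "
    "and cons, note where credible sources disagree, and do not adopt only "
    "the framing implied by the question."
)
NARROW_SYSTEM_PROMPT = "Answer the user's question as directly as possible."

def build_messages(user_query: str, search_broadly: bool) -> list[dict]:
    """Assemble a chat request; the toggle simply swaps the system prompt."""
    system = BROAD_SYSTEM_PROMPT if search_broadly else NARROW_SYSTEM_PROMPT
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]

def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("Plug in a chat-completion client of your choice.")

# The same query yields a risk-focused or a balanced answer
# depending on which button the user clicked.
messages = build_messages("risks of nuclear energy", search_broadly=True)
```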
The key difference from what we have now is that it makes seeking out diverse viewpoints an explicit, easy, one-click option, rather than something the user has to remember to do themselves by manually trying out multiple different search terms. And when we surveyed people, 84% said they would be interested in using a feature like this.
Across 21 studies and thousands of participants, were there any results that really surprised you—something that went against what you expected about how people search for and process information?
What surprised me most was how well people responded to the broadened search results. There’s a prevailing assumption in the tech world that users demand hyper-relevant, narrowly focused information, and that giving them anything else would be seen as less useful, or even frustrating.
However, our studies consistently found that this wasn’t true. When we showed people a more balanced set of search results or a broader AI-generated answer, they rated the information as just as useful and relevant as the people who received narrow, belief-confirming results.
This was a genuine surprise, and an encouraging one. It means that platforms could redesign their algorithms to foster a more informed public without necessarily sacrificing user satisfaction. We can have a search experience that is both relevant and broad, and users can be just as happy with it while being better informed because of it.
Given how common AI-driven search tools have become, how important is it that platforms rethink how they deliver search results? And do you think tech companies are ready to make those kinds of design changes?
It’s critically important. In fact, perhaps more now than ever before. With traditional search, you get a list of links and have to do the work of sorting through them. But new AI tools often synthesize information into a single, authoritative-sounding answer. If that single answer is shaped by a user’s narrow, biased query, the echo chamber effect could become even stronger and less visible. We are at a fork in the road where AI can either deepen these divisions or help bridge them.
As for whether tech companies are ready, I see both reasons for optimism and for caution.
On the one hand, the technology is absolutely there. As our study with a custom AI chatbot showed, it’s relatively easy to use “prompt engineering” by giving the AI better instructions to generate more balanced, comprehensive answers. We even saw that Microsoft’s New Bing sometimes does this automatically, reformulating a narrow query like “nuclear energy is good” into a broader one like “nuclear energy pros and cons.” This shows they are aware of the issue.
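To make that reformulation step concrete, here is a small hypothetical sketch of broadening a slanted query before retrieval runs. The prompt text and the llm callable are our own guesses at the general mechanism; this is not Microsoft’s pipeline or the study’s code.

```python
# Hypothetical sketch: the prompt wording and the `llm` callable are
# assumptions, not Microsoft's system or the study's implementation.

REFORMULATE_PROMPT = (
    "Rewrite the following search query so it asks for a balanced view of "
    "the same topic, covering both supporting and opposing evidence. "
    "Return only the rewritten query.\n\nQuery: {query}"
)

def reformulate(query: str, llm) -> str:
    """Broaden a slanted query before it reaches the retrieval stage."""
    return llm(REFORMULATE_PROMPT.format(query=query)).strip()

# Expected behavior, per the example in the interview:
#   reformulate("nuclear energy is good", llm) -> "nuclear energy pros and cons"
```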
On the other hand, the core business model of these platforms has long been optimized for relevance and engagement, which has meant defaulting to narrow results. Making a fundamental shift toward prioritizing breadth would require a change in design philosophy. Our research provides a strong argument that this change is not only beneficial for society but also feasible, at least for the kinds of topics we tested, without harming the user experience. The willingness to make that shift will be the real test.




