
Too much trust in GenAI can impair critical thinking, study warns

  • Abhijit Ahaskar
  • Feb 11
  • 3 min read

Updated: Mar 12



Using generative AI tools for work can boost productivity, but it can also lead to a decline in critical thinking among knowledge workers, according to a joint study by researchers at Microsoft Research Cambridge and Carnegie Mellon University.

The study warns that over-reliance on AI, particularly when users are more confident in its abilities, can lead to less independent problem-solving and a gradual decline in critical thinking processes such as analysis and evaluation.


Researchers surveyed 319 knowledge workers, collecting 936 real-world examples of generative AI use, to measure their perceptions of critical thinking during these tasks, including when it is needed, how it is applied, and how it affects their effort. For the survey, they used Bloom’s framework, which defines critical thinking in terms of six activities – recalling ideas, comprehension, application, analysis, synthesis and evaluation.


Use of generative AI in the workplace is growing as more companies want workers to use AI chatbots and copilots to increase productivity and save time. For instance, OpenAI reported last September that its corporate offerings, including ChatGPT Team, ChatGPT Enterprise and ChatGPT Edu, had more than 1 million paid business users.


This increasing use of AI for tasks previously done by humans has also raised concerns about its potential impact on skill development and workers’ ability to think critically.


Previous studies show that over-dependence on these tools can hamper workers’ ability to learn and improve skills. For instance, one study found that using these tools can deprive writers of critical learning steps such as constructing arguments and understanding the subject matter. Similarly, over-reliance on generative AI can affect workers’ ability to learn and remember, claims another study.


The Microsoft Research and Carnegie Mellon University study found that workers who already tend to reflect on their work are more likely to continue engaging in critical thinking even when using generative AI tools. However, high confidence in the AI’s ability to perform a task reduces the worker’s own critical thinking. Those who completely trust the AI to handle everything are less likely to think critically themselves.


The researchers also observed that reliance, confidence and critical thinking vary from task to task: a user might be highly confident in the AI for one task and therefore think less critically, but less confident for another and think more critically.


Some participants said they used critical thinking to improve the AI-generated content when it fell short of their expectations. Many of them found AI responses too shallow and generic. “The AI does not understand the niche type of work I do. I have to adapt the output to fit my needs,” one participant said.

For some participants, critical thinking was driven by the worry that generative AI tools could harm their work by generating inaccurate information or code.


To get the best results from generative AI tools, users have to formulate prompts that reflect their requirements effectively. This process, which often involves revising queries, was perceived as increasing the effort required for analysis, a key critical thinking skill when working with generative AI.


While generative AI prompted critical thinking in some users, it can also inhibit it. For instance, users who believe the AI is competent at simple tasks can come to overestimate its capabilities. Some participants said this made them doubt their own ability to perform tasks independently.

Researchers also found motivational barriers to critical thinking such as the pressure to meet a daily work quota or finish a task faster. 


“The reason I use AI is because in sales, I must reach a certain quota daily or risk losing my job. I use AI to save time and don’t have much room to ponder over the result,” said another participant. Lack of domain knowledge was another limitation that affected critical thinking, as workers could not tell when the AI was hallucinating.


Further, generative AI has made information gathering much easier for users, which has lowered the mental effort required to find and organize information. 


That said, the researchers acknowledged that their study has some limitations. For example, participants sometimes confused putting in less effort while using AI with a reduction in critical thinking. This might be because most workers don’t usually think about critical thinking in their daily work.

The researchers suggest that future work could use different methods, such as observing people as they work, to better understand the difference between reduced effort and actual reductions in critical thinking.



Image credit: Pexels
