New Research Suggests Overreliance on AI Could Hinder Critical Thinking
Researchers at Microsoft and Carnegie Mellon University say that over-dependence on AI systems may hinder our capacity for critical thinking as people increasingly offload work to machines. The study is scheduled to be presented at the CHI Conference on Human Factors in Computing Systems this April in Yokohama, Japan.
In the study, the researchers define critical thinking as a hierarchical pyramid: knowledge at the base, followed by comprehension of ideas, application of those ideas, analysis of related ideas, synthesis (combining them), and evaluation of ideas against set criteria.
Based on a survey of 319 knowledge workers (a category that roughly maps to white-collar work), the study found that while generative AI can increase efficiency, it "can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving."
The researchers found that workers used AI to double-check their work against certain criteria and to cross-reference it with external sources. While this does require critical analysis, the researchers noted that workers' use of AI to automate routine or lower-stakes tasks raises concerns about "long-term reliance and diminished independent problem-solving."
Interestingly, when workers had higher confidence in AI responses, it "seem[ed] to reduce the perceived effort required for critical thinking tasks." But workers who were confident in their own expertise tended to put greater effort into evaluating AI responses. So while AI can help workers retrieve information faster, they may end up spending more time verifying that the information is accurate rather than hallucinated.
"As workers shift from task execution to oversight of AI, they trade hands-on engagement for the work of verifying and editing AI output, revealing both gains in efficiency and risks of diminished critical thinking," the study said.
However, the researchers caution against drawing firm conclusions about AI use weakening critical thinking; they acknowledge that correlation does not imply causation. When a person reads an AI-generated answer, there is no way to look inside their head and know exactly what thinking is taking place.
Nevertheless, the data did lead the researchers to some recommendations. As workers transition from gathering information to verifying it, the study said, they should be trained to cross-reference AI outputs and to assess their relevance.
The implications for businesses are especially significant given the proliferation of AI across industries: according to the World Economic Forum, 41% of employers plan to reduce their workforces because of AI. CEOs of major technology companies have admitted to offloading more tasks to AI, resulting in layoffs and fewer job openings. Klarna's CEO told the BBC that he has cut the company's workforce from 5,000 to 3,800 and plans to reduce it to 2,000, though he says the remaining employees will be paid more.
Former President Joe Biden's executive orders on AI safety were overturned by President Donald Trump, leaving fewer guardrails in place. Last week, Google lifted its ban on using AI to develop weapons and surveillance tools. All of these changes make the study's findings more relevant, as workers may soon be using more AI tools and be responsible for overseeing more AI-generated information.
The researchers do point out that concerns about declining human cognition have accompanied every new technological innovation. For example, they note that Socrates objected to writing, Trithemius objected to the printing press, and educators have long been wary of calculators and the internet.
But they also note: "A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."