Is AI making us stupid? Maybe, according to one of the world’s biggest AI companies

Credit: Andrea Piacquadio of Pexels
Consider what most of us can still do in our heads. Try dividing 16,951 by 67 without reaching for pen and paper. Or a calculator. Try doing the weekly shopping without a list on the back of last week’s receipt. Or on your phone.
Are we making ourselves smarter or more foolish by relying on these devices to make our lives easier? Have we traded efficiency gains for a step closer to folly as a species?
This question is especially important to consider when it comes to generative artificial intelligence (AI) technologies such as ChatGPT, the AI chatbot made by tech company OpenAI, which at the time of writing is used by 300 million people each week.
According to a recent paper by a team of researchers at Microsoft and Carnegie Mellon University in the US, the answer may be yes. But there’s more to the story.
Thinking about thinking
Researchers evaluated how users perceive the effects of generative AI on their critical thinking.
Generally speaking, critical thinking means engaging in careful, good-quality thinking.
One way to do this is to judge one’s own thinking processes against established norms and methods of good reasoning. These norms include values such as precision, clarity, accuracy, breadth, depth, relevance, significance and cogency of argumentation.
Other factors that can affect the quality of thinking include the influence of existing world views, cognitive biases, and reliance on incomplete or inaccurate mental models.
The authors of the recent study adopted a definition of critical thinking developed in 1956 by American educational psychologist Benjamin Bloom and colleagues. It is not really a definition at all. Rather, it is a hierarchical way of classifying cognitive skills: recall of information, comprehension, application, analysis, synthesis and evaluation.
The authors state they prefer this classification, also known as a “taxonomy”. However, in the time since it was devised, it has fallen out of favor and been discredited by Robert Marzano and, indeed, by Bloom himself.
In particular, the taxonomy assumes there is a hierarchy of cognitive skills in which so-called “higher-order” skills are built upon “lower-order” skills. This does not hold up on logical or evidence-based grounds. For example, evaluation, usually regarded as a culminating or higher-order process, can be the beginning of inquiry, or can be very simple in some contexts. It is the context, rather than the cognition, that determines the sophistication of thinking.
Part of the problem with using this taxonomy in the study is that many generative AI products also seem to use it to guide their own output. So the study could be interpreted as testing whether generative AI, by the way it is designed, is effective at framing how users think about critical thinking.
Bloom’s taxonomy also lacks a fundamental aspect of critical thinking: the fact that critical thinkers not only perform these and many other cognitive skills, but perform them well. They do so because they have an overarching concern for the truth. This is something AI systems do not have.
Higher trust in AI equals less critical thinking
A study published earlier this year found “a significant negative correlation between frequent AI tool usage and critical thinking abilities”.
The new research explores this idea further. It surveyed 319 knowledge workers, including healthcare providers, educators and engineers, who discussed 936 tasks they had carried out with the help of generative AI. Interestingly, the study found users perceive themselves to be exercising less critical thinking while performing the tasks than while providing oversight during the verification and editing stages.
In high-stakes work environments, the desire to produce high-quality work, combined with the fear of repercussions, serves as a powerful incentive for users to engage their critical thinking when reviewing AI outputs.
Overall, however, participants believe the gains in efficiency more than compensate for the effort of providing such oversight.
The study found that people with higher confidence in AI generally displayed less critical thinking, while people with higher confidence in themselves tended to display more critical thinking.
This suggests generative AI does not harm one’s critical thinking, provided one has critical thinking to begin with.
The problem is that the study relied heavily on self-reporting, which can be subject to a variety of biases and interpretation issues. Putting this aside, critical thinking was defined by users as “setting clear goals, refining prompts, and assessing generated content to meet specific criteria and standards”.
Here, “criteria and standards” refer to the purposes of the task rather than to the purposes of critical thinking. For example, an output meets the criteria if it “complies with the query”, and meets the standards if the “generated artifact is functional” for the workplace.
This raises the question of whether the study was really measuring critical thinking at all.
Become a critical thinker
Implicit in the new research is the idea that exercising critical thinking at the oversight stage is at least better than a reflexive over-reliance on generative AI.
The authors recommend that generative AI developers add features to trigger users’ critical oversight. But is this enough?
Critical thinking is needed at every stage before and while using AI: when formulating questions or hypotheses to test, and when interrogating outputs for bias and accuracy.
The only way to ensure generative AI does not harm your critical thinking is to become a critical thinker before you use it.
Becoming a critical thinker requires identifying and challenging the unstated assumptions behind claims. It also requires practicing systematic, methodical reasoning and testing ideas collaboratively by thinking with others.
Chalk and chalkboards made us better at mathematics. Can generative AI make us better at critical thinking? Perhaps, if we are careful, we can use it to challenge ourselves and strengthen our critical thinking.
In the meantime, there is always something we can, and should, do to improve our critical thinking rather than letting AI do the thinking for us.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.