
Chatbots may make learning feel easy, but what you learn is superficial

Published by qimuai · 66 reads · First-hand translation



Source: https://www.sciencenews.org/article/ai-chatbots-learning-superficial-llm

Summary:

New research suggests that looking up information with a traditional search engine builds deeper knowledge than relying on an AI chatbot. The study, published in PNAS Nexus, ran seven experiments with more than 10,000 participants and found that people who used Google Search outperformed ChatGPT users in depth of knowledge, quality of the information they produced, and willingness to act on their own advice.

Participants were asked to research an assigned topic and then write advice based on what they had learned. Even when the available information was held constant, those who received summary-style answers from a chatbot still showed shallower understanding. The researchers note that while AI tools lighten the burden of synthesizing information, that convenience comes at the cost of depth. Notably, in an experiment using a version of ChatGPT that provided links to the original web pages, only about a quarter of participants clicked through to a source.

Experts stress that this does not mean AI tools should be abandoned, but rather that they should be designed to encourage deeper exploration. As one psychologist puts it: "The effectiveness of the tool depends on how you use it; this research shows that people don't yet naturally use it as well as they might." The findings offer an important lesson about how knowledge is acquired in the age of AI.


English source:

Chatbots may make learning feel easy — but it’s superficial
Googling the old-fashioned way leads to deeper learning than using AI tools, a study finds
By Payal Dhar
When it comes to learning something new, old-fashioned Googling might be the smarter move compared with asking ChatGPT.
Large language models, or LLMs — the artificial intelligence systems that power chatbots like ChatGPT — are increasingly being used as sources of quick answers. But in a new study, people who used a traditional search engine to look up information developed deeper knowledge than those who relied on an AI chatbot, researchers report in the October PNAS Nexus.
“LLMs are fundamentally changing not just how we acquire information but how we develop knowledge,” says Shiri Melumad, a consumer psychology researcher at the University of Pennsylvania. “The more we learn about their effects — both their benefits and risks — the more effectively people can use them, and the better they can be designed.”
Melumad and Jin Ho Yun, a neuroscientist at the University of Pennsylvania, ran a series of experiments comparing what people learn through LLMs versus traditional web searches. Over 10,000 participants across seven experiments were randomly assigned to research different topics — such as how to grow a vegetable garden or how to lead a healthier lifestyle — using either Google or ChatGPT, then write advice for a friend based on what they’d learned. The researchers evaluated how much participants learned from the task and how invested they were in their advice.
Even controlling for the information available — for instance, by using identical sets of facts in simulated interfaces — the pattern held: Knowledge gained from chatbot summaries was shallower compared with knowledge gained from web links. Indicators for “shallow” versus “deep” knowledge were based on participant self-reporting, natural language processing tools and evaluations by independent human judges.
The analysis also found that those who learned via LLMs were less invested in the advice they gave, produced less informative content and were less likely to adopt the advice for themselves compared with those who used web searches. “The same results arose even when participants used a version of ChatGPT that provided optional web links to original sources,” Melumad says. Only about a quarter of the roughly 800 participants in that “ChatGPT with links” experiment were even motivated to click on at least one link.
“While LLMs can reduce the load of having to synthesize information for oneself, this ease comes at the cost of developing deeper knowledge on a topic,” she says. She also adds that more could be done to design search tools that actively encourage users to dig deeper.
Psychologist Daniel Oppenheimer of Carnegie Mellon University in Pittsburgh says that while this is a good project, he would frame it differently. He thinks it’s more accurate to say that “LLMs reduce motivation for people to do their own thinking,” rather than claiming that people who synthesize information for themselves gain a deeper understanding than those who receive a synthesis from another entity, such as an LLM.
However, he adds that he would hate for people to abandon a useful tool because they think it will universally lead to shallower learning. “Like all learning,” he says, “the effectiveness of the tool depends on how you use it. What this finding is showing is that people don’t naturally use it as well as they might.”
