A hybrid deep learning method for identifying topics in large-scale urban text data: Benefits and trade-offs

IF 7.1 · CAS Zone 1 (Earth Science) · JCR Q1 (Environmental Studies) · Computers Environment and Urban Systems · Pub Date: 2024-05-24 · DOI: 10.1016/j.compenvurbsys.2024.102131
Madison Lore , Julia Gabriele Harten , Geoff Boeing
Citations: 0

Abstract


Large-scale text data from public sources, including social media or online platforms, can expand urban planners' ability to monitor and analyze urban conditions in near real-time. To overcome scalability challenges of manual techniques for qualitative data analysis, researchers and practitioners have turned to computer-automated methods, such as natural language processing (NLP) and deep learning. However, the benefits, challenges, and trade-offs of these methods remain poorly understood. How much meaning can different NLP techniques capture and how do their results compare to traditional manual techniques? Drawing on 90,000 online rental listings in Los Angeles County, this study proposes and compares manual, semi-automated, and fully automated methods for identifying context-informed topics in unstructured, user-generated text data. We find that fully automated methods perform best with more-structured text, but struggle to separate topics in free-flow text and when handling nuanced language. Introducing a manual technique first on a small data set to train a semi-automated method, however, improves accuracy even as the structure of the text degrades. We argue that while fully automated NLP methods are attractive replacements for scaling manual techniques, leveraging the contextual understanding of human expertise alongside efficient computer-based methods like BERT models generates better accuracy without sacrificing scalability.
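The hybrid workflow the abstract describes — manually coding a small sample of listings, then training a model on those labels to classify the full corpus — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the listing texts and topic labels are invented, and a lightweight TF-IDF plus logistic-regression pipeline stands in for the BERT-based models the paper discusses.

```python
# Hypothetical sketch of the hybrid (semi-automated) workflow: a small,
# hand-coded sample trains a classifier that then labels the rest of the
# corpus. TF-IDF + logistic regression substitutes for BERT embeddings
# to keep the example lightweight; all data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: manual qualitative coding of a small sample (human-assigned topics).
seed_listings = [
    "spacious two bedroom near metro, utilities included",
    "no pets, strict credit check, first and last month required",
    "sunny studio, walk to beach, parking spot included",
    "tenant must pass background screening, no smoking allowed",
]
seed_topics = ["amenities", "screening", "amenities", "screening"]

# Step 2: train the semi-automated model on the hand-coded sample.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_listings, seed_topics)

# Step 3: apply the trained model to the remaining (large) unlabeled corpus.
unlabeled = ["renovated loft with gym access and rooftop deck"]
predicted = model.predict(unlabeled)
print(predicted[0])
```

In the paper's setting, the embedding step would be a BERT-family model rather than TF-IDF, which is what lets the classifier generalize to the nuanced, free-flow language that fully automated topic models struggle with.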

Source journal metrics: CiteScore 13.30 · Self-citation rate 7.40% · Articles published 111 · Time to review 32 days
About the journal: Computers, Environment and Urban Systems is an interdisciplinary journal publishing cutting-edge and innovative computer-based research on environmental and urban systems that privileges the geospatial perspective. The journal welcomes original, high-quality scholarship of a theoretical, applied, or technological nature, and provides a stimulating presentation of perspectives, research developments, overviews of important new technologies, and uses of major computational, information-based, and visualization innovations. Applied and theoretical contributions demonstrate the scope of computer-based analysis fostering a better understanding of environmental and urban systems, their spatial scope, and their dynamics.
Latest articles from this journal:

- Estimating the density of urban trees in 1890s Leeds and Edinburgh using object detection on historical maps
- The role of data resolution in analyzing urban form and PM2.5 concentration
- Causal discovery and analysis of global city carbon emissions based on data-driven and hybrid intelligence
- Editorial Board
- Exploring the built environment impacts on Online Car-hailing waiting time: An empirical study in Beijing