
Latest publications in Inf. Comput.

Improving the Effectiveness and Efficiency of Web-Based Search Tasks for Policy Workers
Pub Date : 2023-06-29 DOI: 10.3390/info14070371
T. Schoegje, A. D. Vries, L. Hardman, T. Pieters
We adapt previous literature on search tasks to develop a domain-specific search engine that supports the search tasks of policy workers. To characterise these search tasks, we conducted two rounds of interviews with policy workers at the municipality of Utrecht, and found that they face different challenges depending on the complexity of the task. During simple tasks, policy workers face information overload and time pressure, especially during web-based searches. For complex tasks, users prefer finding domain experts within their organisation to obtain the necessary information, which requires a different type of search functionality. To support simple tasks, we developed a web search engine that indexes web pages from authoritative sources only. We tested the hypothesis that users prefer expert search over web search for complex tasks and found that supporting complex tasks requires integrating functionality for finding internal experts into the broader web search engine. We constructed representative tasks to evaluate the proposed system’s effectiveness and efficiency, and found that it improved user performance. The search functionality developed could be standardised for use by policy workers in different municipalities within the Netherlands.
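The "authoritative sources only" index described above can be sketched as a crawl-time whitelist filter. This is a hypothetical illustration, not the authors' implementation; the domain names and function names below are invented:

```python
# Hypothetical sketch: keep a page in the index only if its host falls
# under a curated whitelist of authoritative domains (or a subdomain of one).
# The whitelist entries here are illustrative, not taken from the paper.
from urllib.parse import urlparse

AUTHORITATIVE_DOMAINS = {"rijksoverheid.nl", "cbs.nl", "utrecht.nl"}

def is_authoritative(url: str) -> bool:
    """True if the URL's host is a whitelisted domain or one of its subdomains."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in AUTHORITATIVE_DOMAINS)

def filter_index(urls):
    """Keep only pages from authoritative sources."""
    return [u for u in urls if is_authoritative(u)]
```

Note the subdomain check compares against `"." + d`, so a lookalike host such as `cbs.nl.evil.com` is rejected.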
Citations: 0
Data Mining Using Association Rules for Intuitionistic Fuzzy Data
Pub Date : 2023-06-29 DOI: 10.3390/info14070372
F. Petry, Ronald R. Yager
This paper considers approaches to the computation of association rules for intuitionistic fuzzy data. Association rules can provide guidance for assessing the significant relationships that can be determined while analyzing data. The approach uses the cardinality of intuitionistic fuzzy sets, which yields minimum and maximum ranges for the support and confidence metrics. A new notation is introduced to represent the fuzzy metrics. A running example of queries about the desirable features of vacation locations illustrates the approach.
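The min/max ranges can be illustrated with a small sketch. This uses a generic formulation of intuitionistic fuzzy cardinality (membership sum for the lower bound, membership plus hesitation for the upper bound), not the paper's exact notation; the data and function names are invented:

```python
# Illustrative sketch: each transaction assigns an item a pair (mu, nu)
# with mu + nu <= 1 (membership, non-membership). Summing mu gives a
# pessimistic cardinality; summing 1 - nu (membership + hesitation) gives
# an optimistic one, producing [min, max] support and confidence ranges.

def support_range(db, itemset):
    """db: list of dicts item -> (mu, nu). Itemset degree = min over items.
    Returns (min_support, max_support)."""
    n = len(db)
    lo = hi = 0.0
    for t in db:
        mus = [t.get(i, (0.0, 1.0))[0] for i in itemset]
        nus = [t.get(i, (0.0, 1.0))[1] for i in itemset]
        lo += min(mus)                    # pessimistic: membership only
        hi += min(1.0 - v for v in nus)   # optimistic: membership + hesitation
    return lo / n, hi / n

def confidence_range(db, antecedent, consequent):
    """Confidence [min, max] for antecedent -> consequent."""
    a_lo, a_hi = support_range(db, antecedent)
    ab_lo, ab_hi = support_range(db, antecedent | consequent)
    lo = ab_lo / a_hi if a_hi > 0 else 0.0
    hi = ab_hi / a_lo if a_lo > 0 else 1.0
    return lo, min(hi, 1.0)
```

In the vacation-location running example, an item might be "beach" with pair (0.8, 0.1) for a given destination, giving hesitation 0.1.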
Citations: 0
Text to Causal Knowledge Graph: A Framework to Synthesize Knowledge from Unstructured Business Texts into Causal Graphs
Pub Date : 2023-06-28 DOI: 10.3390/info14070367
Seethalakshmi Gopalakrishnan, Victor Zitian Chen, Wenwen Dou, Gus Hahn-Powell, Sreekar Nedunuri, Wlodek Zadrozny
This article presents a state-of-the-art system to extract and synthesize causal statements from company reports into a directed causal graph. The extracted information is organized by its relevance to different stakeholder group benefits (customers, employees, investors, and the community/environment). The presented method of synthesizing extracted data into a knowledge graph comprises a framework that can be used for similar tasks in other domains, e.g., medical information. The current work addresses the problem of finding, organizing, and synthesizing a view of the cause-and-effect relationships based on textual data in order to inform and even prescribe the best actions that may affect target business outcomes related to the benefits for different stakeholders (customers, employees, investors, and the community/environment).
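The synthesis step, merging extracted causal statements into one directed graph organized by stakeholder relevance, can be sketched minimally. The triples and function name below are invented examples, not output of the authors' system:

```python
# Minimal sketch: merge extracted (cause, effect, stakeholder) triples into
# a directed causal graph, keeping per edge the set of stakeholder groups
# (customers, employees, investors, community/environment) it concerns.
from collections import defaultdict

def build_causal_graph(triples):
    """triples: iterable of (cause, effect, stakeholder).
    Returns a dict mapping (cause, effect) edges to stakeholder sets."""
    graph = defaultdict(set)
    for cause, effect, stakeholder in triples:
        graph[(cause, effect)].add(stakeholder)
    return dict(graph)

# Invented example statements, as might be extracted from a company report.
triples = [
    ("employee training", "service quality", "customers"),
    ("employee training", "service quality", "employees"),
    ("service quality", "revenue", "investors"),
]
g = build_causal_graph(triples)
```

Deduplicating on the (cause, effect) pair is what turns many per-document statements into a single graph edge annotated with all stakeholder groups that mention it.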
Citations: 1
Scene Text Recognition Based on Improved CRNN
Pub Date : 2023-06-28 DOI: 10.3390/info14070369
Wenhua Yu, Mayire Ibrayim, A. Hamdulla
Text recognition is an important research topic in computer vision. Scene text, i.e., text appearing in real-world scenes, is often designed to attract attention and may therefore be deformed. At the same time, the image acquisition process is affected by factors such as occlusion, noise, and obstruction, making scene text recognition tasks more challenging. In this paper, we improve the CRNN text recognition model, which has relatively low accuracy, performs poorly on irregular text, and obtains text sequence information from only a single aspect, resulting in incomplete information acquisition. Firstly, to address the problems of low text recognition accuracy and poor recognition of irregular text, we add label smoothing to ensure the model’s generalization ability. Then, we introduce the smoothing loss function from speech recognition into the field of text recognition, and add a language model to increase information acquisition channels, ultimately achieving the goal of improving text recognition accuracy. This method was experimentally verified on six public datasets and compared with other advanced methods. The experimental results show that this method performs well in most benchmark tests, and the improved model outperforms the original model in recognition performance.
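The label-smoothing idea mentioned above can be shown in its generic form (this is the standard formulation, not the authors' exact hyperparameters): the one-hot target is mixed with a uniform distribution, discouraging over-confident predictions.

```python
# Generic label-smoothing sketch: the true class gets probability
# 1 - eps (plus its uniform share); the remaining eps is spread
# uniformly over all classes.
import math

def smooth_targets(label: int, num_classes: int, eps: float = 0.1):
    """Return the smoothed target distribution for a single example."""
    base = eps / num_classes
    t = [base] * num_classes
    t[label] += 1.0 - eps
    return t

def cross_entropy(probs, targets):
    """H(targets, probs) = -sum_i targets_i * log(probs_i)."""
    return -sum(t * math.log(p) for t, p in zip(targets, probs) if t > 0)
```

With `num_classes=4` and `eps=0.1`, the true class receives 0.925 and each other class 0.025, so the loss never pushes the model toward a degenerate one-hot output.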
Citations: 0
U-Net_dc: A Novel U-Net-Based Model for Endometrial Cancer Cell Image Segmentation
Pub Date : 2023-06-28 DOI: 10.3390/info14070366
Zhanlin Ji, Dashuang Yao, R. Chen, Tao Lyu, Q. Liao, Li Zhao, Ivan Ganchev
Mutated cells may constitute a source of cancer. As an effective approach to quantifying the extent of cancer, cell image segmentation is of particular importance for understanding the mechanism of the disease, observing the degree of cancer cell lesions, and improving the efficiency of treatment and the useful effect of drugs. However, traditional image segmentation models are not ideal solutions for cancer cell image segmentation due to the fact that cancer cells are highly dense and vary in shape and size. To tackle this problem, this paper proposes a novel U-Net-based image segmentation model, named U-Net_dc, which doubles the original U-Net encoder and decoder and, in addition, uses a skip connection operation between them, for better extraction of the image features. In addition, the feature maps of the last few U-Net layers are upsampled to the same size and then concatenated together for producing the final output, which allows the final feature map to retain many deep-level features. Moreover, dense atrous convolution (DAC) and residual multi-kernel pooling (RMP) modules are introduced between the encoder and decoder, which helps the model obtain receptive fields of different sizes, better extract rich feature expression, detect objects of different sizes, and better obtain context information. According to the results obtained from experiments conducted on Tsinghua University’s private dataset of endometrial cancer cells and the publicly available Data Science Bowl 2018 (DSB2018) dataset, the proposed U-Net_dc model outperforms all state-of-the-art models included in the performance comparison study, based on all evaluation metrics used.
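The benefit of atrous (dilated) convolutions in modules like DAC is that stacking a few small kernels with growing dilation rates yields large, varied receptive fields. A small sketch of the standard receptive-field arithmetic (a textbook formula, not code from the paper):

```python
# For stride-1 convolutions, each layer with kernel size k and dilation d
# extends the receptive field by (k - 1) * d pixels along each axis.

def receptive_field(kernel: int, dilations) -> int:
    """Receptive field of stacked stride-1 dilated convolutions
    sharing one kernel size."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```

For example, three 3x3 convolutions with dilations 1, 2, 4 cover a 15-pixel-wide field, while using the same number of parameters as three ordinary 3x3 layers (which would cover only 7).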
Citations: 0
Regularized Mislevy-Wu Model for Handling Nonignorable Missing Item Responses
Pub Date : 2023-06-28 DOI: 10.3390/info14070368
A. Robitzsch
Missing item responses are frequently found in educational large-scale assessment studies. In this article, the Mislevy-Wu item response model is applied for handling nonignorable missing item responses. This model allows the missingness of an item to depend on the item itself and on a further latent variable. However, with low to moderate amounts of missing item responses, model parameters for the missingness mechanism are difficult to estimate. Hence, regularized estimation using a fused ridge penalty is applied to the Mislevy-Wu model to stabilize estimation. The fused ridge penalty function is defined separately for multiple-choice and constructed response items because previous research indicated that the missingness mechanisms differ strongly between the two item types. A simulation study showed that regularized estimation improves the stability of item parameter estimation. The method is also illustrated using international data from the Progress in International Reading Literacy Study (PIRLS) 2011.
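A fused ridge penalty can be written in its generic form as a sketch (the paper's exact parameterization may differ): a ridge term shrinks the missingness parameters themselves, and a fusion term shrinks their pairwise differences, pulling items of the same type toward a common value.

```python
# Generic fused ridge penalty on missingness parameters d_1, ..., d_n:
#   lam_ridge * sum_i d_i^2  +  lam_fuse * sum_{i<j} (d_i - d_j)^2
# Applied separately per item type (multiple-choice vs. constructed response).

def fused_ridge_penalty(deltas, lam_ridge=1.0, lam_fuse=1.0):
    """Penalty value for one item-type group of parameters."""
    ridge = sum(d * d for d in deltas)
    fuse = sum((deltas[i] - deltas[j]) ** 2
               for i in range(len(deltas)) for j in range(i + 1, len(deltas)))
    return lam_ridge * ridge + lam_fuse * fuse
```

Adding this penalty to the log-likelihood objective biases poorly identified parameters toward each other, which is what stabilizes estimation when missingness is sparse.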
Citations: 0
Document-Level Relation Extraction with Local Relation and Global Inference
Pub Date : 2023-06-27 DOI: 10.3390/info14070365
Yiming Liu, Hongtao Shan, Feng Nie, Gaoyu Zhang, G. Yuan
The current popular approaches to document-level relation extraction are based mainly on either a graph structure or a serialization model for inference, but the graph-structure method complicates the model, while the serialization method loses extraction accuracy as text length increases. To address these problems, this paper develops a new approach to document-level relation extraction based on the idea of “Local Relationship and Global Inference” (in short, LRGI). We first encode the text using the BERT pre-training model and obtain a local relationship vector via local context pooling and a bilinear group algorithm; we then establish a global inference mechanism based on Floyd’s algorithm to achieve multi-path, multi-hop inference and obtain the global inference vector, which allows us to extract multi-class relationships with adaptive thresholding criteria. Taking the DocRED dataset as a testing set, the numerical results show that the proposed approach (LRGI) achieves an accuracy of 0.73 and an F1 value of 62.11, corresponding to 28% and 2% improvements, respectively, over the classical document-level relation extraction model (ATLOP).
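The global inference step is built on Floyd's algorithm. As a hedged scalar illustration (the paper composes relation vectors, which is richer than this sketch), a standard Floyd-Warshall pass can propagate relation strength along multi-hop paths, multiplying strengths along a path and keeping the strongest path per entity pair:

```python
# Floyd-Warshall-style multi-hop inference over scalar relation strengths.
# score[i][j] in [0, 1] is the direct relation strength (0 = no edge).

def strongest_paths(score):
    """Return, for every entity pair, the best strength over all
    multi-hop paths, composing hops by multiplication."""
    n = len(score)
    best = [row[:] for row in score]
    for k in range(n):                      # intermediate entity
        for i in range(n):
            for j in range(n):
                via_k = best[i][k] * best[k][j]
                if via_k > best[i][j]:
                    best[i][j] = via_k
    return best
```

For instance, if entity 0 relates to entity 1 with strength 0.9 and entity 1 to entity 2 with 0.8, the pass infers a 0-to-2 strength of 0.72 even though no direct edge exists.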
Citations: 0
Nonlinear Activation-Free Contextual Attention Network for Polyp Segmentation
Pub Date : 2023-06-26 DOI: 10.3390/info14070362
Weidong Wu, Hongbo Fan, Yu Fan, Jian Wen
The accurate segmentation of colorectal polyps is of great significance for the diagnosis and treatment of colorectal cancer. However, the segmentation of colorectal polyps faces complex problems such as low contrast in the peripheral region of salient images, blurred borders, and diverse shapes. In addition, the traditional UNet network has a large number of parameters and only average segmentation performance. To overcome these problems, an innovative nonlinear activation-free uncertainty contextual attention network is proposed in this paper. Based on the UNet network, an encoder and a decoder are added to predict the saliency map of each module in the bottom-up flow and pass it to the next module. We use Res2Net as the backbone network to extract image features, enhance image features through simple parallel axial channel attention, and obtain high-level features with global semantics and low-level features with edge details. At the same time, a nonlinear activation-free network is introduced, which reduces the complexity between blocks, thereby further enhancing image feature extraction. This work conducted experiments on five commonly used polyp segmentation datasets, and the mean intersection over union, mean Dice coefficient, and mean absolute error all improved, which shows that our method has certain advantages over existing methods in terms of segmentation performance and generalization performance.
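The evaluation metrics named above are standard segmentation measures; as a reference, this sketch computes the Dice coefficient and intersection-over-union for binary masks given as flat 0/1 lists (toy-scale stand-ins for segmentation maps, not the paper's evaluation code):

```python
# Standard overlap metrics for binary segmentation masks.
# An eps term guards against division by zero on empty masks.

def dice(pred, target, eps=1e-8):
    """Dice = 2|P ∩ T| / (|P| + |T|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def iou(pred, target, eps=1e-8):
    """IoU = |P ∩ T| / |P ∪ T|."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return (inter + eps) / (union + eps)
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same masks; reporting both, plus mean absolute error, gives a fuller picture of boundary quality.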
Citations: 0
An Informed Decision Support Framework from a Strategic Perspective in the Health Sector
Pub Date : 2023-06-26 DOI: 10.3390/info14070363
M. Alojail, Mohanad Alturki, S. B. Khan
This paper introduces an informed decision support framework (IDSF) from a strategic perspective in the health sector, focusing on Saudi Arabia. The study addresses the existing challenges and gaps in decision-making processes within Saudi organizations, highlighting the need for proper systems and identifying the loopholes that hinder informed decision making. The research aims to answer two key research questions: (1) how do decision makers ensure the accuracy of their decisions? and (2) what is the proper process to govern and control decision outcomes? To achieve these objectives, the research adopts a qualitative research approach, including an intensive literature review and interviews with decision makers in the Saudi health sector. The proposed IDSF fills the gap in the existing literature by providing a comprehensive and adaptable framework for decision making in Saudi organizations. The framework encompasses structured, semi-structured, and unstructured decisions, ensuring a thorough approach to informed decision making. It emphasizes the importance of integrating non-digital sources of information into the decision-making process, as well as considering factors that impact decision quality and accuracy. The study’s methodology involves data collection through interviews with decision makers, as well as the use of visualization tools to present and evaluate the results. The analysis of the collected data highlights the deficiencies in current decision-making practices and supports the development of the IDSF. The research findings demonstrate that the proposed framework outperforms existing approaches, offering improved accuracy and efficiency in decision making. Overall, this research paper contributes to the state of the art by introducing a novel IDSF specifically designed for the Saudi health sector.
Published in Inf. Comput., vol. 45, no. 1, p. 363.
Citations: 1
Authorship Identification of Binary and Disassembled Codes Using NLP Methods
Pub Date : 2023-06-25 DOI: 10.3390/info14070361
Aleksandr Romanov, A. Kurtukova, A. Fedotova, A. Shelupanov
This article is part of a series aimed at determining the authorship of source codes. Analyzing binary code is a crucial aspect of cybersecurity, software development, and computer forensics, particularly in identifying malware authors. Any program is machine code, which can be disassembled using specialized tools and analyzed for authorship identification, much as natural language text is analyzed with Natural Language Processing methods. In this research, we propose an ensemble of fastText, a support vector machine (SVM), and the authors' hybrid neural network developed in previous work. The improved methodology was evaluated using a dataset of source codes written in the C and C++ languages collected from GitHub and Google Code Jam. The collected source codes were compiled into executable programs and then disassembled using reverse engineering tools. The average accuracy of author identification for disassembled codes using the improved methodology exceeds 0.90. Additionally, the methodology was tested on the source codes, achieving an average accuracy of 0.96 in simple cases and over 0.85 in complex cases. These results validate the effectiveness of the developed methodology and its applicability to solving cybersecurity challenges.
Published in Inf. Comput., vol. 14, no. 1, p. 361.
Citations: 0
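The abstract above treats disassembled instructions as text and classifies authors with an ensemble that includes an SVM. A minimal sketch of just that SVM component is shown below, using scikit-learn with TF-IDF character n-grams over hypothetical disassembly snippets; the fastText model and the authors' hybrid neural network are not reproduced, and the opcode samples and author labels are illustrative only, not from the paper's dataset.

```python
# Sketch: authorship classification of disassembled code with TF-IDF + SVM.
# This shows only the SVM member of the ensemble described in the abstract;
# all samples and labels below are hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy "disassembled" snippets: flattened opcode/operand sequences per author.
samples = [
    "push rbp mov rbp rsp sub rsp 0x10 mov eax 0x0",
    "push rbp mov rbp rsp mov edi 0x1 call printf",
    "xor eax eax ret",
    "xor edi edi call exit",
]
authors = ["author_a", "author_a", "author_b", "author_b"]

# Character n-grams tolerate operand/register variation better than
# whole-word tokens when comparing disassembly listings.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(samples, authors)

# Classify a previously unseen snippet.
prediction = clf.predict(["push rbp mov rbp rsp call puts"])[0]
print(prediction)
```

In practice the paper's pipeline would feed far longer disassembly listings per author and combine this classifier's output with the fastText and neural-network scores; a linear kernel is a common default for high-dimensional sparse TF-IDF features.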