
Latest Publications from the Journal of Computing Science and Engineering

An Efficient Attention Deficit Hyperactivity Disorder (ADHD) Diagnostic Technique Based on Multi-Regional Brain Magnetic Resonance Imaging
Q3 Engineering | Pub Date: 2023-09-30 | DOI: 10.5626/jcse.2023.17.3.135
Sachnev Vasily, B. S. Mahanand
In this paper, an efficient technique for the diagnosis of attention deficit hyperactivity disorder (ADHD) was proposed. The proposed method used features/voxels extracted from structural magnetic resonance imaging (sMRI) scans of seven brain regions and efficiently classified the three subtypes of ADHD (ADHD-C, ADHD-H, and ADHD-I) as well as the typically developing control (TDC) group. Training and testing data for the experiments were obtained from the ADHD-200 database, and 41,721 features/voxels were extracted from the sMRI scans using a region-of-interest (ROI) approach. The proposed ADHD diagnostic technique built an efficient ADHD classifier in two steps. In the first step, the proposed regional voxel selection method (rVSM) selected an optimal set of features/voxels from each of the seven brain regions available in ADHD-200: the amygdala, caudate, cerebellar vermis, corpus callosum, hippocampus, striatum, and thalamus. In the second step, the voxels/features selected by rVSM were combined into a unified set of voxels. This unified set was then used by a multi-region voxel selection method to train an efficient classifier based on the extreme learning machine (ELM). Finally, the proposed method selected a unique set of voxels from the seven brain regions and built a final ELM classifier with maximum accuracy. Experiments clearly indicated that the proposed method produced better results than existing methods.
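The abstract names the extreme learning machine (ELM) as the final classifier but gives no implementation details, so the following is a minimal sketch of a standard ELM trained on a matrix of selected voxel features. The array shapes, the number of hidden nodes, and the toy data standing in for rVSM-selected voxels are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def train_elm(X, y, n_hidden=500, seed=0):
    """Train a basic extreme learning machine (ELM) classifier.

    X: (n_samples, n_features) matrix of selected voxel features.
    y: integer class labels. Hidden-layer weights are random and fixed;
    only the output weights are solved in closed form via a pseudo-inverse.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets
    W = rng.normal(size=(X.shape[1], n_hidden))          # random input weights
    b = rng.normal(size=n_hidden)                        # random biases
    H = np.tanh(X @ W + b)                               # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                         # closed-form output weights
    return W, b, beta, classes

def predict_elm(X, model):
    W, b, beta, classes = model
    H = np.tanh(X @ W + b)
    return classes[np.argmax(H @ beta, axis=1)]

# Toy data standing in for rVSM-selected voxels: 40 subjects, 200 voxels,
# 4 classes (TDC, ADHD-C, ADHD-H, ADHD-I). Values are random, for shape only.
rng = np.random.default_rng(1)
X_train = rng.random((40, 200))
y_train = rng.integers(0, 4, 40)
model = train_elm(X_train, y_train)
print(predict_elm(X_train[:5], model))
```

Because only the output weights are computed (in closed form), training stays cheap even when a voxel-selection search has to evaluate many candidate subsets.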
Citations: 0
A Study on the Recognition of English Pronunciation Features in Teaching by Machine Learning Algorithms
Q3 Engineering | Pub Date: 2023-09-30 | DOI: 10.5626/jcse.2023.17.3.93
Xiong Wei
A better understanding of students' English pronunciation features would be a useful guide for teaching spoken English. This paper first analyzed English pronunciation features and extracted Mel-frequency cepstral coefficient (MFCC) features from the pronunciation signal. Then, the support vector machine (SVM) method was used to identify cases of incorrect and correct pronunciation. To further improve the recognition effect, deep features were extracted with a deep belief network (DBN) and used as the input of the SVM, and the parameters of both the DBN and the SVM were optimized by the sparrow search algorithm (SSA). Experiments were conducted on the dataset. The results showed that the MFCC-SSA-SVM algorithm had better recognition performance than the MFCC-SVM algorithm. The DBN-SVM algorithm had higher recognition correctness and accuracy than the MFCC-SSA-SVM algorithm, while the SSA-DBN-SVM method achieved 88.07% correctness and 85.49% accuracy, the best performance overall. The results demonstrated the reliability of the proposed method for English pronunciation feature recognition; therefore, it can be applied in practical spoken language teaching.
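As a rough illustration of the baseline MFCC-SVM stage described above, here is a minimal sketch assuming librosa for MFCC extraction and scikit-learn for the SVM; the DBN feature extractor and the sparrow-search optimization of the hyperparameters are omitted, and the file names and labels are hypothetical placeholders.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mfcc_vector(wav_path, n_mfcc=13):
    """Load an utterance and summarize its MFCC frames as a fixed-length vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    # Mean and standard deviation over time give a fixed-length utterance feature.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical file lists: 1 = correct pronunciation, 0 = incorrect.
wav_files = ["good_001.wav", "bad_001.wav"]   # placeholder paths
labels = [1, 0]

X = np.stack([mfcc_vector(p) for p in wav_files])
clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf", gamma="scale"))
clf.fit(X, labels)
print(clf.predict(X))
```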
Citations: 0
An Efficient Autism Detection Using Structural Magnetic Resonance Imaging Based on Selective Binary Coded Genetic Algorithm
Q3 Engineering | Pub Date: 2023-09-30 | DOI: 10.5626/jcse.2023.17.3.127
Sachnev Vasily, B. S. Mahanand
In this work, an efficient machine learning technique for autism diagnosis using structural magnetic resonance imaging (MRI) is proposed. The proposed technique employs the voxel-based morphometry (VBM) approach to extract a set of 989 relevant features from MRI. These features are used to train an efficient extreme learning machine (ELM) classifier to distinguish autism spectrum disorder (ASD) subjects from healthy controls. The proposed selective binary coded genetic algorithm (sBCGA) finds a subset of significant VBM features, and this selected subset is used to build a final ELM classifier with maximum overall accuracy. The proposed sBCGA uses a selective sample-balanced crossover designed to improve the classification of ASD and healthy controls. The proposed sBCGA has been extensively tested, and the experimental results clearly indicated better accuracy than existing methods.
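The selective sample-balanced crossover that defines sBCGA is not specified in the abstract, so the sketch below shows only a generic binary-coded genetic algorithm for feature-subset selection, with the cross-validated accuracy of a simple classifier as the fitness; the synthetic data, population size, and mutation rate are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the VBM features (here 50 instead of 989)
# and binary ASD/control labels.
X = rng.normal(size=(80, 50))
y = rng.integers(0, 2, 80)

def fitness(mask):
    """Cross-validated accuracy of a simple classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_ga(n_features, pop_size=20, generations=15, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_features))   # random bit masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut             # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmax([fitness(ind) for ind in pop])]

best_mask = binary_ga(X.shape[1])
print("selected features:", int(best_mask.sum()))
```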
Citations: 0
Exploration of Key Point Localization Neural Network Architectures for Y-Maze Behavior Test Automation
Q3 Engineering | Pub Date: 2023-09-30 | DOI: 10.5626/jcse.2023.17.3.100
Gwanghee Lee, Sangjun Moon, Dasom Choi, Gayeon Kim, Kyoungson Jhang
The Y-maze behavioral test is a pivotal tool for assessing the memory and exploratory tendencies of mice in novel environments. A significant aspect of this test involves the continuous tracking and pinpointing of the mouse’s location, a task that can be labor-intensive for human researchers. This study introduced an automated solution to this challenge through camera-based image processing. We argued that key point localization techniques are more effective than object detection methods, given that only a single mouse is involved in the test. Through an experimental comparison of eight distinct neural network architectures, we identified the most effective structures for localizing key points such as the mouse’s nose, body center, and tail base. Our models were designed to predict not only the mouse key points but also the reference points of the Y-maze device, aiming to streamline the analysis process and minimize human intervention. The approach involves the generation of a heatmap using a deep learning neural network structure, followed by the extraction of the key points’ central location from the heatmap using a soft argmax function. The findings of this study provide a practical guide for experimenters in the selection and application of neural network architectures for Y-maze behavioral testing.
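The heatmap-plus-soft-argmax readout mentioned in the abstract can be illustrated in a few lines of NumPy; the network that produces the heatmap is omitted, and the sharpening factor beta is an assumed parameter.

```python
import numpy as np

def soft_argmax(heatmap, beta=100.0):
    """Differentiable key-point localization from a 2D heatmap.

    A softmax (sharpened by beta) turns the heatmap into a probability map,
    and the expected (x, y) coordinate under that distribution is returned.
    """
    h, w = heatmap.shape
    probs = np.exp(beta * (heatmap - heatmap.max()))
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((probs * xs).sum()), float((probs * ys).sum())

# Toy heatmap with a peak near (x=12, y=7), as a localization head might output.
hm = np.zeros((64, 64))
hm[7, 12] = 1.0
print(soft_argmax(hm))   # approximately (12.0, 7.0)
```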
Citations: 0
Segmentation and Rigid Registration of Liver Dynamic Computed Tomography Images for Diagnostic Assessment of Fatty Liver Disease
Q3 Engineering | Pub Date: 2023-09-30 | DOI: 10.5626/jcse.2023.17.3.117
Kyoyeong Koo, Sunyoung Lee, Kyoung Won Kim, Kyung Won Kim, Jeongjin Lee, Jiwon Hwang, Taeyong Park, Heeryeol Jeong, Seungwoo Khang, Jongmyoung Lee, Hyuk Kwon, Seungwon Na
This study presents a method for diagnosing fatty liver disease by using time-difference liver computed tomography (CT) images of the same patient to perform segmentation and rigid registration on liver regions, excluding the vascular regions. The proposed method comprises three main steps. First, the liver region is segmented in the precontrast phase, and the liver and liver vessel regions are segmented in the portal phase. Second, rigid registration is performed between the liver regions to align the liver positions affected by the patient
Citations: 0
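The abstract above is truncated and does not describe how the rigid registration between the precontrast and portal-phase liver regions is computed, so the following is only a generic sketch of one common approach: a least-squares rigid transform between corresponding 3D landmarks via the Kabsch/SVD method. The landmark points are synthetic, and this is not claimed to be the authors' registration method.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (rotation R, translation t) mapping P onto Q.

    P, Q: (n, 3) arrays of corresponding 3D landmark points.
    Kabsch/SVD method: center both sets, take the SVD of the cross-covariance,
    and correct for a possible reflection.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # reflection correction
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical corresponding landmarks from the two contrast phases.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_transform(P, Q)
print(np.allclose(P @ R.T + t, Q, atol=1e-8))     # True: P is aligned onto Q
```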
On Counting Monotone Polygons and Holes in a Point Set
Q3 Engineering | Pub Date: 2023-09-30 | DOI: 10.5626/jcse.2023.17.3.109
Sang-Won Bae
In this paper, we study the problem of counting the number of monotone polygons in a given set S of n points in general position in the plane. A simple polygon is said to be monotone when any vertical line intersects its boundary at most twice. To the best of our knowledge, this counting problem remains unsolved, and no nontrivial algorithm is known so far. As a research step toward tackling the problem, we define a subclass of monotone polygons and present the first efficient algorithms that exactly count them.
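To make the definition concrete: under the general-position assumption (distinct x-coordinates), a simple polygon is monotone in the paper's sense exactly when a walk around its boundary reverses direction in x exactly twice, i.e., the boundary splits into one x-increasing chain and one x-decreasing chain. A minimal sketch of that check, with made-up example polygons:

```python
def is_x_monotone(vertices):
    """Check whether a simple polygon (list of (x, y) vertices in boundary order)
    is monotone with respect to the x-axis.

    With distinct x-coordinates, the boundary of an x-monotone polygon has
    exactly two local extrema in x (the leftmost and rightmost vertices), so the
    walk changes x-direction exactly twice.
    """
    n = len(vertices)
    changes = 0
    for i in range(n):
        x_prev = vertices[i - 1][0]
        x_curr = vertices[i][0]
        x_next = vertices[(i + 1) % n][0]
        # A strict local extremum in x is where the walk switches direction.
        if (x_curr - x_prev) * (x_next - x_curr) < 0:
            changes += 1
    return changes == 2

# A convex quadrilateral is x-monotone; a polygon with a dent in x is not.
print(is_x_monotone([(0, 0), (4, 1), (3, 3), (1, 2)]))           # True
print(is_x_monotone([(0, 0), (6, 1), (4, 2), (7, 4), (3, 5)]))   # False
```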
Citations: 0
Collective Experience: A Database-Fuelled, Inter-Disciplinary Team-Led Learning System.
Q3 Engineering | Pub Date: 2012-03-01 | DOI: 10.5626/JCSE.2012.6.1.51
Leo A Celi, Roger G Mark, Joon Lee, Daniel J Scott, Trishan Panch
We describe the framework of a data-fuelled, interdisciplinary, team-led learning system. The idea is to build models using patients from one's own institution whose features are similar to those of an index patient with regard to an outcome of interest, in order to predict the utility of diagnostic tests and interventions, as well as to inform prognosis. The Laboratory of Computational Physiology at the Massachusetts Institute of Technology developed and maintains MIMIC-II, a public, deidentified, high-resolution database of patients admitted to Beth Israel Deaconess Medical Center. It hosts teams of clinicians (nurses, doctors, pharmacists) and scientists (database engineers, modelers, epidemiologists) who translate the day-to-day questions raised during rounds that have no clear answers in the current medical literature into study designs, perform the modeling and analysis, and publish their findings. The studies fall into the following broad categories: identification and interrogation of practice variation, predictive modeling of clinical outcomes within patient subsets, and comparative effectiveness research on diagnostic tests and therapeutic interventions. Clinical databases such as MIMIC-II, to which recorded health care transactions (clinical decisions linked with patient outcomes) are constantly uploaded, become the centerpiece of a learning system.
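The core retrieval step described above, finding previously admitted patients whose features resemble an index patient, can be sketched as a nearest-neighbour search over a standardized feature matrix; the feature columns, cohort values, and outcome labels below are invented placeholders, not MIMIC-II fields.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder cohort: rows are past patients, columns are clinical features
# (e.g., age, heart rate, creatinine). Values are invented, not MIMIC-II data.
cohort = rng.normal(loc=[65, 85, 1.1], scale=[12, 15, 0.4], size=(500, 3))
outcomes = rng.integers(0, 2, 500)            # e.g., 1 = received intervention

scaler = StandardScaler().fit(cohort)
nn = NearestNeighbors(n_neighbors=25).fit(scaler.transform(cohort))

# Index patient: retrieve the most similar past patients and summarize outcomes.
index_patient = np.array([[72, 110, 1.8]])
_, idx = nn.kneighbors(scaler.transform(index_patient))
print("intervention rate among similar patients:", outcomes[idx[0]].mean())
```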
Citations: 6
A Clustered Dwarf Structure to Speed up Queries on Data Cubes
Q3 Engineering | Pub Date: 2007-09-03 | DOI: 10.5626/jcse.2007.1.2.195
Y. Bao, Fangling Leng, Daling Wang, Ge Yu
Dwarf is a highly compressed structure that compresses a data cube by eliminating semantic redundancies while the cube is computed. Although it has a high compression ratio, Dwarf is slower to query and more difficult to update because of its structural characteristics. Since the original purpose of a data cube is to speed up query performance, we propose two novel clustering methods for query optimization: a recursion clustering method, which clusters nodes in a recursive manner to speed up point queries, and a hierarchical clustering method, which clusters nodes of the same dimension to speed up range queries. To facilitate the implementation, we design a partition strategy and a logical clustering mechanism. Experimental results show that our methods can effectively improve query performance on data cubes, and that the recursion clustering method is suitable for both point queries and range queries.
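Dwarf itself coalesces prefix and suffix redundancies in the cube, and the paper's contribution is how the resulting nodes are clustered for point and range queries. As background only, here is a minimal prefix-sharing sketch: a trie keyed by dimension values with an aggregate stored at every node, which shows the root-to-node path walk that a point query performs, the access pattern the recursion clustering is designed to speed up. It is not the Dwarf structure or the paper's clustering method, and the fact table is invented.

```python
def build_prefix_tree(rows):
    """Build a prefix tree over (dim1, dim2, ..., measure) tuples.

    Every node stores the aggregate (sum of the measure) of all tuples sharing
    that dimension-value prefix, so a point query on any prefix is answered by
    walking a single root-to-node path.
    """
    tree = {"agg": 0.0, "children": {}}
    for *dims, measure in rows:
        node = tree
        node["agg"] += measure
        for value in dims:
            child = node["children"].setdefault(value, {"agg": 0.0, "children": {}})
            child["agg"] += measure
            node = child
    return tree

def point_query(tree, prefix):
    """Aggregate for a prefix of dimension values, e.g., ('Asia', '2007')."""
    node = tree
    for value in prefix:
        node = node["children"].get(value)
        if node is None:
            return 0.0
    return node["agg"]

# Toy fact table: (region, year, product, sales). Invented data.
rows = [("Asia", "2007", "cpu", 10.0),
        ("Asia", "2007", "ram", 5.0),
        ("Asia", "2008", "cpu", 7.0),
        ("EU",   "2007", "cpu", 3.0)]
cube = build_prefix_tree(rows)
print(point_query(cube, ("Asia",)))           # 22.0
print(point_query(cube, ("Asia", "2007")))    # 15.0
```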
Citations: 4
Transformation of Continuous Aggregation Join Queries over Data Streams
Q3 Engineering | Pub Date: 2007-07-16 | DOI: 10.5626/jcse.2009.3.1.027
T. Tran, B. Lee
We address the continuous processing of an aggregation join query over data streams. Queries of this type involve both join and aggregation operations, with windows specified on the join input streams. To our knowledge, existing research treats join query optimization and aggregation query optimization as separate problems. Our observation, however, is that by putting them within the same scope of query optimization we can generate more efficient query execution plans. This is achieved through more versatile query transformations, the key idea of which is to perform aggregation before the join so that join execution time may be reduced. The idea itself is not new (it was already proposed in the database area), but developing the query transformation rules for streams faces a completely new set of challenges. In this paper, we first propose a query processing model of an aggregation join query with two key stream operators: (1) aggregation set update, which produces an aggregation set of tuples (one tuple per group) and updates it incrementally as new tuples arrive, and (2) aggregation set join, i.e., a join between a stream and an aggregation set of tuples. Then, we introduce concrete query transformation rules specialized to work with streams. The rules are far more compact and yet more general than the rules proposed in the database area. Finally, we present a query processing algorithm generic to all alternative query execution plans that can be generated through the transformations, and we study the performance of these alternative plans through extensive experiments.
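Below is a minimal sketch of the two operators named in the abstract under simplified semantics (per-group sum and count, no window expiration, invented schema): an aggregation set that is updated incrementally as tuples arrive on one stream, and a join that probes it with tuples from the other stream.

```python
from collections import defaultdict

class AggregationSet:
    """Incrementally maintained per-group aggregates (one tuple per group)."""

    def __init__(self):
        self.groups = defaultdict(lambda: {"sum": 0.0, "count": 0})

    def update(self, key, value):
        # Aggregation set update: fold each arriving tuple into its group.
        g = self.groups[key]
        g["sum"] += value
        g["count"] += 1

    def join(self, key):
        # Aggregation set join: probe the aggregation set with a tuple's join key.
        return self.groups.get(key)

# Stream R is pre-aggregated per key; stream S tuples then join against the
# aggregates instead of against every raw R tuple.
agg_r = AggregationSet()
for key, value in [("a", 3.0), ("b", 1.0), ("a", 2.0)]:   # arriving R tuples
    agg_r.update(key, value)

for key, payload in [("a", "s1"), ("c", "s2")]:           # arriving S tuples
    match = agg_r.join(key)
    if match is not None:
        print(key, payload, "joined with aggregate", match)
```

Pre-aggregating one input means each probing tuple joins against at most one tuple per group rather than every raw tuple, which is the intuition behind performing aggregation before the join.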
Citations: 4