
Latest publications in Acta Universitatis Sapientiae Informatica

Connecting the Last.fm Dataset to LyricWiki and MusicBrainz. Lyrics-based experiments in genre classification
IF 0.3 Pub Date : 2018-12-01 DOI: 10.2478/ausi-2018-0009
Z. Bodó, Eszter Szilágyi
Abstract Music information retrieval has lately become an important field of information retrieval, because by profound analysis of music pieces important information can be collected: genre labels, mood prediction, artist identification, just to name a few. The lack of large-scale music datasets containing audio features and metadata has led to the construction and publication of the Million Song Dataset (MSD) and its satellite datasets. Nonetheless, mainly because of licensing limitations, no freely available lyrics datasets have been published for research. In this paper we describe the construction of an English lyrics dataset based on the Last.fm Dataset, connected to LyricWiki’s database and MusicBrainz’s encyclopedia. To avoid copyright issues, only the URLs to the lyrics are stored in the database. In order to demonstrate the suitability of the compiled dataset, in the second part of the paper we present genre classification experiments with lyrics-based features, including bag-of-n-grams, as well as higher-level features such as rhyme-based and statistical text features. We obtained results similar to the experimental outcomes presented in other works, showing that more sophisticated textual features can improve genre classification performance, and indicating the superiority of the binary weighting scheme compared to tf–idf.
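As a rough illustration of the weighting comparison described above, the sketch below contrasts binary and tf–idf bag-of-n-gram features with scikit-learn on a made-up toy lyrics corpus; the corpus, the LinearSVC classifier and the 2-fold split are placeholder choices, not the paper's experimental setup.

```python
# Minimal sketch (not the authors' pipeline): binary vs. tf-idf weighting
# of bag-of-n-gram lyrics features for genre classification.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder lyrics snippets and genre labels, two per class.
lyrics = [
    "love you baby love tonight",
    "baby hold me love me tonight",
    "smoke on the water fire in the sky",
    "we will we will rock you",
    "country roads take me home",
    "my old truck and a dirt country road",
]
genres = ["pop", "pop", "rock", "rock", "country", "country"]

for name, vectorizer in [
    ("binary", CountVectorizer(ngram_range=(1, 2), binary=True)),
    ("tf-idf", TfidfVectorizer(ngram_range=(1, 2))),
]:
    model = make_pipeline(vectorizer, LinearSVC())
    scores = cross_val_score(model, lyrics, genres, cv=2)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```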
Citations: 2
Statistical complexity of the quasiperiodical damped systems
IF 0.3 Pub Date : 2018-12-01 DOI: 10.2478/ausi-2018-0012
Á. Fülöp
Abstract We consider the concept of statistical complexity to describe quasiperiodically forced damped systems using snapshot attractors. This allows us to understand the behaviour of these dynamical systems through the probability distribution of the time series, distinguishing regular, random and structural complexity on finite measurements. We interpreted the statistical complexity on the snapshot attractor and determined it for the quasiperiodically forced pendulum.
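The abstract does not spell out the complexity measure; one common formulation is the López-Ruiz–Mancini–Calbet product of normalized Shannon entropy and disequilibrium, sketched below on a histogram of a (here random, placeholder) time series. Whether this exact variant matches the paper's definition is an assumption.

```python
# Sketch of one common statistical-complexity measure (LMC):
# C = H_norm * D, with H_norm the normalized Shannon entropy of a
# distribution p and D its squared distance from the uniform distribution.
import numpy as np

def statistical_complexity(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()                                  # probability distribution
    n = len(p)
    nz = p[p > 0]
    h = -np.sum(nz * np.log(nz)) / np.log(n)         # normalized entropy in [0, 1]
    d = np.sum((p - 1.0 / n) ** 2)                   # disequilibrium
    return h * d

# Placeholder "time series": random samples instead of a snapshot-attractor orbit.
series = np.random.default_rng(0).normal(size=10_000)
hist, _ = np.histogram(series, bins=64)
print(statistical_complexity(hist))
```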
Citations: 0
Minimum covering reciprocal distance signless Laplacian energy of graphs
IF 0.3 Pub Date : 2018-12-01 DOI: 10.2478/ausi-2018-0011
A. Alhevaz, M. Baghipur, E. Hashemi, Y. Alizadeh
Abstract Let G be a simple connected graph. The reciprocal transmission $\mathrm{Tr}'_G(\nu)$ of a vertex $\nu$ is defined as $\mathrm{Tr}'_G(\nu)=\sum_{u\in V(G),\,u\neq\nu}\frac{1}{d_G(u,\nu)}$. The reciprocal distance signless Laplacian (briefly RDSL) matrix of a connected graph G is defined as RQ(G) = diag(Tr′(G)) + RD(G), where RD(G) is the Harary matrix (reciprocal distance matrix) of G and diag(Tr′(G)) is the diagonal matrix of the vertex reciprocal transmissions in G. In this paper, we investigate the RDSL spectrum of some classes of graphs that arise from graph operations such as the cartesian product, the extended double cover product and the InduBala product. We introduce the minimum covering reciprocal distance signless Laplacian matrix (or briefly MCRDSL matrix) of G as the square matrix $RQ_C(G):=(q_{ij})$ of order n, where $$q_{ij}=\begin{cases}1+\mathrm{Tr}'(\nu_i) & \text{if } i=j \text{ and } \nu_i\in C,\\ \mathrm{Tr}'(\nu_i) & \text{if } i=j \text{ and } \nu_i\notin C,\\ \frac{1}{d(\nu_i,\nu_j)} & \text{otherwise,}\end{cases}$$ and C is a minimum vertex cover set of G. The MCRDSL energy of a graph G is defined as the sum of the eigenvalues of $RQ_C$. Extremal graphs with respect to the MCRDSL energy of a graph are characterized. We also obtain some bounds on the MCRDSL energy of a graph and on the MCRDSL spectral radius of G, which is the largest eigenvalue of the matrix $RQ_C(G)$.
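A small numerical sketch of these definitions with networkx and numpy is given below; the Petersen graph is an arbitrary example and the approximate vertex-cover routine is a shortcut (the paper uses a true minimum vertex cover), so this only illustrates the matrices, not the paper's results.

```python
# Sketch: build the Harary matrix RD, the RDSL matrix RQ = diag(Tr') + RD,
# and an (approximate) MCRDSL matrix, then report its eigenvalue sum.
import networkx as nx
import numpy as np
from networkx.algorithms.approximation import min_weighted_vertex_cover

G = nx.petersen_graph()                       # arbitrary example graph
nodes = list(G.nodes())
n = len(nodes)
dist = dict(nx.all_pairs_shortest_path_length(G))

RD = np.zeros((n, n))                         # Harary (reciprocal distance) matrix
for i, u in enumerate(nodes):
    for j, v in enumerate(nodes):
        if i != j:
            RD[i, j] = 1.0 / dist[u][v]
tr = RD.sum(axis=1)                           # reciprocal transmissions Tr'(v)

RQ = np.diag(tr) + RD                         # RDSL matrix

C = min_weighted_vertex_cover(G)              # approximation, not a true minimum cover
RQC = RD.copy()
for i, u in enumerate(nodes):
    RQC[i, i] = tr[i] + (1.0 if u in C else 0.0)

eigenvalues = np.linalg.eigvalsh(RQC)         # RQC is symmetric
print("MCRDSL energy (sum of eigenvalues):", eigenvalues.sum())
print("MCRDSL spectral radius:", eigenvalues.max())
```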
Citations: 1
Exact fit problem generator for cutting and packing, revisiting of the upper deck placement algorithm
IF 0.3 Pub Date : 2018-08-01 DOI: 10.2478/ausi-2018-0005
Levente Filep, L. Illyés
Abstract Problem generators are practical solutions for generating a set of inputs to specific problems. These inputs are widely used for testing, comparing and optimizing placement algorithms. The problem generator presented in this paper fills a gap in the area of 2D Cutting & Packing, as the sum of the areas of the small objects is equal to the area of the Large Object and the instance has at least one perfect solution. In this paper, the already proposed Upper Deck algorithm is revisited and used to test the proposed generator's outputs. This algorithm bypasses the dead-area problem that occurs in most well-known strategies for the 2D Single Knapsack Problem, where a single large rectangle must be covered with small, heterogeneous rectangular shapes whose total area exceeds the large object's area. The idea of placing the small shapes in a free corner simplifies and speeds up the placement algorithm, as only the available corners are checked for possible placements, and collision detection only requires checking the corners and edges of the placed shape. Since the proposed generator output has at least one exact solution, a series of optimizations performed on the algorithm is also presented.
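A minimal way to build such exact-fit instances, not necessarily the authors' generator, is to guillotine-split the large object recursively so the pieces tile it by construction; the split policy and minimum piece size below are arbitrary assumptions.

```python
# Sketch: generate rectangle pieces whose areas sum exactly to W * H, with a
# perfect packing guaranteed by construction (recursive guillotine splits).
import random

def exact_fit_pieces(w, h, n_pieces, min_side=1, rng=random):
    """Return a list of (width, height) pieces that tile a w x h rectangle."""
    pieces = [(w, h)]
    while len(pieces) < n_pieces:
        splittable = [p for p in pieces if p[0] >= 2 * min_side or p[1] >= 2 * min_side]
        if not splittable:
            break                                        # nothing left to split
        pw, ph = rng.choice(splittable)
        pieces.remove((pw, ph))
        if pw >= 2 * min_side and (ph < 2 * min_side or rng.random() < 0.5):
            cut = rng.randint(min_side, pw - min_side)   # vertical cut
            pieces += [(cut, ph), (pw - cut, ph)]
        else:
            cut = rng.randint(min_side, ph - min_side)   # horizontal cut
            pieces += [(pw, cut), (pw, ph - cut)]
    return pieces

pieces = exact_fit_pieces(100, 60, 20)
assert sum(a * b for a, b in pieces) == 100 * 60         # exact fit by construction
print(len(pieces), "pieces, e.g.", pieces[:5])
```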
Citations: 0
A survey on sentiment classification algorithms, challenges and applications
IF 0.3 Pub Date : 2018-08-01 DOI: 10.2478/ausi-2018-0004
Muhammad Rizwan Rashid Rana, Asif Nawaz, J. Iqbal
Abstract Sentiment classification is the process of exploring the sentiments, emotions, ideas and thoughts expressed by people in sentences. Sentiment classification allows us to judge people's sentiments and feelings about all aspects of a topic by analyzing their reviews, social media comments, etc. Machine learning techniques and lexicon-based techniques are the approaches most widely used in sentiment classification to predict sentiments from customer reviews and comments. Machine learning techniques include several learning algorithms to judge sentiment, e.g. Naive Bayes and support vector machines, whereas lexicon-based techniques rely on resources such as SentiWordNet and WordNet. The main goal of this survey is to give a nearly complete picture of sentiment classification techniques. The paper provides a comprehensive overview of recent and past research on sentiment classification and outlines research questions and approaches for future work.
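To make the lexicon-based family concrete, the sketch below scores text against a tiny hand-made polarity lexicon; the word weights are invented placeholders rather than SentiWordNet entries, and real systems additionally handle negation, intensifiers and part-of-speech information.

```python
# Sketch of lexicon-based sentiment scoring with a toy polarity lexicon.
LEXICON = {"good": 1.0, "great": 2.0, "love": 1.5,
           "bad": -1.0, "awful": -2.0, "hate": -1.5}

def lexicon_sentiment(text):
    tokens = text.lower().split()
    score = sum(LEXICON.get(tok, 0.0) for tok in tokens)   # sum word polarities
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("I love this phone and the camera is great"))   # positive
print(lexicon_sentiment("awful battery and bad support"))               # negative
```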
Citations: 6
Low and high grade glioma segmentation in multispectral brain MRI data
IF 0.3 Pub Date : 2018-08-01 DOI: 10.2478/ausi-2018-0007
L. Szilágyi, David Iclanzan, Zoltán Kapás, Z. Szabó, Ágnes Győrfi, László Lefkovits
Abstract Several hundred thousand people are diagnosed with brain cancer every year, and the majority die within the next two years. The chances of survival are most easily improved by early diagnosis. This is why there is a strong need for reliable algorithms that can detect the presence of gliomas in their early stage. While an automatic tumor detection algorithm can support a mass screening system, the precise segmentation of the tumor can assist medical staff in therapy planning and patient monitoring. This paper presents a random forest based procedure trained to segment gliomas in multispectral volumetric MRI records. Beside the four observed features, the proposed solution uses 100 further features extracted via morphological operations and Gabor wavelet filtering. A neighborhood-based post-processing was designed to regularize and improve the output of the classifier. The proposed algorithm was trained and tested separately with the 54 low-grade and 220 high-grade tumor volumes of the MICCAI BRATS 2016 training database. For both data sets, the achieved accuracy is characterized by an overall mean Dice score > 83%, sensitivity > 85%, and specificity > 98%. The proposed method is likely to detect all gliomas larger than 10 mL.
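The sketch below shows the general shape of such a pipeline, not the paper's implementation: a random forest trained on per-voxel feature vectors (synthetic stand-ins for the 104 intensity, morphological and Gabor features) and evaluated with the Dice score.

```python
# Sketch: per-voxel random forest classification plus Dice-score evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice_score(pred, truth):
    """Dice = 2 |P ∩ T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 104))                   # voxels x features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)    # synthetic "tumor" labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:4000], y[:4000])                        # train on the first 4000 voxels
pred = clf.predict(X[4000:])
print("Dice on held-out voxels:", dice_score(pred, y[4000:]))
```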
Citations: 17
On the use of model transformation for the automation of product derivation process in SPL
IF 0.3 Pub Date : 2018-08-01 DOI: 10.2478/ausi-2018-0003
Nesrine Lahiani, Djamal Bennouar
Abstract Product derivation represents one of the main challenges that Software Product Line (SPL) engineering faces. Deriving individual products from shared software assets is a time-consuming and expensive activity. In this paper, we (1) present an MDE approach for engineering SPLs and (2) propose to leverage model-to-model (MMT) and model-to-text (MTT) transformations to support both the domain engineering and the application engineering processes. In this work, we use ATL as a model-to-model transformation language and Acceleo as a model-to-text transformation language. The proposed approach is discussed with e-Health product line applications.
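The model-to-text step can be illustrated in plain Python instead of Acceleo: a resolved product configuration (a hypothetical dict below) is rendered into source text by a template. This is only a sketch of the MTT idea, not the approach's actual transformation rules.

```python
# Sketch of a model-to-text transformation: configuration model -> generated code.
from string import Template

product_model = {                       # hypothetical e-Health product configuration
    "name": "TeleMonitoring",
    "features": ["HeartRateSensor", "AlertNotifier"],
}

CLASS_TEMPLATE = Template(
    "public class ${name}App {\n"
    "${fields}"
    "}\n"
)

# One field per selected feature, lower-camel-cased from the feature name.
fields = "".join(
    f"    private {f} {f[0].lower() + f[1:]};\n" for f in product_model["features"]
)
print(CLASS_TEMPLATE.substitute(name=product_model["name"], fields=fields))
```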
Citations: 0
Modular strategic SMT solving with SMT-RAT
IF 0.3 Pub Date : 2018-08-01 DOI: 10.2478/ausi-2018-0001
Gereon Kremer, E. Ábrahám
Abstract In this paper we present the latest developments in SMT-RAT, a tool for the automated satisfiability check of quantifier-free real and integer arithmetic formulas. As a distinguishing feature, SMT-RAT provides a set of solving modules and supports their strategic combination. We describe our CArL library for arithmetic computations, the available modules implemented on top of CArL, and how modules can be combined into satisfiability-modulo-theories (SMT) solvers. Besides the traditional SMT approach, some new modules also support the recently proposed and highly promising model-constructing satisfiability calculus approach.
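The idea of strategic module combination can be sketched independently of SMT-RAT's actual C++ interface: each module either decides a formula or answers unknown, and a sequential strategy passes the formula down the chain. The toy modules and the predicate-over-rationals formula representation below are deliberate simplifications.

```python
# Sketch of strategic module composition (toy interface, not SMT-RAT's API).
from fractions import Fraction
import itertools

SAT, UNSAT, UNKNOWN = "sat", "unsat", "unknown"

class ConstantModule:
    """Decides only variable-free formulas by evaluating them once."""
    def check(self, formula, variables):
        if variables:
            return UNKNOWN
        return SAT if formula({}) else UNSAT

class GridSearchModule:
    """Incomplete search: samples a small rational grid, can only answer SAT."""
    def check(self, formula, variables):
        grid = [Fraction(n, 2) for n in range(-8, 9)]
        for point in itertools.product(grid, repeat=len(variables)):
            if formula(dict(zip(variables, point))):
                return SAT
        return UNKNOWN

class SequentialStrategy:
    """Run modules in order; the first definite verdict wins."""
    def __init__(self, modules):
        self.modules = modules
    def check(self, formula, variables):
        for module in self.modules:
            verdict = module.check(formula, variables)
            if verdict != UNKNOWN:
                return verdict
        return UNKNOWN

solver = SequentialStrategy([ConstantModule(), GridSearchModule()])
# x^2 + y^2 == 2 over the sampled rationals: satisfied at (1, 1).
print(solver.check(lambda a: a["x"] ** 2 + a["y"] ** 2 == 2, ["x", "y"]))  # sat
```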
Citations: 4
Hierarchical clustering with deep Q-learning
IF 0.3 Pub Date : 2018-05-28 DOI: 10.2478/ausi-2018-0006
Richard Forster, A. Fulop
Abstract Following up on our previous study on applying hierarchical clustering algorithms to high energy particle physics, this paper explores the possibilities of using deep learning to generate models capable of processing the clusterization themselves. The technique chosen for training is reinforcement learning, which allows the system to evolve based on interactions between the model and the underlying graph. The result is a model that, by learning on a modest dataset of 10,000 nodes during 70 epochs, can reach 83.77% precision for hierarchical and 86.33% for high energy jet physics datasets in predicting the appropriate clusters.
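As background for the training technique, the sketch below shows the tabular Q-learning update that deep Q-learning approximates with a neural network; the tiny chain environment is a placeholder, not the clustering environment used in the paper.

```python
# Sketch of the tabular Q-learning update on a 5-state chain environment.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:        # episode ends at the right end of the chain
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))               # "right" ends up preferred in every state
```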
Citations: 0
Hierarchical kt jet clustering for parallel architectures
IF 0.3 Pub Date : 2017-12-20 DOI: 10.1515/ausi-2017-0012
Richard Forster, Á. Fülöp
Abstract The reconstruction and analysis of measured data play an important role in the research of high energy particle physics, leading to new results in both experimental and theoretical physics. This requires algorithm improvements and high computing capacity. Clustering algorithms make it possible to characterize the jet structure more accurately. A more granular parallelization of the kt cluster algorithm was explored by combining it with the hierarchical clustering methods used in network evaluations. The kt method makes it possible to follow the development of particles produced in high-energy nucleus-nucleus collisions. The hierarchical clustering algorithms work on graphs, so the particle information used by the standard kt algorithm was first transformed into an appropriate graph representing the network of particles. Testing was done using data samples from the ALICE offline library, which contains the modules required to simulate the ALICE detector, a dedicated Pb-Pb detector. The proposed algorithm was compared to the FastJet toolkit's standard longitudinally invariant kt implementation. Parallelizing the standard non-optimized version of this algorithm on the available CPU architecture proved to be 1.6 times faster than the standard implementation, while the solution proposed in this paper achieved a 12 times faster computing performance and is scalable enough to run efficiently on GPUs.
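For reference, the sequential longitudinally invariant kt algorithm that the paper parallelizes can be written naively as below, using d_ij = min(kt_i^2, kt_j^2) ΔR_ij^2 / R^2 and the beam distance d_iB = kt_i^2; the pt-weighted recombination and the brute-force O(N^3) loop are simplifications compared to FastJet.

```python
# Naive sketch of sequential kt clustering over (pt, rapidity, phi) particles.
import math

def kt_cluster(particles, R=1.0):
    """particles: list of (pt, rapidity, phi) tuples; returns the final jets."""
    jets, pseudo = [], list(particles)
    while pseudo:
        best, pair = None, None
        for i in range(len(pseudo)):
            pti, yi, phii = pseudo[i]
            diB = pti ** 2                              # beam distance d_iB
            if best is None or diB < best:
                best, pair = diB, (i, None)
            for j in range(i + 1, len(pseudo)):
                ptj, yj, phij = pseudo[j]
                dphi = math.pi - abs(abs(phii - phij) - math.pi)
                dij = min(pti, ptj) ** 2 * ((yi - yj) ** 2 + dphi ** 2) / R ** 2
                if dij < best:
                    best, pair = dij, (i, j)
        i, j = pair
        if j is None:
            jets.append(pseudo.pop(i))                  # promote to a final jet
        else:
            pti, yi, phii = pseudo[i]
            ptj, yj, phij = pseudo[j]
            pt = pti + ptj                              # pt-weighted recombination
            merged = (pt, (pti * yi + ptj * yj) / pt, (pti * phii + ptj * phij) / pt)
            for k in sorted((i, j), reverse=True):
                pseudo.pop(k)
            pseudo.append(merged)
    return jets

print(kt_cluster([(50.0, 0.10, 0.00), (45.0, 0.12, 0.05), (20.0, -2.0, 3.0)], R=0.6))
```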
Citations: 1