Pub Date : 2024-02-13 DOI: 10.1007/s00354-024-00240-x
Transfer Learning-Hierarchical Segmentation on COVID CT Scans
Abstract
COVID-19, declared a pandemic by the WHO, has spread worldwide, leading to many infections and deaths. The disease can be fatal, and patients develop symptoms within a 14-day window. CT-based diagnosis enables rapid and accurate detection, and much work has already been done on segmenting infections in CT scans. However, existing infection-segmentation methods are not efficient enough at delineating the infected area. This work therefore proposes an automatic deep learning model that uses transfer learning and hierarchical techniques to segment COVID-19 infections. The proposed architecture, the Transfer Learning with Hierarchical Segmentation Network (TLH-Net), comprises two encoder–decoder architectures connected in series. Each encoder–decoder is similar to U-Net except for a modified 2D convolutional block, an attention block and spectral pooling. In TLH-Net, the first part segments the lung contour from the CT scan slices, and the second part generates the infection mask from the lung contour maps. The model is trained with the TV_bin loss function, which penalizes false-negative and false-positive predictions. The model achieves a Dice coefficient of 98.87% for lung segmentation and 86% for infection segmentation. On an unseen dataset, it achieves a Dice value of 56%.
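The abstract does not give the exact form of TV_bin, but a segmentation loss that weights false negatives and false positives separately is typically a Tversky-style generalization of the Dice loss. A minimal sketch over flat binary masks, with `alpha`/`beta` as illustrative assumptions rather than the paper's values:

```python
def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss over flat binary masks.

    alpha weights false negatives and beta weights false positives;
    alpha = beta = 0.5 reduces this to the Dice loss.
    """
    tp = sum(p * t for p, t in zip(pred, target))          # true positives
    fn = sum((1 - p) * t for p, t in zip(pred, target))    # missed infection pixels
    fp = sum(p * (1 - t) for p, t in zip(pred, target))    # spurious infection pixels
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)
```

Raising `alpha` above `beta` penalizes missed infection pixels more heavily, which is the usual choice in medical segmentation where false negatives are costlier than false positives.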
Pub Date : 2024-02-10 DOI: 10.1007/s00354-023-00239-w
The Impact of Arbitrage Between Stock Markets With and Without Maker–Taker Fees Using an Agent-Based Simulation
Xin Guan, Mahiro Hoshino, Takanobu Mizuta, Isao Yagi
Abstract
An increasing number of exchanges, mainly in the U.S., have adopted a commission structure called maker–taker fees, in which traders placing limit orders (makers) are paid a rebate (a negative trading commission) and traders placing market orders (takers) are charged a trading fee. By paying rebates to makers, exchanges can expect to receive a large number of maker orders and gain market share. Makers include arbitrageurs who make large transactions, so maker–taker fees are one of the most important commission structures for exchanges: they are expected to attract arbitrageurs looking for rebate profits on top of their trading profits. There have been many studies of arbitrage trading, but none we could find focused on the impact of arbitrage between markets with maker–taker fees, where arbitrageurs place limit orders, and markets without maker–taker fees, where they place market orders. In this study, we investigated volatility and market liquidity while varying the rebate amount in our proposed artificial markets, with and without maker–taker fees, and checked the performance of arbitrage trading as the rebate increased. The results show that volatility decreased in the market with maker–taker fees and increased in the market without them, and that both market liquidity and arbitrage performance increased in the market with maker–taker fees as rebates increased. These results indicate that exchanges operating markets with maker–taker fees can offer investors more attractive markets than those without. However, if more arbitrageurs participate in the market with maker–taker fees to take advantage of the rebates, the cost burden on exchanges may increase unnecessarily.
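The rebate mechanics can be illustrated with a toy profit calculation for one arbitrage round trip (the function and its parameters are hypothetical and are not part of the paper's agent-based model): a limit (maker) buy on the maker–taker venue earns a per-share rebate, while the offsetting market (taker) sell on the other venue pays a per-share fee.

```python
def arbitrage_pnl(buy_price, sell_price, qty, maker_rebate=0.0, taker_fee=0.0):
    """Profit of one arbitrage round trip across two venues.

    The limit (maker) buy on the maker-taker market earns maker_rebate
    per share; the market (taker) sell on the other venue pays taker_fee
    per share. A larger rebate directly raises the arbitrageur's profit,
    which is why rebates are expected to attract arbitrage order flow.
    """
    gross = (sell_price - buy_price) * qty
    return gross + maker_rebate * qty - taker_fee * qty
```

Because the rebate enters the profit linearly, even a tiny per-share rebate can make otherwise-marginal price gaps worth trading, consistent with the abstract's finding that higher rebates improve arbitrage performance on the maker–taker venue.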
Pub Date : 2023-11-28 DOI: 10.1007/s00354-023-00238-x
Context-Based Persuasion Analysis of Sentiment Polarity Disambiguation in Social Media Text Streams
Tajinder singh, Madhu Kumari, Daya Sagar Gupta
Abstract
The Bayesian belief network is an effective and practical approach that is widely accepted for real-time series prediction and decision making. However, its computational effort and complexity grow exponentially with the number of states. This paper therefore designs an approach inspired by context-based persuasion analysis of sentiment and its impact on the propagation of false information. Social media text contains unwanted information that must be addressed, including effective polarity prediction for sentiment-wise ambiguous words in generic contexts. The proposed approach uses a persuasion-based strategy, grounded in the social media crowd, to analyze the impact of contextual sentiment polarity in social media, including pre-processing. A Bayesian belief network is used to analyze sentiment polarity, while Turbo Parser provides a visual representation of the diverse feature classes and captures the relationships between features. Furthermore, to analyze each word's lexical dependency on its context, a tree-based dependency-parser representation is used to compute a dependency score. Features associated with sentiment words are extracted using the Penn Treebank for sentiment polarity disambiguation. A graphical model, Bayesian network learning, is thus chosen to design the proposed approach, which accounts for the dependencies among lexicons. Three predictors are introduced: (1) pre-processing and subjectivity normalization, (2) computation of the threshold and persuasion factor, and (3) extraction of sentiments from dependency parsing of the retrieved text. The findings indicate that computing the local and global context of sentiment words is essential for analyzing the polarity of text. We tested the proposed method on a standard dataset, and a real case study based on COVID-19, the 2020 Olympics and the Russia–Ukraine war was also conducted to assess the feasibility of the approach. The findings point to a complex, context-dependent mechanism behind sentiment analysis and shed light on efforts to resolve contextual polarity disambiguation in social media.
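The paper's Bayesian belief network is not specified in the abstract. As a loose illustration of disambiguating a sentiment-ambiguous word from its local context, here is a naive-Bayes-style sketch; the word "cheap", the polarity labels, and every probability in the toy tables are invented for illustration and do not come from the paper:

```python
def polarity_given_context(word, context, priors, likelihoods):
    """Naive-Bayes-style disambiguation of an ambiguous sentiment word:
    P(polarity | word, context) is proportional to
    P(polarity | word) * product of P(context_word | polarity)."""
    scores = {}
    for pol, prior in priors[word].items():
        score = prior
        for c in context:
            score *= likelihoods[pol].get(c, 1e-3)  # smoothing for unseen context words
        scores[pol] = score
    total = sum(scores.values())
    return {pol: s / total for pol, s in scores.items()}

# Hypothetical toy tables: "cheap" is positive about price, negative about quality.
PRIORS = {"cheap": {"pos": 0.5, "neg": 0.5}}
LIKELIHOODS = {
    "pos": {"price": 0.30, "quality": 0.05},
    "neg": {"price": 0.05, "quality": 0.30},
}
```

With these tables, "cheap" near "price" comes out positive and near "quality" negative, mimicking the local-context effect the abstract describes; a real belief network would also propagate global context through the graph structure.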
Pub Date : 2023-11-01 DOI: 10.1007/s00354-023-00237-y
Convolution Neural Network Having Multiple Channels with Own Attention Layer for Depression Detection from Social Data
Sumit Dalal, Sarika Jain, Mayank Dave
Pub Date : 2023-10-13 DOI: 10.1007/s00354-023-00233-2
Financial Causality Extraction Based on Universal Dependencies and Clue Expressions
Hiroki Sakaji, Kiyoshi Izumi
Abstract
This paper proposes a method to extract financial causal knowledge from bilingual text data. Domain-specific causal knowledge plays an important role in human intellectual activities, especially expert decision making; in finance in particular, fund managers and financial analysts need causal knowledge for their work. Natural language processing is highly effective for extracting human-perceived causality; however, existing methods have two major problems. First, causality related to global activities must be extracted from text data in multiple languages, yet multilingual causality extraction has not been established to date. Second, technologies for extracting complex causal structures, e.g., nested causalities, are insufficient. We consider that a model able to extract bilingual and nested causalities can be established using universal dependencies together with clue expressions such as "because" and "since." To solve these problems, the proposed model extracts nested causalities based on such clues and universal dependencies in multilingual text data. The method was evaluated on bilingual text data from the financial domain, and the results demonstrate that it outperformed existing models in the experiment.
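The clue-expression idea can be sketched as a simple splitter: text before a clue such as "because" is read as the effect, and text after it as the cause. This toy version ignores universal dependencies and nested causalities entirely, so it is only a first approximation of the kind of extraction the paper describes:

```python
# Illustrative clue expressions; longer clues are tried first so that
# "because of" is not shadowed by the shorter "because".
CLUES = ["because of", "because", "due to", "since"]

def extract_causality(sentence):
    """Split a sentence on the first clue expression found.

    The text before the clue is taken as the effect and the text after
    it as the cause. Returns None when no clue expression is present.
    """
    low = sentence.lower()
    for clue in CLUES:
        idx = low.find(" " + clue + " ")
        if idx >= 0:
            effect = sentence[:idx].strip(" ,.")
            cause = sentence[idx + len(clue) + 2:].strip(" ,.")
            return {"clue": clue, "cause": cause, "effect": effect}
    return None
```

A dependency-based model would instead anchor the clue in the parse tree, which is what lets it recover causes and effects even when one causality is nested inside another or when the languages order cause and effect differently.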
Pub Date : 2023-09-11 DOI: 10.1007/s00354-023-00231-4
Constructing Sentiment Signal-Based Asset Allocation Method with Causality Information
Rei Taguchi, Hiroki Sakaji, Kiyoshi Izumi, Yuri Murayama
Abstract
This study examines whether financial text is useful for a tactical asset allocation method using stocks. Natural language processing is used to create polarity indexes from financial news. We cluster the created polarity indexes using a change-point detection algorithm, construct a stock portfolio, and rebalance it at each change point using an optimization algorithm. The proposed asset allocation method outperforms the comparative approach, suggesting that the polarity index is useful for constructing an equity asset allocation method.
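The abstract does not name the change-point detection algorithm used; as a stand-in illustration, a one-sided CUSUM detector over a polarity-index series could flag rebalancing points like this (the threshold and the reset-after-detection rule are illustrative assumptions, not the paper's settings):

```python
def cusum_change_points(series, threshold=4.0, drift=0.0):
    """One-sided CUSUM change-point detector.

    Flags an index when the cumulative deviation from the running mean
    exceeds threshold; all statistics reset after each detection. In the
    asset-allocation setting, the portfolio would be re-optimized at
    each flagged index.
    """
    points, pos, neg, mean, n = [], 0.0, 0.0, 0.0, 0
    for i, x in enumerate(series):
        n += 1
        mean += (x - mean) / n                    # running mean since last change point
        pos = max(0.0, pos + x - mean - drift)    # upward-shift statistic
        neg = max(0.0, neg - x + mean - drift)    # downward-shift statistic
        if pos > threshold or neg > threshold:
            points.append(i)
            pos, neg, mean, n = 0.0, 0.0, 0.0, 0  # restart after a detection
    return points
```

On a flat polarity series nothing is flagged, while a sustained level shift in news sentiment is flagged shortly after it begins, which is the behavior a rebalancing trigger needs.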
Pub Date : 2023-08-27 DOI: 10.1007/s00354-023-00230-5
An Approach for Analyzing Unstructured Text Data Using Topic Modeling Techniques for Efficient Information Extraction
A. Zadgaonkar, A. Agrawal