Deep-Learning-Based Multitask Ultrasound Beamforming
Elay Dahan, Israel Cohen
Information (Switzerland), 23 October 2023. https://doi.org/10.3390/info14100582
In this paper, we present a new method for multitask learning applied to ultrasound beamforming. Beamforming is a critical component of the ultrasound image formation pipeline. Ultrasound images are constructed from sensor readings of multiple transducer elements, with each element typically capturing multiple acquisitions per frame. The beamformer is therefore crucial for both frame rate and overall image quality. Furthermore, post-processing, such as image denoising, is usually applied to the beamformed image to achieve the clarity needed for diagnosis. This work presents a fully convolutional neural network that can learn different tasks through a new weight normalization scheme. We adapt the model to high-frame-rate requirements by fitting the weight normalization parameters for the subsampling task, and to image denoising by optimizing the normalization parameters for the speckle reduction task. Our model outperforms single-angle delay-and-sum beamforming on pixel-level measures for speckle noise reduction, subsampling, and single-angle reconstruction.
Interoperability and Targeted Attacks on Terrorist Organizations Using Intelligent Tools From Network Science
Alexandros Z. Spyropoulos, Evangelos Ioannidis, Ioannis Antoniou
Information (Switzerland), 21 October 2023. https://doi.org/10.3390/info14100580
Early intervention by law enforcement authorities to prevent an impending terrorist attack is of utmost importance for ensuring economic, financial, and social stability. Our previously published research showed that the key individuals who play a vital role in terrorist organizations can be revealed in a timely manner. The problem now is to identify which attack strategy (node removal) is the most damaging to terrorist networks, fragmenting them and thereby rendering them unable to operate under real-world conditions. We examine several attack strategies on four real terrorist networks. Each node removal strategy is based on one of the following: (i) randomness (random node removal), (ii) high strength centrality, (iii) high betweenness centrality, (iv) high clustering coefficient centrality, (v) high recalculated strength centrality, (vi) high recalculated betweenness centrality, and (vii) high recalculated clustering coefficient centrality. The damage of each attack strategy is evaluated in terms of Interoperability, which is defined from the size of the giant component. We also examine a greedy algorithm, which at each step removes the node whose removal yields the maximal decrease in Interoperability. Our analysis revealed that removing nodes based on high recalculated betweenness centrality is the most harmful: the Interoperability of the communication network drops dramatically even if only two nodes are removed. This insight can help law enforcement authorities develop more effective intervention strategies for the early prevention of impending terrorist attacks. The results were obtained from real data on social ties between terrorists (physical, face-to-face social interactions).
Mobility Control Centre and Artificial Intelligence for Sustainable Urban Districts
Francis Marco Maria Cirianni, Antonio Comi, Agata Quattrone
Information (Switzerland), 21 October 2023. https://doi.org/10.3390/info14100581
The application of artificial intelligence (AI) to dynamic mobility management can support the achievement of efficiency and sustainability goals. AI can help to model alternative mobility system scenarios in real time (by processing big data from heterogeneous sources in a very short time), to identify network and service configurations by comparing phenomena in similar contexts, and to support the implementation of demand management measures that achieve sustainability goals. In this paper, an in-depth analysis of scenarios is provided, together with an IT (Information Technology) framework based on emerging technologies and AI to support sustainable and cooperative digital mobility. The functional architecture of an AI-based mobility control centre is then defined, and the process implemented in a medium-large city is presented.
Computing the Sound–Sense Harmony: A Case Study of William Shakespeare’s Sonnets and Francis Webb’s Most Popular Poems
Rodolfo Delmonte
Information (Switzerland), 20 October 2023. https://doi.org/10.3390/info14100576
Poetic devices implicitly work towards inducing the reader to associate intended and expressed meaning with the sounds of the poem. In turn, sounds may be organized a priori into categories and assigned presumed meaning, as suggested by traditional literary studies. To compute the degree of harmony and disharmony, I automatically extracted the sound grids of all of William Shakespeare’s sonnets and combined them with the themes expressed by their contents. In a first experiment, sounds were associated with lexically and semantically based sentiment analysis, obtaining 80% agreement. In a second experiment, sentiment analysis was replaced by Appraisal Theory, yielding a more fine-grained interpretation that combines disharmony with irony. The computation for Francis Webb is based on his 100 most popular poems and combines automatic semantically and lexically based sentiment analysis with sound grids. The results produce visual maps that clearly separate the poems into three clusters: negative harmony, positive harmony, and disharmony, where the latter instantiates the poet’s need to encompass opposites in a desperate attempt to reconcile them. Shakespeare and Webb were chosen to demonstrate the applicability of the proposed method in general contexts of poetry, as they exhibit the widest possible gap at all linguistic and poetic levels.
Improving CS1 Programming Learning with Visual Execution Environments
Raquel Hijón-Neira, Celeste Pizarro, John French, Pedro Paredes-Barragán, Michael Duignan
Information (Switzerland), 20 October 2023. https://doi.org/10.3390/info14100579
Students in their first year of computer science (CS1) at university typically struggle to grasp fundamental programming concepts. This paper discusses research carried out using a Java-based visual execution environment (VEE) to introduce fundamental programming concepts to CS1 students. The VEE guides beginner programmers through the fundamentals of programming, using visual metaphors to explain and direct interactive tasks implemented in Java. The study’s goal was to determine whether the use of the VEE in the instruction of 63 CS1 students from four different groups enrolled at two academic institutions (based in Madrid, Spain, and Galway, Ireland) improves their grasp of fundamental programming concepts. The programming concepts covered were those typically found in an introductory programming course, e.g., input and output, conditionals, loops, functions, arrays, recursion, and files. A secondary goal was to examine whether the use of the VEE enhances students’ understanding of particular concepts more than others, i.e., whether there is a topic-dependent benefit to its use. The study found that instruction with the VEE resulted in a significant improvement in the students’ grasp of fundamental programming concepts compared with a control group that received instruction without the VEE. It also found a pronounced improvement in the students’ grasp of particular concepts (e.g., operators, conditionals, and loops), suggesting a topic-dependent benefit to the use of the VEE.
A Conceptual Design of an AI-Enabled Decision Support System for Analysing Donor Behaviour in Nonprofit Organisations
Idrees Alsolbi, Renu Agarwal, Bhuvan Unhelkar, Tareq Al-Jabri, Mahendra Samarawickrama, Siamak Tafavogh, Mukesh Prasad
Information (Switzerland), 20 October 2023. https://doi.org/10.3390/info14100578
Analysing and understanding donor behaviour in nonprofit organisations (NPOs) is challenging due to the lack of human and technical resources. Machine learning (ML) techniques can analyse and understand donor behaviour to a certain extent; however, it remains to be seen how to build and design an artificial-intelligence-enabled decision support system (AI-enabled DSS) to analyse donor behaviour. This paper therefore proposes a conceptual design of an AI-enabled DSS for analysing donor behaviour in NPOs. The conceptual design was created following a design science research approach to evaluate the initial design principles and features of an AI-enabled DSS for analysing donor behaviour. It was evaluated through formative assessment, via interviews with stakeholders from NPOs conducted using the Appreciative Inquiry framework. The evaluation led to recommendations concerning the efficiency, effectiveness, flexibility, and usability requirements of the AI-enabled DSS. This research contributes to the design knowledge base of AI-enabled DSSs for analysing donor behaviour in NPOs. Future research will combine the theoretical components to introduce a practical AI-enabled DSS for analysing donor behaviour in NPOs. The research is limited to donors who donate money or volunteer time for NPOs.
Thematic Analysis of Big Data in Financial Institutions Using NLP Techniques with a Cloud Computing Perspective: A Systematic Literature Review
Ratnesh Kumar Sharma, Gnana Bharathy, Faezeh Karimi, Anil V. Mishra, Mukesh Prasad
Information (Switzerland), 20 October 2023. https://doi.org/10.3390/info14100577
This literature review explores existing work and practices in applying natural language processing (NLP) techniques for thematic analysis to financial data in cloud environments. The work aims to improve two of the five Vs of big data systems. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) approach for the review and analysed the research papers published on the topic over the last 10 years using a keyword-based search and bibliometric analysis. The systematic literature review was conducted in multiple phases: filters were applied to exclude papers based first on the title and abstract, then on the methodology and conclusion, and finally after reading the full text. The remaining papers were then considered and are discussed here. We found that automated data discovery methods can be augmented by applying NLP-based thematic analysis to financial data in cloud environments. This can help identify the correct classification/categorization and measure data quality for sentiment analysis.
Prototype Selection for Multilabel Instance-Based Learning
Panagiotis Filippakis, Stefanos Ougiaroglou, Georgios Evangelidis
Information (Switzerland), 19 October 2023. https://doi.org/10.3390/info14100572
Reducing the size of the training set by replacing it with a condensed set is a widely adopted practice for enhancing the efficiency of instance-based classifiers while trying to maintain high classification accuracy. This objective can be achieved through data reduction techniques, also known as prototype selection or generation algorithms. Although numerous algorithms in the literature effectively address single-label classification problems, most of them are not applicable to multilabel data, where an instance can belong to multiple classes, and well-known transformation methods cannot be combined with a data reduction technique for various reasons. The Condensed Nearest Neighbor rule is a popular parameter-free single-label prototype selection algorithm, and the IB2 algorithm is its one-pass variation. This paper proposes variations of these algorithms for multilabel data. Through an experimental study conducted on nine distinct datasets, together with statistical tests, we demonstrate that the eight proposed approaches (four for each algorithm) offer significant reduction rates without compromising classification accuracy.
Translation Performance from the User’s Perspective of Large Language Models and Neural Machine Translation Systems
Jungha Son, Boyoung Kim
Information (Switzerland), 19 October 2023. https://doi.org/10.3390/info14100574
The rapid global expansion of ChatGPT, which plays a crucial role in interactive knowledge sharing and translation, underscores the importance of comparative performance assessments in artificial intelligence (AI) technology. This study concentrated on this issue by exploring and contrasting the translation performance of large language models (LLMs) and neural machine translation (NMT) systems. To this end, the APIs of Google Translate, Microsoft Translator, and OpenAI’s ChatGPT were used, leveraging parallel corpora from the Workshop on Machine Translation (WMT) 2018 and 2020 benchmarks. Applying recognized evaluation metrics such as BLEU, chrF, and TER, a comprehensive performance analysis was conducted across a variety of language pairs, translation directions, and reference token sizes. The findings reveal that while Google Translate and Microsoft Translator generally surpass ChatGPT in terms of BLEU, chrF, and TER scores, ChatGPT exhibits superior performance on specific language pairs. Translations from non-English to English consistently yielded better results across all three systems than translations from English to non-English. Notably, translation system performance improved as the token size increased, hinting at the potential benefits of training models on larger token sizes.
The Impact of Data Science Solutions on the Company Turnover
Marian Pompiliu Cristescu, Dumitru Alexandru Mara, Lia Cornelia Culda, Raluca Andreea Nerișanu, Adela Bâra, Simona-Vasilica Oprea
Information (Switzerland), 19 October 2023. https://doi.org/10.3390/info14100573
This study explores the potential of data science solutions, such as Customer Relationship Management (CRM) software, to increase the revenue generation of businesses. We focused on businesses in the accommodation and food service sector across the European Union (EU). The investigation is contextualized within the rising trend of data-driven decision-making and examines the potential correlation between data science applications and business revenues. Through a comprehensive evaluation of Eurostat datasets from 2014 to 2021, we used both univariate and multivariate analyses, assessing the percentage of companies with e-commerce sales across the EU countries and focusing on the use of big data analytics from any source and of CRM tools for marketing or other activities. Big data utilization showed a clear, positive relationship with enhanced e-commerce sales. However, CRM tools exhibited a dualistic impact: while their use in marketing showed no significant effect on sales, their application in non-marketing functions had a negative effect on sales. These findings underscore the potential role of CRM and data science solutions in enhancing business performance in the EU’s accommodation and food service industry.