Pub Date: 2023-12-12 | DOI: 10.3390/informatics10040090
Egor Ushakov, A. Naumov, Vladislav Fomberg, P. Vishnyakova, A. Asaturova, Alina Badlaeva, A. Tregubova, E. Karpulevich, Gennady Sukhikh, Timur Fatkhudinov
H-score is a semi-quantitative method used to assess the presence and distribution of proteins in tissue samples by combining the intensity of staining and the percentage of stained nuclei. It is widely used but time-consuming and can be limited in terms of accuracy and precision. Computer-aided methods may help overcome these limitations and improve the efficiency of pathologists’ workflows. In this work, we developed EndoNet, a model for automatic H-score calculation on histological slides. Our proposed method uses neural networks and consists of two main parts. The first is a detection model that predicts keypoints at the centers of nuclei. The second is an H-score module that calculates the H-score value from the mean pixel values at the predicted keypoints. Our model was trained and validated on 1780 annotated tiles of 100 × 100 µm, and we achieved 0.77 mAP on a test dataset. Our best results in H-score calculation proved superior to QuPath predictions. Moreover, the model can be adjusted to a specific specialist or a whole laboratory to reproduce their manner of calculating the H-score. Thus, EndoNet is effective and robust in the analysis of histology slides, which can improve and significantly accelerate the work of pathologists.
{"title":"EndoNet: A Model for the Automatic Calculation of H-Score on Histological Slides","authors":"Egor Ushakov, A. Naumov, Vladislav Fomberg, P. Vishnyakova, A. Asaturova, Alina Badlaeva, A. Tregubova, E. Karpulevich, Gennady Sukhikh, Timur Fatkhudinov","doi":"10.3390/informatics10040090","DOIUrl":"https://doi.org/10.3390/informatics10040090","url":null,"abstract":"H-score is a semi-quantitative method used to assess the presence and distribution of proteins in tissue samples by combining the intensity of staining and the percentage of stained nuclei. It is widely used but time-consuming and can be limited in terms of accuracy and precision. Computer-aided methods may help overcome these limitations and improve the efficiency of pathologists’ workflows. In this work, we developed a model EndoNet for automatic H-score calculation on histological slides. Our proposed method uses neural networks and consists of two main parts. The first is a detection model which predicts the keypoints of centers of nuclei. The second is an H-score module that calculates the value of the H-score using mean pixel values of predicted keypoints. Our model was trained and validated on 1780 annotated tiles with a shape of 100 × 100 µm and we achieved 0.77 mAP on a test dataset. We obtained our best results in H-score calculation; these results proved superior to QuPath predictions. Moreover, the model can be adjusted to a specific specialist or whole laboratory to reproduce the manner of calculating the H-score. Thus, EndoNet is effective and robust in the analysis of histology slides, which can improve and significantly accelerate the work of pathologists.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"8 4","pages":""},"PeriodicalIF":3.1,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139007623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.3390/informatics10040089
Jaskaran Gill, Madhu Chetty, Suryani Lim, Jennifer Hallinan
Relation extraction from biological publications plays a pivotal role in accelerating scientific discovery and advancing medical research. While vast amounts of this knowledge are stored within the published literature, extracting it manually from the continually growing volume of documents is becoming increasingly arduous. Recently, attention has turned towards automatically extracting such knowledge using pre-trained large language models (LLMs) and deep-learning algorithms for automated relation extraction. However, the complex syntactic structure of biological sentences, with nested entities and domain-specific terminology, together with insufficient annotated training corpora, poses major challenges to accurately capturing entity relationships from unstructured data. To address these issues, in this paper we propose a Knowledge-based Intelligent Text Simplification (KITS) approach focused on the accurate extraction of biological relations. KITS precisely captures the relational context among the binary relations within a sentence while preventing changes in meaning in the sentences it simplifies. The experiments show that the proposed technique, using well-known performance metrics, resulted in a 21% increase in precision with only 25% of sentences simplified in the Learning Language in Logic (LLL) dataset. Combined with BioBERT, a popular pre-trained LLM, the proposed method was able to outperform other state-of-the-art methods.
{"title":"Knowledge-Based Intelligent Text Simplification for Biological Relation Extraction","authors":"Jaskaran Gill, Madhu Chetty, Suryani Lim, Jennifer Hallinan","doi":"10.3390/informatics10040089","DOIUrl":"https://doi.org/10.3390/informatics10040089","url":null,"abstract":"Relation extraction from biological publications plays a pivotal role in accelerating scientific discovery and advancing medical research. While vast amounts of this knowledge is stored within the published literature, extracting it manually from this continually growing volume of documents is becoming increasingly arduous. Recently, attention has been focused towards automatically extracting such knowledge using pre-trained Large Language Models (LLM) and deep-learning algorithms for automated relation extraction. However, the complex syntactic structure of biological sentences, with nested entities and domain-specific terminology, and insufficient annotated training corpora, poses major challenges in accurately capturing entity relationships from the unstructured data. To address these issues, in this paper, we propose a Knowledge-based Intelligent Text Simplification (KITS) approach focused on the accurate extraction of biological relations. KITS is able to precisely and accurately capture the relational context among various binary relations within the sentence, alongside preventing any potential changes in meaning for those sentences being simplified by KITS. The experiments show that the proposed technique, using well-known performance metrics, resulted in a 21% increase in precision, with only 25% of sentences simplified in the Learning Language in Logic (LLL) dataset. Combining the proposed method with BioBERT, the popular pre-trained LLM was able to outperform other state-of-the-art methods.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"31 5","pages":""},"PeriodicalIF":3.1,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138978619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-07 | DOI: 10.3390/informatics10040088
Paraskevas Koukaras, Dimitrios Rousidis, Christos Tjortjis
The identification and analysis of sentiment polarity in microblog data have drawn increased attention. Researchers and practitioners attempt to extract knowledge by evaluating public sentiment in response to global events. This study aimed to evaluate public attitudes towards the spread of COVID-19 by performing sentiment analysis on over 2.1 million tweets in English. The implications included the generation of insights for timely disease outbreak prediction and assertions regarding worldwide events, which can help policymakers take suitable actions. We investigated whether there was a correlation between public sentiment and the number of cases and deaths attributed to COVID-19. The research design integrated text preprocessing (regular expression operations, (de)tokenization, stopword removal), sentiment polarity analysis via TextBlob, hypothesis formulation (null hypothesis testing), and statistical analysis (Pearson coefficient and p-value). The key findings highlight a correlation between sentiment polarity and deaths, starting 41 days before and extending up to 3 days after the counts. Twitter users reacted to increased numbers of COVID-19-related deaths after four days by posting tweets with fading sentiment polarity. We also detected a strong correlation between COVID-19 Twitter conversation polarity and reported cases and a weak correlation between polarity and reported deaths.
{"title":"Unraveling Microblog Sentiment Dynamics: A Twitter Public Attitudes Analysis towards COVID-19 Cases and Deaths","authors":"Paraskevas Koukaras, Dimitrios Rousidis, Christos Tjortjis","doi":"10.3390/informatics10040088","DOIUrl":"https://doi.org/10.3390/informatics10040088","url":null,"abstract":"The identification and analysis of sentiment polarity in microblog data has drawn increased attention. Researchers and practitioners attempt to extract knowledge by evaluating public sentiment in response to global events. This study aimed to evaluate public attitudes towards the spread of COVID-19 by performing sentiment analysis on over 2.1 million tweets in English. The implications included the generation of insights for timely disease outbreak prediction and assertions regarding worldwide events, which can help policymakers take suitable actions. We investigated whether there was a correlation between public sentiment and the number of cases and deaths attributed to COVID-19. The research design integrated text preprocessing (regular expression operations, (de)tokenization, stopwords), sentiment polarization analysis via TextBlob, hypothesis formulation (null hypothesis testing), and statistical analysis (Pearson coefficient and p-value) to produce the results. The key findings highlight a correlation between sentiment polarity and deaths, starting at 41 days before and expanding up to 3 days after counting. Twitter users reacted to increased numbers of COVID-19-related deaths after four days by posting tweets with fading sentiment polarization. We also detected a strong correlation between COVID-19 Twitter conversation polarity and reported cases and a weak correlation between polarity and reported deaths.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"9 4","pages":""},"PeriodicalIF":3.1,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138592576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-29 | DOI: 10.3390/informatics10040087
Joost C. F. de Winter, Dimitra Dodou, Arno H. A. Stienen
ChatGPT is widely used among students, a situation that challenges educators. The current paper presents two strategies that do not push educators into a defensive role but can empower them. Firstly, we show, based on statistical analysis, that ChatGPT use can be recognized from certain keywords such as ‘delves’ and ‘crucial’. This insight allows educators to detect ChatGPT-assisted work more effectively. Secondly, we illustrate that ChatGPT can be used to assess texts written by students. The latter topic was presented in two interactive workshops provided to educators and educational specialists. The results of the workshops, where prompts were tested live, indicated that, provided a targeted prompt is used, ChatGPT is good at recognizing errors in texts but is not consistent in grading. Ethical and copyright concerns were also raised in the workshops. In conclusion, the methods presented in this paper may help fortify the teaching methods of educators. The computer scripts that we used for live prompting are available and enable educators to give similar workshops.
{"title":"ChatGPT in Education: Empowering Educators through Methods for Recognition and Assessment","authors":"Joost C. F. de Winter, Dimitra Dodou, Arno H. A. Stienen","doi":"10.3390/informatics10040087","DOIUrl":"https://doi.org/10.3390/informatics10040087","url":null,"abstract":"ChatGPT is widely used among students, a situation that challenges educators. The current paper presents two strategies that do not push educators into a defensive role but can empower them. Firstly, we show, based on statistical analysis, that ChatGPT use can be recognized from certain keywords such as ‘delves’ and ‘crucial’. This insight allows educators to detect ChatGPT-assisted work more effectively. Secondly, we illustrate that ChatGPT can be used to assess texts written by students. The latter topic was presented in two interactive workshops provided to educators and educational specialists. The results of the workshops, where prompts were tested live, indicated that ChatGPT, provided a targeted prompt is used, is good at recognizing errors in texts but not consistent in grading. Ethical and copyright concerns were raised as well in the workshops. In conclusion, the methods presented in this paper may help fortify the teaching methods of educators. The computer scripts that we used for live prompting are available and enable educators to give similar workshops.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"57 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139209639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-21 | DOI: 10.3390/informatics10040086
Brian Rizqi Paradisiaca Darnoto, D. Siahaan, Diana Purwitasari
Persuasive content in online news contains elements that aim to persuade readers and may not necessarily include factual information. Since only some sentences in a news article indicate persuasiveness, it is challenging to differentiate news with and without persuasive content. Recognizing persuasive sentences through a combined text summarization and classification approach is important for understanding persuasive messages effectively. Text summarization identifies arguments and key points, while classification separates persuasive sentences based on the linguistic and semantic features used. Our proposed architecture applies text summarization to shorten sentences without persuasive content and then uses classification models to detect those with persuasive indications. In this paper, we compare the performance of latent semantic analysis (LSA) and TextRank as summarization methods, the latter of which outperformed the former in all trials, as well as two classifiers, a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. We prepared a dataset of approximately 1700 news articles written in Indonesian, manually labeled for persuasiveness and collected from a nationwide electronic news portal. Comparative studies in our experimental results show that the TextRank–BERT–BiLSTM model achieved the highest accuracy of 95% in detecting persuasive news. The text summarization methods were able to generate detailed and precise summaries of the news articles, and the deep learning models were able to effectively differentiate between persuasive news and real news.
{"title":"Automated Detection of Persuasive Content in Electronic News","authors":"Brian Rizqi Paradisiaca Darnoto, D. Siahaan, Diana Purwitasari","doi":"10.3390/informatics10040086","DOIUrl":"https://doi.org/10.3390/informatics10040086","url":null,"abstract":"Persuasive content in online news contains elements that aim to persuade its readers and may not necessarily include factual information. Since a news article only has some sentences that indicate persuasiveness, it would be quite challenging to differentiate news with or without the persuasive content. Recognizing persuasive sentences with a text summarization and classification approach is important to understand persuasive messages effectively. Text summarization identifies arguments and key points, while classification separates persuasive sentences based on the linguistic and semantic features used. Our proposed architecture includes text summarization approaches to shorten sentences without persuasive content and then using classifiers model to detect those with persuasive indication. In this paper, we compare the performance of latent semantic analysis (LSA) and TextRank in text summarization methods, the latter of which has outperformed in all trials, and also two classifiers of convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM). We have prepared a dataset (±1700 data and manually persuasiveness-labeled) consisting of news articles written in the Indonesian language collected from a nationwide electronic news portal. Comparative studies in our experimental results show that the TextRank–BERT–BiLSTM model achieved the highest accuracy of 95% in detecting persuasive news. The text summarization methods were able to generate detailed and precise summaries of the news articles and the deep learning models were able to effectively differentiate between persuasive news and real news.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"204 ","pages":""},"PeriodicalIF":3.1,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139251248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-06 | DOI: 10.3390/informatics10040085
Moonkyoung Jang
This study delves into the determinants influencing individuals’ intentions to adopt telemedicine apps during the COVID-19 pandemic. The study aims to offer a comprehensive framework for understanding behavioral intentions by leveraging the Technology Acceptance Model (TAM), supplemented by e-health literacy and social influence variables. The study analyzes survey data from 364 adults using partial least squares structural equation modeling (PLS-SEM) to empirically examine the internal relationships within the model. The results indicated that e-health literacy, attitude, and social influence significantly impacted the intention to use telemedicine apps. Notably, e-health literacy positively influenced both perceived usefulness and perceived ease of use, extending its role beyond mere usage intention. The study underscored the substantial role of social influence in predicting the intention to use telemedicine apps, challenging the traditional oversight of social influence in the TAM framework. The findings will help researchers, practitioners, and governments understand how social influence and e-health literacy shape the adoption of telehealth apps, and support promoting their use by strengthening both factors.
{"title":"Why Do People Use Telemedicine Apps in the Post-COVID-19 Era? Expanded TAM with E-Health Literacy and Social Influence","authors":"Moonkyoung Jang","doi":"10.3390/informatics10040085","DOIUrl":"https://doi.org/10.3390/informatics10040085","url":null,"abstract":"This study delves into the determinants influencing individuals’ intentions to adopt telemedicine apps during the COVID-19 pandemic. The study aims to offer a comprehensive framework for understanding behavioral intentions by leveraging the Technology Acceptance Model (TAM), supplemented by e-health literacy and social influence variables. The study analyzes survey data from 364 adults using partial least squares structural equation modeling (PLS-SEM) to empirically examine the internal relationships within the model. Results indicated that e-health literacy, attitude, and social influence significantly impacted the intention to use telemedicine apps. Notably, e-health literacy positively influenced both perceived usefulness and ease of use, expanding beyond mere usage intention. The study underscored the substantial role of social influence in predicting the intention to use telemedicine apps, challenging the traditional oversight of social influence in the TAM framework. The findings will help researchers, practitioners, and governments understand how social influence and e-health literacy influence the adoption of telehealth apps and promote the use of telehealth apps through enhancing social influence and e-health literacy.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"13 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135589608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | DOI: 10.3390/informatics10040084
Evaristus D. Madyatmadja, Corinthias P. M. Sianipar, Cristofer Wijaya, David J. M. Sembiring
Crowdsourcing has gradually become an effective e-government process to gather citizen complaints over the implementation of various public services. In practice, the collected complaints form a massive dataset, making it difficult for government officers to analyze the big data effectively. It is consequently vital to use data mining algorithms to classify the citizen complaint data for efficient follow-up actions. However, different classification algorithms produce varied classification accuracies. Thus, this study aimed to compare the accuracy of several classification algorithms on crowdsourced citizen complaint data. Taking the case of the LAKSA app in Tangerang City, Indonesia, this study included k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost in the accuracy assessment. The data were taken from crowdsourced citizen complaints submitted to the LAKSA app, including those aggregated from official social media channels, from May 2021 to April 2022. The results showed SVM with a linear kernel to be the most accurate among the assessed algorithms (89.2%). In contrast, AdaBoost (base learner: Decision Trees) produced the lowest accuracy. Still, the accuracy levels of all algorithms varied with the amount of training data available for the actual classification categories. Overall, the assessments of all algorithms indicated that their accuracies were not significantly different, with an overall variation of 4.3%. The AdaBoost-based classification, in particular, showed a large dependence on the choice of base learners. Looking at the method and results, this study contributes to e-government, data mining, and big data discourses. This research recommends that governments continuously conduct supervised training of classification algorithms over their crowdsourced citizen complaints to seek the highest accuracy possible, paving the way for smart and sustainable governance.
{"title":"Classifying Crowdsourced Citizen Complaints through Data Mining: Accuracy Testing of k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost","authors":"Evaristus D. Madyatmadja, Corinthias P. M. Sianipar, Cristofer Wijaya, David J. M. Sembiring","doi":"10.3390/informatics10040084","DOIUrl":"https://doi.org/10.3390/informatics10040084","url":null,"abstract":"Crowdsourcing has gradually become an effective e-government process to gather citizen complaints over the implementation of various public services. In practice, the collected complaints form a massive dataset, making it difficult for government officers to analyze the big data effectively. It is consequently vital to use data mining algorithms to classify the citizen complaint data for efficient follow-up actions. However, different classification algorithms produce varied classification accuracies. Thus, this study aimed to compare the accuracy of several classification algorithms on crowdsourced citizen complaint data. Taking the case of the LAKSA app in Tangerang City, Indonesia, this study included k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost for the accuracy assessment. The data were taken from crowdsourced citizen complaints submitted to the LAKSA app, including those aggregated from official social media channels, from May 2021 to April 2022. The results showed SVM with a linear kernel as the most accurate among the assessed algorithms (89.2%). In contrast, AdaBoost (base learner: Decision Trees) produced the lowest accuracy. Still, the accuracy levels of all algorithms varied in parallel to the amount of training data available for the actual classification categories. Overall, the assessments on all algorithms indicated that their accuracies were insignificantly different, with an overall variation of 4.3%. The AdaBoost-based classification, in particular, showed its large dependence on the choice of base learners. Looking at the method and results, this study contributes to e-government, data mining, and big data discourses. This research recommends that governments continuously conduct supervised training of classification algorithms over their crowdsourced citizen complaints to seek the highest accuracy possible, paving the way for smart and sustainable governance.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"38 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135220809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-31 | DOI: 10.3390/informatics10040083
Hendrik Ballhausen, Ludwig Christian Hinske
Privacy-preserving computation (PPC) enables encrypted computation of private data. While advantageous in theory, the complex technology has steep barriers to entry in practice. Here, we derive design goals and principles for a middleware that encapsulates the demanding cryptography on the server side and provides a simple-to-use interface to client-side application developers. The resulting architecture, “Federated Secure Computing”, offloads computing-intensive tasks to the server and separates the concerns of cryptography and business logic. It provides microservices through an OpenAPI 3.0 definition and hosts multiple protocols through self-discovered plugins. It requires only minimal DevSecOps capabilities and is straightforward and secure. Finally, it is small enough to work in the Internet of Things (IoT) and in propaedeutic settings on consumer hardware. We provide benchmarks for calculations with a secure multiparty computation (SMPC) protocol, for both vertically and horizontally partitioned data. Runtimes are in the range of seconds on both dedicated workstations and IoT devices such as a Raspberry Pi or smartphones. A reference implementation is available as free and open-source software under the MIT license.
{"title":"Federated Secure Computing","authors":"Hendrik Ballhausen, Ludwig Christian Hinske","doi":"10.3390/informatics10040083","DOIUrl":"https://doi.org/10.3390/informatics10040083","url":null,"abstract":"Privacy-preserving computation (PPC) enables encrypted computation of private data. While advantageous in theory, the complex technology has steep barriers to entry in practice. Here, we derive design goals and principles for a middleware that encapsulates the demanding cryptography server side and provides a simple-to-use interface to client-side application developers. The resulting architecture, “Federated Secure Computing”, offloads computing-intensive tasks to the server and separates concerns of cryptography and business logic. It provides microservices through an Open API 3.0 definition and hosts multiple protocols through self-discovered plugins. It requires only minimal DevSecOps capabilities and is straightforward and secure. Finally, it is small enough to work in the internet of things (IoT) and in propaedeutic settings on consumer hardware. We provide benchmarks for calculations with a secure multiparty computation (SMPC) protocol, both for vertically and horizontally partitioned data. Runtimes are in the range of seconds on both dedicated workstations and IoT devices such as Raspberry Pi or smartphones. A reference implementation is available as free and open source software under the MIT license.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"81 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135810602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-27 | DOI: 10.3390/informatics10040082
Luke Balcombe
Artificial intelligence (AI) chatbots have gained prominence since 2022. Powered by big data, natural language processing (NLP) and machine learning (ML) algorithms, they offer the potential to expand capabilities, improve productivity and provide guidance and support in various domains. Human–Artificial Intelligence (HAI) is proposed to help with the integration of human values, empathy and ethical considerations into AI in order to address the limitations of AI chatbots and enhance their effectiveness. Mental health is a critical global concern, with a substantial impact on individuals, communities and economies. Digital mental health solutions, leveraging AI and ML, have emerged to address the challenges of access, stigma and cost in mental health care. Despite their potential, ethical and legal implications surrounding these technologies remain uncertain. This narrative literature review explores the potential of AI chatbots to revolutionize digital mental health while emphasizing the need for ethical, responsible and trustworthy AI algorithms. The review is guided by three key research questions: the impact of AI chatbots on technology integration, the balance between benefits and harms, and the mitigation of bias and prejudice in AI applications. Methodologically, the review involves extensive database and search engine searches, utilizing keywords related to AI chatbots and digital mental health. Peer-reviewed journal articles and media sources were purposively selected to address the research questions, resulting in a comprehensive analysis of the current state of knowledge on this evolving topic. In conclusion, AI chatbots hold promise in transforming digital mental health but must navigate complex ethical and practical challenges. The integration of HAI principles, responsible regulation and scoping reviews are crucial to maximizing their benefits while minimizing potential risks. Collaborative approaches and modern educational solutions may enhance responsible use and mitigate biases in AI applications, ensuring a more inclusive and effective digital mental health landscape.
{"title":"AI Chatbots in Digital Mental Health","authors":"Luke Balcombe","doi":"10.3390/informatics10040082","DOIUrl":"https://doi.org/10.3390/informatics10040082","url":null,"abstract":"Artificial intelligence (AI) chatbots have gained prominence since 2022. Powered by big data, natural language processing (NLP) and machine learning (ML) algorithms, they offer the potential to expand capabilities, improve productivity and provide guidance and support in various domains. Human–Artificial Intelligence (HAI) is proposed to help with the integration of human values, empathy and ethical considerations into AI in order to address the limitations of AI chatbots and enhance their effectiveness. Mental health is a critical global concern, with a substantial impact on individuals, communities and economies. Digital mental health solutions, leveraging AI and ML, have emerged to address the challenges of access, stigma and cost in mental health care. Despite their potential, ethical and legal implications surrounding these technologies remain uncertain. This narrative literature review explores the potential of AI chatbots to revolutionize digital mental health while emphasizing the need for ethical, responsible and trustworthy AI algorithms. The review is guided by three key research questions: the impact of AI chatbots on technology integration, the balance between benefits and harms, and the mitigation of bias and prejudice in AI applications. Methodologically, the review involves extensive database and search engine searches, utilizing keywords related to AI chatbots and digital mental health. Peer-reviewed journal articles and media sources were purposively selected to address the research questions, resulting in a comprehensive analysis of the current state of knowledge on this evolving topic. In conclusion, AI chatbots hold promise in transforming digital mental health but must navigate complex ethical and practical challenges. The integration of HAI principles, responsible regulation and scoping reviews are crucial to maximizing their benefits while minimizing potential risks. Collaborative approaches and modern educational solutions may enhance responsible use and mitigate biases in AI applications, ensuring a more inclusive and effective digital mental health landscape.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"18 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136235035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-23 | DOI: 10.3390/informatics10040081
Célia Tavares, Luciana Oliveira, Pedro Duarte, Manuel Moreira da Silva
According to a recent study by OpenAI, Open Research, and the University of Pennsylvania, large language models (LLMs) based on artificial intelligence (AI), such as generative pretrained transformers (GPTs), may have potential implications for the job market, specifically regarding occupations that demand writing or programming skills. This research points out that interpreters and translators are among the occupations with the greatest exposure to AI in the US job market (76.5%), in a trend that is expected to affect other regions of the globe. This article, following a mixed-methods survey-based research approach, provides insights into the awareness and knowledge about AI among Portuguese language service providers (LSPs), specifically regarding neural machine translation (NMT) and large language models (LLMs), their actual use and usefulness, as well as their potential influence on work performance and the labour market. The results show that most professionals are unable to identify whether AI and/or automation technologies support the tools that are most used in the profession. The perceived usefulness of AI is essentially low to moderate, and the professionals who are less familiar with it and less knowledgeable also demonstrate a lack of trust in it. Two thirds of the sample anticipate negative or very negative effects of AI on their profession, citing the devaluation and replacement of experts, the reduction of income, and the reconfiguration of the translator’s career into that of a mere post-editor as major concerns.
{"title":"Artificial Intelligence: A Blessing or a Threat for Language Service Providers in Portugal","authors":"Célia Tavares, Luciana Oliveira, Pedro Duarte, Manuel Moreira da Silva","doi":"10.3390/informatics10040081","DOIUrl":"https://doi.org/10.3390/informatics10040081","url":null,"abstract":"According to a recent study by OpenAI, Open Research, and the University of Pennsylvania, large language models (LLMs) based on artificial intelligence (AI), such as generative pretrained transformers (GPTs), may have potential implications for the job market, specifically regarding occupations that demand writing or programming skills. This research points out that interpreters and translators are one of the main occupations with greater exposure to AI in the US job market (76.5%), in a trend that is expected to affect other regions of the globe. This article, following a mixed-methods survey-based research approach, provides insights into the awareness and knowledge about AI among Portuguese language service providers (LSPs), specifically regarding neural machine translation (NMT) and large language models (LLM), their actual use and usefulness, as well as their potential influence on work performance and the labour market. The results show that most professionals are unable to identify whether AI and/or automation technologies support the tools that are most used in the profession. The usefulness of AI is essentially low to moderate and the professionals who are less familiar with it and less knowledgeable also demonstrate a lack of trust in it. Two thirds of the sample estimate negative or very negative effects of AI in their profession, expressing the devaluation and replacement of experts, the reduction of income, and the reconfiguration of the career of translator to mere post-editors as major concerns.","PeriodicalId":37100,"journal":{"name":"Informatics","volume":"16 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135413110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}