Nayara Cristina da Silva, M. Albertini, A. R. Backes, G. Pena
Pediatric hospital readmission places greater burdens on the patient, their family network, and the health system. Machine learning can be a good strategy to expand knowledge in this area and to assist in identifying patients at risk of readmission. The objective of this study was to develop a predictive model to identify children and adolescents at high risk of potentially avoidable 30-day readmission using a machine learning approach. We conducted a retrospective cohort study of patients under 18 years old admitted to a tertiary university hospital. We collected demographic, clinical, and nutritional data from electronic databases and applied machine learning techniques to build the predictive models. The 30-day hospital readmission rate was 9.50%. The accuracy of the CART model with bagging was 0.79, and the sensitivity and specificity were 76.30% and 64.40%, respectively. Machine learning approaches can predict avoidable 30-day pediatric hospital readmission in tertiary care.
{"title":"Prediction of readmissions in hospitalized children and adolescents by machine learning","authors":"Nayara Cristina da Silva, M. Albertini, A. R. Backes, G. Pena","doi":"10.1145/3555776.3577592","DOIUrl":"https://doi.org/10.1145/3555776.3577592","url":null,"abstract":"Pediatric hospital readmission places greater burdens on the patient, their family network, and the health system. Machine learning can be a good strategy to expand knowledge in this area and to assist in identifying patients at risk of readmission. The objective of this study was to develop a predictive model to identify children and adolescents at high risk of potentially avoidable 30-day readmission using a machine learning approach. We conducted a retrospective cohort study of patients under 18 years old admitted to a tertiary university hospital. We collected demographic, clinical, and nutritional data from electronic databases and applied machine learning techniques to build the predictive models. The 30-day hospital readmission rate was 9.50%. The accuracy of the CART model with bagging was 0.79, and the sensitivity and specificity were 76.30% and 64.40%, respectively. Machine learning approaches can predict avoidable 30-day pediatric hospital readmission in tertiary care.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"27 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91138581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
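The abstract names a CART model with bagging evaluated by accuracy, sensitivity, and specificity. A minimal sketch of that pipeline with scikit-learn is shown below; the synthetic data (and its ~10% positive rate) is an invented stand-in for the study's demographic, clinical, and nutritional features, which are not public.

```python
# Hedged sketch: bagged CART classifier for readmission-style risk prediction.
# The dataset below is synthetic; feature semantics are NOT the paper's.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# ~10% positive class, echoing the reported 9.50% readmission rate
X, y = make_classification(n_samples=1000, n_features=12,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# CART (a decision tree) wrapped in bagging
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

accuracy = accuracy_score(y_te, pred)
sensitivity = recall_score(y_te, pred)                # true-positive rate
specificity = recall_score(y_te, pred, pos_label=0)   # true-negative rate
print(round(accuracy, 2), round(sensitivity, 2), round(specificity, 2))
```

On imbalanced outcomes such as readmission, sensitivity and specificity are more informative than accuracy alone, which is presumably why the paper reports all three.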
Thamilselvam B, Y. Ramesh, S. Kalyanasundaram, M. Rao
The analysis of traffic policies, for instance, the duration of green and red phases at intersections, can be quite challenging. While the introduction of communication systems can potentially lead to better solutions, it is important to analyse and formulate policies in the presence of potential communication failures and delays. Given the stochastic nature of traffic, posing the problem as a model checking problem in probabilistic epistemic temporal logic seems promising. In this work, we propose an approach that uses epistemic modalities to model the effect of communication between multiple intersections and temporal modalities to model the progression of traffic volumes over time. We validate our approach in a non-stochastic setting, using the tool Model Checker for Multi-Agent Systems (MCMAS). We develop a Statistical Model Checking module and use it in conjunction with a tool chain that integrates a traffic simulator (SUMO) and a network simulator (OMNeT++/Veins) to study the impact of communications on traffic policies.
{"title":"Traffic Intersections as Agents: A model checking approach for analysing communicating agents","authors":"Thamilselvam B, Y. Ramesh, S. Kalyanasundaram, M. Rao","doi":"10.1145/3555776.3577720","DOIUrl":"https://doi.org/10.1145/3555776.3577720","url":null,"abstract":"The analysis of traffic policies, for instance, the duration of green and red phases at intersections, can be quite challenging. While the introduction of communication systems can potentially lead to better solutions, it is important to analyse and formulate policies in the presence of potential communication failures and delays. Given the stochastic nature of traffic, posing the problem as a model checking problem in probabilistic epistemic temporal logic seems promising. In this work, we propose an approach that uses epistemic modalities to model the effect of communication between multiple intersections and temporal modalities to model the progression of traffic volumes over time. We validate our approach in a non-stochastic setting, using the tool Model Checker for Multi-Agent Systems (MCMAS). We develop a Statistical Model Checking module and use it in conjunction with a tool chain that integrates a traffic simulator (SUMO) and a network simulator (OMNeT++/Veins) to study the impact of communications on traffic policies.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"7 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87657020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
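The Statistical Model Checking module mentioned in the abstract estimates the probability that a property holds by sampling random runs of a stochastic model rather than exploring the state space exhaustively. The toy model below (a single intersection's queue with Bernoulli arrivals) is invented for illustration and is not the paper's SUMO/OMNeT++ setup.

```python
# Hedged sketch of statistical model checking: Monte Carlo estimation of
# P("queue never exceeds max_queue vehicles") over random runs of a toy model.
import random

def run_satisfies_property(seed, steps=50, max_queue=5):
    """One random run of a toy intersection queue."""
    rng = random.Random(seed)
    queue = 0
    for _ in range(steps):
        queue += (rng.random() < 0.6) + (rng.random() < 0.6)  # up to 2 arrivals
        queue = max(0, queue - 1)           # one vehicle served per green step
        if queue > max_queue:
            return False                    # property violated on this run
    return True

def estimate_probability(n_runs=1000, seed=0):
    rng = random.Random(seed)
    ok = sum(run_satisfies_property(rng.random()) for _ in range(n_runs))
    return ok / n_runs

p = estimate_probability()
print(p)
```

Increasing `n_runs` tightens the estimate's confidence interval, which is the usual accuracy/cost trade-off in statistical model checking.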
The emergence of Alexa and Siri, and more recently OpenAI's ChatGPT, raises the question of whether ad hoc biological queries can also be computed without end-users' active involvement in the code-writing process. While advances have been made, current querying architectures for biological databases still assume some degree of computational competence and significant structural awareness of the underlying network of databases on the part of biologists, if not active code writing. Given that biological databases are highly distributed and heterogeneous, and most are not FAIR compliant, a significant amount of data-integration expertise is essential for a query to be accurately crafted and meaningfully executed. In this paper, we introduce a flexible and intelligent query reformulation assistant, called Needle, as the back-end query execution engine of a natural language query interface to online biological databases. Needle is built on a data model called BioStar, which uses a meta-knowledgebase, called the schema graph, to map natural language queries to relevant databases and biological concepts. The implementation of Needle using BioStar is the focus of this article.
{"title":"Mapping Strategies for Declarative Queries over Online Heterogeneous Biological Databases for Intelligent Responses","authors":"H. Jamil, Kallol Naha","doi":"10.1145/3555776.3577652","DOIUrl":"https://doi.org/10.1145/3555776.3577652","url":null,"abstract":"The emergence of Alexa and Siri, and more recently OpenAI's ChatGPT, raises the question of whether ad hoc biological queries can also be computed without end-users' active involvement in the code-writing process. While advances have been made, current querying architectures for biological databases still assume some degree of computational competence and significant structural awareness of the underlying network of databases on the part of biologists, if not active code writing. Given that biological databases are highly distributed and heterogeneous, and most are not FAIR compliant, a significant amount of data-integration expertise is essential for a query to be accurately crafted and meaningfully executed. In this paper, we introduce a flexible and intelligent query reformulation assistant, called Needle, as the back-end query execution engine of a natural language query interface to online biological databases. Needle is built on a data model called BioStar, which uses a meta-knowledgebase, called the schema graph, to map natural language queries to relevant databases and biological concepts. The implementation of Needle using BioStar is the focus of this article.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"2 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89067093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
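A schema graph of the kind the abstract describes can be pictured as a mapping from biological concepts (and their surface synonyms) to the databases that serve them. The entries and matching logic below are invented for illustration; they are not Needle's or BioStar's actual model.

```python
# Hedged sketch of a schema-graph lookup: map natural-language query terms to
# biological concepts and candidate databases. Entries are illustrative only.
SCHEMA_GRAPH = {
    "gene":    {"databases": ["GenBank", "Ensembl"],  "synonyms": {"genes", "locus"}},
    "protein": {"databases": ["UniProt"],             "synonyms": {"proteins", "enzyme"}},
    "pathway": {"databases": ["KEGG", "Reactome"],    "synonyms": {"pathways"}},
}

def map_query(nl_query: str) -> dict:
    """Return {concept: candidate databases} for concepts mentioned in the query."""
    tokens = {t.strip("?,.").lower() for t in nl_query.split()}
    hits = {}
    for concept, entry in SCHEMA_GRAPH.items():
        if ({concept} | entry["synonyms"]) & tokens:
            hits[concept] = entry["databases"]
    return hits

print(map_query("Which pathways involve this protein?"))
# -> {'protein': ['UniProt'], 'pathway': ['KEGG', 'Reactome']}
```

A real system would of course use NLP rather than token matching, but the division of labor is the same: the meta-knowledgebase decides *where* a query fragment can be answered before any database is contacted.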
Rodrigo de Magalhães Marques dos Santos Silva, Cláudio Correia, M. Correia, Luís Rodrigues
Users often encrypt files they store on cloud storage services to ensure data privacy. Unfortunately, without additional mechanisms, encrypting files prevents the use of server-side deduplication, as two identical files will differ once encrypted. Encrypted deduplication techniques combine file encryption and data deduplication. This combination usually requires some form of direct or indirect coordination between the different clients. In this paper, we address the problem of reconciling the need to encrypt data with the advantages of deduplication. In particular, we study techniques that achieve this objective while avoiding frequency analysis attacks, i.e., attacks that infer the content of an encrypted file based on how frequently the file is stored and/or accessed. We propose a new protocol for assigning encryption keys to files that leverages trusted execution environments to hide the frequencies of chunks from the adversary.
{"title":"Deduplication vs Privacy Tradeoffs in Cloud Storage","authors":"Rodrigo de Magalhães Marques dos Santos Silva, Cláudio Correia, M. Correia, Luís Rodrigues","doi":"10.1145/3555776.3577711","DOIUrl":"https://doi.org/10.1145/3555776.3577711","url":null,"abstract":"Users often encrypt files they store on cloud storage services to ensure data privacy. Unfortunately, without additional mechanisms, encrypting files prevents the use of server-side deduplication, as two identical files will differ once encrypted. Encrypted deduplication techniques combine file encryption and data deduplication. This combination usually requires some form of direct or indirect coordination between the different clients. In this paper, we address the problem of reconciling the need to encrypt data with the advantages of deduplication. In particular, we study techniques that achieve this objective while avoiding frequency analysis attacks, i.e., attacks that infer the content of an encrypted file based on how frequently the file is stored and/or accessed. We propose a new protocol for assigning encryption keys to files that leverages trusted execution environments to hide the frequencies of chunks from the adversary.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"45 2 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86785354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
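The classic way to reconcile encryption with deduplication is convergent (message-locked) encryption: derive the key from the content itself, so identical plaintexts produce identical ciphertexts that the server can deduplicate. The toy XOR-stream construction below illustrates only that property; it deliberately does not model the paper's contribution, which is hiding chunk frequencies (convergent encryption alone leaks equality, which is exactly what enables the frequency analysis attacks the paper defends against).

```python
# Hedged sketch of convergent encryption: key = hash(content), so equal
# plaintexts -> equal ciphertexts -> server-side dedup still works.
# Toy construction for illustration only; not a secure cipher.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudo-random bytes via counter-mode hashing."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def convergent_encrypt(data: bytes) -> bytes:
    key = hashlib.sha256(data).digest()          # content-derived key
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Identical chunks encrypt identically, so the server can deduplicate them.
c1 = convergent_encrypt(b"same chunk")
c2 = convergent_encrypt(b"same chunk")
print(c1 == c2)   # True
```

Because the equality leak is inherent to any deterministic, content-derived scheme, the paper's protocol instead moves key assignment behind a trusted execution environment.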
Ajung Kim, Gwangyong Kim, Bong-hoi Kim, Jiman Hong
The Baseboard Management Controller (BMC) reduces the operating cost of a server because it enables remote monitoring. To shorten the boot time of the BMC, the hibernation technique has been applied for fast boot. However, the existing hibernation technique is difficult to apply to the BMC as is: memory usage is not constant for each BMC, so a hibernation-based boot may take longer than a cold boot. In this paper, we propose a hybrid boot technique that selects the faster of cold boot and hibernation-based boot, based on an appropriate periodic interval for executing hibernation. By checking memory usage, the proposed technique can perform hibernation at the point where the boot time is expected to be minimal. The experimental results show that the proposed hybrid boot technique can reduce the total boot time significantly compared to cold boot.
{"title":"Hibernation Execution Interval based Hybrid Boot for Baseboard Management Controllers","authors":"Ajung Kim, Gwangyong Kim, Bong-hoi Kim, Jiman Hong","doi":"10.1145/3555776.3577729","DOIUrl":"https://doi.org/10.1145/3555776.3577729","url":null,"abstract":"The Baseboard Management Controller (BMC) reduces the operating cost of a server because it enables remote monitoring. To shorten the boot time of the BMC, the hibernation technique has been applied for fast boot. However, the existing hibernation technique is difficult to apply to the BMC as is: memory usage is not constant for each BMC, so a hibernation-based boot may take longer than a cold boot. In this paper, we propose a hybrid boot technique that selects the faster of cold boot and hibernation-based boot, based on an appropriate periodic interval for executing hibernation. By checking memory usage, the proposed technique can perform hibernation at the point where the boot time is expected to be minimal. The experimental results show that the proposed hybrid boot technique can reduce the total boot time significantly compared to cold boot.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"163 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73301163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
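The hybrid-boot decision itself reduces to a cost comparison: restoring a hibernation image gets slower as used memory grows, while cold boot is roughly constant. The sketch below illustrates that selection rule; all constants are invented for illustration, whereas the paper derives the actual costs from measurements.

```python
# Hedged sketch of the hybrid-boot selection: choose whichever of cold boot
# and hibernation-based boot is expected to finish first, given current
# memory usage. Constants are assumptions, not the paper's measurements.
COLD_BOOT_S = 30.0          # fixed cold-boot time (assumed)
RESTORE_BASE_S = 5.0        # fixed overhead of a hibernation restore (assumed)
RESTORE_S_PER_MB = 0.05     # image-restore cost per MB of used memory (assumed)

def choose_boot(used_memory_mb: float) -> str:
    hibernation_s = RESTORE_BASE_S + RESTORE_S_PER_MB * used_memory_mb
    return "hibernation" if hibernation_s < COLD_BOOT_S else "cold"

print(choose_boot(100))   # small image: hibernation restore is faster
print(choose_boot(900))   # large image: cold boot wins
```

The paper's additional idea is *when* to take the hibernation snapshot: checking memory usage periodically lets the BMC capture the image at a low-usage point, keeping the restore on the cheap side of this comparison.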
Logical languages provide rigorous formalisms for theories, with varying expressive power and scalability. In ontology engineering, it is common to provide a twofold formalization of a theory: an expressive FOL formalization and a decidable SROIQ fragment. Such a task requires a systematic and principled translation of the set of FOL formulas to achieve a maximally expressive decidable fragment. Since no principled work exists that provides guidelines for translating FOL theories into SROIQ knowledge bases, this paper contributes such a translation procedure.
{"title":"Translating FOL-theories into SROIQ-Tboxes","authors":"Fatima Danash, D. Ziébelin","doi":"10.1145/3555776.3577870","DOIUrl":"https://doi.org/10.1145/3555776.3577870","url":null,"abstract":"Logical languages provide rigorous formalisms for theories, with varying expressive power and scalability. In ontology engineering, it is common to provide a twofold formalization of a theory: an expressive FOL formalization and a decidable SROIQ fragment. Such a task requires a systematic and principled translation of the set of FOL formulas to achieve a maximally expressive decidable fragment. Since no principled work exists that provides guidelines for translating FOL theories into SROIQ knowledge bases, this paper contributes such a translation procedure.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"3 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73402330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
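One well-known instance of such a translation is that a universally quantified implication over a unary predicate corresponds to a SubClassOf axiom in a description-logic TBox. The toy translator below handles only that single pattern, as a hedged illustration of the kind of rule a principled FOL-to-SROIQ procedure systematizes; the surface syntax is invented.

```python
# Hedged toy: translate "forall x (A(x) -> B(x))" into "SubClassOf(A B)".
# Real procedures cover many more patterns and must verify the result stays
# inside the decidable SROIQ fragment.
import re

def translate(formula: str) -> str:
    # matches e.g. "forall x (Dog(x) -> Animal(x))"
    m = re.fullmatch(r"forall (\w+) \((\w+)\(\1\) -> (\w+)\(\1\)\)", formula)
    if not m:
        raise ValueError("pattern not supported by this sketch")
    _, sub, sup = m.groups()
    return f"SubClassOf({sub} {sup})"

print(translate("forall x (Dog(x) -> Animal(x))"))   # SubClassOf(Dog Animal)
```

The hard part the paper addresses is precisely what this sketch sidesteps: deciding which FOL formulas *have* such a SROIQ counterpart at all, and translating maximally many of them.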
Message Queuing Telemetry Transport (MQTT) is a protocol commonly used in smart IoT applications. The protocol reduces resource saturation but does not implement appropriate security mechanisms. There have been attempts to add security features to MQTT; however, they do not take into account the resource-constrained nature of IoT devices. The Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme provides fine-grained access to topic-related data and adequate data storage on the MQTT server. In this work, we propose an Improved CP-ABE (ICP-ABE) scheme integrated with a lightweight symmetric encryption algorithm, PRESENT. The new scheme separates the roles of attribute auditing and key extraction. By using a blind key, MQTT servers verify the identity of sender nodes without knowing the senders' attributes. The PRESENT algorithm is employed in the proposed scheme to share such blind keys securely between clients. The efficiency of the scheme is evaluated in terms of throughput, packet delivery ratio, network delay, and execution time.
{"title":"A Lightweight Authentication and Privacy Preservation Scheme for MQTT","authors":"Sijia Tian, V. Vassilakis","doi":"10.1145/3555776.3577817","DOIUrl":"https://doi.org/10.1145/3555776.3577817","url":null,"abstract":"Message Queuing Telemetry Transport (MQTT) is a protocol commonly used in smart IoT applications. The protocol reduces resource saturation but does not implement appropriate security mechanisms. There have been attempts to add security features to MQTT; however, they do not take into account the resource-constrained nature of IoT devices. The Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme provides fine-grained access to topic-related data and adequate data storage on the MQTT server. In this work, we propose an Improved CP-ABE (ICP-ABE) scheme integrated with a lightweight symmetric encryption algorithm, PRESENT. The new scheme separates the roles of attribute auditing and key extraction. By using a blind key, MQTT servers verify the identity of sender nodes without knowing the senders' attributes. The PRESENT algorithm is employed in the proposed scheme to share such blind keys securely between clients. The efficiency of the scheme is evaluated in terms of throughput, packet delivery ratio, network delay, and execution time.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"64 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78336149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
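The "blind key" idea — a broker verifying a publisher's identity without learning its attributes — can be pictured with an opaque, MAC-style token. The sketch below is a loose analogy only: it substitutes HMAC for the paper's ICP-ABE construction, and `AUTHORITY_SECRET`, the token format, and the client names are all invented.

```python
# Hedged analogy for the blind-key check: the broker compares an opaque token
# issued by a key authority; attribute values never reach the broker.
# HMAC stands in for the paper's ICP-ABE/PRESENT machinery.
import hashlib
import hmac

AUTHORITY_SECRET = b"authority-master-secret"   # held only by the key authority (assumed)

def issue_blind_key(client_id: str, attributes: list) -> bytes:
    """Attributes influence the token but are not recoverable from it."""
    msg = client_id.encode() + b"|" + ",".join(sorted(attributes)).encode()
    return hmac.new(AUTHORITY_SECRET, msg, hashlib.sha256).digest()

def broker_verifies(client_id: str, token: bytes, registered: dict) -> bool:
    """The broker only ever sees opaque tokens, never the attribute list."""
    return hmac.compare_digest(token, registered.get(client_id, b""))

token = issue_blind_key("sensor-42", ["building-A", "temperature"])
registered = {"sensor-42": token}
print(broker_verifies("sensor-42", token, registered))   # True
```

The actual scheme goes further: attribute auditing and key extraction are performed by separate roles, so no single party besides the client sees both identity and attributes.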
A threat modeling exercise involves systematically assessing the likelihood and potential impact of diverse threat scenarios. As threat modeling approaches and tools act at the level of a software architecture or design (e.g., a data flow diagram), they consider threat scenarios at the level of classes or types of system elements. More fine-grained analyses in terms of concrete instances of these elements are typically not conducted explicitly or rigorously. This hinders (i) expressiveness, as threats that require articulation at the level of instances cannot be expressed or managed properly, and (ii) systematic risk calculation, as risk cannot be expressed and estimated with respect to instance-level properties. In this paper, we present a novel threat modeling approach that acts on two layers: (i) the design layer defines the classes and entity types in the system, and (ii) the instance layer models concrete instances and their properties. This, in turn, allows both rough risk estimates at the design level and more precise ones at the instance level. Motivated by a connected-vehicles application, we present the key challenges, the modeling approach, and a tool prototype. The presented approach is a key enabler for more continuous and frequent threat (re-)assessment, for the integration of threat analysis models in CI/CD pipelines and agile development environments (development perspective), and for risk management approaches at run-time (operations perspective).
{"title":"Expressive and Systematic Risk Assessments with Instance-Centric Threat Models","authors":"Stef Verreydt, Dimitri Van Landuyt, W. Joosen","doi":"10.1145/3555776.3577668","DOIUrl":"https://doi.org/10.1145/3555776.3577668","url":null,"abstract":"A threat modeling exercise involves systematically assessing the likelihood and potential impact of diverse threat scenarios. As threat modeling approaches and tools act at the level of a software architecture or design (e.g., a data flow diagram), they consider threat scenarios at the level of classes or types of system elements. More fine-grained analyses in terms of concrete instances of these elements are typically not conducted explicitly or rigorously. This hinders (i) expressiveness, as threats that require articulation at the level of instances cannot be expressed or managed properly, and (ii) systematic risk calculation, as risk cannot be expressed and estimated with respect to instance-level properties. In this paper, we present a novel threat modeling approach that acts on two layers: (i) the design layer defines the classes and entity types in the system, and (ii) the instance layer models concrete instances and their properties. This, in turn, allows both rough risk estimates at the design level and more precise ones at the instance level. Motivated by a connected-vehicles application, we present the key challenges, the modeling approach, and a tool prototype. The presented approach is a key enabler for more continuous and frequent threat (re-)assessment, for the integration of threat analysis models in CI/CD pipelines and agile development environments (development perspective), and for risk management approaches at run-time (operations perspective).","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"23 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78860802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
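The two-layer structure the abstract describes can be sketched directly: type-level defaults at the design layer, refined by instance properties when computing risk as likelihood times impact. The element type, property names, and multipliers below are invented for illustration, not the paper's model.

```python
# Hedged sketch of two-layer threat modeling: design layer = element types
# with rough estimates; instance layer = concrete elements whose properties
# refine risk (risk = likelihood * impact). All numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class ElementType:                       # design layer
    name: str
    base_likelihood: float               # rough, type-level estimate
    base_impact: float

@dataclass
class ElementInstance:                   # instance layer
    etype: ElementType
    properties: dict = field(default_factory=dict)

    def risk(self) -> float:
        likelihood = self.etype.base_likelihood
        impact = self.etype.base_impact
        if self.properties.get("exposed_to_internet"):
            likelihood *= 2.0            # instance-level refinement (assumed factor)
        if self.properties.get("stores_pii"):
            impact *= 1.5                # instance-level refinement (assumed factor)
        return likelihood * impact

datastore = ElementType("DataStore", base_likelihood=0.2, base_impact=4.0)
internal = ElementInstance(datastore)
public_pii = ElementInstance(datastore, {"exposed_to_internet": True, "stores_pii": True})
print(internal.risk(), public_pii.risk())
```

Two instances of the same design-level type end up with very different risk scores, which is exactly the expressiveness gap the paper argues type-level-only analyses cannot capture.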
Rubén Alonso, D. Dessí, Antonello Meloni, Diego Reforgiato Recupero
Natural Language Processing (NLP) is crucial for recommending items that can only be described by natural language. However, using NLP within recommendation modules is difficult and usually requires significant initial effort, thus limiting its widespread adoption. To overcome this limitation, we introduce FORESEE, a novel architecture that can be instantiated with NLP and Machine Learning (ML) modules to recommend items that are described by natural-language features. Furthermore, we describe an instantiation of this architecture that provides a service for the job market where applicants can verify whether their curriculum vitae (CV) is eligible for a given job position, receive suggestions about which skills and abilities they should obtain, and, finally, obtain recommendations about online resources that might strengthen their CVs.
{"title":"A General and NLP-based Architecture to perform Recommendation: A Use Case for Online Job Search and Skills Acquisition","authors":"Rubén Alonso, D. Dessí, Antonello Meloni, Diego Reforgiato Recupero","doi":"10.1145/3555776.3577844","DOIUrl":"https://doi.org/10.1145/3555776.3577844","url":null,"abstract":"Natural Language Processing (NLP) is crucial for recommending items that can only be described by natural language. However, using NLP within recommendation modules is difficult and usually requires significant initial effort, thus limiting its widespread adoption. To overcome this limitation, we introduce FORESEE, a novel architecture that can be instantiated with NLP and Machine Learning (ML) modules to recommend items that are described by natural-language features. Furthermore, we describe an instantiation of this architecture that provides a service for the job market where applicants can verify whether their curriculum vitae (CV) is eligible for a given job position, receive suggestions about which skills and abilities they should obtain, and, finally, obtain recommendations about online resources that might strengthen their CVs.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"158 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80721294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
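The CV-versus-job-posting check described above boils down to extracting skill mentions from two pieces of free text, measuring coverage, and suggesting what is missing. The sketch below uses a tiny invented skill lexicon and token matching as a stand-in for the NLP modules a FORESEE instantiation would plug in.

```python
# Hedged sketch of the eligibility check and skill-gap suggestion.
# SKILL_LEXICON and the 50% coverage threshold are invented placeholders.
SKILL_LEXICON = {"python", "sql", "docker", "nlp", "kubernetes"}

def extract_skills(text: str) -> set:
    """Naive skill extraction: lexicon lookup over lowercased tokens."""
    return {t.strip(",.").lower() for t in text.split()} & SKILL_LEXICON

def check_eligibility(cv: str, job: str, threshold: float = 0.5):
    cv_skills, job_skills = extract_skills(cv), extract_skills(job)
    coverage = len(cv_skills & job_skills) / max(len(job_skills), 1)
    # (eligible?, skills the applicant should still obtain)
    return coverage >= threshold, sorted(job_skills - cv_skills)

eligible, missing = check_eligibility(
    "Experienced in Python and SQL, some NLP.",
    "Requires Python, SQL, Docker and Kubernetes.")
print(eligible, missing)   # True ['docker', 'kubernetes']
```

In the full service, the missing-skill list would then drive the third step the abstract mentions: recommending online resources for acquiring those skills.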
Elton M. Cardoso, Regina De Paula, D. Pereira, L. Reis, R. Ribeiro
Parsing expression grammars (PEGs) are a recognition-based formalism for language specification, which has been the subject of several research works. A PEG that succeeds or rejects every input string is said to be complete. However, checking whether an arbitrary PEG is complete is an undecidable problem. In this work, we propose a sound type-based termination analysis for PEGs, formulated as a type inference algorithm.
{"title":"Type-based Termination Analysis for Parsing Expression Grammars","authors":"Elton M. Cardoso, Regina De Paula, D. Pereira, L. Reis, R. Ribeiro","doi":"10.1145/3555776.3577620","DOIUrl":"https://doi.org/10.1145/3555776.3577620","url":null,"abstract":"Parsing expression grammars (PEGs) are a recognition-based formalism for language specification, which has been the subject of several research works. A PEG that succeeds or rejects every input string is said to be complete. However, checking whether an arbitrary PEG is complete is an undecidable problem. In this work, we propose a sound type-based termination analysis for PEGs, formulated as a type inference algorithm.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":"1 1","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81500528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
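The core property any PEG termination analysis must track is illustrated by the classic failure case: a repetition `e*` loops forever when `e` can succeed without consuming input ("nullable"). The sketch below checks only that condition over a toy PEG AST; the paper's type-based analysis infers richer information, so this is an illustration of the problem, not of their algorithm.

```python
# Hedged sketch: nullability analysis over a toy PEG AST, used to reject the
# classic non-terminating pattern "repetition of a nullable expression".
# AST forms: ("empty",), ("char", c), ("seq", e1, e2), ("choice", e1, e2), ("star", e)

def nullable(expr) -> bool:
    """Can this expression succeed without consuming any input?"""
    tag = expr[0]
    if tag == "empty":
        return True
    if tag == "char":
        return False
    if tag == "seq":
        return nullable(expr[1]) and nullable(expr[2])
    if tag == "choice":
        return nullable(expr[1]) or nullable(expr[2])
    if tag == "star":
        return True          # e* always succeeds, possibly consuming nothing
    raise ValueError(tag)

def terminates(expr) -> bool:
    """Reject any repetition whose body is nullable (it would loop forever)."""
    tag = expr[0]
    if tag == "star":
        return not nullable(expr[1]) and terminates(expr[1])
    return all(terminates(sub) for sub in expr[1:] if isinstance(sub, tuple))

good = ("star", ("char", "a"))                          # "a"* terminates
bad = ("star", ("choice", ("empty",), ("char", "a")))   # (eps / "a")* loops
print(terminates(good), terminates(bad))   # True False
```

Nonterminals and recursion (handled in the paper via a fixpoint over grammar rules) are omitted here, which is what keeps this sketch decidable and short.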