An Approach to Monitor Vaccine Quality During Distribution Using Internet of Things
I. K. A. Enriko, Fariz Alemuda, Daniel Adrianto
Pub Date: 2022-01-01 | DOI: 10.6688/JISE.202209_38(5).0005
Vaccines containing living entities must be stored in a strictly controlled environment; otherwise, the vaccine becomes unusable when those conditions are not met. Currently, vaccine-distribution monitoring is implemented only as local systems: there is no interconnection between the temperature sensors and a command center, so only local staff can check the status, and they do not monitor it continuously. Moreover, the system relies on paper-based reports, so there is no prevention mechanism to mitigate potential failures. This research proposes an IoT-based vaccine monitoring system to help stakeholders maintain vaccine distribution. It focuses on the distribution of Sinovac as the largest and most readily available stock. The overall system consists of devices, networks, and an application. Devices reside in either a static or a mobile environment. Network connectivity relies on LoRaWAN or GSM, depending on actual availability. The application is responsible for displaying, tracking, and notifying the status of the vaccines. Furthermore, this research discusses the measurement and testing methods.
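A monitoring application like the one described above ultimately reduces to checking uplinked sensor readings against a cold-chain window and raising alerts. The sketch below is purely illustrative: the 2-8 °C window, field names, and device IDs are assumptions, not details taken from the paper.

```python
# Hypothetical alert logic for readings uplinked over LoRaWAN/GSM.
# The 2-8 degC window and the record fields are illustrative assumptions.
COLD_CHAIN_MIN_C = 2.0
COLD_CHAIN_MAX_C = 8.0

def check_reading(reading: dict) -> str:
    """Classify one sensor reading as 'ok' or an alert string."""
    t = reading["temp_c"]
    if t < COLD_CHAIN_MIN_C:
        return f"ALERT: {reading['device_id']} too cold ({t} degC)"
    if t > COLD_CHAIN_MAX_C:
        return f"ALERT: {reading['device_id']} too warm ({t} degC)"
    return "ok"

readings = [
    {"device_id": "static-01", "temp_c": 5.1},   # fixed cold-storage unit
    {"device_id": "mobile-07", "temp_c": 9.4},   # in-transit container
]
for r in readings:
    print(check_reading(r))
```

In a real deployment the alert string would instead trigger the application's notification path to the command center rather than a print.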
MedCheX: An Efficient COVID-19 Detection Model for Clinical Usage
Chi-Shiang Wang, Fang-Yi Su, J. Chiang
Pub Date: 2022-01-01 | DOI: 10.6688/jise.202207_38
Due to the high infectiousness and long incubation period of COVID-19, detecting it efficiently and accurately has been crucial since the outbreak of the epidemic. We propose a new detection model based on U-Net++ that adopts dense blocks as the encoder. The model not only detects and classifies COVID-19 but also segments the lesion area precisely. We also designed a two-phase training strategy along with self-defined groups, especially for retrocardiac lesions, to make the model robust. We achieved 0.868 precision, 0.920 recall, and a 0.893 F1-score on the COVID-19 open dataset. To help during this pandemic, we have set up a website running our model (https://medchex.tech/).
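As a quick sanity check on the reported metrics, the F1-score is the harmonic mean of precision and recall, so it can be recomputed directly from the two figures given:

```python
# Verify that the reported F1-score follows from the reported precision/recall.
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.868, 0.920), 3))  # -> 0.893, matching the reported value
```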
Data Science Projects in Pharmaceutical Industry
A. Pesqueira, M. Sousa, Pere Mercadé Melé, Á. Rocha, Miguel Sousa, Renato Lopes da Costa
Pub Date: 2021-09-01 | DOI: 10.6688/JISE.202109_37(5).0010
The purpose of this paper is to discuss the relevance of data science to Medical Affairs (MA) functions in the pharmaceutical industry, where data is becoming increasingly important for the execution of activities and for strategic planning. This study analyses pharmaceutical companies that have a data science strategy, and the variables that can influence the definition of such a strategy, in contrast to companies without one. The paper is empirical, and the research approach consists of verifying the characteristics of, and differences between, these two types of companies. A questionnaire developed specifically for this research was applied to a sample of 280 pharma companies. Its development and analysis were based on a systematic literature review of studies published up to (and including) 2017, using a database search together with backward and forward snowballing. In total, we evaluated 2,247 papers, of which 11 specified data science methodology criteria used in medical affairs departments. A quantitative analysis was also performed on data from a questionnaire applied to a pharma organization. The findings indicate good empirical evidence of a relation between data science and the strategies of the organization.
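Backward and forward snowballing, as used in the literature review above, is essentially a breadth-first expansion through a citation graph: from a seed set, follow each paper's references (backward) and its citers (forward), keeping papers that meet the inclusion criteria, until no new papers qualify. A minimal sketch, with an invented toy graph (the real process starts from a database-search seed set and applies human screening, not a lookup set):

```python
# Minimal sketch of backward/forward snowballing over a citation graph.
def snowball(seeds, cites, cited_by, relevant):
    """Expand from seeds through references (backward) and citations (forward)."""
    found, frontier = set(seeds), list(seeds)
    while frontier:
        paper = frontier.pop()
        for nxt in cites.get(paper, []) + cited_by.get(paper, []):
            if nxt in relevant and nxt not in found:
                found.add(nxt)
                frontier.append(nxt)
    return found

cites = {"A": ["B"], "B": ["C"]}   # backward: A references B, B references C
cited_by = {"A": ["D"]}            # forward: D cites A
print(sorted(snowball({"A"}, cites, cited_by, relevant={"A", "B", "C", "D"})))
# -> ['A', 'B', 'C', 'D']
```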
Deep Learning based Automated Fruit Nutrients Deficiency Recognition System
Ashwani Kumar Dubey, Yogesh Kumar, Rajeev Ratan, Á. Rocha
Pub Date: 2021-09-01 | DOI: 10.6688/JISE.202109_37(5).0011
Recent developments in deep learning allow us to build a computer-vision-based system for the recognition, detection, and localization of nutrient deficiencies in fruits. Because of time constraints, it is important to use an optimized and fast system for fruit quality inspection. In this paper, the input is an image. A deep-learning-based method extracts low-level and high-level features such as edges and geometrical, statistical, texture, and intensity features. After validation of the system with the test data, the output is predicted. Processing time is optimized by avoiding fully connected layers, which further reduces the number of neurons required in the network. The convolutional layers extract the features of the fruits, the Rectified Linear Unit (ReLU) suppresses non-fruit pixels, and pooling shrinks the image by selecting the maximum pixel value in each window. The process is repeated until the image reaches the desired size. The aim is to identify and recognize the objects; the foreground objects are of interest and are segmented for higher-level image processing. The proposed system attains an accuracy of 99.30% with a processing time of 3.207 s.
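The pooling step described above ("shrinks the image by selecting the maximum pixel value") can be illustrated in plain Python. The 2x2 window here is the most common choice but is an assumption; the paper does not state the network's exact window size.

```python
# Illustrative 2x2 max pooling: each non-overlapping 2x2 window of the input
# is replaced by its maximum value, halving each spatial dimension.
def max_pool_2x2(image):
    """Downsample a 2D grid by taking the max of each non-overlapping 2x2 window."""
    return [
        [max(image[i][j], image[i][j + 1], image[i + 1][j], image[i + 1][j + 1])
         for j in range(0, len(image[0]), 2)]
        for i in range(0, len(image), 2)
    ]

img = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]
print(max_pool_2x2(img))  # -> [[4, 2], [2, 8]]
```

Repeating this operation is what progressively shrinks the feature maps toward the desired size.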
Big Data Based Knowledge Management vs. Traditional Knowledge Management: A People, Process and Technology Perspective
M. S. Sumbal, Murad Ali, U. Sahibzada, F. Nawaz, Adeel Tariq, H. Munir
Pub Date: 2021-09-01 | DOI: 10.6688/JISE.202109_37(5).0005
Value creation is one of the core aspects of big data. Value creation can be linked to efficient knowledge management within organizations, in terms of knowledge creation, sharing, and application, through which organizations can enhance their performance. Little work has been done on the link between value creation from big data and the knowledge management capability of organizations in terms of people, processes, and technology, which play a crucial role in effective knowledge management. This study contributes to the existing body of knowledge by exploring this link through the lens of knowledge management, via a qualitative study in the oil and gas industry. The findings reveal that the KM capability of organizations using big data can be explained through the Complex domain of the Cynefin framework, which involves probing, sensing, and responding: there are no right answers, and the instructive patterns (predictive knowledge) emerging from big data can be right or wrong depending on the complexity of the situation. Predictive knowledge that experts (people) find useful and have tested can then emerge as good or best practice, falling into the Complicated and Simple domains of the Cynefin framework.
High Performance Post-Quantum Key Exchange on FPGAs
Po-Chun Kuo, Yu-Wei Chen, Yuan-Che Hsu, Chen-Mou Cheng, Wen-Ding Li, Bo-Yin Yang
Pub Date: 2021-09-01 | DOI: 10.6688/JISE.202109_37(5).0015
Lattice-based cryptography is a highly promising candidate for protecting against the threat of quantum attacks. At USENIX Security 2016, Alkim, Ducas, Pöppelmann, and Schwabe proposed a post-quantum key exchange scheme called NewHope, based on a variant of the lattice problem known as ring learning with errors (RLWE). In this work, we propose a high-performance hardware architecture for NewHope. Our implementation requires 6,680 slices, 9,412 FFs, 18,756 LUTs, 8 DSPs, and 14 BRAMs on a Xilinx Zynq-7000 board equipped with a 28 nm Artix-7 7020 FPGA. In our hardware design of the NewHope key exchange, the three phases cost 51.9, 78.6, and 21.1 μs, respectively. The design achieves a more than 4.8 times better area-time product compared to the previous hardware implementation of NewHope-Simple by Oder and Güneysu at Latincrypt 2017.
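The core arithmetic an RLWE accelerator must implement is multiplication in the ring Z_q[x]/(x^n + 1). A naive schoolbook sketch of that negacyclic multiplication is shown below with toy n = 4 (NewHope's q = 12289 but n = 1024); the hardware design uses an NTT-based multiplier, which this O(n^2) loop does not attempt to reproduce.

```python
# Schoolbook multiplication in Z_q[x]/(x^n + 1): coefficients that wrap past
# degree n-1 are subtracted, because x^n == -1 in this ring.
def negacyclic_mul(a, b, q):
    """Multiply coefficient lists a, b modulo x^n + 1 and modulo q."""
    n = len(a)
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:  # wrapped term: x^k = x^(k-n) * x^n = -x^(k-n)
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

# (1 + x)^2 = 1 + 2x + x^2, no wraparound for n = 4:
print(negacyclic_mul([1, 1, 0, 0], [1, 1, 0, 0], 12289))  # -> [1, 2, 1, 0]
# x * x^3 = x^4 == -1 mod (x^4 + 1):
print(negacyclic_mul([0, 1, 0, 0], [0, 0, 0, 1], 12289))  # -> [12288, 0, 0, 0]
```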
Ensemble Case based Reasoning Imputation in Breast Cancer Classification
Imane Chlioui, A. Idri, Ibtissam Abnane, M. Ezzat
Pub Date: 2021-09-01 | DOI: 10.6688/JISE.202109_37(5).0004
Missing data (MD) is a common drawback that affects breast cancer classification; handling missing data is therefore essential before building any breast cancer classifier. This paper presents the impact of ensemble case-based reasoning (CBR) imputation on breast cancer classification. We evaluated the influence of CBR with parameter tuning and of ensemble CBR (E-CBR) under three missingness mechanisms (MCAR: missing completely at random; MAR: missing at random; NMAR: not missing at random) and nine missingness percentages (10% to 90%) on the accuracy rates of five classifiers: decision trees, random forest, k-nearest neighbor, support vector machine, and multi-layer perceptron, over two Wisconsin breast cancer datasets. All experiments were implemented using the Weka Java API (version 3.8); SPSS v20 was used for statistical tests. The findings confirm that E-CBR yields better results than CBR for all five classifiers. The MD percentage negatively affects classifier performance: as it increases, the accuracy rates of the classifiers decrease regardless of the MD mechanism and technique. Random forest with E-CBR outperformed all other (MD technique, classifier) combinations, with 89.72% for MCAR, 87.08% for MAR, and 86.84% for NMAR.
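Of the three mechanisms, MCAR is the simplest to simulate: cells are blanked uniformly at random, independently of any value. A minimal sketch of such an injection step is below; the tiny dataset and 30% rate are invented for illustration (the study sweeps 10% to 90% on the Wisconsin datasets, and MAR/NMAR require value-dependent deletion not shown here).

```python
# MCAR injection: blank out a fixed fraction of cells chosen uniformly at
# random, with no dependence on the values themselves.
import random

def inject_mcar(rows, fraction, seed=0):
    """Return a copy of rows with round(fraction * n_cells) cells set to None."""
    rng = random.Random(seed)
    n_rows, n_cols = len(rows), len(rows[0])
    cells = [(i, j) for i in range(n_rows) for j in range(n_cols)]
    masked = [row[:] for row in rows]  # deep-enough copy; original stays intact
    for i, j in rng.sample(cells, round(fraction * len(cells))):
        masked[i][j] = None
    return masked

data = [[5, 1, 1], [3, 4, 1], [8, 10, 10], [1, 1, 2]]
masked = inject_mcar(data, 0.3)
print(sum(v is None for row in masked for v in row))  # -> 4 of 12 cells missing
```

An imputation method such as E-CBR would then be scored on how well the classifier trained on the imputed copy recovers the accuracy obtained on the complete data.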
Exploiting Machine Learning and Feature Selection Algorithms to Predict Instructor Performance in Higher Education
Ravinder Ahuja, S. C. Sharma
Pub Date: 2021-09-01 | DOI: 10.6688/JISE.202109_37(5).0001
Machine learning has emerged as an important and widely used tool for resolving administrative and other education-related problems. Most research in the educational field centers on assessing students' potential rather than on faculty quality. In this paper, instructor performance is evaluated through feedback collected from students via a questionnaire. The unlabelled dataset, taken from the UCI machine learning repository, consists of 5,820 records with 33 attributes. First, the dataset is labelled (with three labels) using agglomerative clustering and the k-means algorithm. Then, five feature selection techniques (random forest, principal component analysis, recursive feature elimination, univariate feature selection, and a genetic algorithm) are applied to extract essential features. After feature selection, twelve classification algorithms (k-nearest neighbor, XGBoost, multi-layer perceptron, AdaBoost, random forest, logistic regression, decision tree, bagging, LightGBM, support vector machine, extra trees, and naive Bayes) are applied in Python. Of all the algorithms applied, the support vector machine with PCA feature selection gave the highest accuracy (99.66%), recall (99.66%), precision (99.67%), and F-score (99.67%). To show that the results are statistically different, we applied a one-way ANOVA test.
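The labelling step above assigns each unlabelled record to a cluster. A toy sketch of k-means (Lloyd's algorithm) on 1-D "feedback scores" is shown below; the data, k = 2, and fixed initial centers are invented for illustration, whereas the paper clusters 5,820 x 33 responses into three labels.

```python
# Lloyd's algorithm for 1-D points with fixed initial centers: alternate
# between assigning points to their nearest center and recomputing centers
# as cluster means.
def kmeans_1d(points, centers, iters=10):
    """Return the converged cluster centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest current center
            idx = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[idx].append(p)
        # recompute each center; keep the old one if its cluster emptied
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(kmeans_1d([1.0, 2.0, 10.0, 11.0], centers=[1.0, 10.0]))  # -> [1.5, 10.5]
```

Each record's final cluster index then serves as its class label for the supervised classifiers that follow.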