Overview of the holographic-guided cardiovascular interventions and training – a perspective
Authors: Klaudia Proniewska, A. Pręgowska, P. Walecki, Damian Dolega-Dolegowski, R. Ferrari, D. Dudek
DOI: 10.1515/BAMS-2020-0043 · Bio-Algorithms and Med-Systems · 2020-09-01

Abstract: Immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) have undergone rapid technical evolution over the last few decades. Their development enables effective applications in medicine, in fields such as imaging, preprocedural planning, treatment, operation planning, medical student training, and active support during therapeutic and rehabilitation procedures. This paper presents a comprehensive analysis of VR/AR/MR applications in the medical industry and in education. We review and discuss our previous experience with AR/MR, 3D visual environments, and MR-based imaging systems in cardiology and interventional cardiology. Our research shows that with immersive technologies users can not only visualize the heart and its structure but also obtain quantitative feedback on their location. The proposed MR-based imaging system offers better visualization to interventionists and can help users understand complex operative cases. The results suggest that VR/AR/MR technology can be used successfully in teaching future doctors, in both anatomy and clinical classes. Moreover, the proposed system provides a unique opportunity to break boundaries, interact in the learning process, and exchange experience within the medical community.
Recognition of multifont English electronic prescribing based on convolution neural network algorithm
Authors: M. Mohammed, E. Mohammed, Mohammed S. Jarjees
DOI: 10.1515/BAMS-2020-0021 · Bio-Algorithms and Med-Systems · 2020-09-01

Abstract: Printed character recognition is an efficient, automatic method for inputting information into a computer, translating printed or handwritten images into an editable, readable text file. This paper aims to recognize multifont, multisize printed English words for smart-pharmacy purposes. The recognition system is based on a convolutional neural network (CNN): lines, words, and characters are separated, and each separated character is fed into the CNN for recognition. The OpenCV open-source library is used for preprocessing, segmenting English characters accurately and efficiently, and the Keras library with a TensorFlow backend is used for recognition. The training and testing data sets cover 23 different fonts in six different sizes. The CNN achieves the highest accuracy, 96.6%, compared with other state-of-the-art machine learning methods. This higher classification accuracy shows that CNNs are well suited to recognizing printed English words. After testing the system on English electronic prescriptions written in all of the proposed fonts, the highest error rate is 0.23%, for the Georgia font.
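The line/word/character separation step described above can be illustrated with a vertical-projection profile, the classic way to split a binarized text line into characters. This is a minimal pure-Python sketch of the idea only; the paper itself uses OpenCV for this stage, and the 0/1 row-list image format here is an assumption made for the example.

```python
# Toy character segmentation by vertical projection: a column containing
# any ink (1) belongs to a character, and a run of all-blank columns
# separates two characters. Illustrative only; the paper uses OpenCV.

def segment_characters(binary_image):
    """Split a binarized text-line image (list of rows of 0/1 ints)
    into per-character (start, end) column ranges, end exclusive."""
    if not binary_image:
        return []
    width = len(binary_image[0])
    # Vertical projection: ink count per column.
    profile = [sum(row[x] for row in binary_image) for x in range(width)]
    segments, start = [], None
    for x, ink in enumerate(profile):
        if ink and start is None:
            start = x                      # a character begins
        elif not ink and start is not None:
            segments.append((start, x))    # the character ends
            start = None
    if start is not None:
        segments.append((start, width))
    return segments

# Two 1-pixel-wide "characters" separated by one blank column.
img = [[1, 0, 1],
       [1, 0, 1]]
print(segment_characters(img))  # → [(0, 1), (2, 3)]
```

Each returned column range would then be cropped, resized to the CNN input size, and classified independently.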
BatchDeconvolution: a Fiji plugin for increasing deconvolution workflow
Authors: Zbigniew Baster, Z. Rajfur
DOI: 10.1515/bams-2020-0027 · Bio-Algorithms and Med-Systems · 2020-09-01

Abstract: Deconvolution microscopy is a very useful, software-based technique that deblurs microscopy images and increases both lateral and axial resolution. It can be used with many fluorescence microscopy imaging techniques. By increasing axial resolution, it also enables three-dimensional imaging with a basic wide-field fluorescence microscope. Unfortunately, commercially available deconvolution software is expensive, while freely available programs have limited batch-processing capabilities. In this work we present BatchDeconvolution, a Fiji plugin that bridges two programs used subsequently in an image deconvolution pipeline: PSF Generator and DeconvolutionLab2, both from the Biomedical Imaging Group, EPFL. Our software provides a simple way to batch-process multiple microscopy files with minimal working time required from the user.
The evolutionary dynamics of expectations: Interactions among codes in inter-human communications
Authors: L. Leydesdorff, Franz Hoegl
DOI: 10.2139/ssrn.3625512 · 2020-08-25

Abstract: Double contingency, in which each of us (Ego) expects others (Alter) to entertain expectations as we entertain them ourselves, can be considered the micro-operation of an above-individual (i.e., social) logic of expectations. Meaning is provided to events from the perspective of hindsight, but with reference to horizons of meaning. Whereas "natural selection" is based on genotypes that are observable (like DNA), cultural selection mechanisms are not hard-wired, but evolve. The "genotypes" of cultural evolution are codes in the communication, which can operate as selections upon one another. Local instantiations shape trajectories; regimes operate as selection pressure with reference to the next-order horizons of meaning. These orders of expectations can operate incursively and hyper-incursively against the arrow of time and thus generate redundancies: (i) horizons of meaning can be expected to overlap, and (ii) distinctions generate new options enlarging the maximum capacities. Information theory and the theory of anticipatory systems can be used to elaborate operations against the arrow of time. New options can be a synergetic effect of interactions among codes in the communication and serve as sources of wealth in a knowledge-based economy.
Electrical activity of fungi: Spikes detection and complexity analysis
Authors: Mohammad Mahdi Dehshibi, A. Adamatzky
DOI: 10.21203/rs.3.rs-64738/v1 · 2020-08-24

Abstract: Oyster fungi Pleurotus djamor generate action-potential-like spikes of electrical potential. The trains of spikes might manifest the propagation of growing mycelium in a substrate, the transport of nutrients and metabolites, and communication processes in the mycelium network. The spiking activity of mycelium networks is highly variable compared with neural activity and therefore cannot be analysed with standard tools from neuroscience. We propose original techniques for detecting and classifying the spiking activity of fungi. Using these techniques, we analyse the information-theoretic complexity of fungal electrical activity. The results can pave the way for future research on sensorial fusion and decision making in fungi.
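To make the notion of extracting spike trains from a slow voltage recording concrete, here is a toy threshold-crossing detector with a refractory period. This is a simplified illustration only; the detection and classification techniques the authors propose are more elaborate, and the baseline/threshold values below are invented.

```python
# Toy spike detector: flag samples deviating from baseline by more than a
# threshold, then skip a refractory window so one spike is counted once.
# Illustrative sketch only, not the paper's actual method.

def detect_spikes(trace, baseline, threshold, refractory=1):
    """Return sample indices of threshold crossings in a voltage trace."""
    spikes, skip_until = [], 0
    for i, v in enumerate(trace):
        if i >= skip_until and abs(v - baseline) > threshold:
            spikes.append(i)
            skip_until = i + refractory  # suppress re-triggering
    return spikes

# A short synthetic trace with one positive and one negative deflection.
trace = [0.0, 0.1, 1.5, 0.2, 0.0, -1.2, 0.1]
print(detect_spikes(trace, baseline=0.0, threshold=1.0))  # → [2, 5]
```

Once spike times are extracted this way, inter-spike intervals and spike-train symbol sequences become available for the kind of information-theoretic complexity analysis the abstract describes.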
Development of the low-cost, smartphone-based cardiac auscultation training manikin
Authors: D. Karch, Krzysztofa Kopyt, J. Krzywoń, Paweł Somionka, S. Górski, G. Cebula
DOI: 10.1515/bams-2020-0028 · Bio-Algorithms and Med-Systems · 2020-08-12

Abstract:
Objectives: Cardiac auscultation remains a crucial part of the physical examination. In preclinical training, there are multiple approaches to teaching that skill. Our goal was to find a compromise between expensive, complicated high-fidelity simulators and simple devices that lack realism.
Methods: Our project is made up of three main parts: a manikin's torso, a specially prepared stethoscope, and a smartphone application. The position of the stethoscope's head is recognized by Hall effect sensors inside the manikin, and the information is sent via Bluetooth to the smartphone. The data are interpreted by the application, and the appropriate recording is selected from a sound library. The user can easily adjust additional settings (e.g., main volume, playback speed, background noises). The processed sound is then played through a Bluetooth headset that is part of the stethoscope.
Results: The solution we suggest is easy to use, with minimal adverse effect on the quality of learning. Handling of our device is intuitive, and minimal prior training is required. The low cost of the device itself and the widespread use of smartphones make it easy to implement.
Conclusions: We believe that this solution could complement the methods currently used for teaching cardiac auscultation in preclinical training.
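The application-side logic described in the Methods can be sketched as a lookup: a Hall sensor ID received over Bluetooth identifies the auscultation site, and the active training scenario picks the recording to play. The site names, scenario keys, and file names below are invented for the illustration; they are not taken from the paper.

```python
# Hypothetical sketch of the app's recording-selection step: map a Hall
# sensor reading plus the active training scenario to a sound file.
# All identifiers here are illustrative assumptions.

SITES = {1: "aortic", 2: "pulmonic", 3: "tricuspid", 4: "mitral"}

def select_recording(sensor_id, scenario, sound_base):
    """Return the sound file for a sensor reading, or None if the
    stethoscope head is not resting on a known auscultation site."""
    site = SITES.get(sensor_id)
    if site is None:
        return None
    return sound_base.get((scenario, site))

# A tiny sound library for one simulated pathology.
sound_base = {("aortic_stenosis", "aortic"): "as_aortic.wav",
              ("aortic_stenosis", "mitral"): "as_mitral.wav"}
print(select_recording(1, "aortic_stenosis", sound_base))  # → as_aortic.wav
```

Playback settings such as volume and speed would then be applied to the selected file before streaming it to the Bluetooth headset in the stethoscope.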
A proposed Information-Based modality for the treatment of cancer
Author: A. V. D. Mude
DOI: 10.31219/osf.io/wdba6 · 2020-07-21

Abstract: Treatment modalities for cancer involve physical manipulations such as surgery, immunotherapy, radiation, chemotherapy, or gene editing. This is a proposal for an information-based modality. This modality does not change the internal state of the cancer cell directly; instead, the cancer cell is manipulated by giving it information that instructs the cell to perform an action. The modality is based on a theory of Structure Encoding in DNA, in which information about body-part structure controls the epigenetic state of cells during development from pluripotent cells to fully differentiated cells. It has been noted that cancer is often due to errors in morphogenetic differentiation and the associated epigenetic processes. This implies a model of cancer called the Epigenetic Differentiation Model. A major feature of the Structure Encoding Theory is that the characteristics of the differentiated cell are affected by intercellular information passed in the tissue microenvironment, which specifies the exact location of a cell in a body-part structure. This is done by exosomes carrying fragments of long non-coding RNA and transposons, which convey structure information. In the normal process of epigenetic differentiation, the information passed may lead to apoptosis due to the constraints of a particular body-part structure. The proposed treatment involves determining what structure information is being passed in a particular tumor, then adding artificial exosomes that overwhelm the current information with commands for the cells to go into apoptosis.
A distributed cognitive approach in cybernetic modelling of human vision in a robotic swarm
Authors: M. Podpora, Aleksandra Kawala-Sterniuk, Viktoria Kovalchuk, G. Bialic, P. Piekielny
DOI: 10.1515/bams-2020-0025 · Bio-Algorithms and Med-Systems · 2020-07-21

Abstract:
Objectives: This paper proposes a novel approach to image analysis in machine vision applications.
Methods: The presented concept consists of two parts: (1) shifting some of the complex image processing and understanding algorithms from a mobile robot to a distributed computer, and (2) designing the cognitive system (on the distributed computer) in such a way that it is shared by numerous robots. The authors focused on image processing and propose to accelerate vision understanding with Cooperative Vision (CoV), i.e., taking video input from cooperating robots and processing it in a centralized system.
Results: To verify the value of this approach, a comparative study is currently being conducted, involving a classical single-camera Computer Vision (CV) mobile robot and two (or more) single-camera CV robots cooperating in CoV mode.
Conclusions: The CoV system is being designed and implemented so that the algorithm can utilize multiple video sources for recognizing objects in the very same scene.
Thingspeak-based respiratory rate streaming system for essential monitoring purposes
Authors: Mohammed S. Jarjees, Mohammed G. Ayoub, M. Farhan, Hassan M. Qassim
DOI: 10.1515/bams-2020-0007 · Bio-Algorithms and Med-Systems · 2020-07-08

Abstract:
Introduction: Chronic obstructive pulmonary diseases are among the most common diseases worldwide, with asthma and sleep apnea the most prevalent pulmonary conditions. Patients with such chronic diseases require special care and continuous monitoring to avoid respiratory deterioration. The development of a dedicated, reliable sensor that uses modern technologies to measure and monitor respiratory parameters is therefore necessary.
Objective: This study aims to develop a small, cost-effective respiratory rate sensor.
Methods: The proposed system uses a microcontroller with built-in communication (NodeMCU) and the ThingSpeak platform to view and process the respiratory rate data every 60 s. The total current consumption of the proposed sensor is about 120 mA. Four able-bodied participants were recruited to test and validate the developed system.
Results: The results show that the developed sensor and the proposed system can be used to measure and monitor the respiratory rate.
Conclusions: The demonstrated system showed applicable, repeatable, and acceptable results.
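The upload step in such a system reduces to one HTTP request: ThingSpeak's public "update" endpoint accepts a channel's write API key and field values as query parameters. The sketch below only composes that request URL (no network call is made, and the API key is a placeholder); the paper's firmware runs on a NodeMCU, not in Python.

```python
# Sketch of posting one respiratory-rate sample to ThingSpeak's update
# endpoint; here we only build the request URL, with a placeholder key.
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, respiratory_rate):
    """Compose the GET request that uploads one breaths-per-minute sample
    into field1 of the channel identified by the write API key."""
    query = urlencode({"api_key": api_key, "field1": respiratory_rate})
    return f"{THINGSPEAK_UPDATE}?{query}"

print(build_update_url("XXXXXXXXXXXXXXXX", 16))
```

Issuing one such request per 60 s window matches the update cadence described in the Methods.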
Technical infrastructure for curriculum mapping in medical education: a narrative review
Authors: Andrzej A. Kononowicz, Łukasz Balcerzak, A. Kocurek, Agata Stalmach-Przygoda, I. Ciureanu, Inga Hege, M. Komenda, J. Majerník
DOI: 10.1515/bams-2020-0026 · Bio-Algorithms and Med-Systems · 2020-06-01

Abstract: Curriculum mapping is the process of designing a multidimensional model of an educational programme for a complete, more transparent, and better-integrated learning experience. Many universities worldwide are building or expanding their technical infrastructure to manage their curricula. Our aim was to deliver a synopsis of current practices and describe the focus of research interest in implementing curriculum mapping tools for medical education. As part of the Building Curriculum Infrastructure in Medical Education (BCIME) project, we conducted a state-of-the-art narrative review of the literature. A systematised search of the PubMed/MEDLINE database for the years 2013–2019 returned 352 abstracts, from which 23 full-text papers were included in the final review. From these, we extracted guidance on 12 key characteristics of curriculum mapping tools. The collected experiences formed four thematic categories: visualisations, text descriptions and analysis, the outcome-based approach, and adaptability in curriculum mapping. As a result of the review, we summarised ways of implementing new competency-based catalogues (like NKLM) in curriculum mapping software (e.g., using dynamic checklists), methods of streamlining the authoring process (e.g., automatic detection and alignment of action verbs in learning-objective descriptions), and graphical forms of presenting curriculum data (e.g., network visualisations using automatic clustering of related parts of a curriculum based on similarities between textual descriptions). We expect further developments in text-mining methods and visual/learning analytics in curriculum mapping. The collected data informed the design of a new curriculum management system called EduPortfolio, which is currently being implemented by the BCIME project.
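The "automatic detection of action verbs in learning objectives" idea mentioned above can be illustrated by matching objective text against a verb list organised by Bloom's taxonomy levels. The verb list below is a heavily truncated illustration, not the catalogue any reviewed tool actually uses.

```python
# Toy action-verb detector for learning objectives: find verbs from a
# small Bloom's-taxonomy list and report which cognitive level each
# belongs to. The verb list is an illustrative, truncated assumption.

BLOOM_VERBS = {
    "remember":   {"define", "list", "recall"},
    "understand": {"describe", "explain", "summarise"},
    "apply":      {"demonstrate", "use", "perform"},
    "analyse":    {"compare", "differentiate", "examine"},
}

def detect_action_verbs(objective):
    """Return (verb, Bloom level) pairs found in one learning objective."""
    words = objective.lower().replace(",", " ").split()
    hits = []
    for level, verbs in BLOOM_VERBS.items():
        hits.extend((w, level) for w in words if w in verbs)
    return hits

print(detect_action_verbs("Describe and compare the heart valves"))
```

A curriculum mapping tool could use such hits to flag objectives with no action verb at all, or to align each objective with the competency level its verb implies.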